“Explainable AI” - Maintaining human control in a world of increasingly intelligent machines
Content Copyright © 2020 Bloor. All Rights Reserved.
Also posted on: Bloor blogs
It is generally accepted that machine-augmented intelligence (MAI) is part of the Future of Work (FoW) story in a mutable business – a.k.a. “The Robots are Taking Over”. But note that I am talking about “machine-augmented” intelligence here, not “artificial intelligence”, although the two are often confused. True artificial intelligence would require a sentient machine, one that decides to pick up a problem and pursue it of its own volition. We are a long way from that; and, as well as eliminating mundane jobs, MAI is generating new jobs for the people building intelligence into FoW systems and validating or explaining their outcomes.
Where we are today is the use of machines – machine-augmented intelligence and machine learning (ML) – to automate the mundane parts of business processes, under human direction. Humans can set up algorithms that evolve to deliver ever-closer approximations to a desired result; and machines can recognise events that have occurred before and suggest approaches that, according to feedback from their human masters, worked last time.
Nevertheless, although such systems can, for example, identify cancers from images earlier and more reliably than doctors can (see here), it really isn’t quite as simple as that. Nobody is going to start major surgery, for instance, without confirming the machine’s diagnosis.
The same will, or should, apply to the use of MAI in business systems. It is not simply a matter of trusting the machine to make a decision and never questioning it. These systems will have to be designed to explain, in human terms, how they came up with their solutions – or, at least, we will have to repeat the analysis for critical MAI-based decisions using conventional methods, so as to “prove” that it works. This is more work, but we will be saving the time otherwise wasted on cases that aren’t relevant.
MAI is fast becoming a desirable feature of modern systems, as Craig Roxby (Managing Director of Magnifi, a complete forecasting and reporting solution for Business Advisors) highlights: “We have embedded Zoho Analytics within our internally developed financial services software. Whenever I have a new client meeting, I demonstrate the Ask Zia feature of Zoho Analytics with a question like ‘what was my income last month?’ Its quick reply, or a visual, fascinates my clients and they say, ‘This is what we want’”. But this isn’t going to work in the longer term unless users come to really trust what Ask Zia, Zoho’s machine-augmented intelligence tool, tells them; and it is significant that, when I met the company in Texas, Zoho made great play of the ability of its MAI tools to explain their decisions.
Consider these use cases where “the computer says NO” (or “YES”) simply won’t cut it:
- Safety-critical or medical decisions;
- Privacy-related decisions – the “advanced AI analysis model” used to drive customer behaviour will have to show that its conclusions weren’t reached by using personal information without consent;
- Regulated financial systems, where you might have to show that the best available advice was given;
- Decisions supporting the user experience (UX); “the computer says NO” is clearly not part of a user-friendly UX that results in happy customers.
In fact, it is hard to think of any decisions that won’t sometimes be questioned, and where an acceptable response to such enquiries would include “I really don’t know, but the black box in the corner there never makes a mistake”.
“Explainable AI” is really just part of facilitating the trust that stakeholders must have in the business systems that machine intelligence supports. According to Google: “Explainable AI is a set of tools and frameworks to help you understand and interpret predictions made by your machine learning models” – in other words, it bridges the gap between machine intelligence and human intelligence.
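To make this a little more concrete, here is a minimal sketch of one widely used explanation technique, permutation importance: shuffle one input feature at a time and see how much the model’s accuracy suffers, which gives a human-readable indication of which features actually drove the predictions. This is purely illustrative – it uses scikit-learn and a toy dataset chosen for brevity, and it is not Google’s (or Zoho’s) Explainable AI tooling.

```python
# Illustrative sketch only: permutation importance with scikit-learn.
# The dataset and model choices are assumptions made for brevity.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a simple model on a toy diagnostic dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# a large drop suggests the feature genuinely influenced the predictions.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the most influential features in human-readable terms.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: importance {result.importances_mean[i]:.3f}")
```

The point is not the particular technique, but that the output is expressed in terms a human reviewer can interrogate and challenge.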
Another aspect of “explainable AI” is automating the validation of automated systems – see my blog here. Systems are getting ever more complex as they attempt to automate, asynchronously, large-scale interactions between largely independent components. If someone abandons an interaction part way through, intermediate results that are no longer relevant may already have been acted on by other players – more code may be needed to remediate or correct processes begun in error than is needed to execute routine transactions that complete without problems. Validating the behaviour of such complex systems, which may well be at the heart of FoW automation and may well be subject to regulation, will, I believe, itself require MAI and ML. In other words, not only will business automation use MAI and ML technology, so will the testing of these complex systems – and that testing will have to explain, in human terms, what is wrong and where, so that it can be fixed.
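The remediation point is easier to see with a small sketch. The step names and in-memory “ledger” below are hypothetical, and a real system would use durable messaging and idempotent handlers; the sketch simply shows why undoing an abandoned interaction needs explicit compensating logic alongside the “happy path” code.

```python
# Minimal sketch of compensating actions for a multi-step, asynchronous
# interaction. Step names are hypothetical; this is not a production pattern.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    name: str
    action: Callable[[], None]      # the normal forward work
    compensate: Callable[[], None]  # how to undo it if the interaction is abandoned


def run_interaction(steps: List[Step], abandon_after: int) -> None:
    """Execute steps in order; if the interaction is abandoned part way
    through, undo the completed steps in reverse order."""
    completed: List[Step] = []
    for i, step in enumerate(steps):
        if i == abandon_after:
            print("Interaction abandoned; remediating...")
            for done in reversed(completed):
                done.compensate()
            return
        step.action()
        completed.append(step)
    print("Interaction completed normally.")


# Hypothetical example: stock was reserved and payment taken before the
# customer walked away, so both must be unwound.
steps = [
    Step("reserve stock", lambda: print("stock reserved"),
         lambda: print("stock reservation released")),
    Step("take payment", lambda: print("payment taken"),
         lambda: print("payment refunded")),
    Step("arrange delivery", lambda: print("delivery booked"),
         lambda: print("delivery cancelled")),
]

run_interaction(steps, abandon_after=2)
```

Notice that the compensation code is roughly as large as the forward path; multiply that across many interacting components and the validation burden becomes clear.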
Machine augmentation of previously human processes may well automate mundane and routine business processes and eliminate the need for many human operators; but it will also create new jobs (probably fewer, and needing different skills) for the human workers who build the systems exploiting “explainable augmented intelligence” and validate their behaviour.
This post is part of our Future of Work series. You can read the previous post, or find them all in our Future of Work section. If you’d like to discuss how we can help you prepare for the way work and business are changing, then please contact us.