When and How Should AIs Explain Their Decisions?

Sam Genway

Topics:

AI

As artificial intelligence (AI) increasingly makes decisions for us, there is growing concern about how it reaches its answers.

AIs can be complex. Unlike traditional algorithms, AIs do not follow a set of pre-defined rules. Instead, they learn to recognise patterns – such as when a component of a machine will fail or whether a transaction is fraudulent – by building their own rules from training data. Once an AI model is shown to give the right answers, it is set loose in the real world – e.g. on live machine operating data or financial transactions.

However, getting the right answer does not necessarily mean the AI reached it the right way. To take a simple example, an AI model was successfully trained to recognise the difference between wolves and huskies. However, it later transpired that the AI had learned to tell the difference based on whether there was snow in the background.

This will work most of the time, but as soon as the model needs to spot a husky outside its natural habitat, it will fail. If we rely on AI (or indeed humans) being right for the wrong reasons, it limits where they can work effectively.


Explainable artificial intelligence 

We may instinctively feel that any machine decision must be understandable, but that’s not necessarily the case. We must distinguish between trust (whether we are confident that our AI gets the right answer) and explainability (how it reached that answer).

We always need a demonstrated level of trust when using an AI system, but only sometimes do we need to understand how it reached its answer.

Take an AI that decides whether a machine needs maintenance to avoid a failure. If we can show that the AI is consistently right, we don’t even need to know which features in the data it used to reach that decision. Of course, not all decisions will be correct, and that holds whether it’s a human or a machine making them. If an AI gets 80% of calls on machine maintenance right, compared to 60% for human judgement, then it’s likely a benefit worth having, even if the decision-making isn’t perfect or fully understood.

On the other hand, there are many situations where we do need to know how the decision was made.

There may be legal or business requirements to explain why a decision was taken, such as why a loan was rejected. Banks need to be able to see what specific features in their data, or which combination of features, led to the decision.
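To make that concrete, the short Python sketch below shows one simple way a per-decision explanation could be read off a linear credit model. The feature names, data and model here are illustrative assumptions, not a real lending system.

```python
# A minimal sketch of per-decision feature attribution for a linear credit model.
# Feature names, data and the model itself are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "missed_payments", "years_at_address"]
X = rng.normal(size=(1000, 4))                            # toy applicant data
y = (X[:, 0] - X[:, 1] - 2 * X[:, 2] > 0).astype(int)     # toy "approve" label

model = LogisticRegression().fit(X, y)

applicant = X[0]
# For a linear model, each feature's contribution to the decision score is
# simply coefficient * value, so the drivers of a rejection can be read off.
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name:>18}: {value:+.3f}")
print("decision:", "approved" if model.predict(applicant.reshape(1, -1))[0] else "rejected")
```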

How do we know when AI decision-making is right?

In other cases, it is important to know why the decision is the right one; we wouldn’t want a cancer diagnosis tool to have the same flawed reasoning as the husky AI. Medicine in particular presents ethical grey areas. Let’s imagine an AI model is shown to recommend the right life-saving medical treatment more often than doctors do. Should we go with the AI even if we don’t understand how it reached the decision? Right now, completely automating decisions like this is considered a step too far.

And explainability is not just about how AIs reach the right answer. There may be times when we know an AI is wrong, for example if it develops a bias against women, without knowing why. Explaining how the AI system has exploited inherent biases in the data could give us the understanding we need to improve the model and remove the bias, rather than throwing the whole thing out. Even in cases where we don’t need to understand the decision process, being able to do so can help us understand the problem space and the AI model, so that we can create more effective and robust solutions.

How to make AI explainable

As with anything in AI, there are few easy answers, but asking how explainable you need your AI to be is a good starting point.

If complete model transparency is vital, then a white-box (as opposed to a black-box) approach is needed. Transparent models which follow simple sets of rules allow us to explain which factors were used to make any decision, and how they were used.
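As an illustration, the Python sketch below trains a shallow decision tree – a classic white-box model – on toy data and prints its learned rules. The maintenance-style feature names and data are illustrative placeholders, and the example assumes scikit-learn is available.

```python
# A minimal sketch of a "white box" model: a shallow decision tree whose
# learned rules can be printed and read directly. The maintenance-style
# features and data below are toy placeholders, not real sensor readings.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["temperature", "vibration", "hours_since_service"]
X = rng.normal(size=(200, 3))                          # toy sensor data
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)          # toy "needs maintenance" label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The whole decision process is visible as a small set of if/else rules.
print(export_text(tree, feature_names=feature_names))
```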

But there are trade-offs. Restricting an AI to simple rules limits its complexity, and with it its ability to solve difficult problems, such as beating world champions at complex games. Where complexity brings greater accuracy, there is a balance to be struck between getting the best possible result and understanding that result.

A compromise may be the ability to get some understanding of particular decisions, without needing to understand how the AI model functions in its entirety. For example, users of an AI which classifies animals in a zoo may want to drill down into how a tiger is classified. This can reveal the information the model uses to identify a tiger (perhaps the stripes, the face, etc.), but not how it classifies other animals, or how it works in general. This allows you to use a complex AI model, but focus down into local models that drive specific outputs where needed, as sketched below.
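By way of illustration, the sketch below keeps a random forest as the black box and uses the LIME library to fit a simple local surrogate around a single ‘tiger’ prediction. The animal features, data and class names are illustrative placeholders, and the example assumes the lime and scikit-learn packages are installed.

```python
# A minimal sketch of a local explanation: the random forest stays a black box,
# while LIME fits a simple surrogate model around one "tiger" prediction.
# Features, data and class names are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["has_stripes", "body_length_cm", "is_nocturnal", "weight_kg"]
X = rng.normal(size=(500, 4))                      # toy animal measurements
y = (X[:, 0] + 0.3 * X[:, 3] > 0.5).astype(int)    # toy "tiger vs other" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["other", "tiger"], mode="classification"
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())   # which features drove this one classification
```

Under the hood, LIME perturbs the input around the chosen instance and fits a simple weighted linear model to the black box’s responses – exactly the idea of a local model driving a specific output.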

Who should AI be explainable to?

There is also the question of ‘explainable to whom?’ Explanations about an animal classifier can be understood by anyone: most people could appreciate that if a husky is being classified as a husky because there is snow in the background, the AI is right for the wrong reasons. But an AI which classifies, say, cancerous tissue would need to be assessed by an expert pathologist. For many AI challenges, such as automating human processes, there will be human experts who can help qualify the explanations.

However, as AI turns to increasingly challenging problems further from human experience, the utility of explanations will surely come into question. Physicist Richard Feynman was once asked to explain why two magnets repel or attract each other and replied, “I can't explain that attraction in terms of anything else that's familiar to you.” Should we expect an easy explanation when AI solves a complex and unfamiliar problem?

What level of explainability is right for you?

As AI expands into every area of our lives, there is growing concern around how explainable its decisions are.

In the early days of mainstream AI, many were satisfied with a black box which gave answers. As AI is used more and more for applications where decisions need to be explainable, the ability to look under the hood of the AI and understand how those decisions are reached will become more important. This needs to be considered from the start as it will inform the design of an AI system.

There is no single definition of explainability: it can be provided at many different levels depending on need and problem complexity. Organisations need to consider issues such as ethics, regulations and customer demand alongside the need for optimisation – in relation to the business problem they are trying to solve – before deciding whether and how their AI decisions should be explainable. Only then can they make informed decisions about the role of explainability when developing their AI systems. If you'd like to learn more, read our guide to maximising the business impact of AI during digital transformation.

Pull Away from the Pack with Our Guide to Maximising AI’s Value