Trust Before Intelligence – The New Role of Explainable AI (XAI)

Thomas Titcombe

As AI development has permeated beyond the academic bubble into all aspects of life, there has been a recent trend towards considering the implications, not just the applications, of artificial intelligence. Far from being an academic exercise, the ethics of AI can benefit practitioners: Microsoft's report on AI, Maximising the AI Opportunity, suggests that companies which implement an ethical framework for their use of AI outperform those which do not.

Critical to developing fair, safe systems in which we can place our trust is explainable AI (XAI), the principle of designing systems whose decisions can be understood and interpreted. The core promise of XAI is to lift the lid on the so-called “black box” of machine learning – systems whose internal workings are opaque and unfriendly to human eyes. This is an actively developing research area, but the current lack of robust, universal techniques will not assuage the concerns of those who are looking to begin implementing AI in their operations yet are unsure about the reliability of black-box AI.

However, through a considered approach to data, users and the system, the creation of trustworthy automated systems is not an insurmountable challenge. The black box should not be feared.


Understanding XAI and its potential impact

The current AI boom has been driven by the neural network, a class of model which finely tunes millions of parameters by training on large datasets. The move away from rule-based solutions, which are written explicitly by humans and therefore highly interpretable, and from simpler statistical models which contain fewer parameters or have closed-form solutions, such as linear regression, has brought solutions to increasingly complex problems at the cost of interpretability.
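
To make the trade-off concrete, the sketch below fits a linear regression, whose handful of coefficients can be read off directly, alongside a small neural network, whose behaviour is spread across thousands of weights with no such direct reading. It assumes scikit-learn and an illustrative synthetic dataset, both of which are assumptions made purely for illustration.

```python
# Minimal sketch of the interpretability trade-off, assuming scikit-learn.
# The synthetic dataset and model sizes are illustrative only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)

# A linear model: each coefficient says how much one feature moves the prediction.
linear = LinearRegression().fit(X, y)
print("Linear coefficients:", np.round(linear.coef_, 2))

# A small neural network: it may match or beat the linear model, but its behaviour
# is distributed across thousands of weights with no direct reading.
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X, y)
n_weights = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print("Neural network parameters:", n_weights)
```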

Whilst their performance and success are undeniable, there is hope that developing methods of conceptualising the decision making of neural networks can increase the benefit to both practitioners and users:

  • Without careful data management, models can learn spurious correlations in the data. Tools which visualise the parts of the data that contribute to a result can help to spot such problems during training, saving development time and producing more performant models.
  • Adversarial attacks attempt to confuse a model by subtly altering its input data. Recently, researchers used this technique to trick a self-driving car into changing lanes with nothing more than strategically placed stickers. Idealised explainable systems may help us to understand how decisions are being made and therefore better defend against malicious attacks (a sketch of both of these ideas follows this list).
  • Fostering trust between the public and autonomous systems is crucial to ensuring they are widely adopted. Having processes in place to query an automated decision, as we can a human, can facilitate this.
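
As a rough illustration of the first two points, the sketch below computes a gradient-based saliency map (which input pixels most affect a decision) and then applies an FGSM-style adversarial perturbation to see whether the prediction flips. It assumes PyTorch; the tiny untrained network and random image are stand-ins to show the mechanics, not a real classifier.

```python
# Minimal sketch: gradient saliency plus an FGSM-style adversarial perturbation.
# The tiny untrained CNN and random "image" are stand-ins to show the mechanics only.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in for a real input image
logits = model(image)
predicted = logits.argmax(dim=1)

# Gradient of the loss with respect to the input: large values mark influential pixels.
loss = F.cross_entropy(logits, predicted)
loss.backward()
saliency = image.grad.abs().max(dim=1).values
print("Saliency map shape:", tuple(saliency.shape))  # (1, 32, 32)

# FGSM-style attack: nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
new_prediction = model(adversarial).argmax(dim=1)
print("Prediction changed by the perturbation:", bool(predicted.item() != new_prediction.item()))
```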

The state of XAI and its future

Explainability has been highlighted as an area of crucial importance by several research groups, such as DARPA, Google, IBM and Harvard. Current processes can highlight the aspects of data contributing most to a decision for well-structured problems like image classification. Expanding these tools to a broader suite of problems, and for use by non-experts, is a short-term objective.

How we approach explainable systems is not solely a question of technology, but also of policy. Is it enough to know which parts of the data contribute to a decision, or must we also know why they contribute, what links the model has seen between X and Y, and what its strengths and limits are? At the core of these challenges lie the many unanswered questions about how exactly neural networks learn or why they work as well as they do. Clearly, there is much we still must uncover before XAI can be an industry norm.

It does not necessarily follow, however, that systems which lack full explainability are inherently untrustworthy. On the contrary, we place our trust in humans based on a myriad of factors, some of which we feel innately, not consciously.

Furthermore, the ability to rationalise a decision does not validate it: a recruiter can explain that they are not hiring an applicant due to his or her interview performance, but that may not stop the applicant feeling like race, age, or gender strongly influenced the decision.

The level of trust we need to place in a system, and therefore how explainable it must be, is also fluid. For example, defence systems need to be very sure whether they are detecting a military or civilian aircraft, whereas an automated stockroom would only need to be quite sure about how popular a product will be the next day. AI applied to a relatively benign task is easier to trust than one which manages human lives, because the impact of misplaced trust is far less.

Of course, explainability is not universally accepted as a requirement: Geoffrey Hinton, Turing Award winner and godfather of deep learning, has suggested “…You should regulate [AI systems] based on how they perform. You run the experiments to see if the thing’s biased, or if it is likely to kill fewer people than a person”.

A purely retrospective assessment like this is unlikely to engender widespread support when applied to critical, high-risk systems. However, the end purpose of explainability is to implement safe and fair AI, so systems with lower trust thresholds can be validated through post-hoc analysis, as Hinton suggests. For non-critical systems, knowing that it works matters far more than knowing how it works.

Even with an explainable system, trust and fairness are not guaranteed. The history of AI application abounds with high-profile cases of poorly trained models put into production through human, not technological, failure. In many of these cases, an explainable decision would be no consolation to those wronged by it; problems in the scope and utility of a model must be addressed during the design of the system.

Careful, considered data management must be at the core of any operation to avoid such issues. When human data is involved, great care must be taken not only to represent all users in the data but also not to import the human biases it may contain. It is essential to understand that neural networks are not magic boxes: the quality of the data fed in is repaid tenfold in the quality of the predictions that come out.
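
What that care looks like in practice can start with a simple audit of the training data before any model sees it. The sketch below assumes a pandas DataFrame of historical hiring decisions; the column names and values are entirely hypothetical, and a real audit would cover every sensitive attribute the data might encode.

```python
# Sketch of a pre-training data audit, assuming a pandas DataFrame of historical
# hiring decisions. The column names and values here are hypothetical.
import pandas as pd

applications = pd.DataFrame({
    "gender": ["F", "M", "M", "F", "M", "M", "F", "M"],
    "hired":  [0,   1,   1,   0,   1,   0,   1,   1],
})

# How well is each group represented in the data the model will learn from?
print(applications["gender"].value_counts(normalize=True))

# Do historical outcomes already differ by group? A model trained on this data
# will happily learn and reproduce any such bias.
print(applications.groupby("gender")["hired"].mean())
```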

XAI – Ethics and law

As with industry, the public sector is still finding its feet on AI policy. The need to encourage active development and adoption of autonomous systems must be balanced with assurances of safe implementation, as with any other technology.

The UK Parliament Select Committee on Artificial Intelligence recently produced a comprehensive report on national AI policy. On explainability, it states “…it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life unless it can generate a full and satisfactory explanation for the decisions it will take”.

Similarly, France’s AI for Humanity report outlines explainability as a critical research goal and proposes a group of experts be tasked with analysing algorithms to audit their operation. Importantly, this report also outlines the need to develop tools and processes to facilitate XAI; this suggests that there is an understanding that regulation must be developed in parallel with technology, without which small actors could be regulated out of AI if they cannot afford to develop their own explainable and ethical practices.

Whilst the proposals are currently quite amorphous, they signal an intent to place explainability at the heart of future AI regulation.

What business must understand about XAI

Developing AI which can be trusted does not require state-of-the-art techniques in explainability, nor a complete and thorough ethics framework. However, to imbue trust and fairness, key questions must be asked of a system, its process and its users:

  • How are decisions currently being made? Many experts rely on gut feeling, developed over years of working intimately in their field; a system learning directly from data curated by experts can learn that intuition. For other processes, keeping a human in the loop to build on automated decisions can yield great benefits without compromising its validity or robustness.
  • How will the decision be implemented? The hype surrounding AI can lead to a desire for an implementation over a solution. Some tasks do not require large, complex neural networks to be solved. The most straightforward solutions should always be attempted first.
  • What is the impact of the decision? The level of trust needed in a system depends on its application. For more benign use-cases, less emphasis is required on explainability and robustness. At the heart of development should be the real human impact of the model making an incorrect decision.
  • How is it tested? The common workflow for developing AI systems involves testing a model on an unseen set of data and comparing the results to a baseline. Testing should not stop there. If the system has human users, great care should be taken to understand them. Can everyone use the system as intended? Are sensitive factors such as age, sex, or race unintentionally influencing decisions? Finally, be wary of “too good to be true” results: statistical models will not be perfect. A sketch of such checks follows this list.
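
The sketch below gestures at the last two questions. It assumes scikit-learn and a synthetic dataset with a made-up sensitive attribute: a more complex model is compared against a simple baseline on held-out data, and its decision rates are then broken down by group.

```python
# Sketch of post-training checks, assuming scikit-learn and a synthetic dataset.
# The "sensitive attribute" is randomly generated purely for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))  # stand-in sensitive attribute

X_train, X_test, y_train, y_test, _, group_test = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

# Always compare against the simplest credible baseline first.
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))
print("Model accuracy:   ", accuracy_score(y_test, model.predict(X_test)))

# Testing should not stop at accuracy: do decision rates differ across groups?
decisions = model.predict(X_test)
for g in (0, 1):
    rate = decisions[group_test == g].mean()
    print(f"Positive decision rate for group {g}: {rate:.2f}")
```

A large gap between the groups' decision rates would not prove unfairness on its own, but it is exactly the kind of result that warrants investigation before a system is deployed.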

There is an estimated $15 trillion opportunity in applying AI solutions, which can and should be seized by all industries. Whilst global policy on AI grows from its embryonic state, it is necessary for trust and fairness to be at the centre of development. This can be achieved now. The black box should not be feared.