How Trust in AI Affects Success as Much as Accuracy

    Dr Matt Jones

    Topics:

    AI

    AI success depends on trust as much as accuracy.

    Many data scientists focus on making their models as accurate as possible. The entirely reasonable belief is that the more accurate a model is, and the higher its success rate at getting the right answers, the more people will trust it.

    After all, if we are told an AI gets things right 10% more often than a human, we should trust the AI, right? Ask yourself this: would you get on the maiden flight of a plane flown only by an AI? What if you were told it had been shown in trials to be 10% safer than human pilots?

    Most people wouldn't. Most would want - at the very least - to see it work many times, including under highly challenging conditions, before they would put their life in its hands.

    If the initial tests are right and it is indeed 10% safer than human pilots, then there is no actual difference in risk between the first flight and the millionth. But by the millionth flight it has built trust, and more people will happily use it. The technical success rate is the same; the value to users is significantly different.


    AI is a technical concept. Trust is a human one.

    Whether we trust something comes down to how we answer a wide range of questions, both technical and emotional.

    Is the claimed accuracy good enough? Do we believe the claimed accuracy? Are we sure the input data was interpreted correctly? Are we sure the test data used to validate the model was properly selected? If accuracy is not 100%, in which cases should we be wary? And what should we do about that instinctive feeling that something is wrong, even though we can't quite put our finger on it?

    These are complex questions. Trust is certainly undermined by low accuracy, but high accuracy alone does not guarantee trust. An AI can be 99% accurate, yet so complex and confusing - or so new - that no one trusts it. Many companies have implemented well-designed AI models, but users still struggle to trust the results.

    To build trust, we need comprehensible evidence that an AI works, and an explanation of how and why it works. Even then, trust may take time to earn, all the more so when the stakes of acting on its output are high.

    AI systems must be designed and planned so that people can understand them and learn to trust them. That means good design, but also managing expectations around what AI can and can't do, and rolling out a complex technology in a way that aligns with how people learn. Even after all that, the question remains - will users trust your AI?
