How is Trust in AI Undermined and What Can We Do About It?

    Dr Matt Jones

    Companies are starting to get the hang of building AI. But building a good model isn't the same as building an AI that works on real-world data and is trusted by users. We need to move our focus beyond model accuracy, to AI designed for real-world use.

    Trust can be undermined at various stages of AI development and deployment. We'll look at some of those stages here, and propose an approach to building AIs that mitigates these risks.

    How AI design and deployment can undermine trust

    • Bias in training data: AIs learn to reflect bias in their training data, which in turn reflects bias in the real world. Unconscious gender or racial bias has often hit the headlines, created by using AI to automate processes without understanding the data's limits. Prejudice is the nasty face of this, but bias can extend to misplaced assumptions by scientists, doctors recording incorrect diagnoses, and even writing style.
    • Badly curated data: Data can also be mislabelled or poorly curated, leaving the AI struggling to make sense of it. If data is not appropriately selected, the model will not learn how to reach the right conclusions, and if its conclusions seem suspect, people won't trust the AI. A basic audit covering both of these data problems is sketched after this list.
    • User interface and explainability: Trust is undermined when the AI is complex or frustrating to interact with. If users don't feel they can input the information they want, they'll be suspicious of the result. If the interface is overly complex, or the results are presented in a confusing way or with no explanation of how they were reached, the AI will be abandoned.
    • Bias in the real world: Many AIs continue to learn post-deployment, but aren't prepared for the complexities of real-world data. Famously, Microsoft's Tay, an artificially intelligent chatbot, was designed to learn from its interactions with real people on Twitter. Within 24 hours Tay was withdrawn for spreading deeply offensive opinions.
    • Lack of transparency: Sitting above all these issues is a fear fed by AI's lack of transparency. Not only do end-users not understand how AIs make their decisions; in many cases, neither do their makers.
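
    Both of the data problems above lend themselves to mechanical checks before a model is ever trained. Below is a minimal sketch, assuming a tabular dataset with a hypothetical `gender` attribute and a binary `approved` label (both names invented for illustration); it compares each group's share of the data, its outcome rate, and how many of its labels are missing:

    ```python
    import pandas as pd

    def audit_group_balance(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
        """Summarise how each group is represented and labelled in the data."""
        grouped = df.groupby(group_col)[label_col]
        return pd.DataFrame({
            "share_of_rows": grouped.size() / len(df),           # representation
            "positive_rate": grouped.mean(),                     # outcome rate (NaN labels skipped)
            "missing_labels": grouped.apply(lambda s: s.isna().sum()),
        })

    # Hypothetical loan-approval data, invented purely for illustration.
    data = pd.DataFrame({
        "gender":   ["F", "F", "F", "M", "M", "M", "M", "M"],
        "approved": [0, 0, 1, 1, 1, 1, 0, 1],
    })
    print(audit_group_balance(data, group_col="gender", label_col="approved"))
    ```

    Gaps surfaced this way aren't proof of bias on their own, but they show where to investigate collection and curation before a model bakes the skew in.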

    A framework for building and deploying trusted AI

    Despite these risks, AI delivers huge value when done well, and it's often done very well.

    AI must be designed as a whole product, with a set of support services around it that allow the user to trust its outputs. Doing so requires a rigorous approach to AI development. Here are our five key parameters for creating trusted AI.

    1. Assured: Trusted AIs must use a well-designed model, and be trained and tested on data that is proven to be accurate, complete, from trusted sources, and free from bias. Capturing that data requires rigorous processes around data collection and curation.
    2. Explainable: A recommendation is much more useful if you understand how and why it was made. A good AI will have tools to analyse what data was used, its provenance, and how the model weighted different inputs, then report on that analysis in clear language appropriate to the user's expertise. A minimal sketch of one such technique follows this list.
    3. Human: A trusted AI is intuitive to use. An intuitive interface, consistently good recommendations, and easy-to-understand decisions all help the user come to trust it over time.
    4. Legal and ethical: A trusted AI should reach decisions that are fair and impartial, with privacy and ethical concerns given equal weight to accuracy.
    5. Performant: A trusted AI continues to work after deployment. Too many AIs work well in a controlled environment but fall over once deployed. Users quickly lose trust in an AI they see making less and less reliable decisions. A performant AI is future-proofed for throughput, accuracy, robustness, and security, balancing raw predictive power with transparent interpretation, whilst remaining aligned to genuine business need. One way to watch for degradation in production is sketched below.
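
    On the "explainable" point, even a simple model-agnostic technique can turn a bare prediction into something a user can interrogate. Here is a hedged sketch using scikit-learn's permutation importance on invented credit-scoring features (the feature names and data are assumptions for illustration, not a real system):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Hypothetical credit-scoring features, generated purely for illustration.
    feature_names = ["income", "debt_ratio", "years_at_address"]
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Permutation importance: how much does scrambling each input hurt accuracy?
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, drop in sorted(zip(feature_names, result.importances_mean),
                             key=lambda pair: -pair[1]):
        print(f"{name}: accuracy falls by {drop:.3f} when this input is scrambled")
    ```

    Reporting "accuracy falls when income is scrambled" is far closer to "clear language appropriate to the user's expertise" than handing over a raw weight vector.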

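    On the "performant" point, degradation is usually gradual: the model stays the same while the world drifts away from its training data. As a sketch of what post-deployment vigilance can look like (the threshold and data here are illustrative assumptions, not recommendations), a two-sample Kolmogorov-Smirnov test can flag when a live input feature stops resembling its training distribution:

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    P_VALUE_ALERT = 0.01  # illustrative threshold; tune per deployment

    def input_has_drifted(train_values: np.ndarray, live_values: np.ndarray) -> bool:
        """Flag when live inputs no longer look drawn from the training distribution."""
        _, p_value = ks_2samp(train_values, live_values)
        return p_value < P_VALUE_ALERT

    # Simulated example: live traffic whose mean has quietly shifted.
    rng = np.random.default_rng(1)
    train = rng.normal(loc=0.0, size=2000)
    live = rng.normal(loc=0.4, size=500)

    if input_has_drifted(train, live):
        print("Input drift detected: investigate and retrain before trusting outputs.")
    ```
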
    Past failings of the technology have undermined trust in AI. Merely meeting these criteria doesn't guarantee acceptance of the technology, but it's the first step in helping users trust AI.
