Can Better Controls Lead to Better Trust in AI?

    Dr Sam Genway

Topics: AI

Whether it's built for forecasting or automation, AI can often outperform humans at the task it was designed for. Can the right controls help users get over their distrust and embrace AI's potential?


The question of user trust in AI runs deep. The activity the AI is involved in, the entity using the AI, and user expectations all play a role in how far people trust their interactions with AI. In this article, we're going to focus on types of AI control (whether built into an application or built around it) and how they can help improve end-user trust in AI.

    Build risk controls into AI model development processes

In the broadest terms, what you're trying to do with an AI will determine how much control you need to earn user trust. To be more specific, consider a risk-based framework.

In a risk-based framework, you identify risks and establish controls to mitigate those risks. The level of control you need is tied to the severity of the risk: AlphaGo requires a lower level of control than an autonomous car because a bad move in Go doesn't put human lives at risk.
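To make that concrete, here is a minimal sketch (in Python, with an invented severity scale and invented control levels) of how a risk-based framework might map the severity of a risk to the level of control required:

```python
# Illustrative sketch only: the severity scale, thresholds, and control levels
# below are assumptions, not a prescribed standard.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    severity: int  # 1 = negligible ... 5 = threat to human life

def required_control(risk: Risk) -> str:
    """Map risk severity to a level of control (thresholds are illustrative)."""
    if risk.severity >= 4:
        return "human approval required for every decision"
    if risk.severity >= 2:
        return "automated guardrails plus periodic human review"
    return "post-hoc monitoring only"

# A game-playing agent vs. an autonomous vehicle controller:
print(required_control(Risk("AlphaGo move selection", severity=1)))
print(required_control(Risk("Autonomous car speed control", severity=5)))
```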


Traditionally, controls are only addressed after development is complete: you get the model to work and then think about how to control it.

    But by considering controls from the outset, you can ensure that your AI model is consistent with the users’ values. Tools such as model interpretability, bias detection, and performance monitoring help maintain oversight throughout development.

In this approach, standards, testing, and controls are embedded into various stages of the analytics model's life cycle, from development through deployment and use. A recent McKinsey article takes the risk of bias as an example of a risk that can be controlled in this way.
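As a purely illustrative example of such a control (not taken from the McKinsey piece), the sketch below computes a simple demographic-parity gap for a hypothetical binary classifier and fails the check if the gap exceeds an assumed tolerance. In practice, a test like this could gate a model's promotion from development to deployment.

```python
# Illustrative bias-detection control for the validation stage of a model's
# life cycle. The data, group encoding, and 0.1 tolerance are assumptions.

import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-prediction rates between two groups (coded 0 and 1)."""
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(rate_a - rate_b)

# Dummy outputs from a hypothetical binary classifier
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_gap(preds, groups)
if gap > 0.1:  # illustrative tolerance
    print(f"Bias check failed (gap = {gap:.2f}); block promotion to deployment.")
else:
    print(f"Bias check passed (gap = {gap:.2f}).")
```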


    By identifying risks and embedding controls into the AI modelling process, you can create a model that better reflects the values of the user. When a user can see that a person, institution, or product shares their values, they're more likely to trust it.

But AI controls shouldn't be static. You need to enable agile change management to keep models in line with ever-changing end-user values.

    Deploying controls independently of AI

Complexity has an impact on trust. People don't trust what they can't understand, but putting a limit on an AI's design complexity undermines its potential. One way to resolve this problem is to create simple systems to monitor the AI. These systems set critical guardrails so that no matter what the AI does, it can't go past a specific point. An AI for an autonomous car, for example, could have a system that checks its top speed: regardless of what the AI thinks is right, it cannot accelerate over 70mph without some kind of human check.

    We trust things that we understand. If the guardrails are simple and reflect human value systems, you create a safe environment for a complex AI to work.
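As a rough sketch of the autonomous-car example above (the driving model, sensor input, and escalation mechanism are all stand-ins), a guardrail of this kind can live entirely outside the AI:

```python
# Minimal guardrail sketch: whatever speed the (hypothetical) driving model
# proposes, a simple, auditable check caps it and escalates to a human.
# The 70 mph limit mirrors the example in the text.

SPEED_LIMIT_MPH = 70.0

def propose_speed_from_model(sensor_data: dict) -> float:
    """Stand-in for a complex driving model; returns a proposed speed in mph."""
    return 82.0  # pretend the model wants to exceed the guardrail

def request_human_check(proposed_mph: float) -> None:
    print(f"Model proposed {proposed_mph} mph; holding at {SPEED_LIMIT_MPH} mph pending human check.")

def apply_guardrail(proposed_mph: float) -> float:
    """Enforce the cap regardless of what the model 'thinks' is right."""
    if proposed_mph > SPEED_LIMIT_MPH:
        request_human_check(proposed_mph)  # e.g. alert a remote operator
        return SPEED_LIMIT_MPH             # never exceed the cap autonomously
    return proposed_mph

print(apply_guardrail(propose_speed_from_model({"lidar": "..."})))
```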

    Create sliding scales of user control

When it comes to AI implementation, you can manage the degree of user control to build trust. Get people used to the AI and integrate it gradually into the organisation.

    How can you do this? Think about your AI like any new employee.

Within an organisation, you typically see some sort of onboarding process. New staff don't just run amok; they're managed closely within a clear hierarchy and given specific tasks to do. AIs need the same oversight. As they prove themselves trustworthy, you step back and give them more freedom.

    • Allow AI to be used in parallel with existing processes

At the first step of implementation, don't replace a process wholesale. Let the AI run in parallel with the manual process and let users compare the old and the new. This gives users room to overcome their own biases and build a more natural relationship with the system.
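One way to set this up is a "shadow mode" deployment, sketched below with placeholder functions: the manual process still drives the decision, while the AI's output is logged next to it so users can compare the two over time.

```python
# Shadow-mode sketch: the manual result is the one acted on, the AI's result
# is only recorded for comparison. Function names are placeholders.

import csv
from datetime import datetime, timezone

def manual_forecast(order_id: str) -> float:
    return 120.0  # placeholder for the existing human-driven process

def ai_forecast(order_id: str) -> float:
    return 117.5  # placeholder for the new model

def shadow_run(order_id: str, log_path: str = "shadow_log.csv") -> float:
    human_value = manual_forecast(order_id)  # this is what the business acts on
    ai_value = ai_forecast(order_id)         # recorded for comparison only
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), order_id, human_value, ai_value]
        )
    return human_value  # the manual process still drives the decision

shadow_run("order-001")
```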

    • Human in the loop

Human-in-the-loop describes a process in which humans intervene to spot-check an AI's outputs and create a continuous feedback loop for improvement. As part of this process, humans label the data that is fed into the algorithms, making various scenarios understandable to machines.

Later, humans also check and evaluate the results or predictions as part of ML model validation. If the results are inaccurate, humans tune the AI so it makes the right predictions.
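A minimal sketch of what that loop might look like in code, assuming a hypothetical classifier, an invented confidence threshold, and a stand-in review step:

```python
# Human-in-the-loop sketch: low-confidence predictions are routed to a human
# reviewer, and corrected labels are stored for the next training round.
# The model, threshold, and review step are all stand-ins.

CONFIDENCE_THRESHOLD = 0.8
feedback_store = []  # corrected examples collected for retraining

def model_predict(text: str) -> tuple[str, float]:
    """Stand-in for an ML model returning (label, confidence)."""
    return ("invoice", 0.55)

def human_review(text: str, suggested_label: str) -> str:
    """Stand-in for a reviewer UI; here we pretend the human corrects the label."""
    return "receipt"

def predict_with_human_in_the_loop(text: str) -> str:
    label, confidence = model_predict(text)
    if confidence < CONFIDENCE_THRESHOLD:
        label = human_review(text, label)     # human spot-checks the output
        feedback_store.append((text, label))  # feeds the improvement loop
    return label

print(predict_with_human_in_the_loop("Total due: £42.00"))
print(f"{len(feedback_store)} corrected example(s) queued for retraining.")
```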

    • Human on the loop

As trust grows, users can hand over more control. In a human-on-the-loop process, a human doesn't need to be involved every time the AI makes a decision; monitoring takes place regularly and a human can intervene when necessary.
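A sketch of that kind of oversight, with an invented metric and alert threshold: the AI acts autonomously, and a periodic monitoring job only pulls a human in when behaviour drifts outside an agreed band.

```python
# Human-on-the-loop sketch: decisions run autonomously, while a periodic
# monitoring job alerts a human only when a metric breaches an agreed limit.
# Metric values and the threshold are illustrative.

recent_error_rates = [0.04, 0.05, 0.06, 0.09, 0.12]  # e.g. daily error rates
ERROR_RATE_ALERT = 0.10

def notify_human(latest: float) -> None:
    print(f"Error rate {latest:.2f} exceeds {ERROR_RATE_ALERT}; flagging for human review.")

def monitor(error_rates: list[float]) -> None:
    latest = error_rates[-1]
    if latest > ERROR_RATE_ALERT:
        notify_human(latest)  # human steps in only when needed
    else:
        print(f"Latest error rate {latest:.2f} within tolerance; no action.")

monitor(recent_error_rates)
```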

    By creating an AI implementation strategy that is sensitive to a user’s capacity for change, you can secure greater success and adoption than rolling it out en masse.

    Trust is less tangible than performance

Creating an AI that works is not enough to get people to trust it. Ultimately, trust comes from associating something with a source you already have confidence in. There is a great deal we take on faith without understanding how it works, because we've grown comfortable with it and because people we trust say it works.

However, building risk controls into development processes and giving users the agency to control their relationship with AI are two steps we can take towards improving users' perceptions of AI.
