COVID-19 Models 3: How to Ensure Your Answers Are Trusted

    David Hughes

    The importance of trust

    As you design and build your model, you need to be continuously asking an all-important question: will the intended users trust it?

    A trusted model is one that people are happy and able to use with confidence. It's a model that gives results that users understand and accept, that's easy to use, and that doesn't raise undue privacy, legal, or ethical issues.

    Accuracy feeds into trust, but trust is a much broader concept. Many excellent models come undone because users don't feel comfortable using them.

    Trust and COVID-19

    Trust is always critical, but COVID-19 raises new issues and shifts the nuances. People are desperate for answers and may lower their initial threshold for trust, downloading tracing apps or leaping on promising trial results. This is dangerous. If that trust is subsequently shown to be misplaced, it can derail a project part way through and undermine a potentially valuable solution. It's better to bake in trust from the start.

    Trust can be undermined in all sorts of ways. Track and trace apps provide an obvious illustration of why trust matters where mass public acceptance is vital, but trust is just as important for tools designed for expert users, such as diagnostic or drug discovery platforms. 

    The most obvious trust issue arises when a model's output is wrong or unreliable, or when the model is not honest about its limitations. The world is watching the companies developing new therapeutic and diagnostic tools. If data or models are not properly validated and checked before their results are released, they may mislead the public, potential buyers, and shareholders, undermining trust in you, your capability, and your business in the longer term.

    Explainability is a related problem. If people get alerts that they might be infected by COVID-19, they want to know how that decision was reached. If they can clearly see they spent an hour talking to an infected person, they are likely to take the result seriously and isolate. If they get an alert with no context, they may assume that it’s based on a fleeting encounter with a stranger, an error, or ‘just being overly cautious’ and decide it’s easier to ignore it.

    Over-complicated or frustrating user interfaces also undermine trust. Reports that the UK contact tracing app will drain battery life and not function as well on older phones and devices will turn the public away, even if the system infrastructure and data analytics are secure and highly accurate.

    Privacy concerns also lead to low uptake. Many patients and consumers won't give up valuable personal data, even data that could help study or monitor COVID-19, if they think it will be used in ways they are not comfortable with: sold to or used by third parties for other, non-public-health purposes, not kept under adequate security, lost, or accessed by unauthorised attackers.

    Ethical or legal issues can also cause a project to come undone. A number of AI tools, from recruitment apps to facial recognition, have shown racial or gender bias due to bias in their training data. We are regrettably seeing that the virus disproportionately affects certain ethnic and social groups. If this skew in the data feeds into model development and training, we could end up with diagnostic tools that learn to spot markers of ethnicity rather than markers of disease. Racially biased misdiagnosis is a sure-fire way to get a project shelved quickly.
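
    As a minimal sketch of the kind of check this implies (the file name, column names, and threshold below are purely illustrative assumptions, not from any real project), a quick audit of how examples and positive labels are distributed across groups can surface this sort of skew before any training begins:

    import pandas as pd

    # Hypothetical training data: one row per patient, with a demographic
    # group column and the diagnostic label the model will learn to predict.
    df = pd.read_csv("training_data.csv")  # assumed columns: "ethnic_group", "diagnosis"

    # How many examples exist per group, and how often is each group labelled
    # positive? Large gaps here warn that a model could learn group membership
    # as a proxy for the disease.
    summary = df.groupby("ethnic_group")["diagnosis"].agg(
        n_examples="count",
        positive_rate="mean",
    )
    print(summary)

    # Flag groups that are badly under-represented relative to the largest group.
    min_fraction = 0.10  # illustrative threshold only
    largest = summary["n_examples"].max()
    under_represented = summary[summary["n_examples"] < min_fraction * largest]
    if not under_represented.empty:
        print("Warning: under-represented groups:")
        print(under_represented)

    Checks like this do not remove bias on their own, but they make the skew visible early, while it is still cheap to gather more representative data.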

    A framework for building and deploying trusted models

    These are all risks that can be overcome through good practice in model development, ensuring models not only work first time but are also widely and correctly used. Here are our five key parameters for creating trusted AI, taken from our Trusted AI framework.

    1. Assured: Trusted AIs must use a well-designed model, and be trained and tested on data that is proven to be accurate, complete, from trusted sources, and free from bias. Capturing that data requires rigorous processes around data collection and curation, as we discuss in the first article of this series.

    2. Explainable: A recommendation is much more useful if you understand how and why it was made. A good AI will have tools to analyse what data was used, its provenance, and how the model weighted different inputs, then explain its conclusions in clear language appropriate to the user's expertise (a simple illustration follows this list).

    3. Human: A trusted AI is intuitive to use. An intuitive interface, consistently good recommendations, and easy-to-understand decisions, all help the user come to trust it over time. The complexity of the interface needs to be suited to the user’s knowledge; a track and trace app will look very different from a drug discovery platform.

    4. Legal and ethical: A trusted AI should reach decisions that are fair and impartial, meeting data protection regulations and giving privacy and ethical concerns equal weight to predictive power.

    5. Performant: A trusted AI continues to work after deployment. A performant AI considers future throughput of data, accuracy, robustness, and security, whilst remaining aligned to genuine business or policy needs.
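
    As a rough illustration of point 2 (this is not our specific tooling, and the model and data below are toy stand-ins), permutation importance is one widely used way to report which inputs a trained model actually relied on:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Toy stand-in for a diagnostic dataset: five input features, binary label.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance: how much does held-out accuracy drop when each
    # input is shuffled? Larger drops mean the model leaned on that input more.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    feature_names = [f"feature_{i}" for i in range(X.shape[1])]
    ranked = sorted(zip(feature_names, result.importances_mean), key=lambda pair: -pair[1])
    for name, importance in ranked:
        print(f"{name}: mean drop in accuracy = {importance:.3f}")

    Translating numbers like these into plain-language statements ("your alert was driven mainly by a long, close contact") is what turns an explainability tool into explanations users can act on.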

    For more information, our whitepaper on Trusted AI explains the challenges and offers a more detailed look at this framework.

    A final key point, which is critical for trust and usability, is designing models that are deployable and scalable. We'll discuss this in our final article in this series.
