What Makes Us Trust The COVID-19 Models?

    Dr Matt Jones

    Topics:

    AI, Data Science

    Trust in epidemiological models holds important lessons for AI and data science models everywhere.

    In March, the UK government announced the strictest restrictions on freedom since the Second World War. Their decision was based on predictions from a mathematical model by Imperial College London that, without such measures, half a million people would die.

    Many similar complex models are informing policy around the world. Few non-experts understand the workings of these models, or their underlying biological and behavioural assumptions. No one thinks they are perfect. Yet a YouGov poll showed 93% support for the resulting government measures (us among them).

    This raises an important question: why?

    Why have governments, let alone the public, put so much trust in complex mathematical models?

    Our lives are full of models. Some we trust, some we don’t. We use Google Maps’ models to plan the best way home, but are sceptical of models that drive cars. Over the coming years, we will take more and more decisions based on models, from choosing our diet, to designing new chemical formulations, to planning industrial maintenance.

    We need to be able to trust them. It is therefore worth looking at what factors made the COVID-19 models so well trusted.

    1. Accurate

    It’s clearly important that models are well designed for accuracy, and that the data being fed in is reliable. The world’s leading epidemiologists should be on top of this, though the same certainly cannot be said of all modellers.

    Most users cannot judge how accurate the models are until their predictions have come to pass, or otherwise. Nonetheless, these models have gained huge amounts of trust from non-expert users. Clearly more than technical excellence is at work in achieving this trust.
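
    Accuracy can, however, be checked retrospectively once outcomes are known. Below is a minimal sketch (in Python) of one such check, scoring past predictions against subsequent observations with a mean absolute percentage error; the case numbers are purely illustrative, not taken from any real model.

        import numpy as np

        def mean_absolute_percentage_error(predicted, observed):
            """Score past predictions against what actually happened."""
            predicted, observed = np.asarray(predicted), np.asarray(observed)
            return float(np.mean(np.abs((observed - predicted) / observed)) * 100)

        # Illustrative numbers only: weekly predicted cases vs. later observations.
        predicted_cases = [1200, 2500, 4800, 9000]
        observed_cases = [1100, 2700, 5200, 8100]

        print(f"MAPE: {mean_absolute_percentage_error(predicted_cases, observed_cases):.1f}%")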

    2. Explainable

    Models earn trust by backing recommendations with transparent explanations.

    Imperial created a lay summary of its recommendations to the government. Critically, this wasn’t just instructions. It clearly explained assumptions around the speed of infection spread, and health and mortality risk. It showed the health burden if this went unchecked, and how that changed under various scenarios. It did not dictate; it provided clear information to enable the government to take more informed decisions.

    Publishing the models, as was done in the Netherlands and New Zealand, would allow even greater explainability and trust, as independent researchers could validate them and offer supplementary advice.

    Those acting on a model’s advice do not usually need to understand its inner workings, but they do need to understand how it reached its recommendations. A good model should explain what data was used, its provenance, and how the model weighted different inputs.

    This should be reported in the appropriate language. What that looks like depends on the complexity and seriousness of the resulting decision, and the expertise of the user. It can vary from a few words, to a detailed analysis, to a human expert decoding results.
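
    Below is a minimal sketch of one way such a weighting report might be produced for a data science model, using scikit-learn’s permutation importance; the feature names and data are hypothetical stand-ins, not inputs to any real epidemiological model.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.inspection import permutation_importance

        rng = np.random.default_rng(0)
        feature_names = ["contact_rate", "incubation_days", "compliance"]  # hypothetical inputs
        X = rng.random((200, 3))
        y = 3.0 * X[:, 0] - X[:, 2] + rng.normal(0, 0.1, 200)  # synthetic outcome

        model = RandomForestRegressor(random_state=0).fit(X, y)
        result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

        # Report how the model weighted different inputs, ordered by influence.
        for i in result.importances_mean.argsort()[::-1]:
            print(f"{feature_names[i]}: importance {result.importances_mean[i]:.2f}")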

    3. Human

    A trusted model must be intuitive for users.

    In the case of COVID-19, those interpreting the models were involved in designing them. They understand the model and the nature of disease spread, so can analyse results from a position of knowledge, and turn these into recommendations.

    Not everyone has the luxury of building their own models. Those who build them for others need to understand the limits of user knowledge. The interface must be suited to that user base, which will involve a trade-off between functionality and simplicity.

    This online epidemic calculator, for example, offers a simplified interface that allows users to play with variables and see how they affect disease spread. Imperial’s model allows far more complex control of variables. At the other end of the spectrum, Netflix’s film recommendation engine offers limited control of input variables, but is hard to misunderstand.
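
    Below is a minimal sketch of the kind of model behind such calculators: a basic SIR (susceptible-infected-recovered) simulation exposing just a couple of user-adjustable inputs. The parameter values are illustrative, not calibrated to COVID-19.

        def simulate_sir(r0=2.5, infectious_days=7, population=1_000_000, days=180):
            """Basic SIR epidemic model; returns the peak number infected at once."""
            beta = r0 / infectious_days   # transmission rate per day
            gamma = 1 / infectious_days   # recovery rate per day
            s, i, r = population - 1, 1, 0
            peak = 0
            for _ in range(days):
                new_infections = beta * s * i / population
                recoveries = gamma * i
                s -= new_infections
                i += new_infections - recoveries
                r += recoveries
                peak = max(peak, i)
            return peak

        # "Playing with variables": halving R0 dramatically flattens the peak.
        print(f"Peak infected at R0=2.5:  {simulate_sir(2.5):,.0f}")
        print(f"Peak infected at R0=1.25: {simulate_sir(1.25):,.0f}")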

    4. Performant

    The model must keep working.

    The COVID-19 models have been regularly updated with new data, reflecting changing social practices and new understanding of the disease, with outcomes reported.

    If a COVID-19 model suddenly showed something counterintuitive, e.g. that isolation was increasing spread, its expert users would sense something was wrong and check the model and the data, rather than immediately change policy.

    But not all models are continually checked by experts. Unfortunately, many models are built in controlled conditions, then launched into a company IT system or as an app. Many see their accuracy drift, or do something strange, once unexpected data appears.

    A performant model is designed to work in the real world, with ongoing governance of incoming data and model performance, checks on outputs, continual improvement, and security.
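
    Below is a minimal sketch of one such governance check: comparing the distribution of incoming data against the training data with a two-sample Kolmogorov-Smirnov test from scipy, and flagging possible drift for human review. The data and threshold are illustrative.

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(1)
        training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # seen at build time
        incoming_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)  # live data has shifted

        statistic, p_value = ks_2samp(training_feature, incoming_feature)
        if p_value < 0.01:
            # Flag for human review rather than acting on predictions blindly.
            print(f"Possible data drift detected (KS statistic {statistic:.3f}).")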

    5. Legal and ethical

    A model must work ethically to earn and maintain trust.

    Data must be collected with consent. If models use personal and health data, it’s important people feel comfortable sharing it.

    Equally, data must be curated in a way that is not biased or unethical. Models analysing populations must be trained on data representative of those populations. Many models, for example, show gender or ethnic bias because their training data reflects wider discrimination in society.
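
    Below is a minimal sketch of one basic check for such bias: comparing a model’s positive prediction rate across groups, using the four-fifths rule of thumb as a threshold. The data is synthetic and the check is illustrative, not a complete fairness audit.

        import pandas as pd

        # Synthetic model outputs: one row per person, with group membership.
        predictions = pd.DataFrame({
            "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
            "approved": [1,   1,   0,   1,   1,   0,   0,   0],
        })

        # Positive prediction rate per group (demographic parity).
        rates = predictions.groupby("group")["approved"].mean()
        print(rates)

        # Four-fifths rule of thumb: flag if any group's rate falls below
        # 80% of the most favoured group's rate.
        if (rates / rates.max()).min() < 0.8:
            print("Warning: possible disparate impact across groups.")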

    Disease spread models don’t face many such issues. But other models that will play a role in the fight against COVID-19, such as those being explored for track and trace apps, will need to get these right if they are to earn the same level of trust.

    Trust, not just accuracy, is critical to the success of data models

    At Tessella, we are used to interpreting and acting on the predictions of models. Our trust comes from knowing that a model was well trained and validated, with good data, by trustworthy authors, and from our expertise in spotting anything that looks odd. Outside our scientific microcosm, it’s clear that many are far less confident acting on model predictions.

    Data scientists are often focussed on model accuracy. But as sophisticated models move out of the hands of experts and into the lives of business users and consumers, we need to focus on how to earn trust in model predictions. Explainability, user-centric design, and ensuring models work in the real world are all key.

    As coronavirus changes the world, we are seeing how important complex models are to informing the most critical of decisions. There is much to learn from how epidemiological models have built trust.

    READ HOW TO BUILD AND DEPLOY TRUSTED AI IN OUR NEW WHITE PAPER.