COVID-19 Models 4: Deploying Robust Models At Scale

    Dr Matt Jones

    What makes a successful model?

For a model to be successful it must be reliable, accurate, trusted, scalable, and fit for purpose in the real world. Get it wrong and you undermine any potential that your AI proof of concept has previously shown.

Building intelligent systems that perform at scale is not as simple as writing an application and releasing it as a download. Robust solutions demand the right infrastructure, in place to support the system's operation and to deliver results to the people who need them most, at the right time.

A simple example involves engineering the prototype model into a fully functional, robust software solution and integrating it into the end-user ecosystem, be it a phone or web app, a website or bot, or embedded within a dedicated medical device or diagnostic instrument.

If all goes well with our strawman, the user is presented with a clear interface that is simple to understand and work with. The relevant data is collected: location or symptoms in the case of a disease-tracking app, or desired pharmacological properties entered by a scientist for a series of drug-like molecules in the case of a candidate drug discovery platform.

The model receives this information, runs, and presents the resulting insight to the user. It sounds simple, yet this is where a lot of proof of concept models fail to reach their intended potential. And remember, even the most effective model in a test environment adds no value unless it is in active use.

Virologists and epidemiologists, and even data scientists, are not often professional software engineers. Those building models don't always appreciate the rules and complexities of enterprise architectures, or of public and government infrastructure and the governance that surrounds their use. There is often a mismatch in expectations and language between the domain, modelling, and IT functions.

    Given these challenges, how can we quickly productionise trusted, scalable, robust AI solutions into everyday use?

Developing models into real-world solutions

Models built by data scientists may use cutting-edge tools and languages unfamiliar to the enterprise at large. This mismatch is especially prevalent in businesses still maturing digitally: a smaller, more agile data science unit can embrace new technologies far more easily than corporate IT, and is more likely to favour programming languages designed for data science. Making those models work with existing IT infrastructure, support teams, and processes is therefore a challenge.

One way to overcome this is to require, from the start of the project, that data science teams build models which integrate. Cloud environments such as Microsoft Azure and AWS can be set up to mirror the enterprise's infrastructure and provide common toolkits which integrate easily. Where this is possible, it is a big time-saver.

However, complex models may demand more sophisticated data science tools, leaving them in a format which doesn't naturally integrate. The usual solution is 'containerisation': wrapping each model in software (a 'container') which translates incoming and outgoing data into a common format. The model then runs in isolation inside its container but slots into the wider IT ecosystem.
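
As a minimal illustration of the containerisation pattern, the sketch below (Python, using the Flask microframework; the model file, endpoint, and JSON field names are hypothetical rather than from any specific deployment) wraps a trained model in a small web service that speaks a common JSON format, ready to be packaged as a container image:

```python
# app.py: a minimal containerisable wrapper around a trained model.
# Hypothetical assumptions: the model is serialised in model.pkl and
# exposes a scikit-learn style predict() over rows of features.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Incoming data arrives in the common format agreed with the wider
    # IT ecosystem, e.g. {"instances": [[...], [...]]}.
    payload = request.get_json(force=True)
    predictions = model.predict(payload["instances"])
    # Translate the model's output back into the common format.
    return jsonify({"predictions": [float(p) for p in predictions]})

if __name__ == "__main__":
    # Inside a container this would sit behind a production WSGI
    # server; Flask's built-in development server is used for brevity.
    app.run(host="0.0.0.0", port=8080)
```

Packaged into a standard container image, such a service runs in isolation yet presents the same interface as any other component in the estate.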

Models vary in power and compute demands. A drug discovery model may process petabytes of library data once a month, whilst a track and trace app may process a continuous stream of data from millions of devices. Compute resources must be allocated to match these demands, or deployment will slow and early users may be alienated. Data security and regulatory compliance surrounding these systems must also be considered and managed; again, the effort involved must not be underestimated.

    It doesn't end with deployment

    Slotting the software into the IT systems is not the end of the story. Models need ongoing maintenance and support to ensure they keep working and improving. This is often specific to the model so cannot simply be left to the IT helpdesk. Post-deployment monitoring should cover:

• Retraining and modifications: New data sources need to be continually identified and efficiently pipelined into the model, especially during a crisis in which understanding improves week by week. Disease spread models need updating as more is learned about modes of transmission; clinical trial prediction models need their uncertainties replaced with real data as it is collected throughout the trial.
• Spotting and responding to errors: If a model starts to deliver unexpected or incorrect results, someone needs to be able to spot it and intervene immediately, and this should be a human who understands the system from the ground up. We call this 'the human backstop'. Standard Operating Procedures need to be developed and in place for continuous feedback on model outputs, assessing them with human expertise and explainability tools so that corrective action can be taken immediately and trust and confidence are maintained in the user communities (a minimal monitoring sketch follows this list).

• Responding to evolving threats: Even at times of national emergency, there will be criminals and activists looking for ways to attack models, whether to gather sensitive information or to disrupt their operation. Those responsible for intelligent models, particularly public-facing ones, need processes and technology in place to detect confounding inputs, whether malicious or accidental.
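
To make these monitoring ideas concrete, here is a minimal sketch (Python; the reference statistics, thresholds, plausible input range, and alert() hook are all hypothetical placeholders) of an output drift check and a confounding-input check feeding a human backstop:

```python
# monitor.py: a minimal post-deployment monitoring sketch.
# All constants below are hypothetical; a real system would derive
# them from validation data and route alerts to an on-call team.
import statistics

REFERENCE_MEAN = 0.12        # expected mean model output (validation)
REFERENCE_STD = 0.05         # expected spread of model outputs
DRIFT_TOLERANCE = 3.0        # standard deviations counted as drift
INPUT_RANGE = (0.0, 120.0)   # plausible range for each input feature

def alert(message: str) -> None:
    # Placeholder: in production this would page the human backstop.
    print(f"ALERT: {message}")

def check_inputs(features: list[float]) -> bool:
    """Reject confounding inputs, malicious or accidental, before scoring."""
    lo, hi = INPUT_RANGE
    if any(not lo <= x <= hi for x in features):
        alert(f"Out-of-range input rejected: {features}")
        return False
    return True

def check_outputs(recent_outputs: list[float]) -> None:
    """Flag recent outputs drifting away from the reference distribution."""
    mean = statistics.mean(recent_outputs)
    if abs(mean - REFERENCE_MEAN) > DRIFT_TOLERANCE * REFERENCE_STD:
        alert(f"Possible drift: recent mean {mean:.3f} vs "
              f"reference {REFERENCE_MEAN:.3f}; human review needed")
```

Real deployments would replace these simple thresholds with proper statistical drift tests and explainability tooling, but the shape, automated checks escalating to a human who understands the system, stays the same.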
