To ensure AI delivers, each project must be approached in a way that maximises its chances of success. In our new whitepaper, The Three Stages of Enterprise AI Success, we discuss three interconnected steps for doing so: build the model, prepare the data, and deploy it correctly into the enterprise.
Here we discuss the third of these: deployment.
Data scientists build models, and data engineers ensure the data is valid. But a model is only useful if it can be used.
Netflix’s film recommendation engine would be no good without its user-friendly website. Similarly, enterprise AI needs to be wrapped in software which can be integrated into the company’s IT or OT (Operational Technology), so it can be deployed into the production environment.
Many enterprises run into difficulty when productionising AI models. Two problems consistently arise: usability and integration with the IT architecture.
Make Sure the AI User Interface Is User-Friendly, or No One Will Use It
The final model needs to be presented with a user-friendly interface, usually through a web page or app, which the user logs into.
A good interface prompts the user through guided decisions. The software runs, collects data from IT systems, executes the model, and presents the resulting insight to the user. The complexity of these ‘guided decisions’ needs to be suited to the user’s knowledge – a movie recommendation app will look very different from a drug discovery platform.
Trust is also critical for usability. In some cases (such as a model flagging that a train is developing a fault), users can learn to trust AI decisions over time by seeing that results consistently correspond to human expertise or inspection. In others, users need to understand, and even explain, how the model reached its decision. Bankers using risk assessment models to automate lending decisions still need to explain why a loan was rejected. In such cases, the software must be able to capture what data was used, its provenance, and how the model weighted different inputs in reaching its conclusion, then report on the conclusion in clear language.
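As a minimal sketch of what such an audit trail might look like, the snippet below records each decision alongside the inputs used, where each input came from, and how it contributed to the outcome, then renders a plain-language explanation. All names (the record fields, the model version, the lending features) are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Audit record for one model decision (illustrative field names)."""
    model_version: str
    inputs: dict       # feature name -> value the model actually saw
    provenance: dict   # feature name -> source system the value came from
    weights: dict      # feature name -> contribution to the final score
    outcome: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def explain(self) -> str:
        """Report the decision in clear language, strongest factors first."""
        ranked = sorted(self.weights.items(), key=lambda kv: abs(kv[1]), reverse=True)
        lines = [f"Decision: {self.outcome} (model {self.model_version})"]
        for feature, weight in ranked:
            lines.append(
                f"- {feature} = {self.inputs[feature]} "
                f"(source: {self.provenance[feature]}, contribution: {weight:+.2f})"
            )
        return "\n".join(lines)

# Hypothetical lending decision, for illustration only.
record = DecisionRecord(
    model_version="risk-model-1.4",
    inputs={"income": 32000, "debt_ratio": 0.62},
    provenance={"income": "payroll feed", "debt_ratio": "credit bureau"},
    weights={"income": 0.30, "debt_ratio": -0.85},
    outcome="loan rejected",
)
print(record.explain())
```

In a real system the weights would come from the model itself (or from an explanation technique layered on top of it); the point here is simply that inputs, provenance, and contributions are captured at decision time, not reconstructed afterwards.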
Did You Build an AI that Actually Integrates into Your Company’s IT System?
Models built by data scientists often use cutting-edge tools and languages not familiar to the enterprise. A small flexible data science unit can embrace new technologies with more ease than corporate IT, and is more likely to prefer programming languages designed for data science. Making models work with existing IT is therefore a challenge.
One way to overcome this is to require data science teams to build models which integrate. Cloud environments such as Microsoft Azure and AWS can be set up to reflect the enterprise's infrastructure and provide common toolkits which integrate easily. This allows models to be built in a simulated enterprise environment, so they can be transferred smoothly over to the real one. This needs to be planned in advance, not once the model is ready.
However, these tools come at the cost of flexibility. More complex models require more sophisticated data science tools, leaving them in a format which doesn't naturally integrate. The solution is usually 'containerisation': wrapping models in software ('containers') which translates incoming and outgoing data into a common format. The model then runs in isolation in the container but slots into the wider IT ecosystem.
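The translation layer at the container boundary can be sketched in a few lines. In this hypothetical example, downstream IT systems speak JSON, while the model (a stand-in function here) expects its own native input format; the wrapper converts in both directions so the model can stay isolated. The payload shape and model name are assumptions for illustration.

```python
import json

def model_predict(features):
    """Stand-in for a model built with specialist tooling.
    Expects a plain list of floats; returns a raw score."""
    return sum(features) / len(features)

def handle_request(payload: str) -> str:
    """Container boundary: accept the enterprise's common JSON format,
    translate it into the model's native input, run the model in
    isolation, and translate the raw output back into JSON."""
    request = json.loads(payload)
    features = [float(v) for v in request["features"]]
    score = model_predict(features)
    return json.dumps({"model": "demo-model", "score": round(score, 3)})

print(handle_request('{"features": [0.2, 0.4, 0.9]}'))
```

In practice this boundary is typically an HTTP or message-queue interface on a Docker or similar container, but the principle is the same: the wrapper, not the model, owns the enterprise data format.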
Models also need to be allocated sufficient resources. Models vary in power and compute demands. A drug discovery model may process petabytes of data from libraries once per month, whilst a train fault prediction model may process a continuous stream of data from remote sensors. The former will require dynamic cloud-based storage and real-time access to scalable levels of compute power, whilst the latter can run on more stable infrastructure. Security and regulatory compliance around where data is stored and processed must, of course, also be considered.
Do the Data Team and IT Get Along?
Whilst data and IT are often spoken of in the same breath, they can make strange bedfellows. Enterprise IT involves numerous portfolios of projects requiring detailed, long-term planning, and following established processes and standards. This doesn't always suit nimble and constantly evolving AI and data-based workstreams, and data science projects often fail to integrate as a result. More communication between the two groups early on, and a greater understanding on both sides of the other's challenges, would go a long way towards avoiding these problems.
This article is part of our 'Three Stages of Enterprise AI Success' series. Download the full whitepaper here.