The Three Stages of Enterprise AI Success and How to Actually Get Them Right

Tessella


Stage 1: Build your AI

To ensure AI delivers, each project must be approached in a way that maximises its chances of success. In our new whitepaper, The Three Stages of Enterprise AI Success, we discuss three interconnected steps for doing so: build the model, prepare the data, and deploy it correctly into the enterprise. Here we discuss the first of these.

Building an AI model

At the heart of any AI is a mathematical model. Unlike conventional software, models are not a set of defined rules (e.g. if x, then y), but frameworks for learning from specific types of data.

For example, a rail company may have data on power consumption, wheel damage, and other variables. Using this, a model can be built to understand in which situations wheel damage is correlated with changes in power consumption. A good model will also look at other variables and learn to rule out power changes not connected to wheel damage. The model can then spot wheel damage (which is hard to measure without taking the train out of service) based on easy-to-measure changes in power usage.
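To make this concrete, here is a minimal sketch in Python of how such a model might be trained. This is not Tessella's actual approach: the features (mean power draw, power variance, speed, gradient) and the synthetic data are invented purely for illustration.

```python
# Hypothetical sketch: a classifier that flags wheel damage from
# power-consumption features. All features and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000

speed = rng.uniform(40, 120, n)            # km/h
gradient = rng.normal(0, 1, n)             # track gradient
damaged = rng.integers(0, 2, n)            # ground-truth wheel damage (0/1)
# Power draw depends on speed and gradient, plus a shift when wheels are damaged.
power_mean = 50 + 0.4 * speed + 3 * gradient + 4 * damaged + rng.normal(0, 2, n)
power_var = 1 + 0.5 * damaged + rng.normal(0, 0.2, n)

X = np.column_stack([power_mean, power_var, speed, gradient])
X_train, X_test, y_train, y_test = train_test_split(X, damaged, random_state=0)

# Because speed and gradient are included as inputs, the model can learn to
# rule out power changes that they explain, isolating the damage signal.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```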

To develop AI models, we need data scientists. And to ensure AI models deliver against business goals, these data scientists should progress through a series of logical steps. This stage-gated approach allows rapid experimentation to quickly whittle many ideas down to the best ones, and to spot dead ends before costs spiral. This is essentially the famous ‘fail fast’ approach that has been integral to the success of today’s tech giants.

A data science framework for enterprise-ready AI

The data scientists must start by defining the problem, proposing a clear hypothesis for how data can solve it, and checking there is sufficient data available to build and train an accurate model. An example could be: We want to use power consumption data to inspect for wheel damage. We will explore whether there are clear patterns in power consumption that correlate specifically with wheel damage. To do this, we need large enough data sets from trains with and without wheel damage over the same route.
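As a hedged illustration of that sufficiency check, the sketch below counts journeys with and without wheel damage over the same route. The data frame, column names, and threshold are all hypothetical stand-ins for real fleet records.

```python
# Hypothetical check: do we have enough journeys from damaged and
# undamaged trains on the same route to train an accurate model?
import pandas as pd

runs = pd.DataFrame({
    "route_id":     ["R42", "R42", "R42", "R7", "R42"],
    "wheel_damage": [0, 1, 0, 1, 0],
})

# Compare like with like: only journeys over the same route.
same_route = runs[runs["route_id"] == "R42"]
counts = same_route["wheel_damage"].value_counts()

MIN_PER_CLASS = 500  # illustrative threshold; the real figure depends on the model
if counts.min() < MIN_PER_CLASS:
    print("Not enough examples of one class; collect more data first.")
```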

Then they need to check that the data has the attributes necessary to produce the required insights. For example, if the insight is a prediction of future sales, there will need to be data on past sales, and on the factors that have influenced those sales, in order to derive any meaningful predictions.
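A minimal sketch of that attribute check might look like the following; the column names and the history requirement are assumptions for illustration only.

```python
# Hypothetical check: does the data set contain past sales plus the
# factors thought to influence them, with enough history to learn from?
import pandas as pd

sales = pd.DataFrame({
    "month":      pd.date_range("2019-01-01", periods=36, freq="MS"),
    "units_sold": range(36),
    "price":      [9.99] * 36,
    "promotion":  [0, 1] * 18,
})

required = {"month", "units_sold", "price", "promotion"}
missing = required - set(sales.columns)
if missing:
    raise ValueError(f"Cannot derive meaningful predictions; missing: {missing}")

# A forecast also needs enough history to learn from.
span_years = (sales["month"].max() - sales["month"].min()).days / 365
print(f"History available: {span_years:.1f} years")
```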

Next, they must isolate the variables that drive the outcome from those that are incidental or have separate causes. An initial investigation may find that raising the temperature increases the yield of a chemical production process. Is this a direct effect, or is the temperature change driving something else, such as the efficiency of the catalyst? If the latter, it may be much cheaper to add more catalyst than to increase the temperature.
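One common way to probe such a question (a sketch, not a prescription) is to regress yield on temperature alone, then on temperature plus catalyst efficiency, and watch whether temperature’s coefficient collapses once efficiency is included. The data below is synthetic and the effect sizes are invented.

```python
# Hypothetical sketch: separating a direct effect from an indirect one.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
temperature = rng.normal(80, 5, n)
# Assume temperature mainly boosts catalyst efficiency, which drives yield.
efficiency = 0.5 * temperature + rng.normal(0, 1, n)
yield_ = 2.0 * efficiency + rng.normal(0, 1, n)

for X in (temperature.reshape(-1, 1),
          np.column_stack([temperature, efficiency])):
    fit = sm.OLS(yield_, sm.add_constant(X)).fit()
    # Once efficiency is included, temperature's coefficient shrinks towards
    # zero, suggesting its effect on yield runs through the catalyst.
    print(fit.params.round(2))
```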

This focus on causal links between variables and outcomes is vital, but often overlooked by so-called data experts not trained in the scientific method. Too many data projects look only for correlations and assume a causal connection.

Once they have useful data, they identify the right tools. These could be AI or machine learning algorithms, or statistical models. There is no single answer; an experienced data scientist will have seen enough to know which options are best for the job, whether that means adapting previous algorithms or creating new ones.
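As an illustration of that selection process, the sketch below cross-validates two candidate models on the same data and lets the scores guide the choice. The candidates are arbitrary examples, not a recommendation for any particular problem.

```python
# Hypothetical sketch: compare candidate tools via cross-validation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```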

Many AI projects fail because users can’t trust the resulting insights. By bringing a deep understanding of the problem, the data, and the technology, data scientists can build trust into the AI from the start. They can show users that the model works, and explain how it works, rather than simply pointing to the results and asking users to take them on faith.
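One way a creator can show how a model works, rather than just that it works, is to report which inputs the trained model actually relies on. The sketch below uses permutation importance for this; the feature names echo the earlier rail example and are hypothetical.

```python
# Hypothetical sketch: permutation importance ranks which inputs the
# trained model depends on, giving users an explanation to inspect.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["power_mean", "power_var", "speed", "gradient"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```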

Robust models, ready for the real world

The final step is to develop and build the model, producing a proof of concept.

This is not the end of the story. Models need to be rigorously tested: first using curated training data, then on real-world data under test conditions, and finally let loose in a real-world production environment. Throughout this process there must be continued testing and refining to improve the accuracy of outcomes and to modify models as new data becomes available. We will discuss this further in our next article.
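As a rough sketch of what that continued testing can look like once a model is in production, the function below scores incoming real-world batches against a baseline accuracy and flags the model for retraining when it drifts. The threshold, baseline, and data source are all assumptions for illustration.

```python
# Hypothetical sketch: keep scoring live data so accuracy drift is caught.
from sklearn.metrics import accuracy_score

def monitor(model, new_batches, baseline, tolerance=0.05):
    """Flag the model for retraining if live accuracy drifts from baseline.

    new_batches yields (X_batch, y_batch) pairs of labelled production data.
    """
    for X_batch, y_batch in new_batches:
        acc = accuracy_score(y_batch, model.predict(X_batch))
        if acc < baseline - tolerance:
            print(f"Accuracy {acc:.2f} below baseline {baseline:.2f}: retrain")
```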

This article is part of our ‘Three Stages of Enterprise AI Success’ series.

Download the full whitepaper here