The 3 Stages of Enterprise AI Success





    The journey to enterprise AI success

    People often tout AI as transformational, but most successes come from specific improvements offered by well-defined AI projects. Examples include helping pharmaceutical researchers discover new drug formulations, or enabling train operators to predict faults long before they happen.

    It’s only the combination of individual AI successes across an enterprise that can truly deliver transformation.

    Not every project succeeds, however – many are costly failures. There are many moving parts to making an AI project successful. After a few years of experimentation, many organizations are now competent at building AI models, but many still struggle to get those models to deliver value to the business.

    To take advantage of this rapidly maturing technology, enterprises need to understand how to run an AI project from start to finish. There are three interconnected steps to achieving a successful AI project:

    • Build the AI
    • Train the AI with the right data
    • Deploy the AI effectively across the enterprise

    Let’s discuss these three stages individually and explore the different skillsets required to make each step a success.

    AI is no longer a ‘Future Technology’ – it’s here, and organizations are using it to deliver real results.

    Stage 1: Building your enterprise AI model

    At the heart of any AI is a mathematical model. Unlike software, models are not a set of defined rules (e.g. if x then y), but frameworks for learning from specific types of data.

    The model is built by AI and data science experts and trained upon large quantities of well-understood data. For example, a rail company may have data on power consumption, wheel damage, and other variables. Using these inputs, a data scientist can build a model to understand the correlation between wheel damage and power consumption changes. A good model will also look at other variables and learn to rule out power changes not connected to wheel damage. The model can then spot wheel damage—which is hard to measure without taking the train out of service—based on easy-to-measure changes in power usage.
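    To make the idea concrete, here is a minimal sketch of the kind of relationship the data scientist is looking for. All values are invented for illustration; a real project would use far richer data and models.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical daily averages: wheel-damage score vs. power drawn (kWh)
damage = [0.1, 0.2, 0.4, 0.5, 0.7, 0.9]
power = [980, 1010, 1060, 1075, 1120, 1160]

print(f"correlation: {pearson_r(damage, power):.2f}")
```

    A strong correlation like this is only the starting point: the data scientist must still rule out power changes with other causes before trusting the signal.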

    The nature of the data and models can vary immensely, from how the human body responds to a drug-like molecule, to the structural integrity of construction materials, to resource estimations for major city infrastructure projects.

    To build a validated AI model that’s capable of delivering trusted results, you need experienced data scientists. Professional data scientists can examine a business problem and deduce the root causes or detailed data-driven insights required to solve it. This requires a scientific approach.

    How to build an enterprise-ready AI model

    To develop their models, data scientists should work through a series of logical steps.

    They must start by defining the problem, proposing a clear hypothesis for how data can solve it, and checking there’s sufficient data available to build and train an accurate model. An example could be:

    We want to use power consumption data to inspect for wheel damage. We will explore whether there are clear data patterns in power consumption that correlate specifically with wheel damage. We need large enough data sets from trains with and without wheel damage over the same route.

    Next, they need to check the data has the necessary attributes to produce the required insights. For example, if an enterprise seeks to predict future sales, there needs to be data available on past sales, and on the factors that influenced those sales, to derive any meaningful predictions.

    Once they’ve checked the data, scientists must then isolate the variables that drive the outcome from incidental variables. For instance, an initial investigation may find that a rise in temperature increases the yield of chemical production. Scientists must ask: Is this a direct effect, or is the temperature change driving something else, such as the efficiency of the catalyst? If the latter, it may be cheaper to add more catalyst rather than increase the temperature.

    This focus on causal links between variables and outcomes is vital, but often overlooked by data experts not trained in the scientific method. Too many data projects look only for correlations and assume a causal connection.

    Having identified the useful data, they must then select the right tools for analysis. These could be AI or machine learning algorithms or statistical models. There’s no single correct solution; an experienced data scientist will have seen enough to know which options are best for the job, whether that’s adapting previous algorithms or creating new ones.

    Finally, the data scientists can begin developing and building the model, testing as they go. Simple models may be tested with well-understood data sets, allowing the AI’s answers to be independently confirmed (comparing the wheel damage prediction with an actual inspection, for example). However, more complex models which detect rare events (e.g. bridge subsidence) or predict scenarios with many variables (e.g. how a drug will perform) will rarely have clean data sets to verify them. In these cases, training and test data must be carefully curated and labeled by experts, and they’ll need to continually test the model, making modifications as new data becomes available.
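    The “testing as they go” step can be sketched with a simple holdout split. The threshold classifier below is a deliberately toy stand-in for whatever model the team actually builds, and the readings are invented.

```python
import random

def train_test_split(rows, test_fraction=0.25, seed=42):
    """Shuffle labeled rows and split them into training and test sets."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]

def fit_threshold(train):
    """Toy 'model': pick the power reading that best separates damaged wheels."""
    candidates = sorted(power for power, _ in train)
    return max(candidates, key=lambda t: sum((p >= t) == d for p, d in train))

def accuracy(threshold, rows):
    return sum((p >= threshold) == d for p, d in rows) / len(rows)

# Hypothetical (power_reading, wheel_damaged) pairs
data = [(980, False), (990, False), (1005, False), (1010, False),
        (1100, True), (1120, True), (1140, True), (1160, True)]

train, test = train_test_split(data)
model = fit_threshold(train)
print(f"held-out accuracy: {accuracy(model, test):.0%}")
```

    Measuring accuracy only on data the model never saw during fitting is what lets the team independently confirm the AI’s answers.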

    This stage-gated approach allows for rapid experimentation, reducing many ideas down to the best ones quickly, and spotting dead-ends before costs spiral. This is essentially the famous ‘fail fast’ approach which has been integral to the success of today’s tech giants.

    Ensuring user trust is baked into your AI design

    The rigorous approach described above is essential to ensuring your AI models can be trusted. A primary reason so many AI projects fail – especially those which use black box AI – is because users can’t trust the insights they produce.

    Imagine a chemist looking for candidate molecules for a new drug. She creates a specification, and the AI generates a hundred candidate compounds. In theory, this is great – she’s quickly narrowed down a billion potential molecules. But it’s only useful if she trusts the result. If she’s not confident she can rely on it, she’ll have to redo much of the AI’s work through other methods.

    By applying the above approach, your AIs will have trust baked in. You can show users that your models work – and explain how they work – by running demonstrations on training data.

    Conversely, a black box AI, or an AI model developed outside the organization, will have been trained on generic data and will require users to trust its results blindly. This may work for simple problems, but the margin of error and lack of user understanding will be too high for complex and high-risk scenarios.

    Checking your AI still works with real-world data

    If you follow the above steps successfully, the output is a proof of concept. But this is far from a final product. A model that works under lab conditions won’t necessarily work in the real world, any more than a drug that’s effective with mice will work with humans.

    This same approach of controlled validation of results and decision-making processes must be applied to operationalized AI. In the real world, new data may highlight flaws in the model or identify changes needed to deal with real-world complexity.

    For example, an AI model trained to recognize the difference between wolves and huskies may turn out to be making the call based on whether there’s snow in the background. This may have worked in the training data set, but the model wouldn’t be able to spot a husky in its natural habitat.

    So, the next step when building an enterprise AI is to monitor, improve, and evolve the solution using real-world data. Gathering and using this data, however, is a whole different ball game, to which we will turn next.

    Stage 2: Training your enterprise AI

    AI classifies data based on the relationships between many different interconnected factors. Unlike traditional software, which follows rules defined by software engineers, AI automatically formulates its rules from the data it’s trained with.

    For example, an AI model fed large numbers of images of different skin rashes can learn to spot each type based on their unique combination of characteristics without being told what a particular rash looks like.

    It can also find new links: a model can be told what pharma research is trying to achieve, then analyze molecule libraries to identify likely candidates without being explicitly trained on what to look for. In some cases, this can lead to approaches that no human would identify.

    NASA used generative algorithms to design an antenna against a set of criteria. The resulting design would never have occurred to a human, but outperformed every alternative the team produced. Similar approaches are being used in drug design.

    AI is also good at isolating complex variables. For example, an AI can model the implications of multiple-drug regimens. For humans, when looking at patients taking multiple drugs, it’s too hard to isolate all factors and conclude that a particular interaction was having an adverse effect. But this is where deep learning shines. With enough data from large populations, AI can spot weak signals that show how and when specific combinations of factors lead to specific outcomes.

    Right now, AI is like the Wild West: lots of promised solutions, little clarity. In some cases, AI is already delivering value, usually in projects where the AI has been purpose-built for the challenge at hand. There are many promising transformational applications of AI, though some may not be feasible for years. And there are other pursuits that are no more than overblown marketing claims.

    In this complex landscape, it’s hard to cut through the noise and understand what AI can really do for an organization. AI has huge potential, but – for now at least – it’s rarely easy to implement and only works with the right data, models, training, and deployment.

    Common problems with enterprise data that’ll confuse your AI

    Data is drawn from disparate places. Common data sources include:

    • temperature sensors
    • machine monitoring devices
    • customer databases
    • mobile health apps
    • and more

    This data is often held in different formats: structured engineering data, Excel spreadsheets, images, notes, video and voice recordings, and so on. With so much data from so many sources in so many formats, all sorts of problems can emerge that data engineers need to resolve. Let’s explore a few of the key problems you’re likely to face.

    Data with inconsistent naming conventions

    A company might run diesel generators on multiple sites, each having many sensors capturing information—energy output, temperature, vibrations, and so on. Each team names each sensor according to their own naming convention. Different units may be used in different regions (feet vs meters) and sensors are often misnamed (‘Temp_1’ mistyped as ‘Temp-1’). The central data team ends up with many streams of inconsistently named data, making it hard to reliably feed them into models.
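    A first line of defense is a normalization layer that maps naming variants and units onto a single convention before data reaches the model. The rules below are hypothetical examples of the kind each enterprise would define for itself.

```python
import re

UNIT_CONVERSIONS = {"ft": lambda v: v * 0.3048}  # e.g. feet -> metres

def normalize_sensor_name(raw_name):
    """Map variants like 'Temp-1' or 'temp 01' onto a canonical 'temp_1'."""
    name = raw_name.strip().lower()
    name = re.sub(r"[\s\-]+", "_", name)     # unify separators
    name = re.sub(r"_0+(\d)", r"_\1", name)  # drop zero padding
    return name

def normalize_reading(name, value, unit=None):
    convert = UNIT_CONVERSIONS.get(unit, lambda v: v)
    return normalize_sensor_name(name), convert(value)

print(normalize_reading("Temp-1", 21.5))         # ('temp_1', 21.5)
print(normalize_reading("Height 01", 10, "ft"))  # name fixed, feet converted
```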

    Data can go missing

    Employees sometimes forget to upload important information or to update databases. Sensors malfunction, or machines are taken out of service, creating gaps in the time series. The 2019 Ethiopian Airlines disaster began when a sensor failed, causing the plane’s automated flight control system to misread what was happening. It took the course of action its programming deemed correct based on the available data, ultimately dooming its passengers.
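    Missing data is easier to handle once it has been found. A simple first step is scanning each time series for silent periods, as in this sketch (the interval and timestamps are invented):

```python
from datetime import datetime, timedelta

def find_gaps(timestamps, expected_interval=timedelta(minutes=5)):
    """Return (start, end) pairs where a feed was silent longer than expected."""
    return [(prev, curr)
            for prev, curr in zip(timestamps, timestamps[1:])
            if curr - prev > expected_interval]

# Hypothetical sensor feed with one outage between 09:10 and 09:45
t0 = datetime(2023, 1, 1, 9, 0)
feed = [t0 + timedelta(minutes=m) for m in (0, 5, 10, 45, 50)]

for start, end in find_gaps(feed):
    print(f"no data between {start:%H:%M} and {end:%H:%M}")
```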

    Data from human decisions will reflect human biases

    Amazon discovered this to its detriment with its AI recruitment tool. The tool was trained on current employee CVs, mostly male, learning that systemic gender differences – from writing style to personal interests – were determinants of a successful hire. The result was an AI that dismissed women as unsuitable for the job.

    Data engineers must work to address these problems if AI is to be deemed trustworthy and reliable.

    The importance of tidying up messy data

    The data engineer’s task is making data usable by AI models. Depending on the data source, this will require building systems to access the data, like APIs which extract data and load it into the desired database, from which the model runs. The data must be cleaned, removing corrupt or inaccurate records, and it must be properly structured and tagged so it conforms to the technical requirements of the target database. This allows the model to interpret it accurately.
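    In practice, the cleaning step often starts with something as simple as rejecting records that are incomplete or whose values cannot be parsed. A minimal sketch, with hypothetical field names:

```python
def clean_records(records, required=("sensor", "value", "timestamp")):
    """Drop corrupt or incomplete records before they reach the model."""
    cleaned = []
    for rec in records:
        if any(rec.get(field) is None for field in required):
            continue  # incomplete record
        try:
            rec = {**rec, "value": float(rec["value"])}
        except (TypeError, ValueError):
            continue  # corrupt value
        cleaned.append(rec)
    return cleaned

raw = [
    {"sensor": "temp_1", "value": "21.5", "timestamp": "2023-01-01T09:00"},
    {"sensor": "temp_1", "value": "ERR", "timestamp": "2023-01-01T09:05"},  # corrupt
    {"sensor": "temp_1", "timestamp": "2023-01-01T09:10"},                  # missing value
]

print(f"{len(clean_records(raw))} of {len(raw)} records kept")
```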

    Once data is flowing, there is a need for agreed naming conventions and consistent data formats. For many organizations, this means considerable changes to existing data collection methods—or a lot of work for data engineers converting it into correct formats. Further modeling can help with this task.

    Faced with 100 differently named temperature sensors, supplementary models can be developed which identify typical features of temperature data. Metadata such as geo-tagging or timestamps can provide markers of consistency, allowing data feeds to be automatically relabeled and fed into the target database in comparable formats. Models can cope with problems such as inconsistent units or missing data, but only if the problem has first been identified and the model trained to deal with it.
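    Such a supplementary model can be as simple as matching each unnamed feed against the typical operating range of known sensor types. The ranges below are invented for illustration; a real classifier would use richer features such as units, metadata, and sampling rates.

```python
# Hypothetical plausible value ranges for known sensor types
SIGNATURES = {
    "temperature_c": (-20, 60),
    "vibration_mm_s": (0, 15),
    "energy_kwh": (100, 5000),
}

def guess_sensor_type(readings):
    """Label an unnamed feed by the first known range containing all its values."""
    lo, hi = min(readings), max(readings)
    for label, (range_lo, range_hi) in SIGNATURES.items():
        if range_lo <= lo and hi <= range_hi:
            return label
    return "unknown"

mystery_feed = [18.2, 19.1, 22.4, 35.0, 21.7]
print(guess_sensor_type(mystery_feed))  # temperature_c
```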

    Although we present separate stages here, each is interlinked. Data scientists rely on data engineers for good data for their models, and data engineers need to work with data scientists to understand likely biases in the data. Both must be involved in continued oversight post-deployment to spot problems or changes in data and retrain models as needed.

    Stage 3: Deploying AI into enterprise IT

    Data scientists build models that create meaningful predictions, and data engineers ensure the data guiding those models is valid. But a model is only useful if it can be used.

    Netflix’s complex film recommendation engine would be useless without its user-friendly website. Similarly, enterprise AI needs to be turned into something usable in the enterprise. The final model needs to be wrapped in software which can be integrated into the company’s IT or OT (Operational Technology), so it can be deployed into the production environment.

    The right software allows users to benefit from AI models while shielding them from the complexity. Many enterprises run into problems when productionizing AI models because too little thought is given to how they will integrate into the business.

    Two problems consistently arise: usability and integration with existing IT architecture.

    User-friendly AI is paramount

    Your final model needs to be presented with a user-friendly interface, usually through a webpage or app which users can log into. A great interface will then prompt users through a series of guided decisions.

    The software then runs, collecting the data from IT systems (as set up by the data engineers), executing the model, and presenting the resulting insights to users. The complexity of this ‘series of guided decisions’ needs to be suited to the user’s knowledge—a movie recommendation app will look very different from a drug discovery platform.

    Trust is also critical for AI usability; users must trust the model to reach the right answer. In some cases, users will learn to trust AI decisions over time by looking at the data and seeing that they correspond to human expertise or subsequent inspection. In others, users will need to understand, and even explain, how the model reached its decision. For example, bankers using risk assessment models to automate lending decisions still need to explain why a loan was rejected.

    In such cases, the software must be able to capture what data was used, its provenance, and how the model weighted different inputs in reaching its conclusion, then report on the conclusion using straightforward language.
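    For a simple linear scoring model, capturing that audit trail can be sketched as follows. The applicant features, weights, and threshold are all invented; the point is that the decision, its drivers, and the inputs used are recorded together.

```python
import json
from datetime import datetime, timezone

def explain_decision(inputs, weights, threshold=0.5):
    """Score an application and record which inputs drove the outcome."""
    contributions = {k: inputs[k] * weights[k] for k in weights}
    score = sum(contributions.values())
    return {
        "decision": "approved" if score >= threshold else "rejected",
        "score": round(score, 3),
        "largest_factor": max(contributions, key=lambda k: abs(contributions[k])),
        "inputs_used": inputs,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical normalized applicant features and model weights
applicant = {"income": 0.4, "debt_ratio": 0.9, "credit_history": 0.3}
weights = {"income": 0.5, "debt_ratio": -0.6, "credit_history": 0.4}

print(json.dumps(explain_decision(applicant, weights), indent=2))
```

    From a record like this, the software can generate a plain-language summary such as “rejected, driven mainly by debt ratio”.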

    Design your AI to integrate with enterprise IT systems

    Models built by data scientists often use cutting-edge tools and languages not familiar to the enterprise. A small data science unit can embrace new technologies with more ease than a corporate IT system and is more likely to prefer programming languages designed for data science.

    Making models work with existing IT is therefore a challenge.

    One way to overcome this is to require data science teams to build models to integrate from the start. For example, cloud environments like Microsoft Azure and AWS can be set up to reflect the enterprise’s infrastructure and provide common toolkits which easily integrate. This allows models to be built in a simulated enterprise environment and be easily transferred to the real one.

    This requires advance planning. However, these tools come at the cost of flexibility: more complex models often require more sophisticated data science tools to reach the proof-of-concept stage, leaving them in a format that doesn’t naturally integrate.

    The solution here is usually ‘containerization’; wrapping models up in software (‘containers’) which translates incoming and outgoing data into a common format. The model then runs in isolation in the container but slots into the wider IT ecosystem. This allows the benefits of complex software without compromising integration.
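    Conceptually, the container exposes a thin translation layer: plain, agreed formats on the outside, the model’s native interface on the inside. A minimal sketch, where the one-line “model” stands in for whatever the data scientists built:

```python
import json

def native_model(features):
    """Stand-in for a model built with specialist tooling; expects a list of floats."""
    return sum(features) / len(features)

def handle_request(raw_json):
    """Container boundary: enterprise JSON in, common JSON format out."""
    payload = json.loads(raw_json)
    features = [float(payload[name]) for name in sorted(payload)]
    return json.dumps({"score": native_model(features), "schema_version": "1.0"})

# The wider IT estate only ever sees plain JSON on both sides
print(handle_request('{"power_kwh": 1100, "vibration": 4.0}'))
```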

    Models also need to be allocated sufficient resources as they can vary in power and compute demands. A drug discovery model may process petabytes of data from libraries once per month, while a train fault prediction model may process a continuous stream of data from remote sensors. The former will require dynamic cloud-based storage and real-time access to scalable levels of compute power, while the latter can run on a more stable infrastructure. Security and regulatory compliance around where data is stored and processed must, of course, also be considered.

    Do data engineers and IT departments even get along?

    While data and IT are spoken of in the same breath, they often make strange bedfellows.

    Enterprise IT involves numerous portfolios and programmes of in-flight projects running concurrently, and it requires detailed, long-term planning following established processes and industry standards. This doesn’t always suit nimble, constantly evolving AI and data workstreams. Partly as a result, data science projects still have high failure rates.

    More communication between the two groups early on – and a greater understanding on each side of the other’s challenges – would go a long way toward alleviating these problems.

    Delivering Successful Enterprise AI

    Bringing the stages together to build a reliable, trustworthy enterprise AI

    Successful AI deployments must begin with a clear understanding of the business vision – what needs to be achieved – framed in the context of appropriate technology and data selection. This underpins the considered, ordered process of creating the model and software that delivers value to users and the business as a whole.

    Each of the 3 stages detailed above needs to be treated as a discipline in its own right, executed in a transparent and agile manner, and managed end-to-end as a whole. Failing to follow these stages leads to AI models that are unequipped to tackle real-world challenges.

    IBM Watson’s foray into cancer diagnosis and treatment regimes is a classic example of what happens when these 3 stages aren’t considered. Watson proved itself to be excellent at understanding language and classifying large volumes of image data. It worked well under lab conditions, rapidly assessing its vast database of clinical reference data to provide an evidence-based answer on how best to treat a patient.

    But when let loose in the real world, Watson fell short of its promises. Watson’s language processing failed to make sense of medical text, which includes ambiguities and nuances that human doctors are familiar with. It struggled to accurately interpret patient records, which were captured in inconsistent formats by different doctors. Finally, Watson’s process for reaching conclusions fell short of medical standards, which require strict criteria to be met before they can be accepted by doctors and health care professionals.

    Few AI problems are as high stakes as recommending cancer treatment. But at varying levels, all AI projects face the same problems:

    • Building a trusted AI model in the lab that produces accurate, validated results
    • Gathering the right data
    • Engineering and making it meaningful to the AI
    • Releasing the AI in a controlled manner into the real world

    5 Key recommendations for deploying enterprise AI

    1. Identify and select experienced data scientists and subject matter experts to understand the true nature of the business opportunity or issue and ensure root causes are well understood.
    2. Take a scientific approach to problem-solving. Use logic and understanding to guide your AI solution’s design & engineering.
    3. Design and build business processes and human interfaces so AIs work seamlessly with people. Intelligent systems need a new approach to business change management and end-user training.
    4. Use a professional framework for building AI models, with stage-gates to force you to evaluate progress at critical points. For example, Tessella's RAPIDE framework has six stages to ensure AI projects:
      1. Are business READY
      2. Only use data that passes ADVANCED screening
      3. Look beyond chance correlations to PINPOINT the real factors driving outcomes
      4. IDENTIFY and evaluate multiple AI models, methods and toolset options
      5. DEVELOP models where trust is the equal of raw predictive power
      6. EVOLVE their capability upon contact with the real-world
    5. Carefully select tools and technologies that are compatible with existing enterprise IT operations.

    It's time to deliver on our promises

    When embarking upon AI projects, you must consider the three stages explored here, along with the broad range of skills and expertise each requires. Build a plan that aligns your business vision with the outcomes you need, assess whether you have the right data, people, and skills, and define how the final product will work and add value to your organization or your customers’.

    The fantastic opportunities AI offers, and, to some extent media hype, have promoted AI to the top of business leaders’ agendas. Huge progress has been made in proving the potential of AI. Now’s the time to think about the vital practicalities of making it deliver upon those promises.