Taking AI into the Enterprise: Overcoming Two Major Roadblocks

    Written for Forbes by Matt Jones, Lead Analytics Strategist at Tessella and member of Forbes Technology Council

    Any sensible AI project starts under lab conditions using carefully selected test data. Since AI needs to be trained to know how to interpret data, it would be madness to let an unproven AI loose on the enterprise. But at some point, the AI reaches a level where it needs to be applied to real company data — customer databases, machine readouts, IoT device data streams, etc. — and start being used to inform important decisions.

    After some rocky starts, many enterprises are getting quite good at the first step, which is building an AI that solves a defined problem. But many still run into trouble when moving the AI out of the "lab" (or their siloed team of data scientists) and into the wider enterprise.

    That step comes with a number of challenges, but from a management point of view, we can break them down into two key blockers: technology scaling and user acceptance.

    Technology integration and scaling: Can your AI cut it in the real world?

    This is an increasingly common situation: A company creates an AI and has data ready to feed into it, but it has nowhere to put the results. It’s like building a new, ultra-efficient hydroelectric turbine and securing access to a river to power it, only to discover the turbine is too big to install anywhere useful.

    AI proofs-of-concept (POCs) are frequently built by data scientists in a lab with little consideration given to deploying them into a production environment with tightly defined and controlled support procedures and technology constraints. For example, AI is often built in Python or R, the data science programming languages of choice. Data scientists then present their rigorously tested achievement to IT and ask them to roll it out, only to be told that the technology the entire enterprise runs on does not support those languages.
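One common way around this mismatch is to expose the model behind a language-neutral interface, such as a JSON-over-HTTP endpoint, so enterprise systems written in any language can call it without running Python themselves. Below is a minimal sketch using only the Python standard library; the feature names, weights, and scoring function are illustrative stand-ins for a real trained model, not part of any specific product.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for a trained model: a simple linear scorer.
# A real deployment would load a serialized model artifact instead.
WEIGHTS = {"temperature": 0.8, "pressure": -0.3}
BIAS = 0.1

def score(features):
    """Return a prediction for a dict of named feature values."""
    return BIAS + sum(WEIGHTS.get(name, 0.0) * value
                      for name, value in features.items())

class PredictHandler(BaseHTTPRequestHandler):
    """Accepts a JSON feature dict via POST, returns a JSON prediction."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        features = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": score(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any enterprise system that can make an HTTP request can now
    # consume the model, regardless of its own language stack.
    HTTPServer(("localhost", 8080), PredictHandler).serve_forever()
```

The point of the sketch is the boundary, not the model: IT supports a plain HTTP service, while data scientists keep iterating in their preferred tooling behind it.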

    Even when IT teams have the capability (or are willing to develop it), many AI tools are underpinned by new technologies that represent a high technical risk. Many are highly specialized and require complex configuration and setup to get the most from them, which can take weeks or months. By that time, the momentum is lost. This is further complicated when — as is usually the case — they need to be integrated with other enterprise technologies (e.g., expert applications, storage infrastructure, workflow or CRM systems) that they were not designed to work with.

    This was understandable as recently as a couple of years ago when AI was new and there was value in exploring and experimenting and not placing limits on data scientists. Now, we need to start thinking longer term, and data science projects need to be planned with the end user in mind.

    Some turn to black box enterprise AIs, which can work well for generalized problems faced across industries. However, these present the problem we will come to next: If the user can’t see how it works, they may not trust the result. When an AI needs to solve a very specific problem, a better approach is to build bespoke AI POCs using common toolkits, such as Microsoft Azure and AWS, that naturally scale into the enterprise. By using the same tools to build and deploy the AI, companies eliminate the time and complexity associated with configuring, installing and integrating new AIs.

    Integrated frameworks support data scientists in building, testing and validating models while allowing them to scale into production systems once they're successfully proven.

    User acceptance: Trust without understanding

    A second reason for AI failure is user reluctance to adopt it. This can happen for any of the usual technology failure reasons: It takes too long. It's too complicated. The UX is poor. All of these need to be considered in the design of the AI. But with AI, there is also the critical issue of trust.

    Imagine you are a chemist looking for molecules for a new drug. You have created a specification and fed it into the AI, and the AI has pumped out a hundred candidate compounds. In theory this is great — your search of billions of potential molecules has been narrowed down, and you can spend your R&D time testing this much smaller number of candidates.

    But it's only useful if the chemist trusts the result. If they don’t understand how the AI reached its decision, they may not feel confident relying on it and will have to redo all of its work through other methods. (We are assuming, incidentally, that the AI has been well-designed and tested. If it has actually reached the wrong result, that's a whole different problem. But designing AIs right in the first place is a subject for another article.)

    The solution is to involve users early on, requesting that they supply their own training data — e.g., molecule specifications and desired activity — and guide the validation of the AI outputs. This allows them to shape its development and see firsthand that the AI reaches a meaningful result when the answer is already known. This helps them understand how it reaches its answers and how to use it as it develops.

    AIs are improving. Now make them usable.

    AI capabilities are advancing apace in many enterprises, and AI is increasingly addressing many complex problems successfully. The challenge now is getting proven AIs out of the lab and into the enterprise. This requires forward planning and closer collaboration between data scientists, expert and non-expert users, and the business function. AI capability has come a long way. We must now ensure POCs are designed to work for the end user and are built to scale within each enterprise’s specific IT and human infrastructure.

    Original Source: Forbes