Whilst some companies pontificate about how AI will change the world, others are rolling up their sleeves to build and deploy AIs that solve their business challenges. If you are one of these trailblazers, you are probably now starting to think about the realities of having AI in your organisation.
Like Information and Communications Technology (ICT) before it, AI needs ongoing supervision, maintenance, and checking throughout its life. But it is very different from ICT, both in how it behaves and in the consequences of its decisions.
As AI becomes a key part of organisations, a whole new support infrastructure will be needed. This will increasingly need to be planned into AI from the start of its development.
How AI Works And How It Can Go Wrong
Before we discuss what to do, it is important to understand the unique challenges AI presents.
Unlike conventional software, AI does not follow explicitly programmed rules, but learns how to interpret information by establishing connections between different data sets. At the point of deployment, the development team will be confident that it has learned to interpret a specific set of data inputs (eg machine readouts, customer buying habits, medical scans) with sufficient accuracy to deliver a useful business outcome (eg component failure prediction, sales targeting, medical diagnosis).
However, most AIs will ingest new data over their lifetime to continuously learn and adapt. If there are problems with incoming data, or the AI encounters situations outside its training, it may start making unpredictable decisions.
For example, an AI that manages road traffic may suddenly encounter an unusual event it has not been trained on, and redirect vehicles through routes that cannot handle them. A loan decision algorithm may be fair at launch, but over time learn that a particular subgroup that never occurred to the designers (people named Matt, for example) is by coincidence statistically less likely to repay loans, and start making unfair decisions against members of that group.
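The drift problem described above can be made concrete with a simple monitoring check. The sketch below is illustrative only (the function name, thresholds, and sample readouts are assumptions, not a production recipe): it compares the mean of a live feature stream against the values seen at training time and flags an alert when they diverge.

```python
from statistics import mean, stdev

def drift_alert(training_values, live_values, z_threshold=3.0):
    """Flag drift when the live mean shifts more than z_threshold
    training standard deviations from the training mean.
    A deliberately crude check; real pipelines use statistical tests
    such as PSI or Kolmogorov-Smirnov over many features."""
    base_mean = mean(training_values)
    base_std = stdev(training_values)
    shift = abs(mean(live_values) - base_mean)
    return shift > z_threshold * base_std

# Hypothetical machine readouts: training data vs two live windows
training = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 9.7]
live_ok = [10.0, 10.1, 9.9, 10.2]       # behaves like training data
live_drifted = [14.9, 15.2, 15.1, 14.8]  # sensor or process has shifted
```

Even a check this simple illustrates the key point: the alert fires on *statistical* change in the inputs, well before anyone notices bad decisions downstream.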
Adding to this complexity, not all strange behaviour will be wrong. AIs can make surprising decisions that are not easily understood through the current business mindset, but which are still good decisions. A NASA model produced an antenna design that would never have occurred to a human engineer, yet outperformed the human-designed alternatives.
These challenges must all be viewed in the context of the business benefits. AI undoubtedly has huge potential to transform businesses, just as ICT did, but those benefits come with new complexities, which must be managed.
Planning the AI-Enabled Organisation
AI therefore needs ongoing oversight post-deployment. This is not just a case of patching and updating: there needs to be an ongoing process of actively reviewing data inputs, checking that the AI is performing as expected, and re-training it where necessary.
Ideally this should be planned at the concept stage, even before the models are built, and approaches developed for ongoing support and maintenance.
This planning should cover:
- Expand enterprise IT governance frameworks to cover AI, including budgets, people, and data
- Specify how the AI will be operationalised (who will use it, for what purpose, what data it will consume, and what decisions it will make)
- Create Standard Operating Procedures (SOPs) for detecting when an AI is operating poorly, inconsistently, or dangerously, who monitors it, and who raises the alarm
- Establish how it will be supported and monitored by a 'human backstop'
- Assign a chain of accountability, and agree where it ultimately stops (CIO, CDO, CAIO, or CEO?)
- Assess whether its decisions affect the company's responsibilities under data or industry regulations, what reporting is needed, and how AI decisions align with industry rules
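The 'human backstop' and alarm-raising points above can be sketched as a simple triage rule. Everything here is a hypothetical illustration (the `Decision` fields, confidence floor, and routing labels are assumptions): low-confidence decisions are escalated to a named human reviewer rather than acted on automatically.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    approved: bool
    confidence: float  # model's own confidence score, 0..1

def triage(decision, confidence_floor=0.8):
    """Route a decision per a (hypothetical) SOP: act automatically on
    confident decisions, escalate the rest to the human backstop."""
    if decision.confidence >= confidence_floor:
        return "auto"
    return "escalate-to-human-backstop"
```

The design point is that the routing rule lives outside the model: the business, not the AI, decides where the confidence floor sits and who receives the escalations.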
In our next article we will discuss what ongoing AI support actually looks like.