In our previous article we explained that AI cannot just be deployed and left alone – it needs ongoing oversight, checking, modification, and retraining. It needs people and processes to monitor AI decisions, assess whether there is a problem, and react quickly if there is.
In this article, we discuss what ongoing AI support must cover.
Ongoing retraining and modifications: New data sources appear all the time and AIs can continually learn from these to reflect changing circumstances (eg customers change behaviour, operations introduce new technologies). But these will only be valid if the incoming data is correctly captured, cleaned and labelled.
The longer an AI is running, the more real-world data (including user feedback) is available. This allows improvements that may not have occurred pre-deployment, but also necessitates regular checks to confirm it is performing as expected. AI retraining is likely to be a regular occurrence, and will require careful data management and domain understanding to detect, limit and eradicate unwanted bias.
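One simple check of this kind is comparing the distribution of incoming production data against the data the model was trained on. The sketch below is illustrative only — the feature values, the three-standard-deviation threshold, and the idea of measuring drift as a standardised shift in the mean are all assumptions for the example, not a prescribed method.

```python
import statistics

def drift_score(baseline, recent):
    """Standardised shift in the mean of a feature between two samples:
    how many baseline standard deviations the recent mean has moved."""
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    return abs(statistics.mean(recent) - mu) / sigma

# Illustrative numbers: a feature (e.g. order value) at training time vs last week
baseline = [20.0, 22.5, 19.8, 21.2, 20.7, 23.1, 19.5, 21.9]
recent = [31.0, 29.5, 33.2, 30.8, 28.9, 32.4, 30.1, 29.7]

if drift_score(baseline, recent) > 3.0:  # arbitrary alert threshold
    print("Feature has drifted - flag for retraining review")
```

A real deployment would monitor many features with more robust statistical tests, but the principle is the same: a measurable trigger, not a calendar date, should prompt the retraining review.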
Spotting and responding to errors: If AI goes wrong, there needs to be someone who can spot it and intervene. Sometimes a wrong decision will be obvious: a chatbot starts insulting users, or a bookseller starts recommending explicit content. Other times it will take specific domain knowledge to spot a problem – if an AI is recommending optimal temperatures for chemical reactions, it will take a competent chemist to notice that it is reaching illogical conclusions. AI support will need to monitor decisions, assess them with human expertise, and initiate corrective action, as defined in the SOP.
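A common way to get human expertise into the loop is to route low-confidence decisions to a review queue rather than acting on them automatically. This is a minimal sketch: the record fields, the 0.8 confidence floor, and the loan example are all assumptions made up for illustration.

```python
def triage(decisions, confidence_floor=0.8):
    """Split model outputs into auto-approved items and a queue
    for human review, based on the model's own confidence."""
    auto, review = [], []
    for item in decisions:
        (review if item["confidence"] < confidence_floor else auto).append(item)
    return auto, review

decisions = [
    {"id": 1, "action": "approve_loan", "confidence": 0.95},
    {"id": 2, "action": "reject_loan", "confidence": 0.55},
]
auto, review = triage(decisions)
# item 2 lands in the review queue for a domain expert to assess
```

Model confidence alone is not a reliable error signal — confidently wrong decisions are exactly the ones that need the domain expert described above — but a triage step like this at least guarantees a human touchpoint.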
Responding to evolving threats: As AI evolves, so will new threats – as we saw with cyber security. Troublemakers, activists and organised criminals will look for ways to trick AIs with misleading data in order to gather sensitive information or embarrass companies. Those responsible will need processes to detect attempts to confuse AI, and to spot when an AI is making a decision based on deliberately erroneous inputs. They will also need to redesign AIs as threats evolve, so they can automatically spot and remove offending data points from their decision-making process.
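One crude first line of defence is screening incoming data points for values far outside the distribution seen so far. The sketch below uses the median and median absolute deviation (so the suspect points don't inflate the scale estimate themselves); the sensor readings and the threshold of 5 are invented for the example, and a real system would use far more sophisticated detection.

```python
import statistics

def suspicious_inputs(values, threshold=5.0):
    """Flag data points far from the median, using the median absolute
    deviation (MAD) so outliers don't distort the scale estimate."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return [v for v in values if abs(v - med) > threshold * mad]

# Illustrative sensor readings, with one planted bad value
readings = [49.8, 50.1, 50.0, 49.9, 50.2, 50.0, 49.7, 120.0]
print(suspicious_inputs(readings))
```

Flagged points should be quarantined and reviewed, not silently dropped — a spike in flags may itself be the first sign of a deliberate attack.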
Aligning with regulation: Regulation will be a big deal for AI. If an AI is responsible for the use of people's private data – or is affecting a highly regulated process like a clinical trial or a financial investment – that needs to be managed. Processes will be needed for reporting AI decisions and how they were reached – including erroneous ones – to regulators.
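The foundation of that reporting is an audit trail: every decision recorded with its inputs, output, model version and timestamp, so it can be reconstructed and explained later. This is a minimal sketch — the field names and the loan example are assumptions, and a production audit log would be an append-only, tamper-evident store rather than an in-memory list.

```python
import datetime
import json

def log_decision(audit_log, inputs, output, model_version):
    """Append an auditable record of one model decision."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    audit_log.append(entry)
    return entry

audit_log = []
log_decision(audit_log, {"income": 42000, "loan": 15000}, "approved", "v1.3")
print(json.dumps(audit_log[-1], indent=2))
```

Recording the model version alongside each decision matters: with regular retraining, the regulator's question is not just "what did the AI decide?" but "which AI decided it?".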
How will AI support affect organisational structures?
Ongoing AI support will create new roles, and demand new skills as part of business-as-usual operations. It will need people who understand tech and IT infrastructure, but also who understand the underlying nature of data, and the business context of AI decisions.
There will be a need for a new role of ‘translator’ in any organisation with a burgeoning AI capability. These are people embedded in the organisation who speak the language of AI and of the business, and who can communicate across the different teams.
For example, if an AI decides to dynamically reprice a product, or reject a loan, or change chemical reaction conditions – these people will need the domain expertise to understand whether that is a good or bad decision. But they will also need to be able to explain the problem to the data scientists who need to look under the hood and work out why a neural network reached a specific decision.
A major consequence of this change will be in selecting an appropriate support partner. It is unlikely, in the short term at least, that traditional outsourced IT support will have the skills to handle AI, so new partners with specific AI expertise will need to be identified.
What metrics are needed for AI support?
Traditional IT support usually has a series of well-defined metrics relating to fixing problems, such as number of tickets, time to repair, etc.
AI support KPIs will need to be more nuanced, and relate to business impact. They should assess AI's impact and hold it accountable for failure – for example looking at sales, downtime, waste, or throughput, rather than numbers of repairs. It will be a more business-focused approach than most IT support, and as such requires different skills. Part of developing good AI governance will be creating ways to measure AI's impact, so assessments can be made of how well it is delivering. This will allow AI support to spot problems, but also ensures its developers can continually understand limitations and find improvements.
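To make this concrete, here is a sketch of two such business-impact KPIs – revenue uplift over a pre-AI baseline, and system availability. The metric names and the weekly sales figures are invented for illustration; which measures matter will depend entirely on the business context.

```python
def ai_business_kpis(sales_with_ai, sales_baseline, minutes_down, minutes_total):
    """Business-impact KPIs for an AI system: revenue uplift over a
    pre-AI baseline, and availability - rather than ticket counts."""
    uplift = (sum(sales_with_ai) - sum(sales_baseline)) / sum(sales_baseline)
    availability = 1 - minutes_down / minutes_total
    return {"revenue_uplift": round(uplift, 3), "availability": round(availability, 4)}

kpis = ai_business_kpis(
    sales_with_ai=[110, 120, 130],   # illustrative weekly sales with AI pricing
    sales_baseline=[100, 105, 110],  # comparable pre-AI weeks
    minutes_down=43,
    minutes_total=10_080,            # one week
)
```

The contrast with "number of tickets closed" is the point: these figures can only be interpreted by someone who understands both the system and the business it serves.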
Ongoing AI support is vital for tomorrow’s AI-enabled organisation
As organisations drop AI into their ICT infrastructure, they will need to rewire their support to handle the challenges it will create. This is no small undertaking: it needs new skills, processes and governance models embedded in the organisation, delivered by people with different skillsets and new ways of thinking. Smart organisations will recognise these long-term challenges, and start planning for them early.