How Do We Avoid the Threat of Supercharged Artificial Stupidity?

Matt Jones



The marriage of quantum computing and artificial intelligence (AI) is predicted to create a new breed of machines capable of analyzing vast amounts of raw data at unfathomable speed. Quantum computers are expected to become far more powerful than every supercomputer on the planet combined, giving machine-learning systems incredible horsepower. This conjures up visions of machines with imagination, able to autonomously make sense of unstructured and uncertain data, such as a drone that can predict obstacles it might meet in the air before its maiden flight.

Yet few have stopped to consider that quantum-enabled AI might actually amplify many of the flaws inherent in contemporary AI if we fail to quality-check the algorithm design and the data driving it. We are already rearing a generation of prejudiced algorithms by training them on poor data under flawed training regimes. Rather than furthering machine intelligence, the effect of quantum AI could be to create stupid AI on steroids -- a new breed of machines that are more dangerous because they have greater power and, potentially, greater responsibility than ever before.

A Bigger Brain Doesn’t Make a Smarter Machine

A common fallacy behind the quantum AI hype is the assumption that machines will automatically become smarter if we simply improve their brainpower. However, while AI might give the outward impression of human intelligence, it has no consciousness or intuition, lacks any moral framework and cannot think outside the context of its learning environment.

This means AI is vulnerable to blind spots and malicious manipulation in a way that humans are not. For example, AI cameras can’t actually recognize objects like humans can -- they just seek common characteristics among galleries of images that have been assigned tags such as "road sign." This means they can’t distinguish meaningful correlations from meaningless similarities. If its training data contained only green road signs, the camera on a driverless car could learn that every road sign is green and fail to read speed limits posted on signs of any other color.
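The green-sign failure above can be sketched with a toy classifier. This is a hypothetical illustration with made-up feature values, not a real vision system: a nearest-centroid rule trained only on green signs latches onto color rather than shape, so a red sign lands closer to the "background" centroid.

```python
# Toy sketch (hypothetical data): features are (greenness, sign_shape_score),
# both in [0, 1]. Every training example of a road sign happens to be green.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def nearest_label(x, centroids):
    # Classify x by the label of the nearest class centroid (squared distance).
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

signs     = [(0.9, 0.8), (0.85, 0.9), (0.95, 0.85)]   # green road signs
not_signs = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15)]    # background clutter

centroids = {"road sign": centroid(signs), "background": centroid(not_signs)}

# A red speed-limit sign: correct shape (high shape score) but zero greenness.
red_sign = (0.0, 0.9)
print(nearest_label(red_sign, centroids))  # → "background"
```

Because color dominates the learned distance, the sign with the right shape but the wrong color is dismissed as background -- a meaningless similarity (greenness) has been mistaken for a meaningful one (shape).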

Prejudiced AIs

Machine-learning systems can often reflect an unconscious bias in their training environment. ImageNet is a crowdsourced database of 14 million tagged images used to train many image-recognition algorithms, but it has a U.S.-centric slant because roughly 45% of its images were uploaded by users in the United States.

Another form of human bias often inherited by machines is the fear of failure. For AIs, learning from mistakes is as important as learning from successes, but training datasets tend to be skewed toward success because the organizations that build AI platforms don’t want to see them fail.

One AI my company has been working on, created to safeguard shipping lanes by scanning satellite images for icebergs, was taught to achieve high rates of accuracy by training it on perfect images of ships that were clearly distinguishable. As a result, when it encountered an image of an iceberg with dimensions similar to those ships, it confused the two, risking disruption to shipping lanes and the vessels that navigate them. If an algorithm is never allowed to get it wrong during training, the risk is that it will get it wrong in real life, when the stakes are significantly higher.
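A minimal sketch of the iceberg problem, using fabricated numbers rather than the actual system: a detector trained only on clean, easily separable examples learns a nearest-mean rule that misreads any iceberg whose apparent size overlaps the ship range.

```python
# Hypothetical single-feature detector: apparent length (metres) from a
# satellite image. Training data is skewed toward easy, unambiguous cases.

ship_lengths    = [180.0, 200.0, 220.0]   # clean, clearly ship-sized examples
iceberg_lengths = [40.0, 50.0, 60.0]      # only small, obvious icebergs

ship_mean    = sum(ship_lengths) / len(ship_lengths)        # 200.0
iceberg_mean = sum(iceberg_lengths) / len(iceberg_lengths)  # 50.0

def classify(length):
    # Nearest-mean rule learned from the skewed training set.
    return "ship" if abs(length - ship_mean) < abs(length - iceberg_mean) else "iceberg"

# A large tabular iceberg ~190 m long -- a case the training data never covered:
print(classify(190.0))  # → "ship", the confusion described above
```

Because no hard, overlapping cases appeared during training, the model never learned a feature that separates them -- it was only ever allowed to succeed.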

The Amplified Threat

All these dangers will be dramatically amplified if AI is given more power and responsibility before we improve the quality of the data it draws from -- and improve the data governance strategy overseeing it.

Consider a future quantum AI that controls energy supply across a smart grid. If it has been trained for success by teaching it to predict obvious stresses on the network, such as more people cooking at dinnertime, it won’t be able to deal with unexpected events, such as thousands of drivers charging electric cars during rail strikes. If it had been trained exclusively on usage data from homeowners of a Western cultural background, it might predict and fulfill extra electricity needs during Christmas but not during other religious festivals. Worse still, because AI is vulnerable to deliberate manipulation, hackers could cause blackouts by inserting spoof data into the smart grid itself. Neural networks that exhibit irrational prejudices due to biased datasets and misconfiguration by their owners could even cause mass discrimination and civil unrest if they are scaled up to the quantum level and given input into everything from criminal sentencing to border control.
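The rail-strike scenario above is an out-of-distribution failure, which can be sketched in a few lines. All numbers here are invented for illustration: a demand forecaster that has only ever seen routine daily patterns under-predicts a surge its training history never contained.

```python
# Illustrative sketch (fabricated figures): a grid-demand forecaster that
# learned only the routine pattern -- a baseline load plus a dinnertime bump.

def forecast(hour):
    base = 30.0                               # GW, typical load
    dinner_bump = 15.0 if 17 <= hour <= 20 else 0.0
    return base + dinner_bump

# Rail strike day: commuters drive instead and charge EVs at 18:00.
actual_demand_18h = 60.0                      # GW, never seen in training
shortfall = actual_demand_18h - forecast(18)
print(shortfall)  # → 15.0 GW the controller did not plan for
```

No amount of extra compute fixes this: a quantum-accelerated version of the same model would produce the same confident, wrong forecast, only faster and with more of the grid depending on it.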

If we are to prevent this, we need a revolution in quality control for the data driving our digital economy. Data is the fuel of the modern economy, and we need universal ways of quantifying its quality, just as oil is lab-tested to determine its grade and value. We have to ensure the data feeding our AI economy is cleansed of unwarranted bias, properly contextualized, understood and curated using scientifically valid methodologies tailored to the task at hand.

Just as we have standards for monitoring the quality of oil from processing plant to pump, we need scientific standards for traceability in the data economy so that we can monitor and maintain the trustworthiness of data throughout its life cycle.

This will ensure we don’t lose control of the datasets fuelling AI. Crucially, it will allow us to better understand how an AI derived its decisions so we can rectify any errors and take corrective measures. This must include checking that its datasets are built within the parameters of current scientific knowledge in the relevant field.

This would boost the public confidence on which our data economy depends and prevent more powerful future AI supercomputers from jeopardizing public safety, security and civic equality. It’s the key to ensuring the quantum AI revolution delivers on its social and economic promise.

Written for Forbes by Matt Jones, Lead Analytics Strategist at Tessella.
