Artificial intelligence offers numerous benefits, but there are also pitfalls
Artificial intelligence (AI) is one of the technologies that will dominate the business, consumer and public sector landscape over the next few years. Technologists predict that, in the not-too-distant future, we will be surrounded by internet-connected objects capable of tending to our every need. While AI development is still in its early stages, this technology has already shown it's capable of competing with human intelligence. From challenging humans at chess to writing computer code, this technology can already outperform people in many areas. Newer AI systems can even learn on the fly to solve complex problems more quickly and intuitively.
But while AI presents many exciting opportunities, there are also plenty of challenges. Doomsday scenarios predicting that smart machines will one day replace humans are scattered across the internet. Speaking to CNBC, respected Chinese venture capitalist Kai-Fu Lee said AI machines will take over 50% of jobs in the coming decade.
Although businesses are ploughing billions of dollars into this lucrative market, many of the world's most prolific figures in innovation and science have called for regulation. Tesla founder Elon Musk and renowned physicist Stephen Hawking are among those who have voiced concerns over the rise of artificial intelligence.
How real these concerns turn out to be remains to be seen, but even now there are ways in which AI used in business can pose risks not just to the companies that use it, but to the public at large.
While organisations at the cutting edge of AI development should spend at least some of their time preventing the rise of the machines, everyday organisations also have a role to play in protecting us all from artificial intelligence gone awry.
One solution doesn't fit all
Automated technologies are incredibly diverse and span a range of use cases. As a result, it's quickly apparent that there isn't one simple answer to ensuring the safety of AI. Matt Jones, lead analytics strategist at technology consultancy Tessella, says keeping AI safe comes down to the data a business possesses. "It's important for businesses to remember that there is never a 'one-size-fits-all' solution. This all depends on the data at the company's fingertips – this will influence the risk involved, and therefore how dangerous the wrong decision can be," he says.
"For instance, using AI to spot when a plane engine might fail is a very different matter to trying to target consumers with an advert for shoes. If AI for the latter goes wrong, you may lose a few potential customers, but the damage isn't long-term. However, if the former goes wrong, it could lead to fatal consequences.
"There is however a series of steps businesses can take to ensure that AI works for the specific application it is required for. This includes having access to the right people to initially turn the data you're using into organised and correctly structured data that will help avoid issues once the AI platform is up and running."
To get the most out of data and analytics, Jones explains, companies need to invest in the right talent. By doing so, they can avoid both disaster scenarios and human error. "Understanding the risks involved and partnering with AI experts to define basic governance processes will ensure safe decisions are continuously made. Human oversight of any decision an AI makes is vital, as it's this oversight that will determine whether corrective measures such as retraining or remodelling are required. For example, a company might take random samples of AI outcomes and cross-reference them against the corresponding human decisions in order to keep it in check," he explains.
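The sampling-based oversight Jones describes could be sketched roughly as follows. This is a minimal illustration only, not anything from the article: the `audit_ai_decisions` function, the approve/reject labels and the 90% agreement threshold are all hypothetical assumptions.

```python
import random

def audit_ai_decisions(ai_decisions, human_decisions, sample_size=50, threshold=0.9):
    """Randomly sample AI outcomes and cross-reference each one against the
    corresponding human decision; flag the model for review (e.g. retraining
    or remodelling) when agreement falls below the threshold."""
    indices = random.sample(range(len(ai_decisions)), min(sample_size, len(ai_decisions)))
    agreements = sum(1 for i in indices if ai_decisions[i] == human_decisions[i])
    agreement_rate = agreements / len(indices)
    return agreement_rate, agreement_rate >= threshold

# Dummy decision logs: the AI and a human reviewer mostly agree.
ai = ["approve"] * 90 + ["reject"] * 10
human = ["approve"] * 95 + ["reject"] * 5
rate, ok = audit_ai_decisions(ai, human)
print(f"Agreement: {rate:.0%} ({'OK' if ok else 'flag for retraining'})")
```

In practice the sample size, threshold and escalation path would depend on the stakes involved, echoing Jones's point that a shoe advert and a plane engine warrant very different levels of scrutiny.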
Original source: ITPRO