AI promises to make our lives easier, our processes smarter, and our decisions more informed. But trust is one area still holding the technology back. Why is this the case?
AI trust is still a significant issue
94% of business executives believe AI is crucial to achieving their business objectives. But just 18% of businesses are true AI pioneers, actively deploying AI into their processes and solutions. While there are many reasons for the chasm between these two figures, a lack of trust may be the biggest.
Across industries, trust appears regularly as an issue:
- Only a quarter of consumers would trust an AI to decide loan eligibility
- Patients routinely snub AI in favour of human oversight in healthcare
- 48% of Americans say they’d never trust an automated car
The list goes on. Are people just inherently distrustful of AI, or are there aspects of the AI development process that could change to instil trust among the public and business leaders?
Transparency is paramount
People naturally distrust what they don’t know. And, as machine-learning algorithms get smarter and smarter, even their developers can lose track of how they’re working. If a developer can’t explain an AI’s deductions, the public is unlikely to trust its outputs.
This is where transparency comes into play.
Take Apple and Google as examples. The two take different approaches to integrating AI with their mobile devices: Apple tries to keep much of your data on-device, while Google sends data to its cloud for processing. While there are discussions to be had about which is more ethical and secure, both companies are entirely transparent about their data processing policies. Because they’re upfront about their data processing, users know the facts. They can choose a device based on their preferences.
Compare this to Facebook, the social media giant that's always courting controversy. No one is quite sure what data it collects, how it processes it, or how it’s used. People understand their data informs the news and adverts they see in their feed, but no one truly understands how Facebook decides this. This lack of transparency has led thousands of people to boycott the platform.
To gain trust, be upfront and honest with your users. Inform them about:
- the data your AI collects
- how this data is processed
- what their data is being used for
Doing so generates trust in your brand and, by osmosis, your AI.
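The three disclosures above could even be captured in a simple, machine-readable notice that your product renders for users. The sketch below is purely illustrative; the field names and `render_notice` function are assumptions, not a standard schema.

```python
# A minimal sketch of an AI data-disclosure notice.
# All field names and values here are hypothetical examples.
disclosure = {
    "data_collected": ["page views", "search queries"],
    "processing": "aggregated on our servers to train a recommendation model",
    "purpose": "personalising the content shown in the user's feed",
}

def render_notice(d: dict) -> str:
    """Render the disclosure as plain text a user can actually read."""
    return (
        f"We collect: {', '.join(d['data_collected'])}. "
        f"How it is processed: {d['processing']}. "
        f"What it is used for: {d['purpose']}."
    )
```

Keeping the notice structured like this makes it easy to surface the same facts consistently across a settings page, a privacy policy, and an onboarding flow.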
It’s also worth keeping human input and checks in the loop while demonstrating effectiveness and safety in the later stages of development. Doing so will increase the credibility of your AI system and show users that you’ve taken every precaution before releasing it for general use.
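One lightweight way to keep a human in the loop is to route low-confidence AI outputs to a reviewer rather than acting on them automatically. The sketch below is a minimal illustration under assumed names: the `Prediction` class, the `review_gate` function, and the 0.9 threshold are all hypothetical, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """A model output paired with the model's confidence in it."""
    label: str
    confidence: float

def review_gate(prediction: Prediction, threshold: float = 0.9) -> str:
    """Auto-approve only high-confidence outputs; send the rest to a human.

    The 0.9 threshold is an illustrative cut-off, not a recommended value;
    in practice it would be tuned against the cost of a wrong decision.
    """
    if prediction.confidence >= threshold:
        return "auto-approve"
    return "human-review"
```

Even a simple gate like this gives users a concrete answer to "who checks the AI's work?", which is often what trust hinges on.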
Design AI with end-users in mind
For AI to truly be trusted, it needs to be built collaboratively in a multi-disciplinary and culturally-diverse environment. Working more openly like this has many benefits:
- It will keep your end-goal of helping users in focus
- It will decrease the likelihood of human biases leading to AI biases
- It can lead to end products that better align with users’ needs
Above all, a collaborative, user-led approach to AI design will build confidence – internally and externally. To get users on board, always design your AI with them at the forefront of your mind.
Ensure your AI’s purpose is trustworthy
According to the European Union’s Ethics Guidelines for Trustworthy AI, there are three critical components of trustworthy AI:
- it should be lawful, complying with all applicable laws and regulations
- it should be ethical, ensuring adherence to ethical principles and values
- it should be robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm
Let’s explore the second and third components more closely.
An Ethical Purpose
For AI to be trustworthy, it must adhere to ethical principles and values. As discussed above, working in an open environment where diversity of thought is valued can go some way to ensure this.
Always ask yourself: is it morally and ethically right for an AI to be making these kinds of decisions?
Take mortgage lending as an example. An increasing number of banks are using algorithms to make loan decisions. Is this ethically right? Without a mortgage specialist assessing the AI's decision against each individual's circumstances – and the applicant having the right to challenge the decision – it arguably isn't. If it were to get into the press that specific financial institutions aren't acting ethically with their mortgage lending decisions, trust would likely deteriorate fast.
It can’t be overstated: for your AI to be trustworthy, its purpose must be ethical, with all possible hazards thought through and accounted for.
A Robust Purpose
When we say your AI must have a robust purpose to be trustworthy, we’re saying that whatever it’s designed to do must be thoroughly tested. You can then adjust and iterate on the AI based on the results of these tests.
The impact of failing to be robust enough in the testing phase can range from annoying to deadly. Netflix recommending a die-hard horror fan a rom-com is a pain in the neck, but a driverless car taking the wrong turn is potentially fatal.
As AI takes up a larger role in our day-to-day lives and begins making decisions of more significant consequence, trust is key to gaining acceptance. Help grow this trust by rigorously testing and retraining your AI model before it goes out to users.
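A release gate can make "rigorously testing" concrete: the model only ships once every tracked metric clears a minimum bar, and a failed check sends it back for another round of adjustment and retraining. The metric names and thresholds below are illustrative assumptions, not recommended targets.

```python
def ready_for_release(metrics: dict, minimums: dict) -> bool:
    """Return True only if every required metric meets its minimum bar.

    A metric missing from `metrics` counts as 0.0, so forgetting to
    measure something fails the gate rather than silently passing it.
    """
    return all(
        metrics.get(name, 0.0) >= floor
        for name, floor in minimums.items()
    )

# Hypothetical evaluation results and release criteria.
results = {"accuracy": 0.97, "recall": 0.92}
criteria = {"accuracy": 0.95, "recall": 0.90}
```

Treating the criteria as data, rather than burying them in release scripts, also makes them easy to publish, which feeds back into the transparency point above.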
Trust in AI is Growing
There will no doubt be more instances of AI going wrong and harming the technology’s reputation as it finds its feet in the coming years. However, as long as data scientists build AI transparently, with the right processes and design, trust can and will continue to develop between the public and AI technology.
To dive deeper into the significance of trust in the AI space, and learn how to develop it, read our free AI + Trust Guide today.