Three Causes of AI Bias and How to Deal with Them

Sam Genway

Over past decades, worries about AI have moved from whether it will take over the world to whether it will take our jobs. Today we have a new, and justifiably serious, concern: AIs might be perpetuating or accentuating societal biases and making racist, sexist or otherwise prejudiced decisions.

The potential for biased AI is real. Probably the most famous example is Amazon's recruitment AI, which disfavoured applications from women because it was trained on CVs from the company's current, mostly male, workforce. Other examples include the algorithmic predictions of reoffending used by US courts, which have come under scrutiny.

Algorithmic bias can lead not just to unfair decisions, but also to unfair representations of certain groups. For example, societal biases can be exacerbated by the results of search engines or translation services, even if no single person is the victim of a prejudiced decision.

How does bias come about and what can we do about it? In our view, there are three opportunities for AI to develop bias: data, algorithms and people.

Bias source 1: Data

The first place to look for bias is the data used to train the AI system. It's well understood that the data for AI systems needs to be of sufficient size and representative of real-world use. However, even the largest datasets, which are often gathered from real-world decisions, frequently reflect human subjectivity and underlying social biases.

AI systems don’t have an innate understanding of whether their training data is objective (like a temperature measurement in a manufacturing process), or subjective (like the decision to offer someone a loan). If biases enter the training data, they will be learnt by the algorithms.

The Amazon AI is a good example. Another is translation services: when translating from non-gendered languages into, say, English, they would automatically refer to doctors as male and nurses as female, because they were trained on large datasets of written English which reflect societal biases. It is correct to note that statistically more nurses are women (for now at least), but not correct to assume all nurses are women. This is something Google has only recently fixed.

The more serious the decision, the more serious the problem. An AI system may conclude from available data that certain groups are more likely to reoffend or default on loans. Whilst this may be true at a group level, it does not mean an individual from that group is more likely to do so. An AI using this as a decision-making factor creates undue prejudice.

It is common to think that the solution is simply a matter of removing race, age, sex and so on from the dataset. If it were that simple, we would have solved the problem already. Data such as a CV or a loan application is rich, and contains many factors that act as proxies for these protected groupings: where people live, their interests, writing style, spending habits and so on.

Eliminating all these sources of bias, whilst retaining meaningful data, is challenging. But it is the first approach an AI engineer has at their disposal for creating an unbiased system. Where the training data is likely to be subjective (eg past human decisions), this requires thinking carefully about which inputs to include in an AI model, rather than starting with everything and then identifying the proxies for protected information that need to be removed.
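To make this concrete, one simple starting point is to score how much each candidate input reveals about a protected attribute before deciding what to include. The sketch below is illustrative only: the file, column names and use of mutual information are assumptions, not a prescription, and a real audit would combine such scores with domain knowledge.

```python
# A minimal sketch of a proxy check before training: how much does each
# candidate input reveal about a protected attribute? The CSV file and
# column names are hypothetical placeholders.
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

df = pd.read_csv("applications.csv")                    # hypothetical training data
protected = df["gender"].astype("category").cat.codes   # protected attribute (not a model input)
# Candidate inputs, assumed to be numerically encoded already.
candidates = df[["postcode_area", "years_experience", "cv_word_count", "num_hobbies"]]

# Mutual information of zero means a feature tells us nothing about the
# protected attribute; higher scores flag potential proxies to question.
scores = mutual_info_classif(candidates, protected, random_state=0)

for name, score in sorted(zip(candidates.columns, scores), key=lambda pair: -pair[1]):
    print(f"{name:>20}: {score:.3f}")
```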

There are other reasons why data can lead to models which exhibit bias. It might be that the data isn't representative of the real world, or that there is insufficient data for the model to make equally good predictions for all people. There is a lot an AI engineer should do to understand the data before they begin training a model.
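A quick audit along these lines might look at group sizes and historical outcome rates before any model is trained. The columns below ("gender", "hired") are hypothetical placeholders for whatever dataset is in hand.

```python
# A minimal sketch of a pre-training data audit on a tabular dataset.
# The protected column "gender" and binary label "hired" are hypothetical.
import pandas as pd

df = pd.read_csv("applications.csv")  # hypothetical dataset

audit = df.groupby("gender")["hired"].agg(
    count="size",         # is each group represented well enough to learn from?
    positive_rate="mean", # do historical outcome rates already differ by group?
)
print(audit)
```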

Bias source 2: Algorithms

Whilst algorithms don’t inject bias when there is none, they can amplify biases in the data.

Take the case of an image classifier trained on a selection of publicly available images of men and women which happens to show more women in kitchens. The AI has been designed to maximise accuracy, so it looks at every factor that can help it reach a decision. It may learn to classify anyone in a kitchen as a woman, because doing so improves its overall accuracy, despite the fact that the training data contains some men in kitchens. It thereby incorporates the gender stereotypes of those who took the photos: in improving accuracy against the training data, it increases bias.

It is possible to build algorithms that reduce bias: by telling them to ignore certain information, such as background artefacts that indicate kitchens; by changing the model so it only uses certain information to reach its result, such as just the face; or by running initial models and then assessing them afterwards, in a post-processing step, to look for sources of bias and make corrections.

As discussed earlier, there are many proxies for certain groups, so it is hard to build a model that is aware of all potential sources of bias. Having recognised the example of women in kitchens, it is easy to pre-process the data so the algorithm cannot use that information. But that is only possible once you have identified the kitchen as a source of bias. What about all the subtle forms of bias you haven't identified? Relying on data pre-processing and post-processing alone can be limiting.
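As a sketch of what such a post-processing audit can look like, the snippet below compares selection rates and false positive rates across two groups on held-out predictions. The arrays are made up for illustration; which metrics matter depends on the application.

```python
# A minimal sketch of a post-processing audit: compare how a trained model
# treats two groups on held-out data. All arrays here are illustrative.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # held-out labels
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])                  # model decisions
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])   # protected group

for g in np.unique(group):
    mask = group == g
    selection_rate = y_pred[mask].mean()                     # share given a positive decision
    negatives = mask & (y_true == 0)                         # true negatives in this group
    fpr = y_pred[negatives].mean() if negatives.any() else float("nan")
    print(f"group {g}: selection rate {selection_rate:.2f}, false positive rate {fpr:.2f}")
```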

An alternative is to train the model to identify when it is learning a bias and to suffer a penalty for doing so – much as algorithms suffer penalties for making incorrect predictions, which pushes them to improve their performance. This requires us to define what it means for the model to behave fairly, and to provide this objective to the algorithm.
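One way to sketch this idea, purely as an illustration and not the author's implementation, is to add a demographic-parity term to an otherwise standard training loss, so the optimiser pays a price whenever average predictions drift apart between groups. The synthetic data, the linear model and the weighting factor below are all assumptions.

```python
# A minimal sketch (PyTorch) of penalising bias during training: the loss is
# ordinary cross-entropy plus a term that grows when the model's average
# prediction differs between two groups. All data here is synthetic.
import torch

torch.manual_seed(0)
X = torch.randn(200, 5)                 # synthetic features
y = (X[:, 0] > 0).float()               # synthetic binary labels
group = torch.rand(200) > 0.5           # synthetic protected group membership

model = torch.nn.Linear(5, 1)
optimiser = torch.optim.Adam(model.parameters(), lr=0.05)
fairness_weight = 1.0                   # how heavily bias is penalised: a human choice

for step in range(200):
    optimiser.zero_grad()
    p = torch.sigmoid(model(X)).squeeze(1)
    accuracy_loss = torch.nn.functional.binary_cross_entropy(p, y)
    parity_gap = (p[group].mean() - p[~group].mean()).abs()   # demographic-parity penalty
    loss = accuracy_loss + fairness_weight * parity_gap
    loss.backward()
    optimiser.step()

with torch.no_grad():
    p = torch.sigmoid(model(X)).squeeze(1)
    print(f"final parity gap: {(p[group].mean() - p[~group].mean()).abs():.3f}")
```

The weighting factor encodes a trade-off between accuracy and the chosen notion of fairness, which is exactly the kind of objective a human has to set.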

Bias source 3: People

The final issue lies with the people developing the AIs. Those designing AIs are often laser-focused on achieving a specific goal: they aim to get the most accurate result from the available data, but they don't necessarily think about the broader context. Equally, there are experts in bias and ethics who could offer valuable insight here, but they are not necessarily the best AI trainers.

There is a need for greater contextual awareness among AI developers, and an acknowledgement of the need to involve experts in shaping AI. Just as someone building an AI to predict jet engine failure would work closely with aerospace engineers, so must those automating decisions about the fate of human beings consult experts in ethics, law, HR, policing and so on. There is also a need for greater education of AI engineers on ethical matters, an area receiving increasing focus in academic courses.

We also need to ask what it means to be unbiased, or fair. We can return to reoffending prediction, which gained widespread attention when ProPublica published a study of bias in COMPAS, a system that provides risk scores for reoffending. A lengthy debate ensued, with both academics and the creators of COMPAS vehemently challenging ProPublica's analysis of whether the system showed racial bias.

The disagreements boil down to what we define as fair, which is itself subjective. It turns out that there are many definitions of fairness, and in general an AI system cannot satisfy them all at the same time. Before we can test whether an AI is behaving fairly, we need to decide what unbiased behaviour looks like. This is fundamentally a human challenge.
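To see how the definitions can pull in different directions, the sketch below applies two common measures to the same made-up predictions: a demographic-parity gap (difference in selection rates) and an equal-opportunity gap (difference in true positive rates). The numbers are invented; the point is only that a system can look fair by one measure and unfair by another.

```python
# A minimal sketch: two common fairness measures applied to the same
# (made-up) predictions can disagree about whether a system is fair.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 1, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
group = np.array(["a"] * 4 + ["b"] * 4)

def rates(g):
    mask = group == g
    selection_rate = y_pred[mask].mean()            # share given a positive decision
    tpr = y_pred[mask & (y_true == 1)].mean()       # true positive rate within the group
    return selection_rate, tpr

(sel_a, tpr_a), (sel_b, tpr_b) = rates("a"), rates("b")
print(f"demographic parity gap: {abs(sel_a - sel_b):.2f}")   # 0.00 – looks fair
print(f"equal opportunity gap:  {abs(tpr_a - tpr_b):.2f}")   # 0.33 – looks unfair
```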

Dealing with bias in AI

The three sources of bias are related, and ultimately they all come down to people, since people build the AI and select the training data. Upskilling AI engineers to pay more attention to how an AI's decisions affect people's lives, rather than just to model accuracy, would help.

To tackle bias, they need to understand its sources. To reduce bias in data, they need a true understanding of the underlying data and the hidden values and human nuances it represents. They should be ready to question bias at every stage and to identify where external expert input is needed.

Essentially, this comes back to having sound methodologies for building models: setting clear objectives, rigorously assessing the data (including evaluating its potential for bias), selecting the most suitable algorithms, and refining models continuously once they are deployed in the real world. Without appropriate frameworks, there are many opportunities to introduce bias into models.