https://rtau.blog.gov.uk/2020/11/27/overview-of-our-review-into-bias-in-algorithmic-decision-making/

An overview of the CDEI's review into bias in algorithmic decision-making

Categories: Algorithms, Bias, Decision-making

Today we have published the final report of our review into bias in algorithmic decision-making.

This report draws together the findings and recommendations from a broad range of work. We have focused on the use of algorithms in significant decisions about individuals, looking across four sectors (recruitment, financial services, policing and local government), and making cross-cutting recommendations that aim to help build the right systems so that algorithms improve, rather than worsen, decision-making.

We are very grateful to the wide range of organisations and individuals who spent time contributing to this work, across government, industry, academia and civil society. This includes responses to our call for evidence, interviews as part of our research into financial services and recruitment, a landscape summary of relevant academic research, as well as focused pieces of research on data analytics in policing and technical approaches to bias mitigation.

What did we find?

Growth in algorithmic decision-making over the last few years has been accompanied by significant concerns about bias: that the use of algorithms can cause a systematic skew in decision-making that results in unfair outcomes. There is clear evidence that algorithmic bias can occur, whether through entrenching previous human biases or introducing new ones.

However, the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are good reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care. Though a report on bias inevitably considers risks, there is also an opportunity here. Data gives us a powerful weapon to see where bias is occurring and measure whether our efforts to combat it are effective; if an organisation has hard data about differences in how it treats people, it can build insight into what is driving those differences, and seek to address them.
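To make this concrete, consider a minimal sketch of what "using hard data to see where bias is occurring" can mean in practice. This example is illustrative only and not drawn from the report; the data is invented, and it assumes an organisation records decision outcomes alongside a protected characteristic.

```python
# Minimal illustrative sketch (not from the report): measuring a difference
# in outcomes across groups, assuming an organisation records decisions
# alongside a protected characteristic. All data here is hypothetical.
from collections import defaultdict

decisions = [
    # (group, outcome) pairs; 1 = favourable decision, 0 = unfavourable
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

totals = defaultdict(int)
favourable = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome

rates = {g: favourable[g] / totals[g] for g in totals}
print("Favourable-outcome rate by group:", rates)

# A gap between groups flags a disparity worth investigating; it does not
# by itself establish unfairness or explain what is driving the difference.
print("Rate gap:", max(rates.values()) - min(rates.values()))
```

A measured gap like this is a starting point for investigation, not a verdict: the point in the paragraph above is that having the data at all is what makes it possible to ask, and answer, the question.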

To date, the design and deployment of algorithmic tools has not been good enough to achieve this consistently. There are numerous examples worldwide of algorithms perpetuating or amplifying historical biases, or introducing new ones. We must and can do better. Making fair and unbiased decisions is not only good for the individuals involved, but also for business and society. Successful and sustainable innovation depends on building and maintaining public trust.

What do the public think?

Our review was informed by public engagement to gain a deeper understanding of attitudes towards algorithmic decision-making. 

In partnership with the Behavioural Insights Team we ran an experiment to look at attitudes towards the fair use of data in financial services.

Working with Deltapoll, we polled a representative sample of the UK population to investigate awareness of, and attitudes towards, the use of data and algorithms in decision-making. We found that a small majority of respondents (around 6 in 10) were aware of the use of algorithms to support decision-making. Among those who were aware, awareness was highest for financial services (more than 5 in 10), in contrast to local government (around 3 in 10). The results suggest that the public are more concerned that the outcome of decision-making is fair than about whether algorithms are used to inform these judgements. There is strong public support for data, including age (net agreement, i.e. the percentage agreeing minus the percentage disagreeing, of +59%), ethnicity (+59%) and sex (+39%), to be used for monitoring and tackling algorithmic bias in recruitment.

Our recommendations

We set out the steps that we think government, regulators and industry should take to tackle the risks of algorithmic bias. 

Key recommendations within the report include:

  • Organisations should be actively using data to identify and mitigate bias. They should make sure that they understand the capabilities and limitations of algorithmic tools that they are using, and carefully consider how they will ensure that individuals are fairly treated.
  • Government should place a mandatory transparency obligation on all public sector organisations using algorithms that have an impact on significant decisions affecting individuals.
  • Government should issue guidance that clarifies the application of the Equality Act to algorithmic decision-making. This should include guidance on the collection of protected characteristics data to measure bias and the lawfulness of bias mitigation techniques. 

Although many of the recommendations in this report focus on actions for government and regulators, which was our core remit, there is much that individual organisations can and should be doing now to address this issue. Organisations remain accountable for their own decisions, whether those decisions were made by an algorithm or a team of humans. Senior decision-makers need to understand the trade-offs inherent in introducing an algorithm. They should expect and demand sufficient explainability of how an algorithm works to make informed decisions on how to balance risks and opportunities when deploying it into a decision-making process.

Organisations often find it challenging to build the skills and capacity to understand bias, or to determine the most appropriate means of addressing it in a data-driven world. A cohort of people is needed with the skills to navigate between the analytical techniques that expose bias and the ethical and legal considerations that inform the best responses. Some organisations may be able to build this capability internally; others will want to call on external experts to advise them. As part of our openly commissioned research into bias mitigation techniques, we worked with a partner to build a web application that seeks to explain the complex trade-offs between different approaches; there is more to be done in this area to build understanding of the options available. The sketch below gives a flavour of the kind of trade-off such tools explore.
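By way of a hypothetical illustration (the scores, labels, groups and thresholds below are all invented, and this is one simple mitigation technique among many, not the approach from our commissioned research): adjusting a decision threshold per group can narrow a selection-rate gap, but at some cost to overall accuracy. Whether such an adjustment is appropriate, or lawful, is exactly the kind of ethical and legal question the paragraph above describes.

```python
# Hypothetical sketch of a bias-mitigation trade-off: group-specific decision
# thresholds narrow the selection-rate gap but can reduce overall accuracy.
# All data below is synthetic and illustrative.
import random

random.seed(0)
# Synthetic scored candidates: (group, score, true_label).
# Group B's scores are shifted lower, mimicking a historically skewed signal.
data = ([("A", random.gauss(0.6, 0.15), 1) for _ in range(50)]
        + [("A", random.gauss(0.4, 0.15), 0) for _ in range(50)]
        + [("B", random.gauss(0.5, 0.15), 1) for _ in range(50)]
        + [("B", random.gauss(0.3, 0.15), 0) for _ in range(50)])

def evaluate(thresholds):
    """Apply per-group thresholds; return selection rates and accuracy."""
    selected = {"A": 0, "B": 0}
    counts = {"A": 0, "B": 0}
    correct = 0
    for group, score, label in data:
        decision = 1 if score >= thresholds[group] else 0
        counts[group] += 1
        selected[group] += decision
        correct += decision == label
    rates = {g: round(selected[g] / counts[g], 2) for g in counts}
    return rates, correct / len(data)

for thresholds in ({"A": 0.5, "B": 0.5},    # single threshold for everyone
                   {"A": 0.55, "B": 0.45}):  # group-specific thresholds
    rates, acc = evaluate(thresholds)
    gap = abs(rates["A"] - rates["B"])
    print(f"thresholds={thresholds} selection rates={rates} "
          f"gap={gap:.2f} accuracy={acc:.2f}")
```

Running this shows the gap shrinking under the group-specific thresholds while accuracy shifts, which is precisely the sort of trade-off between fairness measures and predictive performance that practitioners need help navigating.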

Next steps

Though this is the final report of this review, it is by no means the end of the story. We have identified a range of work that needs to be done by government, regulators and industry to ensure that the UK can benefit from the opportunities of algorithms, while avoiding the risks of bias.

The CDEI is already working to support some of the follow-up steps, and we have further work planned for the coming months. This includes a programme of work on AI assurance, which will identify what is needed to develop a strong AI accountability ecosystem in the UK, as well as working with the Government Digital Service as they seek to respond to our recommendation with a pilot approach to algorithmic transparency within the UK public sector.

We are also ready to support the government and regulators in the work needed to clarify the application of equality law to algorithmic decision-making, including guidance on collection and use of protected characteristics data, and bias mitigation approaches. We will draw on the draft technical standards work that was produced in the course of this review, along with other inputs, to help industry bodies, sector regulators and government departments in defining norms for bias detection and mitigation.

Please do get in touch in the comments below, or by e-mail to bias@cdei.gov.uk, if you would like to contribute to any of this work.