Clifford Chance – Talking Tech

Regulating robots: How will the UK regulate algorithmic decision-making?

The problem of algorithmic bias

Big Data | Artificial Intelligence | 29 August 2019

This article looks at the problem of algorithmic bias and the insights into this issue provided by the recent interim report published by the Centre for Data Ethics and Innovation (CDEI).

Artificial Intelligence and the UK Economy

Artificial Intelligence (AI) and algorithmic systems are increasingly being used by companies to inform and streamline their decision-making processes. The applications for algorithmic decision-making are endless and span both the public and private sectors – from assessing the likelihood of a defendant re-offending to predicting whether an individual will default on a mortgage, algorithms are constantly being used to classify individuals and predict behaviours.

While these tools have the potential to increase efficiency and improve the reliability of decision-making (by some estimates, AI could increase the UK's GDP by up to 10% by 2030), they also pose a challenge: they can reinforce bias and produce unfair outcomes.

The UK Government's Approach to AI Regulation

With so much value at stake, in March 2019 the UK Government established the Centre for Data Ethics and Innovation (CDEI) as an independent advisory board tasked with connecting “policymakers, industry, civil society, and the public to develop the right governance regime for data-driven technologies”. The CDEI has a mandate to (i) make recommendations to the Government (which the Government is required to consider and address publicly) and (ii) provide advice to regulators and industries.

The CDEI has selected the issue of bias in algorithmic systems as one of two urgent issues to tackle in relation to data-driven technologies.

The CDEI's Review on Algorithmic Decision-Making

The CDEI has recently published an interim report outlining the progress made to date on its review into bias in algorithmic decision-making which began in March 2019. The aim of the Review is to produce a final report with recommendations to the Government in March 2020.

To guide this process, the Review has looked at three main areas:

  • Data
  • Bias mitigation tools
  • Governance

Data: Do organisations and regulators have access to the data they require to adequately identify and mitigate bias?

The usual approach taken by organisations to avoid bias in algorithmic decision-making is to programme their software to explicitly exclude data relating to protected characteristics – this seems the obvious and simplest "fix". Under the UK's Equality Act 2010, this means excluding data on age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation. The Review identifies two problems with this approach:

Indirect bias and the problem of proxies

While in theory making an algorithm blind to protected characteristics should prevent decisions based on them, in practice these characteristics are often strongly correlated with other attributes, which the algorithm may use as proxies for the protected characteristics, leading to a discriminatory result. As datasets become increasingly complex, these proxies become harder to identify, representing a latent risk for companies (a simple illustration follows the examples below).

Recent examples of indirect bias include:

  • A retailer’s algorithm was found to be much less likely to offer same day delivery to Black and Hispanic neighbourhoods, despite the fact that the algorithm only considered: (i) proximity to warehouses; (ii) number of subscribers in the area; and (iii) number of people willing to deliver to the area.
  • A social media company’s advertising platform was found to contain proxies that enabled real estate marketers to target renters and buyers by race using certain data points, including postcodes.
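
To make the proxy mechanism concrete, here is a minimal, purely illustrative sketch in Python. It uses entirely synthetic data and a hypothetical "postcode" proxy (none of this comes from the Review or the cases above): a delivery-eligibility model trained without the protected characteristic still produces materially different outcomes across groups, because the proxy carries that information.

```python
# Illustrative only: a hypothetical eligibility model that never sees a
# protected characteristic, yet still produces disparate outcomes because a
# proxy feature (a synthetic "postcode" indicator) is correlated with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected characteristic (never shown to the model): 1 = minority group.
protected = rng.binomial(1, 0.3, n)

# Proxy feature: postcode cluster, strongly correlated with the protected
# group (e.g. through residential segregation). 1 = "outer" postcode.
postcode = np.where(rng.random(n) < 0.8, protected, 1 - protected)

# Legitimate-looking feature: distance to the nearest warehouse (km).
distance = rng.normal(10.0, 3.0, n) + 5.0 * postcode

# Historical outcome the model learns from: past same-day deliveries, which
# were already less common at greater distances (and hence outer postcodes).
p_delivery = 1.0 / (1.0 + np.exp(0.3 * (distance - 12.0)))
label = (rng.random(n) < p_delivery).astype(int)

# Train only on features that look "neutral" - the protected attribute is excluded.
X = np.column_stack([postcode, distance])
offer = LogisticRegression().fit(X, label).predict(X)

# An audit over the retained protected attribute still reveals a gap,
# driven entirely by the proxy.
for value, name in [(0, "majority"), (1, "minority")]:
    print(f"same-day delivery offered to {name} group: {offer[protected == value].mean():.1%}")
```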

Conversely, blinding algorithms to protected characteristics may also lead to unfair results. For example, research into COMPAS, an algorithm designed to predict the likelihood of an individual re-offending, found that women tend to re-offend less than men in several jurisdictions. If gender data were excluded from COMPAS, the model could only assign women the risk of the pooled population, which is dominated by higher-risk men; the likely outcome would be disproportionately harsher sentences for women overall, making the algorithm less accurate and arguably less fair.
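
A back-of-the-envelope calculation makes the point. The base rates and cohort sizes below are made up for illustration and are not COMPAS figures: a gender-blind score can only assign everyone the pooled re-offending rate, which overstates risk for the lower-risk group.

```python
# Illustrative arithmetic only (assumed base rates, not real COMPAS data).
reoffend_rate = {"men": 0.40, "women": 0.20}    # assumed true re-offending rates
population    = {"men": 8_000, "women": 2_000}  # assumed cohort sizes

total = sum(population.values())
pooled_rate = sum(reoffend_rate[g] * population[g] for g in population) / total
print(f"pooled (gender-blind) risk score: {pooled_rate:.0%}")

for g in reoffend_rate:
    diff = pooled_rate - reoffend_rate[g]
    print(f"{g}: true rate {reoffend_rate[g]:.0%}, pooled score differs by {diff:+.0%}")
```

On these assumed numbers, the pooled score overstates women's risk by 16 percentage points while understating men's by 4 – the distortion the Review is alluding to.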

Lack of accountability

By excluding protected and other sensitive characteristics from their data, organisations are also unwittingly limiting their ability to monitor their systems against bias (in particular, discrimination by proxy) and mitigate risks. There are a number of tools being developed to scan algorithmic systems to identify the existence of bias and its origin; sensitive data on protected characteristics is key to running these checks effectively.
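
As a rough sketch of why that data matters (this is a generic check, not taken from any particular vendor's tool), the simplest bias scan compares favourable-outcome rates across groups. Without the protected-characteristic labels there is nothing to group by, so the check cannot be run at all.

```python
# Generic illustration of a bias check: it needs the protected attribute
# retained alongside the decisions, or there is nothing to compare.
import numpy as np

def statistical_parity_difference(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Favourable-decision rate of group 1 minus that of group 0."""
    return float(decisions[groups == 1].mean() - decisions[groups == 0].mean())

# Hypothetical audit data: 1 = favourable decision (e.g. loan approved).
decisions = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 1])
groups    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected-attribute labels

print(f"statistical parity difference: {statistical_parity_difference(decisions, groups):+.2f}")
```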

Tools and techniques: What statistical and technical solutions are available now or will be required in future to identify and mitigate bias and which represent best practice?

This issue is already a focus of both the public and private sectors globally.

In the private sector, Accenture has developed a commercially-marketed ‘Fairness Tool’ designed to find possible proxies for protected characteristics and remove relationships between sensitive parameters and proxies. The tool has a feature which shows the potential impact of removing a parameter altogether, allowing the user to decide whether to prioritise fairness or accuracy.

IBM, in turn, has developed an open-source toolkit called ‘AI Fairness 360’, and Google has recently launched its What-If Tool with a similar purpose. Microsoft and Facebook are currently working on their own solutions.
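
As an indication of how such a toolkit is used in practice, the following is a minimal sketch based on IBM's open-source aif360 package mentioned above. The tiny lending dataset is made up, and the snippet assumes the package's standard BinaryLabelDataset / BinaryLabelDatasetMetric interface; it is not a description of any firm's actual setup.

```python
# Minimal sketch using IBM's AI Fairness 360 (pip install aif360) on a tiny
# made-up lending dataset; assumes the package's standard dataset/metric API.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical decisions: 'approved' is the outcome, 'sex' the protected
# attribute (1 treated as the privileged group here), 'income' an ordinary feature.
df = pd.DataFrame({
    "sex":      [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [55, 60, 40, 70, 52, 61, 38, 72],
    "approved": [1, 1, 0, 1, 1, 0, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Ratio and difference of favourable-outcome rates between the two groups.
print("disparate impact:             ", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```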

While the Review recognises current private initiatives as a step in the right direction, it also highlights:

  • a lack of clarity on the strengths and weaknesses of these tools;
  • limited research into how these tools work in practice; and
  • questions over the compatibility of these tools with the GDPR.

This suggests that self-regulation alone is unlikely to be seen as best practice in the future.

In the public sector, over 18 countries are currently working on establishing national AI strategies and ethical standards, and international organisations such as the European Union, the Organisation for Economic Co-operation and Development and the World Economic Forum have formed working groups and published papers to inform the global debate on AI ethics and regulation. Whilst this guidance focuses on high-level principles, it demonstrates that there is a growing body of "best practice" for the CDEI to draw on.

Governance: Who should be responsible for governing, auditing and assuring these algorithmic decision-making systems?

The Review also proposes the creation of guidance to inform the human value judgements made in the algorithm programming process and the way in which the results are used.

As these value judgements are likely to be context-specific, the Review proposes a sectoral approach to establishing guiding principles and auditing compliance. The Review suggests, for example, (i) creating industry standard metrics for analysing performance, (ii) creating standard datasets on which developers can run tests and (iii) introducing new actors, such as third party auditors, to verify claims made by organisations about their systems.

The financial services sector was chosen as one of four areas of focus. More specifically, the Review is looking at credit and insurance decisions for individual customers that are made using new technologies such as machine learning (ML) and non-traditionally sourced data (e.g. social media). 

Over the next year, the CDEI will prioritise the following work streams:

  • conducting structured interviews with key stakeholders to determine the main barriers in identifying and mitigating bias;
  • conducting a survey of bias identification tools available to establish best practice standards;
  • proposing further tools to address new developments in data-driven technology, such as ML; and
  • working with industry to develop governance frameworks which reflect the public’s perception of fairness, based on research commissioned from the Behavioural Insights Team (BIT).

Key Take-away Points
  • Focus on best practice standards – the Review proposes creating uniform guidance to provide clarity for organisations. This will likely take the form of industry-specific best practice standards, rather than an overarching AI framework. Firms should consider preparing submissions for any consultation processes to ensure their concerns are addressed. This may also be an opportunity for firms with existing ethical standards to revisit them in light of AI governance.
  • Data protection – as the CDEI conducts its survey of bias identification tools, incompatibilities with the current data protection framework are likely to surface. The House of Commons has previously recommended that the CDEI report to Government on areas where the UK’s data protection legislation “might need further refinement”. Whilst it is unclear what these changes will be, it is likely that data privacy notices will require refinement in the future.
  • Proactive housekeeping – the Review identifies a tension between making algorithms blind to protected characteristics and checking those same algorithms for bias against those characteristics. Firms should consider reviewing both their own data retention policies and those of any AI service providers to enable effective AI oversight.

This article was written by Julia Kono, Trainee Lawyer