Clifford Chance

Insurance Insights

Future of AI Insurance Regulation: Fairness in AI Systems

The UK has announced that it will host the first global summit on the regulation of AI this autumn. There is a clear intention for the UK to position itself as a leader in AI regulation and international co-ordination. The summit will complement the G7 initiative, known as the 'Hiroshima AI process', announced on 20 May 2023, which saw the G7 issue a joint statement to advance discussions on inclusive AI governance and interoperability to achieve a common vision and goal of trustworthy AI.

The UK financial services sector is waiting for the regulators to respond to the discussion paper on AI and Machine Learning after they review and analyse the feedback. That paper was followed in March 2023 by a government white paper and consultation ("White Paper") setting out a principles-based regulatory framework for AI aimed at driving a unified approach to AI regulation across the UK.

The discussion paper clarified how existing sectoral legal requirements and guidance apply to the use of AI and examined whether risks associated with AI, such as potential bias and customer vulnerability (fairness), require additional regulatory intervention. It also discussed how financial services regulation fits in with other existing cross-sectoral legislation and regulation. The White Paper notes that fairness in AI systems is currently covered by a variety of regulatory requirements and best practice, including data protection, equality and general consumer protection rules, which presents a challenge to firms that are also subject to financial services regulation as there are no cross-cutting principles and limited regulatory coordination. It aims to address concerns regarding "conflicting and uncoordinated requirements" from different regulators that create compliance burdens on businesses and gaps in AI regulation.

Together, the papers serve as a useful reminder to firms about their non-financial services obligations under other laws such as the Equality Act 2010 ("EA") and UK data protection law ("UK GDPR").

Bias and Discrimination

It is well understood that the design and use of AI systems can lead to bias and discrimination. The unfair treatment of individuals will negatively affect consumer trust in an insurer and may lead to non-compliance with financial services regulation, and also with anti-discrimination legislation such as the EA if data on protected characteristics has been used in decision-making.

Historical information used to train AI systems might contain bias (inherent data bias), or data might be aggregated in a way that creates bias (incomplete, or unrepresentative data selection and erroneous correlation of data).

Human intervention during model training (data labelling) is currently inconsistent in quality and can create or exacerbate inherent data bias (human bias).

If biased outputs are then used to determine future decisions, it creates a bias feedback loop and bias can then result in discrimination.

Direct discrimination can occur when protected characteristics about an individual are inferred from other data. Indirect discrimination can arise when there is a correlation between a data point and a protected characteristic, such as postcode and race. AI systems can also help firms understand their customers better, allowing them to exploit behavioural biases and characteristics of vulnerability, which can enable discriminatory practices.
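The proxy mechanism described above can be illustrated with a minimal, entirely hypothetical sketch. Here a pricing rule never sees the protected characteristic, only a postcode; but because the postcode is assumed (for illustration) to correlate with group membership, average premiums still differ across groups. The group labels, postcode areas, probabilities and prices are all invented for the example, not real insurance data.

```python
import random

random.seed(0)

def make_customer():
    # Hypothetical correlation: group B customers are assumed to be more
    # likely to live in postcode area "X" (80% vs 20%).
    group = random.choice(["A", "B"])            # protected characteristic
    p_area_x = 0.8 if group == "B" else 0.2
    postcode = "X" if random.random() < p_area_x else "Y"
    return group, postcode

def premium(postcode):
    # A naive pricing rule that only sees the postcode, never the group;
    # area "X" is deemed higher risk (illustrative figures).
    return 500 if postcode == "X" else 400

customers = [make_customer() for _ in range(10_000)]

def avg_premium(grp):
    prices = [premium(pc) for g, pc in customers if g == grp]
    return sum(prices) / len(prices)

# Although the model never uses the protected characteristic, group B pays
# more on average because of the postcode proxy.
print(f"Group A average premium: {avg_premium('A'):.0f}")
print(f"Group B average premium: {avg_premium('B'):.0f}")
```

Removing the protected characteristic from the inputs does not remove the disparity, which is why, as discussed below, simply deleting protected characteristics from an AI model may not satisfy fairness requirements.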

The Equality Act 2010

The EA consolidates most equality law into one Act. It prohibits discriminatory conduct and creates duties in relation to nine 'protected characteristics', which include age, disability, race, sex and sexual orientation. Discrimination in breach of the EA is unlawful.

Insurers must not discriminate based on a protected characteristic in relation to providing insurance or in the terms of the insurance product itself.

There are various types of discrimination under the EA including direct and indirect discrimination – whether resulting from human or an automated decision-making system, or a combination of the two.

There are limited carve-outs relating to the prohibition on direct discrimination. Direct discrimination occurs if someone is treated less favourably than others because of a protected characteristic, for example, charging a higher insurance premium or refusing insurance cover due to the customer's sex.

In the context of insurance, indirect discrimination arises where a provision, criterion or practice ("PCP") applicable to every customer puts a group of people at a particular disadvantage because of their shared protected characteristic.

Therefore, where firms use AI systems in their business, they will need to ensure that this does not result in unlawful discrimination based on protected characteristics, in particular unlawful indirect discrimination, which can be more difficult to identify when an algorithm is influencing outcomes. An example of (broadly) indirect discrimination given by the Equality and Human Rights Commission involves an algorithm identifying certain postcodes as carrying a higher business risk than other areas; where those postcodes happen to have a higher proportion of ethnic minority residents, the fairness of the decisions could be called into question and may, subject to the possible defences discussed below, amount to indirect discrimination.

There are specific exemptions in the EA relating to certain types of insurance product. Reliance on an exemption might also require that the insurer is able to show that there is a difference in risk associated with the protected characteristic.

Defences

Direct age discrimination can be defended if it can be shown that the discriminatory treatment is a proportionate means of achieving a legitimate social policy aim (which would not include a company's own business aims such as cost reduction/profit augmentation).

Indirect discrimination can be defended if it can be shown that the PCP is a proportionate means of achieving a legitimate aim. In this case a company's legitimate aim need not have a social policy objective; it can relate purely to its own business. For example, a practice of charging higher car insurance premiums for drivers who acquired their driving licence overseas arguably puts non-UK nationals at a particular disadvantage (an individual who obtains their driving licence overseas is more likely to be a foreign national). However, if the insurance company has statistical evidence that holders of overseas driving licences are more likely to be involved in a car crash, reducing its financial exposure by charging a higher premium could be a legitimate aim.

Guidance relating to the EA and AI was published in 2022 but it was specific to public bodies in England, Scotland and Wales. A checklist was also published, which may in part be useful to other businesses. We are not aware of plans to amend the EA in light of AI discussions at this stage.

Data Protection

UK GDPR addresses the processing of personal data and the protection of individual rights and freedoms, including the right to privacy and non-discrimination, and includes specific rules on profiling and automated decision-making by AI systems. Simply removing any protected characteristics from the inputs used in an AI model might not be sufficient to comply with fairness requirements under the UK GDPR.

The UK government is seeking to reform the UK data protection regime, including amending the scenarios under which solely automated decisions, including profiling, may be carried out.

The ICO notes that demonstrating that an AI system is not unlawfully discriminatory under the EA is a "complex task" and that compliance with the EA will not guarantee compliance with data protection requirements. Firms using AI will therefore need compliance reviews against both the EA and the UK GDPR, as well as general regulatory principles.

Looking ahead, the PRA, FCA and BoE must take the White Paper into account when deciding their approach to AI regulation. The White Paper sets out five overarching principles that existing regulators should apply in relation to AI regulation, which largely align with the aims of the UK GDPR and EA. Under the White Paper, all sector regulators will be expected to take a context-specific approach to what fairness in AI means, take other relevant law and regulation into account in interpreting fairness and consult with other regulators where there is overlap. The White Paper anticipates that joint guidance on fairness in AI systems will need to be developed by regulators as a priority. However, as you will no doubt be aware, the global conversation around AI regulation is moving very fast and commentators have warned that the White Paper, published just two months ago, is already out of date in its approach.

It remains to be seen what changes are made to financial services regulation to address the risks posed by AI, especially given the broader discussions on AI regulation that are taking place. With further consultations and likely changes in the pipeline, a firm's system of governance will need to be flexible to be fit for purpose. Currently, firms need to ensure compliance with separate obligations under different anti-discriminatory regimes, but if the White Paper achieves its aims and global standards are also introduced, navigating the legal and regulatory landscape governing the design and use of AI by insurers should be easier in the future.

For further information on AI, please see our site Generative AI: The big questions.
