
Clifford Chance



Bringing human smarts to artificial intelligence

11 July 2019

The lure of artificial intelligence (AI) for underwriters is clear: a powerful algorithm that has sucked up data can spit out sophisticated results that enable precision setting of premiums. Suddenly the risk appears almost removed from a risk-based business. But a report just released by UK Finance is a reminder of another risk: bias.

The report, Artificial Intelligence in Financial Services (June 2019), warns: “We need to ensure AI systems make recommendations without unnecessary bias and do not discriminate on race, gender, religion or other similar factors.” As an employment lawyer, I have spent years guiding human decision-makers to be, and appear to be, free from bias, unconscious or otherwise. It now seems that machines are not impervious to Equality Act breaches either.

In terms of the data fed in, this could include past decisions on the setting of premiums, from which future decisions are extrapolated. The issue is that those previous decisions are not necessarily “clean” – the computers absorb the biased and potentially discriminatory decisions of their human predecessors. Unless proper scrutiny is applied, insurers could find they have been inadvertently penalising particular protected groups.
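One way to make that scrutiny concrete is to compare outcomes across groups in the historical data before it is used for training. The sketch below is illustrative only: the column names, figures and 80% screening threshold are assumptions, not a prescribed test.

```python
import pandas as pd

# Hypothetical historical premium decisions; the column names and figures
# are invented purely for illustration.
history = pd.DataFrame({
    "premium": [420, 510, 395, 610, 480, 575],
    "sex":     ["F", "M", "F", "M", "F", "M"],
})

# Compare average premiums by group before the data is used to train a model.
group_means = history.groupby("sex")["premium"].mean()
ratio = group_means.min() / group_means.max()

print(group_means)
print(f"Ratio of group average premiums: {ratio:.2f}")

# An assumed 80% screening threshold: a large historical disparity is a prompt
# for human review, not proof of discrimination in itself.
if ratio < 0.8:
    print("Warning: marked disparity in historical decisions - review before training.")
```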

In the application of data to results, statistical data may show that an insured person with a particular characteristic, or set of characteristics, is more likely to give rise to an insured event. From an actuarial viewpoint, this may be valuable. However, discrimination law is unsympathetic to assumptions that make protected characteristics (for example, race or sex) a shorthand for determining prejudicial outcomes. Case law shows that when such data is used to justify differential treatment, equal treatment principles may be breached.

In the UK, direct discrimination occurs when a person is treated less favourably because of a protected characteristic. With a rogue algorithm that has machine-learnt from insufficiently audited data, insurers could unconsciously be applying a machine’s conscious bias.

Indirect discrimination occurs when a neutral provision, criterion or practice in fact places individuals with a protected characteristic at a disadvantage. It is easy to imagine a non-neutral outcome when data on communities or prosperity is drawn together in an algorithm. Indirect discrimination can be justified if it is a proportionate means of achieving a legitimate aim. Insurers should satisfy themselves that they meet such legal hurdles in the relevant jurisdiction in advance – not when the regulators come knocking.
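Seen in code, the indirect discrimination question has two limbs: does an apparently neutral criterion track a protected characteristic, and does it translate into worse outcomes for that group? A minimal, hypothetical sketch follows; the data, column names and figures are invented for illustration.

```python
import pandas as pd

# Hypothetical applicant data: "postcode_band" stands in for a neutral-looking
# prosperity measure, "group" for a protected characteristic. Invented figures.
df = pd.DataFrame({
    "postcode_band":  ["A", "A", "B", "B", "B", "A", "B", "A"],
    "group":          ["X", "X", "Y", "Y", "Y", "X", "Y", "X"],
    "quoted_premium": [400, 410, 560, 590, 570, 405, 580, 415],
})

# Limb 1: does the "neutral" criterion track the protected characteristic?
print(pd.crosstab(df["postcode_band"], df["group"], normalize="index"))

# Limb 2: does it translate into a disadvantage in outcomes?
print(df.groupby("group")["quoted_premium"].mean())

# If both hold, the criterion may be indirectly discriminatory and will need
# objective justification as a proportionate means of achieving a legitimate aim.
```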

AI is a valuable tool in setting premiums. Insurtech is booming. But unless carefully managed, it brings the legal risk of successful discrimination claims, with resulting regulatory censure and perhaps a negative “treating customers fairly” finding – not to mention the reputational cost.

Scrutiny, safeguarding and control can be built into contracts with providers. As UK Finance makes clear in its report, governance, oversight and explainability will also be crucial. In the nuanced world of bias and discrimination, demonstrating that a decision has been reached on permissible grounds will be as important for AI-based decision-makers as it has been for human ones.

This article first appeared in Insurance Day.