Clifford Chance

Talking Tech

Safeguarding the use of AI in the Insurance sector

Insurtech | Artificial Intelligence | 6 July 2021

EIOPA has published a report on governance principles for ethical and trustworthy artificial intelligence (AI) in the insurance sector. The report sets out six governance principles developed by EIOPA's Consultative Expert Group on Digital Ethics in Insurance, covering: proportionality; fairness and non-discrimination; transparency and explainability; human oversight; data governance and record keeping; and robustness and performance. The principles are accompanied by non-binding guidance for insurance firms on how to implement them in practice throughout the AI system's lifecycle. It is worth noting that environmental aspects of AI were not examined in the report.

A number of papers on ethics and AI have been published by a variety of stakeholders in recent years at national, European and international level. Just days before the EIOPA report, the Alan Turing Institute published a paper commissioned by the FCA examining the challenges and benefits of AI in financial services. Given the volume of literature on AI and ethics, the EIOPA report is a welcome addition for its focus on the insurance sector.

There is currently no universal consensus on ethical issues in AI: much depends, among other matters, on the use case for the AI and the stakeholders involved (e.g. customer, regulator). The significant role of the insurance sector for businesses and individuals makes digital ethics in insurance an important aspect of regulatory oversight. The EIOPA report seeks to help firms reflect on the ethical issues associated with Big Data (BD) and AI and to organise their governance arrangements to safeguard the "sound use of AI".

The report notes that the adoption of BD analytics and AI in the European insurance sector has accelerated during the Covid-19 pandemic. There are multiple use cases for AI across the entire insurance value chain, from product design through to claims management. The ability to micro-segment risks with increasing accuracy is a particular concern for regulators, given its implications for pricing, competition and access to cover for higher-risk customers. The insurance sector should take the ethical concerns raised by BD and AI seriously in order to build and retain customer trust in products and services and to avoid invasive regulation.

Although public interest and ethical standards and outcomes already underpin some important conduct of business regulatory principles (e.g. treating customers fairly), the report explains that the ethical use of data and digital technologies requires firms to look beyond regulation or legislation and expressly "take into consideration the provision of public good to society as part of the corporate social responsibility of firms". Firms should think about the impact of AI on different categories of customer, paying special attention to vulnerable customers, and determine whether the fair treatment of customers is put at risk by the AI system in question. This means that firms need to understand the design of the AI system, the sources and quality of data used in its processes, how its outcomes (decisions) are reached and how those decisions are implemented from the end user's point of view. By developing, documenting, implementing and reviewing appropriate governance systems and arrangements, insurers and intermediaries will establish a culture that ensures ethical considerations are properly taken into account when making decisions about AI and BD.

What does this mean for insurers and intermediaries?

The paper represents the views of the members of EIOPA's Consultative Expert Group on Digital Ethics in Insurance and does not necessarily reflect the position of EIOPA itself, but EIOPA will use the findings to identify possible supervisory initiatives on digital ethics in insurance. Firms operating in Europe should use the information in the report and the non-binding guidance to help them develop risk-based, proportionate governance measures around the use of BD and AI in their business. Firms should also stay abreast of further work by EIOPA and other European-level initiatives relating to AI, including the European Commission's Ethics Guidelines for Trustworthy AI (on which the report builds) and the proposal for a Regulation on artificial intelligence, published by the European Commission in April 2021.

The PRA, FCA and the UK government are conducting their own activities on the safe adoption of AI in UK financial services, including the collaboration with The Alan Turing Institute referred to above and the work of the Artificial Intelligence Public-Private Forum (AIPPF). The report published by the Alan Turing Institute in June 2021 includes guiding principles for the responsible adoption of AI, discussing fairness, transparency, human oversight, reliability and robustness, and touching on 'uninsurability', among other things. At the second meeting of the AIPPF in February 2021, participants discussed points similar to those in the EIOPA report, such as the need for human oversight of AI and adequate data governance standards, while noting that any risk management guidelines need to strike the right balance between providing reassurance and not hampering innovation. Unsurprisingly, then, there are commonalities between the FCA and PRA's work and the EIOPA report.

Given that the EIOPA report is sector-specific and very detailed, UK insurers and intermediaries will benefit from reading it and considering how the principles and guidance can be applied to their own business and governance arrangements.