Clifford Chance

Talking Tech

Artificial intelligence in financial services – what are the challenges?

Artificial Intelligence | Banking & Finance | Big Data | 20 June 2022

Many of the regulatory initiatives relating to artificial intelligence (AI) have focussed on protecting the individual as a consumer or subject of decision-making. While this is of course important, deploying AI in a heavily regulated environment such as financial services raises a series of questions that go beyond consumer protection and which must be thought about even when an AI deployment is not consumer-facing. The Bank of England (Bank) and the Financial Conduct Authority (FCA) address some of these issues in a recent report and suggest that setting up an industry body for practitioners could be the next step towards developing voluntary codes of conduct and an auditing regime to help build trust in the use of AI.

In February, the Bank and the FCA published the final report of the Artificial Intelligence Public-Private Forum (AIPPF), which was set up to identify the practical challenges of the use of AI in financial services. The report makes some suggestions to ensure effective adoption of AI in financial services and says that regulators need to provide clarity on how existing regulations and policies apply to AI, but should avoid being overly prescriptive.

The report does not contain details on any new regulatory guidance, but it includes an annex on three specific use-cases of AI in financial services – credit, savings and investment advice, and anti-money laundering and fraud detection.

Data, model risk and governance

The discussion in the report revolves around three main areas: data; model risk; and governance.

Data represent the foundation of AI, models underpin the systems into which the data are fed, and then governance provides the guardrails to the whole process.

We summarise below the important points discussed under these three areas.

Data

The report notes that AI begins with data. In fact, data is the lifeblood of AI, without which it cannot function.

AI technology often uses unstructured "alternative data", which can derive from several sources, such as satellite images, biometrics, telematics, or third-party providers. This wide range of sources can create data quality issues. Data quality refers to the measures used to assess how suitable data are for a given use; examples include the accuracy, completeness, consistency, representativeness, and timeliness of the relevant data.
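To make these measures more concrete, the sketch below shows what basic data quality checks might look like in practice. It is illustrative only: the column names, thresholds and choice of metrics are our own assumptions, not requirements drawn from the report.

```python
# Illustrative data quality checks of the kind a firm's internal standard
# might define. Column names and thresholds are hypothetical assumptions.
import pandas as pd

def quality_report(df: pd.DataFrame, timestamp_col: str, max_age_days: int = 30) -> dict:
    """Return simple completeness, consistency and timeliness scores for a dataset."""
    now = pd.Timestamp.now(tz="UTC")
    age_days = (now - pd.to_datetime(df[timestamp_col], utc=True)).dt.days
    return {
        # Completeness: share of non-missing values across all fields
        "completeness": float(df.notna().mean().mean()),
        # Consistency: share of rows that are not exact duplicates
        "consistency": 1.0 - float(df.duplicated().mean()),
        # Timeliness: share of records refreshed within the allowed window
        "timeliness": float((age_days <= max_age_days).mean()),
        # Accuracy and representativeness usually need reference data or
        # population benchmarks, so they are not computed here.
    }

# Toy example
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "income": [52000, None, None, 48000],
    "last_updated": ["2022-06-01", "2022-05-20", "2022-05-20", "2021-01-15"],
})
print(quality_report(df, timestamp_col="last_updated"))
```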

The report discusses various options for addressing data quality. Adapting existing data quality metrics and standards may not be the best approach: there is currently little industry consensus on data standards in general, or agreement on good practices, and the rapid evolution of AI might render any new standards outdated within a short timeframe. Instead, firms can develop their own internal standards for assessing data quality and create their own data templates. These templates could set out the history of the relevant data, where and how they were produced, and how they travel through the firm.
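By way of illustration, the sketch below shows what such a data template might capture as a structured record. The fields are hypothetical assumptions about what the "history" of a dataset could include; the report does not prescribe any particular schema.

```python
# Illustrative data provenance template. The fields are assumptions that
# mirror the report's idea of recording where data came from, how they were
# produced and how they travel through the firm; no schema is mandated.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataProvenanceRecord:
    dataset_name: str
    source: str                       # e.g. "third-party provider", "telematics feed"
    produced_by: str                  # system or team that generated the data
    collection_method: str            # how the data were originally captured
    acquired_on: date
    transformations: list[str] = field(default_factory=list)          # processing applied so far
    consuming_business_lines: list[str] = field(default_factory=list) # where the data travel internally

record = DataProvenanceRecord(
    dataset_name="retail_credit_applications",
    source="third-party credit bureau",
    produced_by="Data Engineering",
    collection_method="API extract",
    acquired_on=date(2022, 6, 1),
    transformations=["deduplicated", "income field normalised"],
    consuming_business_lines=["Retail Lending", "Model Risk Management"],
)
```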

The viability of this suggestion might be debatable. Tracking the full history of data is a demanding measure, and it is unclear to what extent it will be achievable for the firms affected, or whether they will be willing to dedicate the required resources. It is also unclear what restrictions would best safeguard the monitored data, especially if only certain business lines are trusted to handle them.

The report notes that the financial sector already has some data standards and regulations, such as the Basel Committee on Banking Supervision's Standard number 239 (BCBS 239), a set of principles that aims to strengthen banks' risk data aggregation and internal risk reporting and which applies to Global Systemically Important Banks.

Nevertheless, additional standards for certain elements of data governance, including data quality and documentation, retention, privacy, and security, would still need to be developed. The BCBS 239 principles also lack any rules on representativeness; the report suggests modifying the standard to address this gap.

The following are some of the practices suggested under this section of the report:

  • Implementing processes for tracking and measuring data flows within, into and out of firms.
  • Having a clear understanding and documentation of the provenance of data used by AI models.
  • Preparing guidance on the limitations and challenges of using alternative data.

Model risk

This part of the report focuses on the model risk of AI technology, its challenges and the best ways to address them.

In the context of financial services, according to the Federal Reserve's Guidance on Model Risk Management (SR11-7), a model is defined as a "quantitative method, system or approach that applies statistical, economic, financial or mathematical theories, techniques and assumptions to process input data into quantitative estimates".

Model risk is the potential loss that an institution could face due to a decision that is based on the output of internal models, if they have errors in their development, their use or implementation (Article 3(1)(11) of the EU Capital Requirements Directive).

Model risk may result in financial loss or damage to the reputation of financial institutions. The concept of model risk is normally broader than the model itself or the algorithms used. The model is not the only source of risks. Risks can also derive from the way that the model is used and its business application.

In the absence of clear regulatory direction, and by way of a template, some banks have started using the principles in the Federal Reserve's Guidance and the Prudential Regulation Authority (PRA) Supervisory Statement 3/18, "Model risk management principles for stress testing".

The report highlights that the AI models used in financial services are often complex, and that small firms may adopt more complex types of AI model because they are not restricted by legacy systems. They might also be incentivised to use more complex AI models because they do not have large volumes of data to handle.

As firms become more proficient in using AI, they are likely to start using more complex AI models for highly important use cases. This raises the question of what the best course of action is if these models do not perform as planned or if their outputs deteriorate over their operational lifecycle. Before putting any model into operation, therefore, firms need to have backup options and remediation plans. Stakeholders should also be informed in advance of the likelihood of failure, the team responsible, and the best course of action to take if an issue arises. This will enable firms to respond promptly and reduce the severity of the harm that customers could suffer.
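The sketch below illustrates one way a firm might operationalise this: monitoring a deployed model's recent performance and switching to a simpler, pre-approved backup when quality falls below an agreed floor. The metric, threshold and notification step are illustrative assumptions rather than anything specified in the report.

```python
# Illustrative sketch of a backup and remediation arrangement: route scoring
# to a pre-approved fallback model and alert the responsible team if the
# primary model's recent performance deteriorates. The AUC floor and the
# notification mechanism are hypothetical.
from sklearn.metrics import roc_auc_score

PERFORMANCE_FLOOR = 0.70  # agreed minimum acceptable AUC (assumed)

def notify_model_owners(message: str) -> None:
    # Placeholder for the firm's escalation/incident process
    print(f"ALERT: {message}")

def score_with_fallback(primary_model, fallback_model, X_recent, y_recent, X_new):
    """Use the primary model unless its performance on recent data has deteriorated."""
    recent_auc = roc_auc_score(y_recent, primary_model.predict_proba(X_recent)[:, 1])
    if recent_auc < PERFORMANCE_FLOOR:
        notify_model_owners(f"primary model AUC {recent_auc:.2f} below the agreed floor; fallback engaged")
        return fallback_model.predict_proba(X_new)[:, 1]
    return primary_model.predict_proba(X_new)[:, 1]
```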

Explainability of AI models is another crucial aspect that should be considered carefully in relation to regulated activities. The concern is that the way AI models function is often not sufficiently clear or explainable, i.e. the "black-box" problem, and it is at present not clear what level of explainability is required for regulated activities.

Explainability depends on the type of activity, the recipient of the explanation and the nature of the models. Regulating explainable AI models is quite different from regulating models that cannot be explained or interpreted; for the latter, the regulatory framework can only apply to their inputs and outputs.
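As an illustration of an input-and-output view of a model that cannot easily be interpreted internally, the sketch below uses permutation importance (a model-agnostic technique available in scikit-learn) to show which inputs most influence a toy classifier's predictions. The dataset, model and feature names are invented for the example and are not taken from the report.

```python
# Illustrative sketch: inspecting a "black-box" model through its inputs and
# outputs only, using permutation importance. The data, model and feature
# labels are toy examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["income", "debt_ratio", "age", "tenure", "utilisation"]  # hypothetical labels

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much predictive performance
# drops: a large drop suggests the output depends heavily on that input.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```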

It is worth noting that the issue of explainability of AI in financial services could be further complicated when the technology is supplied by another provider, or a chain of providers, who themselves lack visibility of how the system operates.

The severity of the impact of using AI depends on the end-users. For example, AI may result in financial services and products that are over-tailored to particular customers, with the effect of excluding other customers from accessing certain services: they could be denied access to credit or insurance cover because the AI deems them to be high risk. Firms could also suffer from deploying ineffective AI solutions, whether through financial loss (due to a poor credit algorithm) or regulatory sanctions for treating customers irresponsibly.

The following points have been proposed as good practices and follow up actions:

  • The Bank and the FCA should consider producing model risk regulations similar to the Guidance.
  • Produce guidelines setting out the required level of explainability for certain use cases.
  • Where explainability is not possible, safeguards are needed to prevent unwanted outcomes; such safeguards should themselves be easy to explain.
  • Implement an appraisal process for explainability approaches.
  • Facilitate regular assessments of AI application performance.
  • Engage in clear explanations of AI application risks and mitigation.

Governance

The last section of the report focuses on governance and its importance for the safe adoption of AI in financial services. Governance helps to ensure that accountability is enforced and creates a set of rules and policies that govern the use of AI.

The ability of AI to reach autonomous decisions may have an impact on the adoption of effective governance in financial services. Further, AI systems may operate alongside various existing governance functions, making it difficult to have a clear picture of the lines of accountability.

The existing governance frameworks in financial services, such as data governance, model risk management (MRM) and operational risk management, could be the starting point for AI models and systems. Firms can also produce a set of AI risk principles and map them to the existing risk frameworks.

Combining the existing governance frameworks into one overarching AI governance framework might be another good alternative. For example, MRM could be combined with data management to form a single framework to govern the use of AI.

Perhaps the most effective proposal made in the report is the creation of a cross-functional body or "council", chaired by a senior manager recognised under the Senior Managers and Certification Regime (SM&CR). This council could cover compliance, data, MRM and other areas so as to ensure that all potential risks within the relevant firm are considered and none is ignored or misunderstood.

Whichever of these proposals is adopted, the governance framework should take into account the underlying risks. For example, using AI for consumer credit decisions will require much more thorough due diligence than chatbot scenarios.

The "reasonable steps" concept under the SM&CR and the FCA Code of Conduct (COCON 4.2 - Specific guidance on senior manager conduct rules) might also be applicable to the use of AI. Such steps might include: (i) having an ethics framework and training in place; (ii) maintaining documentation and ensuring auditability; (iii) embedding appropriate risk management and control frameworks; (iv) a culture of responsibility; and (v) reporting and accountability between AI teams.

The report further discusses the significance of transparency and communication in the context of AI governance. It identifies the following two audiences:

  • Group 1 – model developers, compliance teams and regulators – firms have to be able to provide accurate information about any decisions made, and an assurance that any recommendations provided are reliable.
  • Group 2 – consumers who are affected by model decisions – they have to be informed when a model is being used for decision automation, what data were used and what features led to the decision. The best approach to transparency for consumers is still a work in progress: standardised approaches are likely to result in better outcomes for affected consumers.

The following actions are set out as good practices for AI governance:

  • Strengthening contact between data science teams and risk teams at an early stage of the model development cycle.
  • Establishing a central committee to oversee the development and use of AI.
  • Delivering training and understanding of AI (including on responsibility and accountability).

What's next?

The report represents a step forward in recognising the impact of the use of AI in financial services while flagging the barriers that could hinder its effective adoption.

We understand that the Bank and the FCA will publish a Discussion Paper on AI during 2022 to build on the work of the AIPPF. This Paper is expected to provide much-needed clarity on the regulatory framework and its applicability to AI, and will give an opportunity to stakeholders to share their views.

The regulatory aspects discussed in the report derive from sources which are technology-neutral; these sources were drafted without any consideration of AI or the implications of its use in financial services. Therefore, these sources may not always be suitable to accommodate the use of this novel technology.

We would expect that the increased use of AI will expedite the need to consider and amend the FCA Handbook and PRA Rulebook so as to provide clarity to regulated firms that use AI. In the near future, we will also need to think about the procurement of AI and how financial institutions can protect themselves contractually to satisfy evolving regulatory standards and ensure that AI is fit for purpose. This will necessitate wider and more regular engagement with stakeholders and those responsible for the design and deployment of AI in financial services.

In the meantime, the awaited Discussion Paper and the proposed voluntary codes of conduct might help, provided the latter are endorsed by the regulators. So, watch this space for further updates on the progress of this fascinating regulatory area.