Clifford Chance

Talking Tech
How to Manage AI-Specific Cybersecurity Risks in the Financial Services Sector: A Guide for Financial Institutions

Artificial Intelligence | Cyber Security | Banking & Finance | 20 May 2024

Artificial intelligence (AI) is not a new phenomenon in the financial services sector. For decades, financial institutions (FIs) have used AI to automate processes, enhance customer service, detect and deter fraud, and optimize investment strategies. However, as AI becomes more advanced and ubiquitous, it also poses new and complex cybersecurity risks that require careful management and mitigation.  

In response to the Biden Executive Order on Safe, Secure and Trustworthy Artificial Intelligence, the U.S. Department of the Treasury recently released a report titled "Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector" (the "Treasury Report"), which provides a comprehensive overview of AI challenges and opportunities and offers a set of recommendations for FIs and regulators. While the Treasury Report focuses on the cybersecurity aspects of AI, it notes that its recommendations can have broader applicability to other AI-related risks and challenges, such as ethical, privacy, and operational considerations. Moreover, the Treasury Report emphasizes that its guidance is not limited to FIs that use AI extensively or exclusively; rather, it applies to any FI that employs AI in any capacity or interacts with third parties that do so.

The Treasury Report is a significant step toward acknowledging and addressing the unique cybersecurity challenges posed by AI in the financial services sector. It signals that regulators are paying close attention to the potential risks and benefits of AI and are seeking to foster a collaborative and responsible approach to AI adoption and governance. While the Treasury Report does not propose any specific AI regulations or standards, it suggests that FIs should expect increased scrutiny and oversight from regulators and other stakeholders on their AI-related cybersecurity practices and capabilities. Therefore, FIs should proactively review and enhance their AI-related cybersecurity policies, procedures, and controls, as well as their communication and coordination with regulators and third parties.

This article summarizes the main broadly applicable themes and insights from the Treasury Report and provides practical guidance for those in the financial services sector who are responsible for overseeing and advising on AI-related matters.

Key Themes and Recommendations from the Treasury Report

The Treasury Report identifies four key themes that underpin the management of AI-specific risks in the financial services sector: education, collaboration, people, and data. For each theme, the Treasury Report provides a set of recommendations.

Education

The Treasury Report emphasizes the need for a common AI lexicon among financial services sector stakeholders, as well as the need for continuous learning and tracking of AI developments. The Treasury Report recommends that FIs and regulators develop and implement AI education and training programs, establish AI governance frameworks and policies, and leverage existing resources and best practices from industry associations, standards-setting bodies, and academic institutions.

There is no universally accepted definition of AI, and organizations may use different terminology to describe their AI activities and applications. For example, some may define "AI" broadly, encompassing machine learning, natural language processing, computer vision, and other techniques, while others may use more specific or technical terms, referring to deep learning, neural networks, or reinforcement learning.

The lack of a common definition may pose challenges for communication and collaboration within an institution as well as among FIs, regulators, and other stakeholders, and may hinder the establishment of consistent and appropriate standards and guidelines for AI. FIs are advised to research and arrive at a consistent definition or set of definitions within their organization. Some factors to consider when defining AI in a specific context include the following (an illustrative sketch appears after the list):

  • The purpose and scope of the arrangement, and the intended audience and users of the AI system or service.
  • The alignment and compatibility of AI definitions with applicable laws, regulations, and industry practices, and the potential gaps or conflicts.
  • The level of detail required and desirable to describe an AI system or service, and the potential benefits and risks associated with its use, as reflected in risk assessments and contractual provisions such as representations, warranties, and indemnities.
  • The flexibility of definitions, descriptions, and processes to accommodate the evolving and dynamic nature of AI technologies and applications.
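
As a purely illustrative starting point, the sketch below shows how an FI might capture its chosen AI taxonomy in a simple internal inventory. It is a minimal sketch in Python; the category names, record fields, and example entry are all hypothetical assumptions, not terms drawn from the Treasury Report.

```python
# A minimal sketch of an internal AI taxonomy and inventory.
# All category names and fields are hypothetical illustrations.
from dataclasses import dataclass, field
from enum import Enum


class AICategory(Enum):
    """Example tiers an FI might adopt when defining 'AI' internally."""
    MACHINE_LEARNING = "machine_learning"
    NLP = "natural_language_processing"
    COMPUTER_VISION = "computer_vision"
    GENERATIVE = "generative_ai"
    RULES_BASED = "rules_based_automation"  # some FIs may exclude this from "AI"


@dataclass
class AISystemRecord:
    """One entry in a hypothetical enterprise AI inventory."""
    name: str
    category: AICategory
    purpose: str                     # intended scope, audience, and users
    vendor: str | None = None        # None for systems built in-house
    risk_notes: list[str] = field(default_factory=list)


# Example: registering a fraud-detection model under the chosen taxonomy.
inventory = [
    AISystemRecord(
        name="txn-fraud-scorer",
        category=AICategory.MACHINE_LEARNING,
        purpose="Real-time transaction fraud scoring for retail banking",
        risk_notes=["model drift", "training-data privacy"],
    ),
]

for record in inventory:
    print(f"{record.name}: {record.category.value} - {record.purpose}")
```

Keeping the taxonomy in a structured form like this makes it easier to align internal definitions with the contractual and regulatory definitions discussed above, and to update categories as the technology evolves.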

Collaboration

The Treasury Report recognizes the importance of effective collaboration within and across FIs, as well as with regulators and other external parties. It recommends that FIs and regulators foster a culture of trust and transparency, share information and experiences, engage in dialogue and feedback, and coordinate on common standards and guidelines for AI.

FIs may wish to collaborate with:

  • US regulators, particularly financial regulators.
  • Foreign regulators if an FI is subject to such regulators' oversight.
  • Think tanks, such as the Brookings Institution, the Center for Data Innovation, and the Center for Responsible AI.
  • Academic institutions and bodies, such as the MIT Computer Science and Artificial Intelligence Laboratory, the MIT Initiative on the Digital Economy, the Stanford Artificial Intelligence Laboratory, and the Stanford Institute for Human-Centered Artificial Intelligence.
  • Industry groups, such as the American Bankers Association, the Bank Policy Institute, and the Chamber of Digital Commerce.

In each case, FIs are advised to think strategically about ways to engage, which may include proactive outreach, monitoring, responses to specific requests for comment, etc.

People

The Treasury Report acknowledges the critical role of human talent and expertise in the development, deployment, and oversight of AI. It recommends that FIs and regulators attract, retain, and develop AI talent, create diverse and inclusive teams, and ensure appropriate accountability and oversight of AI activities.

Demand for AI talent and expertise is increasing while supply remains limited. To address this challenge, FIs are advised to:

  • Create a clear and compelling value proposition and vision for AI in the financial services sector and communicate the benefits and opportunities to AI professionals.
  • Provide competitive compensation and benefits, as well as career development and progression opportunities, for AI professionals.
  • Invest in training and education programs, both internally and externally, to enhance the AI skills and competencies of current and future employees.
  • Collaborate with academic institutions, research organizations, and other industry partners to create and support AI talent pipelines and facilitate exchange of knowledge and best practices.

Data

The Treasury Report highlights the challenges and opportunities of data management and quality in the financial services sector and how these relate to AI. It recommends that FIs develop thoughtful data governance and risk management frameworks that account for nuances specific to AI.

Quality data is the foundation and fuel of AI, as it enables the development, validation, and improvement of AI systems and services. However, data processing poses significant challenges, which can be magnified with the use of AI. As such, FIs are advised to revisit and pay close attention to their risks and controls in the following areas (an illustrative checklist sketch appears after the list):

  • Data security.
  • Data privacy.
  • Data integrity.
  • Data governance.
  • Data access and sharing.
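
To make these controls actionable, an FI might maintain them as a machine-readable checklist and flag gaps during periodic reviews. The sketch below is a minimal, hypothetical illustration: the control domains mirror the list above, but each individual check is an assumed example rather than a regulatory requirement.

```python
# A hypothetical AI data-controls checklist keyed by the domains above.
AI_DATA_CONTROLS = {
    "data_security": [
        "Training and test data encrypted at rest and in transit",
        "Access to model training pipelines restricted and logged",
    ],
    "data_privacy": [
        "Personal data in training sets minimized or de-identified",
    ],
    "data_integrity": [
        "Provenance recorded for each training dataset version",
        "Checks in place for poisoned or anomalous training records",
    ],
    "data_governance": [
        "Named owner assigned to each AI-relevant dataset",
    ],
    "data_access_and_sharing": [
        "Third-party data sharing covered by contract and audit rights",
    ],
}


def report_gaps(completed: dict[str, set[str]]) -> None:
    """Print checklist items not yet evidenced, grouped by control domain."""
    for domain, checks in AI_DATA_CONTROLS.items():
        evidenced = completed.get(domain, set())
        for item in checks:
            if item not in evidenced:
                print(f"[GAP] {domain}: {item}")


# Example review: only the first data-security control is evidenced so far.
report_gaps({"data_security": {AI_DATA_CONTROLS["data_security"][0]}})
```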

Internal Risk Management and Third-Party Oversight

The Treasury Report recognizes that AI poses new and complex risks to the financial services sector and calls for FIs to enhance their internal risk management and third-party oversight capabilities. Specifically, the Treasury Report recommends that FIs should:

  • Implement sound IT governance and cybersecurity practices to ensure resiliency and security of their AI systems.
  • Adopt rigorous and transparent AI development and testing methodologies.
  • Identify and mitigate potential AI legal risks and liabilities, including with respect to intellectual property, data protection, privacy, consumer protection, contractual obligations, and regulatory compliance, and ensure that AI applications are consistent with applicable laws and regulations.
  • Align AI applications with organizational values, ethical principles, and social responsibilities, respecting the rights and interests of customers, employees, partners, and other stakeholders.
  • Monitor and report on AI performance and impacts, and adhere to relevant codes of conduct, standards, and best practices in the financial services sector and the broader AI ecosystem.

The Treasury Report also emphasizes the importance of effective third-party oversight and management for FIs that rely on AI-related vendors, suppliers, consultants, or service providers (Vendors). The Treasury Report suggests that FIs:

  • Review their Vendor due diligence practices and consider asking each Vendor to disclose, for example, the scope of AI use in the Vendor's products and services, material changes in the Vendor's AI use, and any incorporated third-party AI models.
  • Generally, consider and account for:
    • The level of transparency and explainability of the Vendor's AI systems and how they communicate the rationale, limitations, and uncertainties of AI outputs to the FI and its customers.
    • The data quality and security practices of the Vendor's AI systems and how they ensure the accuracy, completeness, timeliness, and confidentiality of the data they collect, process, and store.
    • The ethical and social implications of the Vendor's AI systems and how they address the potential risks and harms of bias, discrimination, unfairness, and other negative impacts.
    • The legal and regulatory compliance of the Vendor's AI systems and how they adhere to the relevant laws and regulations.
    • The governance and accountability mechanisms of the Vendor's AI systems, including how they monitor, audit, evaluate, and report on the performance, outcomes, and incidents of their AI systems, and how they handle feedback, complaints, and disputes.
  • Put in place clear and enforceable agreements that adequately allocate roles, responsibilities, and risks with respect to AI.
  • Monitor and evaluate Vendor performance and compliance on a regular basis, including by maintaining adequate documentation and audit trails of Vendors' AI-related activities and ensuring access to relevant data, models, and algorithms.

The Treasury Report refers to the FS-ISAC's recent Generative AI Vendor Evaluations & Qualitative Risk Assessment Guide and the Generative AI Vendor Evaluation & Qualitative Risk Assessment Tool as reference points for these assessments and analyses. A sketch of a simple weighted diligence score appears below.
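
One lightweight way to operationalize these disclosure topics is a weighted diligence questionnaire that yields a coverage score per Vendor. The sketch below is a hypothetical illustration only: the questions and weights are assumptions drawn from the topics above, and it is not a reproduction of the FS-ISAC tool.

```python
# Hypothetical weighted diligence criteria based on the topics above.
VENDOR_QUESTIONS = {
    "Discloses scope of AI use in products and services": 3,
    "Notifies the FI of material changes in AI use": 3,
    "Identifies incorporated third-party AI models": 2,
    "Explains rationale, limitations, and uncertainty of AI outputs": 2,
    "Documents data quality and security practices": 3,
    "Addresses bias, discrimination, and fairness risks": 2,
    "Demonstrates legal and regulatory compliance": 3,
    "Maintains audit, monitoring, and incident-reporting mechanisms": 2,
}


def diligence_coverage(answers: dict[str, bool]) -> float:
    """Return the fraction of weighted diligence criteria a Vendor meets."""
    total = sum(VENDOR_QUESTIONS.values())
    met = sum(weight for question, weight in VENDOR_QUESTIONS.items()
              if answers.get(question))
    return met / total


# Example: a Vendor meeting only the first three disclosure criteria.
answers = {question: i < 3 for i, question in enumerate(VENDOR_QUESTIONS)}
print(f"Diligence coverage: {diligence_coverage(answers):.0%}")
```

A score like this is only a triage aid; low-coverage Vendors would still warrant qualitative follow-up rather than automatic rejection.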

Practical Guidance

Some practical tips to help FIs navigate the complex and dynamic AI landscape include the following:

  • Assess organizational risk tolerance and goals in the face of AI as an enabler and a tool as well as a new and evolving technology that represents a range of risks.
  • Embed AI-specific risk management within existing enterprise risk management programs. The three lines of defense approach, which FIs generally follow, can provide a solid framework in which AI is addressed alongside other technologies across banking products, processes, and systems. Alternatively, AI can sit alongside other developments addressed by an organization's existing principles-based approach, under which senior leadership determines organizational goals, policies, and risk tolerance.
  • Develop and implement a specific AI risk management framework to help identify and manage AI risks in the specific FI's context.  This would enable the FI to map AI risks against existing controls and create incremental relevant mitigation plans.
  • Integrate AI plans into enterprise risk management functions, such as cybersecurity risk, third-party risk management, and technology risk.  Any AI approach should be cross-functional and include representation from legal, compliance, marketing, product, and other functions. An FI may wish to designate a specific individual with overall AI risk responsibility or create a committee or a center of excellence. Relatedly, an FI should consider empowering and holding accountable the Chief Data Officer or a similar individual to map the data supply chain and understand implications for AI risk management as well as future planning.
  • Ensure that existing cybersecurity practices map to and secure AI systems and data, including training and test data. Revisit cybersecurity practices, including by benchmarking against the NIST Cybersecurity Framework, to identify gaps and opportunities (see the illustrative mapping after this list). Consider implementing additional multi-factor authentication mechanisms to enhance cybersecurity and fraud protections.
  • Effectively manage vendors and monitor vendor relationships to account for AI risks and opportunities.
  • Review and update existing contracts, policies, and procedures to ensure that they reflect the current state of the art and best practices in AI.
  • Provide and participate in education and training programs related to AI and cybersecurity. Stay informed and updated on the latest developments and trends in AI.
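
To illustrate the NIST Cybersecurity Framework mapping suggested above, the sketch below pairs each of the framework's five core functions with an example AI-specific action. The function names come from the framework itself; the actions are hypothetical examples an FI might record, not NIST or Treasury content.

```python
# Example AI-specific actions mapped to the NIST CSF core functions.
# The actions are illustrative assumptions, not framework text.
NIST_CSF_AI_MAP = {
    "Identify": ["Inventory AI systems, models, and training/test datasets"],
    "Protect":  ["Apply MFA to model-management and data-labeling tools"],
    "Detect":   ["Monitor for model drift and anomalous inference traffic"],
    "Respond":  ["Maintain a playbook for AI incidents such as data poisoning"],
    "Recover":  ["Retrain or roll back models from trusted checkpoints"],
}

for function, actions in NIST_CSF_AI_MAP.items():
    for action in actions:
        print(f"{function}: {action}")
```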