Clifford Chance

Fintech

Talking Tech

IOSCO publishes AI guidance for market intermediaries and asset managers

Report details six measures for IOSCO members to consider

12 October 2021

The Board of the International Organization of Securities Commissions (IOSCO), the international body that brings together the world's securities regulators, published its final report in September 2021 containing guidance on the use of artificial intelligence (AI) and machine learning (ML) by market intermediaries and asset managers.

This guidance, which follows the publication of IOSCO's consultation report on the subject in June 2020, is intended to assist IOSCO members in supervising market intermediaries and asset managers using AI and ML technology.

Use of AI and ML by market intermediaries and asset managers

Increased data availability and computing power have led to growing use of AI and ML by asset managers and financial intermediaries. AI and ML can create significant efficiencies and benefits for firms and investors, including by increasing execution speed and reducing the cost of investment services. Currently this technology is being used in areas such as:

  • advisory and support services (e.g. robo-advisors and automated investment advisors);
  • risk management (e.g. early warnings of potential customer defaults, market liquidity risk assessments, staff e-mail monitoring, and tracking portfolio manager behaviour);
  • client identification and monitoring (e.g. KYC checks, fraud detection and money laundering monitoring);
  • selection of trading algorithms and predicting market movement (e.g. selection of trading strategy and/or a broker depending on factors such as market situation and trading objectives, or using predictive data analytics to identify potential flash crash events); and
  • optimising portfolio management (e.g. review of diverse data sets to generate trade ideas, forecast asset prices or price investment products).

Currently, AI and ML are largely used to support human decision-making – for example, generating investment recommendations that are then acted upon by a human decision-maker. However, IOSCO's discussions with buy-side firms indicated that use of this technology is typically at a nascent stage, which could mean the balance between human and machine decision-making shifts in the future.

The use of AI and ML within financial markets has become a focus for regulators across the world because of the increased risks it may bring, with the potential to impair the efficiency of financial markets and to harm investors and other market participants. Among these risks, IOSCO identifies operational resilience concerns and inadequate oversight and control, which could affect the quality or fairness of the operations performed.

IOSCO report – six measures

IOSCO's report details six non-binding measures for IOSCO members to consider adopting to address conduct risks associated with the development, testing and deployment of AI and ML.

  • Governance: Regulators should consider requiring firms to have designated senior management responsible for oversight of AI and ML technology, including its development, testing, deployment, monitoring and controls. This should include documented internal governance frameworks with clear lines of accountability. There should also be one or more designated senior individuals, with the relevant skill set and knowledge, to sign off on initial deployment and substantial updates.
  • Development, testing and ongoing monitoring: Regulators should require firms to adequately monitor and test algorithms, validating the results of a given AI and ML technique on a continuous basis. The technology should also be tested prior to deployment in a segregated environment, to ensure that it behaves as expected in both stressed and unstressed market conditions and that it operates in compliance with regulatory obligations (a minimal illustration of such checks appears after this list).
  • Knowledge and skills: Regulators should require firms to have adequate skills, expertise and experience to ensure that controls over the AI and ML the firm uses are developed, tested, deployed, monitored and overseen. Compliance and risk management functions should be able to understand and challenge the algorithms and to conduct due diligence on third-party providers.
  • Operational resilience: Regulators should require firms to understand their reliance on third-party providers and to manage their relationships with them, including through oversight and performance monitoring. There should be a clear service level agreement and contract in place that ensure adequate accountability and clarify the scope of the outsourced functions and the responsibility of the service provider. The agreements should also set out clear performance indicators, as well as rights and remedies for poor performance.
  • Transparency and disclosure: Regulators should consider the level of disclosure required of firms using AI and ML. This includes requiring firms to disclose meaningful information to customers and clients about uses of AI and ML that impact client outcomes, and considering what types of information regulators themselves may need from firms using AI and ML in order to exercise appropriate oversight.
  • Systems and controls: Regulators should consider requiring firms to have appropriate controls in place to ensure that the data underpinning the performance of AI and ML is of sufficient quality to prevent biases and is sufficiently broad for a well-founded application of AI and ML.
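
To make the testing and data-quality measures more concrete, the sketch below illustrates, in Python, what a minimal pre-deployment validation gate might look like. It is purely illustrative: the model, the stress transformation, the data-quality checks and the approval thresholds are all hypothetical assumptions for this example, not anything prescribed by IOSCO's report.

```python
"""
Illustrative pre-deployment validation gate for an ML model, sketching two
IOSCO themes: segregated testing under stressed and unstressed conditions,
and data-quality checks. All names and thresholds are hypothetical.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=42)

def data_quality_ok(X: np.ndarray, y: np.ndarray) -> bool:
    """Basic checks: no missing values and no severe class imbalance."""
    if np.isnan(X).any():
        return False
    minority_share = min(np.mean(y == 0), np.mean(y == 1))
    return minority_share >= 0.10  # hypothetical imbalance threshold

def stress(X: np.ndarray) -> np.ndarray:
    """Hypothetical stress scenario: triple each feature's deviation
    from its mean, mimicking a spike in market volatility."""
    return X.mean(axis=0) + 3.0 * (X - X.mean(axis=0))

# Synthetic stand-in for historical market features and binary labels.
X_train = rng.normal(size=(1000, 5))
y_train = (X_train[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)
X_test = rng.normal(size=(200, 5))
y_test = (X_test[:, 0] > 0).astype(int)

# Gate 1: data underpinning the model must pass quality checks.
assert data_quality_ok(X_train, y_train), "data-quality gate failed"

model = LogisticRegression().fit(X_train, y_train)

# Gate 2: validate in a segregated environment under unstressed
# and stressed market conditions before any sign-off.
base_acc = accuracy_score(y_test, model.predict(X_test))
stress_acc = accuracy_score(y_test, model.predict(stress(X_test)))

THRESHOLD = 0.70  # hypothetical minimum acceptable accuracy
approved = base_acc >= THRESHOLD and stress_acc >= THRESHOLD
print(f"baseline={base_acc:.2f} stressed={stress_acc:.2f} approved={approved}")
```

In practice, a gate of this kind would sit within the segregated test environment and governance sign-off process described above, with the results documented for the designated senior individuals who approve deployment.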

Helpfully, the report recognises that not all measures will be equally relevant to all uses of AI and ML, leaving room for proportionality considerations. The activity being undertaken, the complexity of that activity, risk profiles, the degree of autonomy of the AI and ML applications, and the potential impact of the technology on client outcomes and market integrity will all be important factors in judging what is proportionate.

Another useful aspect of the report is the annexes. Annex 1 provides a helpful overview of what regulators around the world are doing in relation to AI and ML, covering: Abu Dhabi (FSRA); UK (FCA); Canada (IIROC); Germany (BaFin); China (CSRC); France (AMF); Singapore (MAS); Netherlands (AFM); US (SEC, FINRA); and Luxembourg (CSSF). Annex 2 covers activity by supranational bodies such as the Financial Stability Board, the International Monetary Fund (IMF) and the Organisation for Economic Co-operation and Development (OECD).

Key takeaways for buy-side firms

The IOSCO report serves as a good indicator of the standards that will develop and of the areas on which regulators will focus their supervisory efforts in relation to AI and ML. Those monitoring developments will note that some of the themes emerging in the IOSCO report, such as supply chain management and transparency, are also present in proposed legislation targeting certain types of AI, such as the EU's proposed legislation on artificial intelligence (for more information see here).

Key takeaways from the IOSCO report for buy-side firms include:

  • AI and ML governance procedures and controls may require uplift: In designing or updating oversight and risk management controls, firms should consider how to include key features highlighted by IOSCO, such as: (1) stress testing in a segregated environment; (2) consideration of data suitability and potential bias; (3) ongoing review throughout the deployment and use lifecycle; and (4) clear and appropriate accountability lines through the firm up to senior management.
  • Recruitment, talent pipeline management and staff training are critical: Requirements for governance and oversight throughout the lifecycle of AI and ML development and use, together with growing use of these technologies, mean that the number of staff requiring technical expertise is likely to increase. This includes a regulatory expectation that people in senior positions will have sufficient technical understanding to challenge and steer the use of AI and ML appropriately.
  • Due diligence, robust contracting and ongoing review will be key in managing AI and ML providers: Truly understanding any AI- and ML-related software and services will be an important aspect of supplier due diligence, whether agreements are with third parties or intra-group. This will include considering the potential impact on business operations, on clients and on the market if such services or software were to malfunction or become unavailable, and ensuring that any malfunction would be identified. This may become challenging to achieve, particularly as use of AI and ML becomes more sophisticated – for example, where unsupervised machine learning is used. In addition, suppliers may be reluctant to reveal too much information about what might well be their greatest asset. Nevertheless, whatever the underlying technology, negotiating clear and robust outsourcing agreements and implementing appropriate ongoing oversight mechanisms will be key in helping to manage operational resilience and third-party risk.
  • M&A and investment due diligence: Similarly, understanding a target's use of AI and ML, and the accompanying governance framework, should feature within due diligence reviews when considering joint ventures, mergers, other collaborations or investments in other companies. This is particularly important where AI and ML are used to meet anti-money laundering and Know Your Customer requirements, which have become subject to increased regulatory scrutiny.
  • Consider how AI and ML usage could be appropriately disclosed to clients: The report's focus on transparency and disclosure echoes a drive towards client empowerment and process visibility in other areas – including requirements relating to automated decision-making under data protection laws – and sits within overarching requirements under various regulations for the protection of customer interests. Firms will need to consider what level of disclosure is necessary and appropriate in the circumstances – in particular, given the role that AI and ML are playing, the level of human intervention involved, and the type and sophistication of the client – whilst also seeking to protect any competitive advantage that such technology may confer.