Clifford Chance

Talking Tech

ICYMI: The U.S. Government Launches AI Accountability Consultation

Artificial Intelligence | United States | 28 April 2023

On April 11, President Biden's administration quietly dropped a consultation that might lead to a slow revolution in the U.S.'s approach to AI regulation. The Department of Commerce's National Telecommunications and Information Administration (NTIA) published a request for comment (RFC) on how to achieve "trustworthy AI." The comments window is open until June 12, after which the NTIA will draft a report on AI policy development.

The NTIA's AI Accountability RFC

Although this RFC builds on existing administration frameworks (namely, the White House Office of Science and Technology Policy's Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology (NIST) AI Risk Management Framework), the document reads with a sense of urgency, likely driven by the recent explosion of Large Language Model (LLM) systems such as OpenAI's ChatGPT and generative image tools such as Midjourney and Stable Diffusion.

The technical focus of the RFC is AI assurance. This "assurance" is a set of tools, such as audits, testing and public disclosure, that help to "assure" that entities are creating systems that are "legal, effective, ethical, safe, and otherwise trustworthy." The NTIA wants to hear from all stakeholders.

The RFC name-checks a range of legislative initiatives, including the European Union (EU) Digital Services Act requiring audits of very large online platforms' systems, the draft EU AI Act requiring conformity assessments of certain high-risk AI tools before deployment, New York City Local Law 144 requiring bias audits, and bills introduced in Congress covering algorithmic impact assessment or audit provisions. The RFC also refers to various private sector assurance initiatives such as Microsoft's Responsible AI Standard.

State regulators are considering their own compulsory AI accountability mechanisms; for example, Colorado's draft regulation covering algorithms in life insurance (proposed February 1, 2023).

Questions posed by the RFC

The RFC focuses on the purpose of AI accountability mechanisms and whether they should be incorporated into existing mechanisms covering human rights, privacy protection, security, and diversity, equity, inclusion, and access.

The RFC grapples with the complexity of the lifecycle of AI development, from selecting training data, to developing foundation models, to deployment. It highlights the developing tension between the "product safety" approach taken by, for example, the EU in its draft AI Act, and an approach focused more on "deployment context," which seems to be the emphasis in, for example, the U.S. and the UK.

What's next?

The NTIA will distill its proposals and the responses into a report. That report might make some radical proposals, including mandating impact assessments, requiring independent third-party audits, creating a system of certification, and "making test data available for use."

There is likely to be a wave of feedback. This consultation represents a key step in the development of the U.S.'s approach to AI regulation, as well as forming part of a global process that may bring as much of a revolution in how AI is regulated as in what AI can do.

Organizations that may be subject to future AI audits, certifications and assessments, or to any of the regulatory proposals that may flow from this process, should consider responding to the RFC. For more information on this or any tech policy issues, please contact me or any member of our Global Tech Group.