Clifford Chance

Talking Tech

Five Lessons We Learned from the U.S. Senate Hearing on AI Oversight

Artificial Intelligence | 17 May 2023

On May 16th, U.S. Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), Chair and Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, held a hearing on oversight of artificial intelligence. Rather than focus on the testimony of the three witnesses (Sam Altman, Chief Executive Officer of OpenAI; Gary Marcus, Professor Emeritus at New York University; and Christina Montgomery, Vice President and Chief Privacy and Trust Officer at IBM), I want to talk about what we learned about potential regulation of AI by the U.S. federal government. Senator Blumenthal opened the hearing by laying out the principles that should guide AI regulation: transparency, accountability, and limits on use. That was the first lesson. Here are four others:

A Separate AI Agency or Special Commission May Be in Our Future

Several of the senators debated whether regulating artificial intelligence requires a separate agency or commission focused on the unique challenges of this technology. Senator Booker likened generative AI to the introduction of the car to New York City: it solved the problem of manure all over the streets, but it created its own hazards and challenges that spawned agencies such as the National Highway Traffic Safety Administration. Two of the three witnesses also favored a separate agency. One senator warned that an agency without the requisite technical knowledge would be ineffective, and another noted that, in the U.S., agencies often become captive to the very industries they are supposed to regulate. Even so, it is entirely possible that the U.S. will create an agency or special commission as part of AI regulation.

Large Creators of AI Are in the Crosshairs, or at Least Under the Microscope

One senator warned that keeping the most cutting-edge generative AI technology in the hands of those with the most funds could create a technocracy overlaying an oligarchy. The subcommittee also asked whether it is better or worse for the core of generative AI to be held by a few. One senator suggested creating a law that specifically allows individuals to sue AI creators if they are harmed by the technology, and a debate ensued over whether that ability already exists under Section 230 and other current laws. Both the senators and the witnesses voiced concern that whatever regulation comes next should not crowd out or unfairly burden start-ups and smaller AI creators. On balance, it seems that regulation will likely focus on those with the most power, money, and expertise to develop AI.

The Mode of Regulation May Be Multifaceted

As evidenced by the EU AI Act being revised before it even goes into effect, legislators are still trying to figure out how to regulate AI effectively without hampering the great opportunities it presents. One senator's question laid out a fundamental choice for AI regulation: should the rules restrict the capabilities of AI models themselves, or restrict what can be done with them? Based on the conversation among the senators and witnesses, it seems likely that U.S. regulation will attempt to tackle both.

Child Safety, Disinformation, and Privacy Are Top of Mind

Several senators acknowledged that generative AI is already being used by children and that creators should not expect to escape regulation by stating that their tools are intended only for adults. There was also a fair amount of discussion about the proliferation of disinformation that AI has already enabled and will continue to enable. The subcommittee also pressed the witnesses on what they thought should be in a national privacy law, especially in the context of AI. While a distinction emerged between personal data collected from the open web and personal data input into a tool by a user, the rights of users to opt out or have their data deleted seemed to be a priority. Even if these topics don't make it into the first piece of legislation from the U.S. Congress, they will likely be part of the conversation. (Bonus note: the subcommittee mentioned that the disinformation act is likely to be reintroduced, highlighting this concern.)

I'll close this post with the same observation Senator Blumenthal used to close the hearing: many good questions were asked, but no real answers were provided. Those "answers" will likely not arrive before the Senate feels it needs to introduce AI regulation, which means we should all be prepared for some trial and error in the next phase of U.S. AI regulation. Here we go…