
Clifford Chance

Responsible AI in Practice

A Global Report

Responsible AI in Practice: Public Expectations of Approaches to Developing and Deploying AI

Public awareness of AI is increasing and, with it, questions about how AI will be used and the impact it will have on our lives. To help identify the pressures faced by policymakers and the demands of stakeholders, our report, produced in partnership with Milltown Partners – Responsible AI in Practice – analyses the results of focus groups of policy-informed individuals conducted in the US, the UK and Germany. Groups were asked about issues such as bias, consent, copyright, transparency, content moderation and AI governance. We found that people's attitudes towards AI are well-developed and nuanced. They are optimistic about the potential of AI, but there is work to do to demonstrate that it will be used responsibly and safely.

Six key insights from our research:

People are aware that AI could benefit society, but could also create risks:

  • Participants recognised that AI can have a positive impact on areas such as medicine, science and productivity, but they were concerned that it may have a negative impact on jobs and equality. Companies should consider how they communicate the steps they are taking to address AI risks.

Public perceptions of AI are heavily influenced by perceptions of high-profile technology companies more generally:

  • Participants' views of AI were influenced by associations they made with topical issues affecting a number of leading technology companies, such as safeguards for content moderation and online safety. Companies can take this into account by highlighting the aspects of their products that may reassure the public.

Public attitudes about AI do not fall into opposing viewpoints:

  • On issues such as the use of data, people's views often divide into opposing camps, such as privacy versus national security. Participants' views on AI were not so clearly divided, so companies have an opportunity to educate stakeholders about AI and to shape the public debate.

People want information and choices about how AI is used in their lives:

  • Participants wanted companies to provide information about how AI will affect them so that they can make informed choices. This was particularly the case for uses of AI with a significant immediate impact, such as assessing eligibility for loans and other financial products. Companies should therefore consider the different levels of information their users and stakeholders expect for different AI use cases.

Growing public awareness of the impact of AI means companies face increased scrutiny:

  • Participants were aware of the risks around bias, transparency and accuracy, but expressed less concern about the existential risks posed by AI. Companies seeking to earn public trust can focus their AI strategies on the impact that AI will have on people's lives.

Participants expect regulators and companies to collaborate to ensure that AI technologies are developed and used responsibly:

  • Participants felt that companies have an important role to play in realising the benefits of AI safely, and wanted companies and governments to work together to develop effective guardrails, regulation and standards swiftly. This suggests there is a window of opportunity to work with policymakers at regional, national and international levels.

To read the findings in full, download our report.
