Understanding the opportunities and challenges AI and tech present for business
Artificial Intelligence (AI) has enormous transformative potential, with a wide range of applications and significant benefits for businesses across sectors. The development, provision and use of AI also need to navigate a number of risks and practical challenges, including ethical use, data governance, IP and ownership, AI liability, cyber security and resilience, as well as a multi-layered and quickly evolving regulatory landscape.
Rapid advances in AI are having a significant impact on our clients' business models, growth strategies and day-to-day decision making. Our experts examine the big questions arising in AI deployment and provide holistic, pragmatic legal advice to help our clients strategically manage risk as they explore AI opportunities.
Latest reports
Global developments in AI regulation
In this extract from a recent Clifford Chance webinar, we explore how different jurisdictions including the US, EU, UK, China and Singapore are taking different approaches to AI regulation but ultimately want the same thing – responsible AI.
In this extract from a recent Clifford Chance webinar, we explore data transfers and localisation, cybersecurity and the latest regulatory developments and enforcement trends in APAC.
Asia Pacific is a first mover in relation to AI regulation. Mainland China, in particular, has had city or regional regulations in place for some time and, more recently, enacted national AI regulations targeted at particular types of AI services or use. Across APAC, however, approaches vary significantly in relation to regulating AI. We explore AI-related legislative developments across APAC.
Public awareness of AI is increasing, and with it questions about how AI will be used and the impact it will have on our lives. To help identify the pressures faced by policymakers and the demands of stakeholders, our report produced in partnership with Milltown Partners – Responsible AI in Practice – analyses the results of focus groups of policy-informed individuals conducted in the U.S., the UK and Germany.
As legal, technology and risk-management teams collaborate to support business-critical decisions, establish forward-looking frameworks and embed responsible AI in company strategy, being able to assess and advise on AI with a holistic understanding of the changing legal and policy landscape has never been more important. Here our experts examine some of the big questions to address when exploring generative AI opportunities.
Devika Kornbacher, Co-Chair of the Global Tech Group and Office Managing Partner in Houston, speaks to BBC News about OpenAI's ChatGPT service after the company was accused of imitating Hollywood actress Scarlett Johansson's voice for its AI chatbot.
The European Union has approved the landmark EU AI Act, the hotly debated, first-of-its-kind regulation of artificial intelligence that will apply in the EU and beyond. Dessi Savova, Partner and Head of the Clifford Chance Continental Europe Tech Group, speaks to BBC News about this important chapter in global AI regulatory history and the finalisation of the EU AI Act's adoption process after more than three years of debate over what the right regulation should be.
Phillip Souta, Global Director of Tech Policy, speaks to BBC News about the latest developments in UK AI regulation. He discusses how the UK is prioritising safety and positioning itself as a sophisticated marketplace and regulatory environment. In terms of research, development and investment, in 2022 the UK attracted more investment in the AI space than France, Germany and the rest of the European Union combined.
Dessislava Savova, Partner and Head of the Continental Europe Tech Group, speaks to BBC News World Business Report to share her insights just before the negotiations in which the European Parliament and the Council of the EU reached agreement on the outstanding points of the EU's Regulation on artificial intelligence on 9 December.
The final phase of developing the EU Artificial Intelligence Act is underway. Dessislava Savova, Partner and Head of the Continental Europe Tech Group, speaks to BBC News about the European Parliament's vote.
As the EU AI Act moves ahead in Parliament, Dessislava Savova, Partner and Head of the Continental Europe Tech Group, speaks to BBC News on EU and global regulation.
The AI Act will unquestionably have a global impact on regulators and businesses around the world.
The EU’s Artificial Intelligence Act (EU AI Act) will have a significant impact on employers and HR professionals who use, or plan to use, AI systems in their ...
AI is growing rapidly, but how do you control and regulate it?
The Hong Kong Office of the Privacy Commissioner for Personal Data (PCPD) has published its "Artificial Intelligence: Model Personal Data Protection Framework" (Model Framework). The Model Framework is a valuable guide for organisations in Hong Kong that seek to procure, implement and use AI systems that handle and process personal data.
The recent approval of the EU Artificial Intelligence Act (EU AI Act) marks a significant shift in the legal landscape surrounding the use of artificial intell...
As jurisdictions race to understand and build appropriate regulatory and governance structures around artificial intelligence, the EU is frequently regarded as the frontrunner. However, the US is forging its own federated approach and isn’t necessarily behind. For organizations, is alignment with one or the other the best path to success? Ultimately, the choice won’t be either-or.
The EU institutions have finally reached a political agreement on the EU's landmark Artificial Intelligence Act (AI Act), following the conclusion of their fif...
The summit made significant progress in a number of areas including the publication of several research reports, the establishment of an international AI Safety Institute, and the signing of the Bletchley Declaration to drive a shared understanding of and approach to AI’s risks and opportunities.
The First-tier Tribunal held that the UK Information Commissioner's Office (ICO) did not have jurisdiction to issue its 18 May 2022 enforcement and monetary penalty notices, which alleged breaches of the UK and EU General Data Protection Regulations (together, GDPR), to Clearview AI Inc (Clearview).
ChatGPT can provide medical information, but should it be classified as a medical device? It's a question that Germany's Federal Institute for Drugs and Medical Devices (BfArM) was confronted with in an open letter from a Hamburg-based law firm, which suggests that ChatGPT – a human-trained, OpenAI-based chatbot that draws its knowledge from freely accessible online sources – falls under the regulations applicable to medical devices in Germany and Europe.
Given this strong messaging, most corporate organizations are asking: "What does the EO mean for me?" Since the EO was announced, we have seen a lot of speculation about its implications. This article, however, focuses on the key practical elements of the EO and what they mean for organizations in the U.S. and globally.
Tokyo Partner Michihiro Nishi speaks to David Mitchell from 39 Essex Chambers about current thinking on the regulation of generative Artificial Intelligence (AI) in Japan and the future of legal disputes in Japan surrounding generative AI.
This podcast has been republished with the permission of 39 Essex Chambers.
On 14 June 2023, the European Parliament voted to adopt its negotiating position on the proposed EU regulation on AI (AI Act). This is another major step towards the adoption of the proposed AI Act, undoubtedly one of the most important and anticipated pieces of legislation of the past few years.
On 11 April 2023, the Cyberspace Administration of China (CAC) published a consultation draft (the Administrative Measures for Generative Artificial Intelligen...
On April 11, President Biden's administration quietly dropped a consultation that might lead to a slow revolution in the U.S.'s approach to AI regulation. The Department of Commerce's National Telecommunications and Information Administration (NTIA) published a request for comment (RFC) on how to achieve "trustworthy AI." The comments window is open until June 12, after which the NTIA will draft a report on AI policy development.
The Aye-aye (Daubentonia madagascariensis) is a long-fingered lemur native to, you guessed it, Madagascar – but my voice assistant told me it was a Canadian rock band (called Eye Eye).
It is highly likely that you have your own experience of the (current) limitations of artificial intelligence in similarly superficial contexts. However, when complex technology is used in the insurance value chain, deficiencies such as algorithmic bias can have discriminatory and other significant consequences for policyholders, such as higher premiums, refusal of insurance cover or rejection of claims.
The models underlying popular generative AI tools such as ChatGPT, Bard and DALL-E are being trained on vast amounts of data sourced from the internet.
The speed at which Generative AI has gained traction in both businesses and our daily lives has re-ignited the debate around the potential for harm associated with highly automated systems deployed at scale.
Countless social media posts, images and articles have been generated using AI tools such as ChatGPT, Stable Diffusion and DALL-E. But who – if anyone – owns the copyright in the outputs, and do the tools themselves infringe copyright? Our IP & Tech//Digital teams take a deep dive into the issues and the recent infringement claims.
Given the rapid uptake in the use of artificial intelligence (AI) including machine learning (ML) technologies in the financial services sector, the sector and its regulators have enjoyed a head start in exploring and seeking to navigate the issues that arise. The sector is therefore well-positioned to demonstrate how a pro-innovation, context-specific and risk-based approach to regulating AI can succeed.
When a track by the artist "Ghostwriter" was uploaded and then promptly removed from streaming services in April, it was the latest example of one of 2023's most astonishing trends. The track, 'Heart on My Sleeve', sounded as if it were sung by two of the world's biggest stars, Drake and The Weeknd. In fact, it was created by someone who had used an AI tool to make his voice sound like theirs.
In recent months, the AI Act and the Council's approach have given rise to several responses and concerns from different industries. We discuss some of the key concerns in more detail in this article.
On 30 March 2023, the Italian Data Protection Authority (the Garante) issued an interim emergency decision ordering OpenAI LLC ("OpenAI") to immediately stop the use of ChatGPT to process the personal data of data subjects located in Italy, pending further investigation.
The arrival of 'generative AI' that can produce new content (text-based responses to queries, images, audio, video and even code) marks a new and exciting phase in the development of mainstream AI applications. The starting gun has been fired for a race to capture the market.
A class action has been filed against GitHub Copilot alleging violations of the rights, including copyright, of authors who have created or contributed to codebases stored as public repositories on GitHub.
A year after its announcement, GitHub Copilot was launched last month to the public at large. The tool promises to "fundamentally change the nature of software development" by providing AI-based coding suggestions, saving developers time and effort. What are the intellectual property implications for those who build or buy software created using Copilot?
The EU's Markets in Crypto-assets Regulation (MiCAR) introduces an EU regulatory framework for the issuance of, intermediating and dealing in crypto-assets. Pa...
Data regulation is rapidly developing across the Asia Pacific (APAC) region. Businesses need to understand how these regulations will affect their strategies a...
Cybersecurity is not just a problem for the IT department. It is an enterprise-wide issue.
In this extract from a recent Clifford Chance webinar, we explore g...
From medical microrobots to patient digital twins to generative AI for the creation of synthetic health data, we take a look at the next wave of tech innovations creating opportunities and challenges for healthcare and life sciences in 2024 and beyond.
This year will see a surge of digital regulation and enforcement. AI is transforming both business strategies and legal landscapes, privacy and cyber laws are ...
The data centre industry continues to develop at an increasing pace due to technological advancements and market trends. Significant global market growth, risi...
While 2023 saw ongoing uncertainty and high-profile financial sector failures, it also brought regulatory progress around the world, including for landmark regulations on digital assets and AI. In 2024, we will see the next stage of pioneering regulation as blueprints and best practices begin to emerge, alongside compliance challenges.
The EU tech regulatory landscape is going through a major transformation, with an unprecedented number of developments in recent years. This not only relates t...
Competition authorities around the world have increased their focus on fintech as part of a broader rise in intervention in financial services and tech markets...
In recent years, China has developed its legal framework regulating data and personal information (PI) with the promulgation of the PRC Cybersecurity Law in 20...
The data centre industry is poised for growth in 2023 due to increased demand from businesses. However, factors such as higher costs, a slowing economy, new capacity challenges and increased regulation driven by sustainability concerns about energy and water consumption will impact growth. The pandemic has fuelled the growth of the global data centre market, which is projected to reach 235 billion euros by 2026, a compound annual growth rate of 4.5%. Companies must consider the latest tech trends when selecting a data centre partner or colocation provider.
Last year was a tough one for fintech with the collapse of a number of high-profile industry players, as well as wider economic pressures including the war in Ukraine, supply chain challenges and high inflation.
Evolving technologies, increased digital connectivity, cyber risk, geopolitical tensions, climate change, supply chain disruption and changing markets are shaping government policies, regulation and legal risk in relation to use of data and technology.
Space tech is not a future concept – it's here and now and offers many investment opportunities. The space industry is growing rapidly and has expanded by 70% ...
Approximately US$2 billion worth of 'land' has changed hands so far this year without a single human ever setting foot on it and with the knowledge that no hum...
The regulatory landscape for digital services is being fundamentally redefined in the European Union (EU), with a boom in the number of legislative initiatives...
Companies across a wide range of sectors are exploring the commercial potential of the metaverse but will need to navigate a complex patchwork of existing and ...
The metaverse has been described as the future of the internet – a digital world where we will live, work and play and which has attracted enormous investment ...
The Digital Markets Act (DMA) ushers in a new era for the digital sector in the EU, as compliance will require some of the most influential digital companies t...
As a growing number of tech companies invest heavily in the metaverse – which allows users to live, work and play in alternative virtual worlds – we explore th...
Jane Chen, Stella Cramer, Jonathan Kewley, Devika Kornbacher, Dessislava Savova and Phillip Souta looked at the global development of regulatory frameworks for AI, including emerging trends and diverging approaches being taken in different jurisdictions.
AI, Fairness, Bias and Law
Herbert Swaniker, Senior Associate in the London Tech//Digital team moderated the session alongside Dr Mahlet Zimeta (Journalist and Consultant on AI and Technology Policy), Sarah Chander (Senior Policy Advisor, European Digital Rights) and Sandra Wachter (Professor of Technology and Regulation at the University of Oxford).
This session explores how biases in AI systems arise, how they can be mitigated and what lawmakers should consider in ensuring fair AI.
Views from the UK’s Digital Regulation Cooperation Forum
Jonathan Kewley, Partner and Co-Head of the Global Tech Group, moderated the session. Leaders from each of the UK regulators who form the Digital Regulation Cooperation Forum - the CMA, Ofcom, the ICO and the FCA - discussed their cooperative initiatives in digital regulation.
AI Principles
We are committed to maintaining the highest professional standards in our use of AI, protecting our clients, adhering to the law and enhancing the quality of the legal services we provide to our clients.