
Clifford Chance

AI & Tech

Understanding the opportunities and challenges AI and tech present for business

Artificial Intelligence (AI) has huge transformative potential, with a wide range of applications and significant benefits for businesses across sectors. The development, provision and use of AI must also navigate a number of risks and practical challenges – including ethical use, data governance, IP and ownership, AI liability, cyber security and resilience, and a multi-layered, quickly evolving regulatory landscape.

Rapid advances in AI are having a significant impact on our clients' business models, growth strategies and day-to-day decision making. Our experts examine the big questions arising in AI deployment and provide holistic, pragmatic legal advice to help our clients strategically manage risk as they explore AI opportunities.


Explore our insights

  • EU and US AI Regulatory Push Overlaps Across Global Business

    As jurisdictions race to understand and build appropriate regulatory and governance structures around artificial intelligence, the EU is frequently regarded as the frontrunner. However, the US is forging its own federated approach and isn’t necessarily behind. For organizations, is alignment with one or the other the best path to success? Ultimately, the choice won’t be either-or.


  • AI: the evolving legal landscape in Asia Pacific

    Asia Pacific (APAC) is a first mover in relation to AI regulation. Mainland China, in particular, has had city or regional regulations in place for some time and, more recently, enacted national AI regulations targeted at particular types of AI services or use.


  • The EU's AI Act: What do we know about the critical political deal?

    The EU institutions have finally reached a political agreement on the EU's landmark Artificial Intelligence Act (AI Act), following the conclusion of their fif...


  • The UK AI Safety Summit and Fringe - Seven things we learned

    The summit made significant progress in a number of areas including the publication of several research reports, the establishment of an international AI Safety Institute, and the signing of the Bletchley Declaration to drive a shared understanding of and approach to AI’s risks and opportunities.


  • First-tier Tribunal dismisses UK ICO's Clearview enforcement and monetary penalty notices on jurisdictional grounds

    The First-tier Tribunal held that the UK Information Commissioner's Office (ICO) did not have jurisdiction to issue its 18 May 2022 enforcement and monetary penalty notices, which alleged breaches of the UK and EU General Data Protection Regulations (together, GDPR), to Clearview AI Inc (Clearview).


  • Is ChatGPT a medical device?

    ChatGPT can provide medical information, but should it be classified as a medical device? It's a question that Germany's Federal Institute for Drugs and Medical Devices (BfArM) was confronted with in an open letter from a Hamburg-based law firm, which suggests that ChatGPT – a human-trained OpenAI-based chatbot that draws its knowledge from freely accessible online sources – falls under the regulations applicable to medical devices in Germany and Europe.


  • What businesses need to know (for now) about the Biden Executive Order on AI

    Given this strong messaging, most corporate organizations are asking the question: "What does the EO mean for me?" Since the EO was announced, we have seen a lot of speculation around its implications. This article, however, looks to focus on the key practical elements of the EO, and what they mean for organizations in the U.S. and globally.


  • AI and the law - the Japanese perspective (with Michihiro Nishi)

    Tokyo Partner Michihiro Nishi speaks to David Mitchell from 39 Essex Chambers about the current thinking of generative Artificial Intelligence (AI) regulations in Japan and the future of legal disputes in Japan surrounding generative AI. This podcast has been republished with the permission of 39 Essex Chambers.


  • EU AI Act: Final negotiations can begin after European Parliament vote

    On 14 June 2023, the European Parliament voted to adopt its negotiating position on the proposed EU regulation on AI (AI Act). This is another major step towards the adoption of the proposed AI Act, undoubtedly one of the most important and anticipated pieces of legislation of the past few years.


  • Generative AI: The big questions

    As legal, technology and risk-management teams collaborate to support business-critical decisions, establish forward-looking frameworks and embed responsible AI in company strategy, being able to assess and advise on AI with a holistic understanding of the changing legal and policy landscape has never been more important.


  • China moves to further regulate artificial intelligence – What businesses should know

    On 11 April 2023, the Cyberspace Administration of China (CAC) published a consultation draft (the Administrative Measures for Generative Artificial Intelligen...


  • ICYMI: The U.S. Government Launches AI Accountability Consultation

    On April 11, President Biden's administration quietly dropped a consultation that might lead to a slow revolution in the U.S.'s approach to AI regulation. The Department of Commerce's National Telecommunications and Information Administration (NTIA) published a request for comment (RFC) on how to achieve "trustworthy AI." The comments window is open until June 12, after which the NTIA will draft a report on AI policy development.


  • Insurtech Update: Aye-aye, Eye and AI

    The Aye-aye (Daubentonia madagascariensis) is a long-fingered lemur native to, you guessed it, Madagascar – but my voice assistant told me it was a Canadian rock band (called Eye Eye). It is highly likely that you have your own experiences of the (current) limitations of artificial intelligence and in similar superficial contexts. However, when complex technology is used in the insurance value chain, deficiencies such as algorithm bias can have discriminatory and other significant consequences for policyholders such as higher premiums, refusal of insurance cover, or rejection of claims.


  • The intellectual property and data protection implications of training generative AI

    The models underlying popular generative AI tools such as ChatGPT, Bard and DALL-E are being trained on vast amounts of data sourced from the internet.


  • Generative AI: here to harm or help?

    The speed at which Generative AI has gained traction in both businesses and our daily lives has re-ignited the debate around the potential for harm associated with highly automated systems deployed at scale.


  • What's wrong with this picture? AI-created images and content: copyright ownership and infringement

    Countless social media posts, images and articles have been generated using AI tools such as ChatGPT, Stable Diffusion and Dall-E. But who – if anyone – owns the copyright in the outputs, and do the tools themselves infringe copyright? Our IP & Tech//Digital teams take a deep dive into the issues and the recent infringement claims.


  • Financial services can light the way for context-specific AI regulation

    Given the rapid uptake in the use of artificial intelligence (AI) including machine learning (ML) technologies in the financial services sector, the sector and its regulators have enjoyed a head start in exploring and seeking to navigate the issues that arise. The sector is therefore well-positioned to demonstrate how a pro-innovation, context-specific and risk-based approach to regulating AI can succeed.


  • AI-Generated Music and Copyright

    When a track by artist "Ghostwriter" was uploaded and then promptly removed from streaming services in April, it was the latest example of one of 2023's most astonishing trends. The track 'heart on my sleeve' sounded like it was sung by two of the world's biggest stars, Drake and The Weeknd. In fact, it was actually someone who had used an AI tool to make his voice sound like theirs.


  • The EU AI Act: concerns and criticism

    In recent months, the AI Act and the Council's approach have given rise to several responses and concerns from different industries. We discuss some of the key concerns in more detail in this article.


  • The Italian Data Protection Authority halts ChatGPT's data processing operations

    On 30 March 2023, the Italian Data Protection Authority (the Garante) issued an interim emergency decision ordering OpenAI LLC ("OpenAI") to immediately stop the use of ChatGPT to process the personal data of data subjects located in Italy, pending further investigation.


  • Artificial Intelligence M&A and Fundraisings in 2023

    The arrival of 'generative AI' that can produce new content (text-based responses to queries, images, audio, video and even code) marks a new and exciting phase in the development of mainstream AI applications. The starting gun has been fired for a race to capture the market.


  • Trouble in the cockpit? Popular AI tool faces class action lawsuit

    A class action has been filed against GitHub Copilot alleging violations of the rights, including copyright, of authors who have created or contributed to codebases stored as public repositories on GitHub.


  • Developers can now let an AI assistant write code for them - but what are the IP implications?

    A year after its announcement, GitHub Copilot was launched last month to the public at large. The tool promises to "fundamentally change the nature of software development" by providing AI-based coding suggestions, saving developers time and effort. What are the intellectual property implications for those who build or buy software created using Copilot?



AI Principles

We are committed to maintaining the highest professional standards in our use of AI: protecting our clients, adhering to the law, and enhancing the quality of the legal services we provide.

View our AI Principles
