
Clifford Chance



Talking Tech

AI-Generated Music and Copyright

Intellectual Property | Artificial Intelligence | Media & Entertainment | United Kingdom | 27 April 2023

When a track by artist "Ghostwriter" was uploaded to and then promptly removed from streaming services in April, it was the latest example of one of 2023's most astonishing trends. The track 'heart on my sleeve' sounded as if it were sung by two of the world's biggest stars, Drake and The Weeknd. In fact, it was someone who had used an AI tool to make his voice sound like theirs.

AI has become a hot topic in the music industry in recent months, with new examples each week of astonishing AI-generated music, and concerns voiced about the "widespread and lasting harm" of such tools to music creators and rightsholders.

In the UK, the proliferation of such tools comes at a time of increased scrutiny of the role of copyright and the remuneration of music creators and rightsholders, following the inquiry by the Department for Digital, Culture, Media and Sport (DCMS) into the economics of music streaming.

In this article we explore AI-generated music and copyright, in particular:

  • the different ways in which AI can be used to 'create' music
  • whether these creations attract copyright protection (and therefore whether the creator can stop others from copying the AI work)
  • if AI-generated music is protected, who owns the outputs
  • the risks that the use of such AI tools could constitute copyright infringement
  • the potential implications of this technology for the music industry.

AI-Generated Music

AI tools can be used to 'create' music in different ways:

  • AI-generated compositions, e.g. scores and chord charts
  • AI-generated recordings
  • AI-manipulated recordings, e.g. imitative vocal synthesisers used to create "deep-fake" voices
  • AI-assisted mixing and mastering of tracks (e.g. LANDR)
  • AI-written lyrics (e.g. via ChatGPT).

Can AI-generated music itself be protected by copyright?

If copyright does not subsist in a musical work, it can be freely copied by anyone without risk of copyright infringement liability.

Under English copyright law, works generated by AI can theoretically be protected as works "generated by computer in circumstances such that there is no human author of the work" (s. 178, Copyright, Designs and Patents Act 1988 (CDPA)).

However, it is first important to separate the copyright in the songs/compositions themselves (often referred to, along with the lyrics, as the "publishing rights") from copyright in the sound recording (often referred to as "phonographic rights" or "master rights").  

Sound recordings are protected regardless of whether they are created/generated by AI or a human author, because there is no requirement of "originality". However, for a song or lyrics to be protected, they must be "original".

"Originality"

There is uncertainty in English law both about the correct test for "originality" to be applied and whether the test requires a human author.

The English law originality test was "skill, judgment and labour" until CJEU case law brought in a separate test, that of the "author's own intellectual creation". This was originally introduced in EU Directives on software and databases but has now been applied more broadly to encompass copyright works beyond software and databases (see, for example, the Painer and Cofemel judgments).

The "author's own intellectual creation" is generally regarded as requiring a higher standard of originality than the English case law standard. Many commentators consider that AI-created works that do not have a human author cannot meet this higher standard. However, there is uncertainty over how broadly the EU test applies in the UK, or whether it will continue to apply in the UK post-Brexit, and whether it contradicts the CDPA, which seems to provide protection for non-human authored works.

The UK Intellectual Property Office (IPO) ran a public consultation on Artificial Intelligence and IP from October 2021 to January 2022. In its response, the Government decided not to make any changes to the existing law on the subsistence or ownership of copyright in computer-generated works, leaving this uncertainty open.

On 15 March 2023, an entirely separate report by Sir Patrick Vallance on the Pro-innovation Regulation of Technologies Review proposed that the UK should "utilise existing protections of copyright and IP law on the output of AI". However, the Government's response did not explicitly mention providing copyright protection to AI-generated works; instead, it focused on infringement issues (see below).

This is not a uniquely UK or European problem. Unlike most of the rest of the world, the US allows copyright to be registered, meaning that the US Copyright Office (USCO) has had to deal with this question directly. The USCO has consistently refused to register works without a human author, and has now issued guidance on works containing material generated by AI.

AI-Assisted Creations

AI composing tool AIVA, which allows users to create music in pre-defined styles such as "Modern Cinematic", describes its tool as a "Creative Assistant for Creative People". This is a theme in such tools, which are often used to create an initial draft that still requires some human input to make it sound convincing (at least for the moment…). For example, when Huawei's AI tool 'wrote' the missing 3rd and 4th movements of Schubert's Unfinished Symphony, Huawei worked with Emmy award-winning composer Lucas Cantor to "draw out the good ideas from the AI and fill in the gaps where necessary". Are such AI-assisted creations protected?

By analogy, in Hyperion Records v Sawkins [2005] EWCA Civ 565, a composer and musicologist created new versions of public-domain works, including corrections and additions to make them playable. The Court of Appeal found that, even though the starting point was a public-domain score, the revisions were sufficient to make the new version an "original" work.

Under English law, to the extent that AI is used as a tool to generate ideas and themes which are adapted by a musician into a final work, the overall piece is likely to be protected by copyright (although any exclusively AI-developed themes, for example, may not themselves be protected).

If an AI-generated song is protected, who owns the outputs?

Provided that there is some copyright protection, under English law, the author of a computer-generated work is deemed to be the person "by whom the arrangements necessary for the creation of the work are undertaken" (s. 9(3), CDPA). With a prompt-based AI tool, it is unclear whether the user inputting text prompts or the owner of the AI tool itself would be the author.

Although the only judgment on s. 9(3) CDPA to date held that image frames generated in the course of playing a video game belonged to the game's publisher rather than the game's player, the courts may see the position of the user of an AI tool as fundamentally different to that of the player of a video game. This may depend on how much information the user puts in.

Any remaining doubt about ownership as between the user and the creator of the tool can be resolved by contract. For example, AIVA's user terms only assign copyright to the user if they pay for certain premium plans; otherwise, copyright is owned by AIVA.

Can a voice be protected by copyright or any form of IP?

One of the latest innovations in AI technology is deepfake vocal synthesisers, which make a singer's voice sound like that of a famous artist, or even tools which create a wholly synthetic voice.

It is unlikely that, under English and EU law, a manner or style of singing is protectable by copyright, whether generated through an AI synthesiser or through vocal imitation. Whilst there has been an expansion of the subject matter of copyright protection at the EU level, an overarching principle is that one must be able to identify, clearly and precisely, the protected subject matter (Cofemel). It is difficult to see how a voice/style of singing could attract protection in this way. Under UK law, it is not clear which of the fixed categories of copyright 'works' would protect a voice.

There may be an argument that imitative vocal synthesisers could be used in certain ways which constitute passing-off. For example, Californian law recognises that when a "distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs and have committed a tort" (Midler v Ford Motor Co.). Interestingly, Rick Astley has recently brought a similar case for the imitation of his voice in an interpolation of his track 'Never Gonna Give You Up' by Yung Gravy (Astley v Hauri PKA Yung Gravy). In the pleadings, Astley's legal team have sought to expand the Midler judgment to apply to the use of imitation for any commercial purpose, rather than solely in relation to false endorsement.

Whilst Yung Gravy used an Astley impersonator rather than an AI tool, if Astley's case were successful, it may provide an avenue under Californian law for actions to be brought in respect of vocal imitations made by AI as well.

The UK action for passing off has been used to prevent false celebrity endorsements (Irvine v TalkSport), and it may be sufficiently flexible to allow action to be taken against deepfake versions of artists' voices in many cases.

The existence and scope of personality rights varies significantly between jurisdictions, so attempts to assert personality rights internationally may have varying degrees of success.

Copyright Infringement

Copyright infringement occurs when the copy is 'substantially similar' to the original (US law) or there is copying of the 'whole or a substantial part' of a particular work (UK law).

If an AI tool copied specific melodies or lyrics, for example, that would likely constitute copyright infringement. However, it may be difficult to identify such specific examples of copying, as well-built AI tools are generally designed to copy the general sound and feel of music, in part to avoid allegations of copyright infringement. Despite the potential implications in the US of the 'Blurred Lines' case, Williams v. Gaye, 885 F.3d 1150 (9th Cir. 2018), infringement of the feel of a song is unlikely to be sufficient. A new composition that is composed with an AI tool or sung using an AI-generated voice may not incorporate any "substantially similar" element or "substantial part" of any previous work that is actually protected by copyright.

However, whilst rightsholders may struggle to bring actions for copying by reference solely to the songs generated by the AI system (the outputs), they may be able to bring actions for copying of the training data itself (the inputs). 

Infringement of Inputs: Text and Data Mining

One of the ways in which AI can learn to imitate musicians' voices or compositional styles is by being trained on large amounts of data, a process known as "text and data mining" (TDM).

Whether liability arises depends on how the TDM process is carried out and which territories' laws apply:

  • If a given TDM process involves making and storing permanent copies of complete recordings, if no licence is in place, and if the TDM was carried out in a jurisdiction without a broad statutory TDM exception, the entity that carried out that process may be liable for copyright infringement.
  • Conversely, the position is less clear where only temporary and transient copies of songs are made, and where only abstracted parameters are stored and used by the AI model, which are not themselves copyright works.

The scope of statutory exceptions from liability for copyright infringement for TDM varies markedly between territories. UK law currently permits "text and data analysis" only for non-commercial research (s. 29A, CDPA). However, in June 2022, the UK IPO announced a proposal to allow TDM for any purpose at all. The proposed exception would have allowed commercial AI tools to be trained on all music without requiring a licence or compensating rightsholders, making the UK one of the most permissive places for AI research in the world. This received significant objections from the music industry, which described it as "music laundering", and the UK Government announced in February 2023 that the proposals were to be scrapped. By comparison, the EU Digital Single Market Directive provides rightsholders with the ability to opt their works out of the TDM exception, whereas Singapore has recently enacted a very broad TDM exception.

Certainty can be provided by obtaining licences from the rightsholders for the express purpose of TDM, noting that for music catalogues there may be several rightsholders for each track (e.g. writers, publishers, performers, record labels). (As an example, see Hipgnosis's deal with Reactional Music.) In addition to commercial licences freely negotiated between users and rightsholders, state-approved licensing schemes may emerge.

On 15 March 2023 the report of Sir Patrick Vallance on the Pro-innovation Regulation of Technologies Review stated that the UK "should enable mining of available data, text, and images (the input)". The Government's response stated that, to provide clarity, the UK Intellectual Property Office will produce a code of practice by summer 2023, and that "an AI firm which commits to the code of practice can expect to be able to have a reasonable licence offered by a rights holder in return" (see recommendation 2 of the response). As such, the Government seems to be encouraging an industry-led approach to establish an official licensing framework, with legislation only to be brought in if this cannot be agreed.

How this self-regulatory approach will work alongside cases already being brought in the US and the UK, and complex jurisdictional issues in the absence of a global framework, is highly uncertain.

Commentary

Does it actually matter whether AI music is protected by copyright? The general answer from both AI companies and rightsholders is, yes. Many AI developers want their investment in the creative industries to be recognised by way of copyright protection, and for rightsholders (as exemplified by the newly formed Human Artistry Campaign) there is a concern that copyright protection for AI-generated music could undercut their catalogues and undermine human creativity. AI-generated music could provide an effectively unlimited supply of music in an industry which the major labels see as already over-saturated with the proliferation of so-called "fake artists" and "functional music" on streaming services. Given licensing costs and the complex royalty flows in the music industry, it may even be that (paradoxically) if AI-generated music does not attract copyright protection, it is in a better position to undercut human-made copyright music.

There is very little research on the legal, economic, or ethical consequences of AI-generated music for the music industry. Its rise has been so rapid that it would almost certainly have been a major topic of discussion in the DCMS's inquiry into the economics of music streaming were it heard now; for context, the inquiry's report was published only 18 months ago. It is likely to be an area for IPO-commissioned research in the near future.

UK copyright law needs to keep up with these developments. However, to do so it must somehow reconcile the concerns of AI researchers with the interests of musicians and rightsholders.