The voice of change? The BPI’s recent action against Jammable highlights the conflict between music rights holders and generative AI providers

April 30, 2024
Music production

Partner Nick Eziefula, Associate Andrew Wilson-Bushell and Trainee Solicitor Catherine Clover discuss in Music Week the legal issues surrounding generative AI and music rights holders, following the BPI's public threat of legal action against Jammable.

The UK’s recorded music trade association, the British Phonographic Industry (BPI), which represents record companies, has publicly threatened legal action against the AI voice cloning platform Jammable, previously known as Voicify. This comes as the music industry continues to grapple with the rapid growth of generative AI technology. Together with other recent AI copyright infringement cases, this dispute might signal the beginning of the end of the “Wild West” era of unlicensed AI music generation.

Jammable uses voice models to allow its customers to strip out vocals from recorded music and insert AI-generated vocals mimicking the sound of an artist’s voice. Jammable has built up an extensive library of thousands of voice models, including those of Rihanna, Drake and Taylor Swift. The BPI argues that this involves using recordings owned by its members to train those models and that, without permission from the owners, Jammable is infringing copyright.

The BPI’s case is persuasive. For AI models to generate output, they must be trained using extensive data sets. It is hard to see how AI cloning of well-known artists’ voices can be achieved without using recordings of those voices. In the UK, the sound of a voice cannot be copyright protected but a recording of that voice might well be and, in the case of well-known artists, the recordings are highly likely to be copyright works, owned by the artist or their label. Broadly speaking, the use of a copyright work without the owner’s permission will be copyright infringement unless any of the relevant exceptions or defences to copyright infringement apply. The question is whether any of those might be relevant to the use of recordings as training data in this way.

Although UK law does include an exception for text and data mining, it is currently limited in scope and unlikely to permit broad use of this kind. Exceptions for parody, caricature and pastiche may be more relevant to this scenario, but these, like the other fair dealing exceptions (such as for criticism, review or the reporting of current events), only apply to limited use for particular purposes. Whilst we do not yet have a definitive case applying these principles to the AI training scenario, semi-analogous cases in other situations indicate that courts do not apply these exceptions to infringement lightly or often.

The legal position is complicated by the fact that international laws in this area are not fully harmonised. In the EU, similar copyright exceptions apply, although the EU approach to text and data mining, whilst potentially broader, involves an opt-out mechanism, enabling copyright owners to determine whether the exception is even available in relation to their works.

The US law concept of “fair use” is typically more permissive than the UK’s “fair dealing”. As yet, we have no concrete guidance or case law applying this to the use of AI training data, but that may soon change. Last year, the New York Times sued OpenAI and Microsoft in New York for copyright infringement, alleging that the newspaper’s content was used to train their AI tools. The case is ongoing and the outcome will be significant.

Similarly, in the UK, Getty Images Inc. took legal action against Stability AI for allegedly using millions of copyright images to train its AI model, Stable Diffusion. Getty has accused Stability AI not just of copyright infringement but also of infringement of trade mark and database rights and of passing-off (a form of claim based on misappropriation of a brand identity).

While we await decisions in these cases, some direction has been given in Europe by the passing of the EU AI Act. The new rules will require “general purpose AI models” to meet transparency requirements and to comply with EU copyright law. For example, generative AI platforms will need to publish a summary of the content used to train their models, which will likely deter unauthorised use and facilitate copyright owners’ efforts to pursue infringers.

As far as we are aware, the BPI had not issued formal legal proceedings at the date of this article but, by publicising its intended action, it has effectively taken the matter straight to the court of public opinion. This confident approach seems to have had an impact: Voicify has changed its branding to Jammable, and big-name voice models such as those relating to Rihanna appear to have been removed from the platform.

The music industry has long had to adapt to changes in technology. Generative AI is the latest step-change, but lessons learned from the last one are pertinent. The widespread adoption of file-sharing technology was disruptive almost to the point of destruction. The recorded music business was eventually restored through a multi-faceted approach: clamping down on unauthorised use through legal action (against platforms such as the infamous Pirate Bay); legislative change; and the development and licensing of lawful services that offer consumers a quality customer experience. In summary: enforce, enact, embrace, and evolve. We are beginning to see examples of such enforcement and enactment in relation to generative AI. We can expect to see licensing regimes that embrace the change soon, as business models evolve.

Nick, Andrew and Catherine's article was published in Music Week, 29 April 2024.

Nick Eziefula - Partner
Andrew Wilson-Bushell - Associate
Catherine Clover - Trainee Solicitor
