
Generative AI – Intellectual property cases and policy tracker

Case tracker

With businesses in various sectors exploring the opportunities arising from generative AI tools, it is important to be alive to the potential risks. In particular, the development and use of such tools raises several issues relating to intellectual property, with potential concerns around infringements of IP rights in the inputs used to train them, as well as in output materials. There are also unresolved questions of the extent to which works generated by AI should be protected by IP rights. These issues are before the courts in various jurisdictions, and are also the subject of ongoing policy and regulatory discussions.

In this tracker, we provide insight into the various intellectual property cases relating to generative AI going through the courts, as well as anticipated policy and legislative developments.

Read more in our guides to Generative AI & IP and to the use of Generative AI generally.

Please sign up to receive regular updates.

This page was last updated on 5 December 2025

SNE, SGDL and SNAC v Meta

Syndicat national de l'édition (SNE), Société des Gens de Lettres (SGDL) and Syndicat national des auteurs et des compositeurs (SNAC) v Meta

Court cases

Jurisdiction: France

Press release: 13 March 2025

Summary

Three associations acting on behalf of authors and publishers have brought proceedings against Meta before the 3rd Chamber of the Paris Judicial Court, arising out of the alleged use of copyrighted works, without the authorisation of their authors and publishers, to train Meta's GenAI models. This is the first action brought in France by rights holders in relation to the training of GenAI models. The plaintiffs seek enforcement of their copyright and the complete removal of the data repositories used to train the models.

Canadian News Media Companies v OpenAI

Toronto Star Newspapers Limited, Metroland Media Group Ltd, Postmedia Network Inc, PNI Maritimes LP, The Globe and Mail Inc/Publications Globe and Mail Inc, Canadian Press Enterprises Inc/Enterprises Presse Canadienne Inc., and Canadian Broadcasting Corporation/Société Radio-Canada v OpenAI, Inc; OpenAI GP, LLC; OpenAI, LLC; OpenAI Startup Fund I, LP; OpenAI Startup Fund GP 1, LLC; OpenAI Startup Fund Management, LLC; OpenAI Global, LLC; OpenAI Opco, LLC; OAI Corporation; and OpenAI Holdings, LLC

Case reference

cv-24-00732231000CL

Court cases

Jurisdiction: Canada

Statement of Claim: 28 November 2024

Summary

This claim, brought by a range of leading Canadian media companies and news publishers, has been issued against OpenAI in the Ontario Superior Court of Justice. The claim is for a declaration that the various OpenAI defendants are jointly and severally liable for (i) infringing, authorizing and/or inducing infringement of copyright in various works published on the media companies' websites; (ii) engaging in prohibited circumvention of technological protection measures; (iii) breaching the terms of use of the plaintiffs' various websites; and (iv) unjust enrichment at the expense of the plaintiffs.

This is the first case brought against OpenAI in Canada and represents a fresh jurisdiction where it is now facing allegations of copyright infringement and related claims. Proceedings have also been brought in Canada at the British Columbia Supreme Court by the Canadian Legal Information Institute against Caseway AI.

US Copyright Office guidance and study on copyright and AI

Legislative and policy developments

Jurisdiction: US

Summary

The guidance states that only the human-created parts of a generative AI work are protected by copyright. Accordingly, the human-authored aspects of such works will only potentially be protected where a human author arranges AI-generated material in a sufficiently creative way that ‘the resulting work as a whole constitutes an original work of authorship’, or modifies AI-generated content ‘to such a degree that the modifications meet the standard for copyright protection’.

This statement follows a decision by the USCO on copyright registration for Zarya of the Dawn ('the Work'), an 18-page graphic novel featuring text alongside images created using the AI platform Midjourney. Originally, the USCO issued a copyright registration for the graphic novel before undertaking investigations which showed that the artist had used Midjourney to create the images. Following this investigation (which included viewing the artist’s social media), the USCO cancelled the original certificate and issued a new one covering only the text as well as the selection, coordination, and arrangement of the Work’s written and visual elements. In reaching this conclusion, the USCO deemed that the artist’s editing of some of the images was not sufficiently creative to be entitled to copyright as a derivative work.

As part of its study of the copyright law and policy issues raised by AI systems, in August 2023, the USCO sought written comments from stakeholders on a number of questions. It had received over 10,000 comments by December 2023. The questions cover the following areas:

  1. The use of copyrighted works to train AI models – the USCO notes that there is disagreement about whether or when the use of copyrighted works to develop datasets is infringing. It therefore seeks information about the collection and curation of AI datasets, how they are used to train AI models, the sources of materials, and whether permission from, or compensation for, copyright owners should be required.
  2. The copyrightability of material generated using AI systems – the USCO seeks comment on the proper scope of copyright protection for material created using generative AI. It believes that the law in the US is clear that protection is limited to works of human authorship but notes that there are questions over where and how to draw the line between human creation and AI-generated content. For example, a human's use of a generative AI tool could include sufficient control over the technology – e.g., through selection of training materials, and multiple iterations of prompts – to potentially result in output that is human-authored. The USCO notes that it is working separately to update its registration guidance on works that include AI-generated materials.
  3. Potential liability for infringing works generated using AI systems – the USCO is interested to hear how copyright liability principles could apply to material created by generative AI systems.  For example, if an output is found to be substantially similar to a copyrighted work that was part of the training dataset, and the use does not qualify as fair use, how should liability be apportioned between the user and the developer?
  4. Issues related to copyright – lastly, the USCO is also interested to hear about issues relating to AI-generated materials that feature the name or likeness, including vocal likeness, of a particular person; and also in relation to AI systems that produce visual works 'in the style' of a specific artist.

In July 2024, the USCO published Part 1 of its Report on Copyright and Artificial Intelligence, focusing on Digital Replicas (also called 'deepfakes'). Based on the input received, the USCO has concluded that a new federal law is needed to deal with unauthorised digital replicas, as existing laws do not provide sufficient legal redress. This would cover all individuals, not just celebrities. However, whilst the paper also notes that creators have concerns over AI outputs that deliberately imitate an artist's style, it does not recommend including style in the coverage of the new legislation at this time.    

Separately, a No Fakes Bill (Nurture Originals, Foster Art and Keep Entertainment Safe Bill) has also been proposed in the US Senate. The No Fakes Bill also proposes to enact federal protection for the voice and visual likeness of individuals. It is endorsed by a number of associations representing performers and rights holders, as well as by others from within the creative community.

In January 2025, the USCO published Part 2 of its report, focused on the copyrightability of outputs created using generative AI. The report concludes that outputs can only be protected by copyright where a human author has determined sufficient expressive elements. This can include situations where a human-authored work is perceptible in an output, or where a human makes creative arrangements or modifications of the output, but it will not apply to the mere provision of prompts. The report also confirms that using AI to assist in the creative process, or including AI-generated material in a larger human-authored work, does not prevent the human-authored elements of that work from being protected by copyright.

In May 2025, the USCO published a 'pre-publication' version of Part 3 of its Report, focusing on generative AI training. The report considers the steps involved in creating and deploying a generative AI system that use copyrighted works in ways which implicate the right of reproduction, including: data collection and creation; training; retrieval-augmented generation (RAG, illustrated in the simplified sketch after the bullet points below); and production of outputs. In relation to fair use, the Office notes that the responses it received to its Notice of Inquiry were 'sharply divided'. Given that generative AI involves a spectrum of uses and impacts, the Office notes that it is not possible to prejudge litigation outcomes, but it does offer the following analysis:

  • On the first factor, the Office expresses the view that training a generative AI foundation model on a large and diverse dataset will often be transformative, but this will depend on the functionality of the model and how it is deployed. Meanwhile, the use of RAG is less likely to be transformative where the purpose is to generate outputs that summarise or provide abridged versions of copyrighted works, as opposed to providing links to them. The Office also takes the view that the knowing use of a dataset consisting of pirated or illegally accessed works should weigh against fair use, without being determinative.
  • On the fourth factor, the effect of the use upon the potential market for or value of the copyrighted work (the most important factor), the Office identifies that where a model can produce substantially similar outputs that directly substitute for works in the training data, it can lead to lost sales. Even where outputs are not substantially similar, they can dilute the market for similar works in the training data, including by generating material stylistically similar to those works. The assessment of market harm will also depend on the extent to which copyright works can be licensed for AI training. 
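
The reproduction point in relation to RAG can be seen in a simplified, self-contained Python mock-up. The corpus, scoring method and prompt format below are illustrative assumptions, not a description of any particular provider's system; the sketch simply shows the step that matters for the Report's analysis, namely that retrieved passages from copyrighted works are copied verbatim into the material supplied to the model.

# A self-contained Python mock-up of retrieval-augmented generation (RAG).
# The corpus, scoring method and prompt format are illustrative assumptions only.

corpus = [
    ("Article A", "Text of a news article about generative AI litigation..."),
    ("Article B", "Text of a second article about copyright policy developments..."),
    ("Article C", "Text of a third article about licensing negotiations..."),
]

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank passages by naive keyword overlap (a stand-in for a vector search)."""
    query_terms = set(query.lower().split())
    scored = [(len(query_terms & set(text.lower().split())), title, text)
              for title, text in corpus]
    scored.sort(reverse=True)
    return [(title, text) for _, title, text in scored[:k]]

def build_prompt(query: str) -> str:
    """Assemble the prompt: the retrieved works are copied into it in full."""
    context = "\n\n".join(f"[{title}]\n{text}" for title, text in retrieve(query))
    return (f"Using only the sources below, answer the question.\n\n"
            f"{context}\n\nQuestion: {query}")

if __name__ == "__main__":
    # The prompt handed to the model contains verbatim copies of the retrieved
    # passages – the act of reproduction discussed in the Report.
    print(build_prompt("What is happening in generative AI copyright litigation?"))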

The Generative AI Copyright Disclosure Bill

Legislative and policy developments

Jurisdiction: US

Summary

Introduced by Democratic Representative Adam Schiff, the Generative AI Copyright Disclosure Act would require a notice to be submitted to the Register of Copyrights before a new generative AI system is released, providing information on all copyrighted works used in building or altering the training dataset. It would also apply retroactively to existing generative AI systems.

The Bill has attracted widespread support from across the creative community, including industry associations and unions such as the Recording Industry Association of America, Copyright Clearance Center, Directors Guild of America, Authors Guild, National Association of Voice Actors, Concept Art Association, Professional Photographers of America, Screen Actors Guild-American Federation of Television and Radio Artists, Writers Guild of America West, Writers Guild of America East, American Society of Composers, Authors and Publishers, American Society for Collective Rights Licensing, International Alliance of Theatrical Stage Employees, Society of Composers and Lyricists, National Music Publishers Association, Recording Academy, Nashville Songwriters Association International, Songwriters of North America, Black Music Action Coalition, Music Artist Coalition, Human Artistry Campaign, and the American Association of Independent Music.

UK approach to copyright and generative AI

Legislative and policy developments

Jurisdiction: UK

Consultation: 17 December 2024

Summary

The UK Government issued its much-anticipated consultation on Copyright and Artificial Intelligence in December 2024, with a deadline of 25 February 2025 for interested parties to respond. The issue has been on the agenda since before the surge of interest in generative AI that followed the public launch of ChatGPT in November 2022. Having consulted on the issue in 2021, the previous Government had initially decided to introduce a broad text and data mining exception allowing the scraping of copyright-protected works for any commercial purpose (including the training of AI tools), without providing any option for rights holders to opt their works out. However, following significant opposition from across the creative industries, it later revised its approach to focus on attempting to broker a voluntary code of practice between AI tool developers and rights holder representatives.

With those code of practice discussions having failed to reach a resolution, the new Labour Government has now issued a fresh consultation in which it seeks to strike a balance between the competing interests and thereby unlock opportunities for AI training in the UK, whilst also ensuring protection for creative works (described by one Minister as a "win win"). Subject to the responses it receives to its consultation, the Government proposes again to introduce a text and data mining exception allowing copyright works to be used in training, but this time subject to rights reservation by rights holders (i.e., an opt-out). This is intended to allow rights holders to exercise control over their works by opting them out, or by licensing them for AI training and obtaining payment for their use. Underpinning this would be a requirement of greater transparency from AI developers as to the materials used to train their models, how those materials were acquired, and the content generated by their models. There would also need to be standardisation of opt-out mechanisms.
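
By way of illustration of the kind of machine-readable reservation signals such standardisation would need to build on, the sketch below is a minimal Python example that checks a site's robots.txt for rules addressed to crawler tokens commonly used to control AI training. GPTBot, CCBot and Google-Extended are real tokens used for this purpose; the example.com URL is a placeholder, and this is only one existing signal among several (others, such as the W3C community group's TDM Reservation Protocol, take a different form). It is not a statement of what any standardised UK mechanism would look like.

# A minimal sketch of checking one existing machine-readable opt-out signal:
# robots.txt rules addressed to crawler tokens used to control AI training.
# GPTBot, CCBot and Google-Extended are real tokens; example.com is a placeholder,
# and this is only one of several possible reservation mechanisms.

from urllib import robotparser

AI_TRAINING_AGENTS = ["GPTBot", "CCBot", "Google-Extended"]

def training_permitted(site: str, path: str = "/") -> dict[str, bool]:
    """Return, per crawler token, whether the site's robots.txt permits fetching `path`."""
    parser = robotparser.RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetch and parse the site's live robots.txt
    return {agent: parser.can_fetch(agent, f"{site.rstrip('/')}{path}")
            for agent in AI_TRAINING_AGENTS}

if __name__ == "__main__":
    # Placeholder domain; substitute a real site to see its current signals.
    print(training_permitted("https://example.com"))

The fact that each of these signals works slightly differently is, in essence, the standardisation problem the consultation identifies.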

The consultation also considers a range of other issues such as protection for computer-generated works as well as infringing outputs, the temporary copies exception, the existing text and data mining exception for non-commercial research, labelling of AI outputs, use of AI in education, digital replicas, and other emerging issues. In relation to protection for computer-generated works, the Government's preferred position is for this protection to be removed, unless it is satisfied that there is evidence of the incentives this protection provides.

There have been over 11,000 responses to the consultation. Whilst many have been made on behalf of the creative industries, the responses as a whole are likely to represent a broad range of viewpoints, with stakeholders holding both overlapping and diverging positions. It is likely to take some time for the Government to consider the responses fully, to conduct further engagement with stakeholders, and to draft appropriate legislation where this is concluded to be necessary. Following an intervention by Parliament's Science, Innovation and Technology Committee, OpenAI and Google have published their responses to the consultation. OpenAI's response rejects the Government's preferred approach of a TDM exception with a rights holder opt-out, favouring instead the broad TDM exception set out as Option 2 in the consultation document. In its response, Google supports an amended TDM exception with rights reservation for rights holders – the amendments are, however, significant, in that Google does not consider that rights holders should be compensated for such uses in relation to content on the web, and does not support what it describes as "excessive transparency measures".

This issue has also been debated during attempts to introduce amendments relating to copyright and transparency into the Data (Use and Access) Bill, which led to extensive 'ping-pong' between the two Houses of Parliament. The House of Lords finally accepted the version of the Bill proposed by the Government after securing concessions that the Government will publish a report on its proposals for copyright and AI within nine months of the Act receiving Royal Assent (which took place on 19 June 2025), and interim reports every six months thereafter. These reports must include information on the Government's approach to enforcement, as well as analysis regarding AI models that have been trained outside the UK.

The Government has put in place expert working groups to help deliver a solution which will support AI development while also ensuring robust protection for the creative industries.  According to the press release, those involved include the News Media Association, Alliance for IP, Sony Music Entertainment, Publishers Association, The Guardian, OpenAI, Amazon and Meta.
