Generative AI – Intellectual property cases and policy tracker

Case tracker

With businesses in various sectors exploring the opportunities arising from the explosion in generative AI tools, it is important to be alive to the potential risks. In particular, the use of generative AI tools raises several issues relating to intellectual property, with potential concerns around infringements of IP rights in the inputs used to train such tools, as well as in output materials. There are also unresolved questions of the extent to which works generated by AI should be protected by IP rights. These issues are before the courts in various jurisdictions, and are also the subject of ongoing policy and regulatory discussions.

In this tracker, we provide insight into the various intellectual property cases relating to generative AI going through the courts (focusing on a series of copyright cases in the US and UK), as well as anticipated policy and legislative developments.

Read more in our Guides to Generative AI & IP and to the use of Generative AI generally.

Please sign up to receive regular updates.

This page was last updated on 5 November 2024.

Court Cases

12 February 2024

The New York Times v Microsoft and OpenAI

The New York Times Company v (1) Microsoft Corporation, (2) OpenAI, Inc., (3) OpenAI LP, (4) OpenAI GP, LLC, (5) OpenAI, LLC, (6) OpenAI Opco LLC, (7) OpenAI Global LLC, (8) OAI Corporation, LLC, (9) OpenAI Holdings, LLC

US

CASE 1:23-cv-1195

Complaint: 27 December 2023

Motion to Intervene, and Dismiss, Stay or Transfer: 23 February 2024

Motion to Dismiss: 26 February 2024 

Response to Motion to Intervene and Dismiss, Stay or Transfer by OpenAI: 26 February 2024

Response to Motion to Intervene and Dismiss, Stay or Transfer by The New York Times: 1 March 2024

Motion to Dismiss by Microsoft: 4 March 2024

Reply to Opposition to Motion to Intervene and Dismiss, Stay or Transfer: 8 March 2024

Plaintiff's Memorandum of Law in Opposition to OpenAI's Partial Motion to Dismiss: 11 March 2024

Reply Memorandum of Law in Support of Motion by OpenAI: 18 March 2024

Plaintiff's Memorandum of Law in Opposition to Microsoft's Partial Motion to  Dismiss: 18 March 2024

Reply Memorandum of Law in Support re Motion to Dismiss filed by Microsoft Corporation: 25 March 2024

Opinion & Order denying California Plaintiff's motions to intervene for purpose of transferring, staying or dismissing the New York actions: 1 April 2024

Notice of Interlocutory Appeal filed by California Plaintiffs: 15 April 2024

Notice of Motion and Motion for Leave to File First Amended Complaint: 20 May 2024

Letter Motion to Compel New York Times to Produce Documents: 23 May 2024

Letter Response in Opposition to Motion to Compel New York Times to Produce Documents: 28 May 2024

Opposition Brief filed by Microsoft Corporation: 3 June 2024

Response to Motion for Leave to File First Amended Complaint and Conditional Cross-Motion filed by OpenAI: 3 June 2024

Motion to consolidate case with Daily News case filed by OpenAI: 13 June 2024

Memorandum of law in support: 13 June 2024

Brief re Motion to consolidate filed by Microsoft: 14 June 2024

Response to Motion to Consolidate: 27 June 2024 

Reply Memorandum of Law in Support re Motion to Consolidate: 3 July 2024

First Amended Complaint: 12 August 2024

Motion to consolidate case with claim by The Center for Investigative Reporting filed by Defendants: 4 October 2024

Response to Motion to Consolidate cases: 18 October 2024

Reply Memorandum of Law in support of Motion to Consolidate: 25 October 2024

Order granting Consolidation: 31 October 2024

Summary

This highly publicised case has been brought by The New York Times against Microsoft and OpenAI in the US District Court Southern District of New York, relating to ChatGPT (including associated offerings), Bing Chat and Microsoft 365 Copilot. It follows a period of months during which the NYT said it attempted to reach a negotiated agreement with Microsoft/OpenAI.

The Complaint raises arguments of large-scale commercial exploitation of NYT content through the training of the relevant models (including GPT-4 and the next generation GPT-5), noting that the GPT LLMs have also 'memorized' copies of many of the works encoded into their parameters. There are extensive exhibits (69 exhibits, comprising around 2,000 pages) attached to the Complaint. Exhibit J in particular contains 100 examples of output from GPT-4 (as a 'small fraction') based on prompts in the form of a short snippet from the beginning of an NYT article. The example outputs are said to recite NYT content verbatim (or near-verbatim), closely summarise it, and mimic its expressive style (and also wrongly attribute false information - hallucinations - to NYT).

The Complaint also focuses on synthetic search applications built on the GPT LLMs which display extensive excerpts or paraphrases of the contents of search results, including NYT content, that may not have been included in the model's training set (noting that this contains more expressive content from the original article than would be the case in a traditional search result, and without a hyperlink to the NYT website).

The claims are for direct copyright infringement, vicarious copyright infringement, contributory copyright infringement, DMCA violations, unfair competition by misappropriation, and trade mark dilution.

On 26 February 2024, OpenAI filed a Motion to Dismiss in relation to parts of the claim of direct copyright infringement (regarding conduct occurring more than three years ago), as well as the claims relating to contributory infringement, DMCA violations and state common law misappropriation. In particular, OpenAI alleges that the 'Times paid someone to hack OpenAI's products' and that it took 'tens of thousands of attempts to generate the highly anomalous results' in Exhibit J to the Complaint, including by targeting and exploiting a bug (which OpenAI says it has committed to addressing) in violation of its terms of use. OpenAI goes on to characterise the key dispute in the case as being whether it is fair use to use publicly accessible content to train generative AI models to learn about language, grammar and syntax, and to 'understand the facts that constitute humans' collective knowledge'. The New York Times has characterised OpenAI's motion as grandstanding, with an attention-grabbing claim about 'hacking' that is both irrelevant and false.

Microsoft filed its Motion to Dismiss parts of the claim on 4 March 2024, focusing on (1) the allegation that Microsoft is contributorily liable for end-user infringement, (2) violation of DMCA copyright management information provisions and (3) state law misappropriation torts. Drawing an analogy with earlier disruptive technologies, the Motion states that "copyright law is no more an obstacle to the LLM than it was to the VCR (or the player piano, copy machine, personal computer, internet, or search engine)". Its point is that the US Supreme Court has previously rejected liability based merely on offering a multi-use product that could be used to infringe. It further states that Microsoft "looks forward to litigating the issues in this case that are genuinely presented, and to vindicating the important values of progress, learning and the sharing of knowledge".

The Plaintiffs filed an Amended Complaint on 12 August 2024 (the amendments add a further approximately 7 million works to the suit).

The case has been consolidated with The Daily News complaint and also with the claim brought by The Center for Investigative Reporting.

Impact

The opening words of the complaint stress the importance of independent journalism for democracy - and the threat to the NYT's ability to provide that service by the use of its works to create AI products. It further highlights the role of copyright in protecting the output of news organisations, and their ability to produce high quality journalism.

The NYT website is noted in the Complaint as being the most highly represented proprietary source of data in the Common Crawl dataset, itself the most highly weighted dataset in GPT-3. Given the previous attempt at negotiations referred to in the complaint, it will be interesting to see if the launch of this complaint will lead to more fruitful licence negotiations, or whether this case will continue to trial (in which case, it should be tracked alongside the other complaints against OpenAI and Microsoft).

OpenAI's position is that 'training data regurgitation' (or memorisation) and hallucination are 'uncommon and unintended phenomena'. Memorisation is a problem that OpenAI says it is working hard to address, including through sufficiently diverse datasets. Meanwhile, it points to its partnerships with other media outlets.

11 February 2024

Chabon & ors v Meta Platforms, Inc

(1) Michael Chabon (2) David Henry Hwang (3) Matthew Klam (4) Rachel Louise Snyder (5) Ayelet Waldman v Meta Platforms Inc

US

Case: 4:23-cv-04633

Amended Complaint: 5 October 2023

Order granting Joint Motion to Dismiss (for reasons given in Kadrey v Meta Platforms): 20 November 2023

Order consolidating cases against Meta: 7 December 2023

Summary

The same group of authors, playwrights and screenwriters involved in the third set of proceedings against OpenAI also brought a claim against Meta in the US District Court for the Northern District of California. This case focused on Meta's LLaMA (Large Language Model Meta AI) and noted Meta's statements that LLaMA was trained using books including from the Books3 section of The Pile dataset (assembled from content available on 'shadow library' websites, including Bibliotik), which the Plaintiffs contended includes their copyright works.

Again, the claims include direct and vicarious copyright infringement, violations of the DMCA, violations of California unfair competition law, negligence and unjust enrichment. 

Impact

Developments in all of these cases should be monitored closely. The case has now been consolidated with another claim against Meta (Kadrey et al v Meta).

9 February 2024

Kadrey & ors v Meta Platforms, Inc

(1) Richard Kadrey (2) Sarah Silverman & (3) Christopher Golden v Meta Platforms, Inc

US

Case C 3:23-cv-03417

Complaint: 7 July 2023

Motion to dismiss by Meta: 18 September 2023

Plaintiffs' Opposition to Meta's Motion to dismiss: 18 October 2023

Reply re Motion to Dismiss: 1 November 2023

Order on Motion to Dismiss: 20 November 2023

Amended Complaint: 11 December 2023

Answer to Amended Complaint: 10 January 2024 

Motion to relate with Huckabee action: 16 January 2024

Order granting motion to relate with Huckabee action: 23 January 2024

Order re voluntary dismissal and consolidation with the Huckabee action: 5 July 2024

(Corrected) Second Consolidated Amended Complaint: 9 September 2024

Answer to Second Consolidated Amended Complaint filed by Meta: 16 September 2024

Unopposed Motion to consider whether case should be related to Farnsworth: 3 October 2024 

Summary

Plaintiffs have brought a class action against Meta relating to its LLaMA (Large Language Model Meta AI) product in the US District Court for the Northern District of California. The claim notes Meta's statements that LLaMA was trained using books including from the Books3 section of The Pile dataset (assembled from content available on 'shadow library' websites, including Bibliotik), which the Plaintiffs contend includes their copyright works.

The claims (as originally drafted) included direct and vicarious copyright infringement, violations of the DMCA, violations of California unfair competition law, negligence and unjust enrichment. 

Meta filed a Motion to Dismiss parts of the claim – the Motion to Dismiss only applies partially to the claim of direct infringement. On this, Meta's Motion states: "Use of texts to train LLaMA to statistically model language and generate original expression is transformative by nature and quintessential fair use—much like Google’s wholesale copying of books to create an internet search tool was found to be fair use in Authors Guild v. Google, Inc., 804 F.3d 202 (2d Cir. 2015)." Clearly, the issue of fair use is going to be central to this debate.

On Thursday 9 November 2023, US District Judge Vince Chhabria indicated that he would grant Meta's motion to dismiss the claims that content generated by Meta's LLaMA tool infringes their copyright (and also that LLaMA is itself an infringing work), but would give the plaintiffs permission to amend most of their claim.

On 11 December 2023, the Plaintiffs filed their amended Complaint, on the basis of direct copyright infringement.

The Plaintiffs filed a Second Amended Complaint on 9 September 2024.

Meta had sought to prevent its CEO Mark Zuckerberg from being deposed, but the Court denied its motion on 24 September 2024. The Plaintiffs had established that he was the chief decision maker and policy setter for Meta's generative AI brand and the development of the large language models at issue in the action.

Impact

The claim has been consolidated with that brought by a number of authors including Michael Chabon, and also with the Huckabee action against Meta which has been transferred from the US District Court for the Southern District of New York to the US District Court for the Northern District of California.

8 February 2024

Andersen v Stability AI

(1) Sarah Andersen, (2) Kelly McKernan & (3) Karla Ortiz v (1) Stability AI Ltd, (2) Stability AI, Inc, (3) Midjourney, Inc, (4) Deviantart, Inc

US

CASE 3:23-CV-00201

Complaint: 13 January 2023

Defendants filed a number of motions to dismiss and/or Anti-SLAPP Motions to Strike: 18 April 2023

Plaintiffs opposed these motions: 2 June 2023

Defendants filed motions to dismiss and/or motions to dismiss and strike: 3 July 2023

Judge Orrick indicated he would dismiss most of the claims brought by the Plaintiffs against the Defendants with leave to amend: 19 July 2023

Order by Judge William H Orrick: 30 October 2023

Amended Complaint: 29 November 2023

Motion to  Strike (DeviantArt's Motion to Renew its Special Motion to Strike (anti-SLAPP)): 20 December 2023

Opposition/Response re anti-SLAPP motion: 10 January 2024

Reply re anti-SLAPP motion: 17 January 2024

Motion to Dismiss First Amended Complaint filed by Midjourney: 8 February 2024

Motion to Dismiss First Amended Complaint filed by Stability AI: 8 February 2024

Motion to Dismiss First Amended Complaint filed by DeviantArt: 8 February 2024

Motion to Dismiss First Amended Complaint filed by Runway: 8 February 2024

Order denying Motion to Strike by Judge William H. Orrick: 8 February 2024

Opposition/Response re Stability AI's Motion to Dismiss filed by Plaintiffs: 21 March 2024

Opposition/Response re Runway AI's Motion to Dismiss filed by Plaintiffs: 21 March 2024

Opposition/Response re DeviantArt's Motion to Dismiss filed by Plaintiffs: 21 March 2024

Opposition/Response re  Midjourney's Motion to Dismiss filed by Plaintiffs: 21 March 2024

Reply re Motion to Dismiss Plaintiffs' First Amended Complaint filed by MidJourney: 18 April 2024

Reply re Motion to Dismiss Plaintiffs' First Amended Complaint filed by StabilityAI: 18 April 2024

Reply re Motion to Dismiss Plaintiffs' First Amended Complaint filed by DeviantArt: 18 April 2024

Reply re Motion to Dismiss Plaintiffs' First Amended Complaint filed by Runway AI: 18 April 2024

Procedures and tentative rulings for hearing: 7 May 2024

Order granting in part and denying in part motions to dismiss First Amended Complaint: 12 August 2024

Administrative motion for clarification or in the alternative leave to seek reconsideration of order filed by Midjourney: 5 September 2024

Opposition/response re Motion for Clarification filed by Plaintiffs: 9 September 2024

Reply re Motion for Clarification filed by Midjourney: 12 September 2024

Motion to Strike Reply filed by Plaintiffs: 13 September 2024

Order denying Midjourney's Motion for Clarification or Reconsideration: 30 September 2024 

Second Amended Complaint: 31 October 2024

Summary

This is a case brought against Stability AI (and other AI tools such as Midjourney), this time by a group of visual artists acting as individual and representative plaintiffs. The claim was filed in the US District Court for the Northern District of California.

The Plaintiffs have filed for copyright infringement, Digital Millennium Copyright Act violations, and related state law claims. They allege that the Defendants used their (and other artists’) works to train Stable Diffusion without obtaining their permission. According to the Plaintiffs, when the Defendants’ AI tools create "new images" based entirely on the training images, they are creating an infringing derivative work.

The Plaintiffs seek to bring their suit as a class action on behalf of "millions of artists" in the U.S. that own a copyright in any work that was used to train any version of the AI tools. 

On 19 July 2023, Judge Orrick indicated in a tentative ruling that he would dismiss almost all of the claims against the Defendants but would give the Plaintiffs leave to amend. Of particular note, the Judge stated that the Plaintiffs needed to differentiate between the Defendants and elaborate on what role each of the Defendants played with respect to the allegedly infringing conduct. The Judge was sceptical as to the extent to which the AI tool relied on the Plaintiffs' works to generate the output images, given that the model was trained on billions of images. He also expressed doubts as to whether the output images were substantially similar to the Plaintiffs' original works.

On 30 October 2023, Judge Orrick's order was published, dismissing parts of the claim. However, the Plaintiffs were given leave to amend, with the Judge requiring them to clarify their infringement claims. Stability AI's motion to dismiss the claim against it for direct copyright infringement was denied.

On 29 November 2023, the Plaintiffs filed their Amended Complaint, which included a number of new plaintiffs joining the complaint.

On 8 February 2024, Judge Orrick denied the Defendants' motion to strike under California's anti-SLAPP (strategic lawsuits against public participation) statute which had been directed solely at the Plaintiffs' right of publicity claims, on the basis that the Complaint and Amended Complaint fell within the anti-SLAPP statute's public interest exception.

On 7 May 2024, Judge Orrick issued a number of tentative rulings in advance of a hearing on 8 May.

On 12 August 2024, Judge Orrick issued his ruling in which he confirmed the following:

  • The allegations of direct and induced copyright infringement are sufficient to proceed. The Plaintiffs alleged that Stable Diffusion is built to a significant extent on copyrighted works and that the way the product operates necessarily invokes copies or protected elements of those works. The plausible inference was that Stable Diffusion, when operated by end users, creates copyright infringement and was created to facilitate that infringement by design.
  • All DMCA claims are dismissed with prejudice (including in line with the opinion of Judge Tigar in Doe I v GitHub, Inc).
  • The claims for unjust enrichment are dismissed but the Plaintiffs have been given leave to make one last attempt to state an unjust enrichment claim.
  • Midjourney's motion to dismiss false endorsement and trade dress claims is denied.
  • The breach of contract claim against DeviantArt is dismissed with prejudice.

On 31 October 2024, the Plaintiffs filed their Second Amended Complaint.

Impact

In this case, one of the Plaintiffs' arguments is that AI tools which create art “in the style of” an existing artist are infringing derivative works. Copyright infringement requires copying, so the Plaintiffs will have to convince the court that a completely new piece of art “in the style of” an existing artist could be categorised as “copying” that artist.

Getty Images v Stability AI (US)

Summary

In addition to its claim against Stability AI in the UK, Getty Images has brought proceedings in the US District Court for the District of Delaware.

Getty Images' complaint is for copyright infringement, providing false copyright management information, removal or alteration of copyright management information, trademark infringement, unfair competition, trademark dilution, and related state law claims.

In response to Getty Images' amended complaint, Stability AI filed a motion to dismiss for lack of personal jurisdiction, inability to join a necessary party, and failure to state a claim, or alternatively, a motion to transfer the lawsuit to the US District Court for the Northern District of California.

Impact

This case should be tracked alongside the action in the UK, though different issues may arise for consideration given potential divergences e.g., in relation to defences to copyright infringement.

6 February 2024

Getty Images v Stability AI

(1) Getty Images (US), Inc. (2) Getty Images International U.C. (3) Getty Images (UK) Ltd (4) Getty Images Devco UK Ltd (5) Stockphoto LP (6) Thomas M. Barwick, Inc v Stability AI Ltd 

UK

Claim No. IL-2023-000007

Claim Form: 16 January 2023

Particulars of Claim: 12 May 2023

Judgment on Stability AI's summary judgment/strike out application: 1 December 2023

Defence: 27 February 2024

Reply: 26 March 2024

Amended Particulars of Claim: 12 July 2024

Trial date: 5 day window starting on 9 June 2025

Getty Images' Response to Request for Further Information: 20 August 2024

Amended Defence: 2 September 2024

Amended Reply: 13 September 2024

Summary

This claim has been brought by Getty Images against AI image generator Stability AI in the UK High Court.

Getty Images' claim (as summarised in its press release when commencing the claim) is that, through its Stable Diffusion model (under the name DreamStudio), Stability AI has "unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images absent a license to benefit Stability AI's commercial interests and to the detriment of content creators".

The claims relate to copyright infringement, database right infringement, and trade mark infringement and passing off.

In brief, Getty Images claims that Stable Diffusion was trained using various subsets of the LAION-5B Dataset which was created by scraping links to photos and videos and associated captions from various websites: Getty Images claims that Stable Diffusion 1.0 was trained using around 12 million visual assets (of which around 7.3 million are copyright works) from Getty Images websites. It further claims that Stable Diffusion 2.0 was trained using around 7.5 million visual assets (of which around 4.4 million are copyright works) from Getty Images websites.

Getty Images also claims that in some cases the synthetic image produced by a user comprises a substantial part of one or more of its copyright works and/or visual assets, suggesting that Stable Diffusion sometimes memorises and generates very similar images to those used to train it. In some cases, the synthetic images produced bear the GETTY IMAGES and ISTOCK signs as a watermark.

Getty Images seeks to restrain the Defendant from doing a number of acts in the UK, without a written licence or agreement from Getty Images.

Stability AI applied for summary judgment / strike out in respect of certain aspects of Getty Images' claim. In particular, it argued that, as the evidence indicated that the training and development of Stable Diffusion took place outside the UK, the claim relating to copyright and database right infringement in that process was bound to fail. On 1 December 2023, the Court rejected Stability AI's application. Whilst the evidence referred to would on its face provide strong support for a finding that no development or training had taken place in the UK, there was other evidence pointing away from that conclusion, as well as a number of unanswered questions and inconsistencies in the evidence. Accordingly, the Court allowed that claim to proceed to trial, alongside a claim for secondary infringement of copyright which again the Court concluded could not be determined on a summary basis. 

On 27 February 2024, Stability AI filed its Defence. In summary, it denies that:

  • Development and training of the Stable Diffusion models infringed any of Getty Images' IP rights on the basis that the models were trained and developed outside the UK.
  • Making the Stable Diffusion model checkpoints available for download on GitHub or Hugging Face, or for use via DreamStudio, involves any acts of secondary infringement (because Stable Diffusion is not an infringing copy, is not an article, and has not been imported into the UK by Stability).
  • Use of Stable Diffusion by users gives rise to claims of infringement. In particular, it argues that the examples of infringing outputs relied upon were generated by 'wilful contrivance using prompts corresponding exactly or substantially to captions' for Getty Images' works. It further asserts that the act of generating outputs is that of the user (over whom it has no control and of whose prompts it has no knowledge), not Stability; that it has not made any use of the Getty trade marks in the course of trade; and that it is entitled to rely upon the caching and hosting safe harbours.

Interestingly, Stability AI also asserts that, to the extent that any images do include any element of a copyright work, it is possible to rely upon the fair dealing defence for the purposes of pastiche (a defence which has not yet been the subject of significant judicial commentary, other than in the Shazam case relating to Only Fools and Horses).

Impact

As noted by Peter Nunn in an article in The Times:

"If Getty Images is successful in the UK claim, the court could award it substantial damages and grant an injunction preventing Stability AI from continuing to use the copyright works of Getty Images. This could have knock-on effects, deterring other AI innovators from scraping the internet to use content without the owners’ consent, but also prompting governments to speed up changes to their intellectual property laws so as to permit greater use of protected works in the training of AI programmes."

Peter Nunn discusses AI 'plagiarism' in The Times (mishcon.com)

4 February 2024

Huckabee & ors v Bloomberg

(1) Mike Huckabee (2) Relevate Group (3) David Kinnaman (4) TSH Oxenreider (5) Lysa Terkeurst (6) John Blase v (1) Meta Platforms, Inc. (2) Bloomberg L.P. (3) Bloomberg Finance L.P. (4) Microsoft Corporation (5) The Eleutherai Institute

US

Case: 1:23-cv-09152

Complaint: 17 October 2023

Letter re Bloomberg's proposed Motion to Dismiss: 15 December 2023

Letter re Opposition to Bloomberg's proposed Motion to Dismiss: 22 December 2023

Notice of Voluntary Dismissal re The Eleutherai Institute: 28 December 2023

Notice severing and transferring claims against Meta and Microsoft to US District Court for the Northern District of California: 28 December 2023

First Amended complaint against Bloomberg Finance: 24 January 2024

Letter re Bloomberg's proposed Motion to Dismiss: 31 January 2024

Motion to Dismiss by Bloomberg (Memorandum of Law): 22 March 2024

Plaintiffs' Opposition to Motion to Dismiss: 19 April 2024

Reply Memorandum of Law in Support of Motion: 3 May 2024

Summary

There have been some changes to the parties in this case, with the complaint against The EleutherAI Institute being voluntarily dismissed and the complaints against Meta and Microsoft severed and transferred.

Former Presidential Candidate and former Governor of Arkansas Mike Huckabee and a group of other plaintiffs have brought a class action against Meta, Bloomberg, Microsoft and The EleutherAI Institute in the United States District Court Southern District of New York. The complaint focuses on EleutherAI's dataset called 'The Pile', which includes among its data sources 'Books3', a dataset comprising a large collection (said to be approximately 18,000) of pirated ebooks. The complaint notes that The Pile, and specifically Books3, was a popular training dataset for companies developing AI technology, including the Defendants in this case.

As in other cases, the complaint alleged direct copyright infringement, vicarious copyright infringement, DMCA claims (removal of copyright management information), conversion, negligence, and unjust enrichment.

The Plaintiffs have since voluntarily dismissed the complaint against The EleutherAI Institute, and the complaints against Meta and Microsoft have been severed and transferred to California. In the Amended Complaint filed in January 2024, the Plaintiffs withdrew their indirect copyright infringement, DMCA and state-law claims, leaving the direct copyright infringement claim to be argued.

Impact

This is the first case involving Bloomberg, which the complaint notes launched the world's first LLM built from scratch for finance. The complaint notes that Bloomberg had stated that it would not use the Books3 dataset to train future versions of BloombergGPT, but further notes that LLM training is iterative and builds on prior versions, with the Plaintiffs' works already 'baked in'.

2 February 2024

J.Doe 1 and J.Doe 2 v Github, Microsoft and OpenAI

J. DOE 1 and J. DOE 2, individually and on behalf of all others similarly situated, Individual and Representative Plaintiffs v. (1) Github, Inc. (2) Microsoft Corporation; (3) OpenAI, Inc.; (4) OpenAI, L.P.; (5) OpenAI Gp, L.L.C., (6) OpenAI Opco, L.L.C. (7) OpenAI Startup Fund Gp I, L.L.C.; (8) OpenAI Startup Fund I, L.P.; (9) OpenAI Startup Fund Management, LLC   

US

Case 3:22-cv-06823

Complaint: 3 November 2022

Open AI motion to dismiss: 26 January 2023

Microsoft and Github's motion to dismiss: 26 January 2023

Plaintiffs' amended complaint: 8 June 2023

OpenAI motion to dismiss amended complaint: 29 June 2023

Microsoft and Github motion to dismiss amended complaint: 29 June 2023 

Amended Complaint: 21 July 2023

Opposition/Response to Motion to Dismiss: 27 July 2023

Reply by Github, Microsoft: 10 August 2023

Reply by OpenAI: 10 August 2023

Order granting in part, denying in part Motion to Dismiss: 3 January 2024

Second Amended Complaint: 25 January 2024

Motion to Dismiss Second Amended Complaint: 28 February 2024

Opposition/Response re Github and Microsoft's Motion to Dismiss Portions of the Second Amended Complaint in Consolidated Actions filed by Plaintiffs: 27 March 2024

Opposition/Response re OpenAI's Motion to Dismiss Portions of the Second Amended Complaint in Consolidated Actions filed by Plaintiffs: 27 March 2024

Reply filed by Github and Microsoft: 10 April 2024

Reply filed by OpenAI: 10 April 2024

Order denying Plaintiffs' Motion for Reconsideration re Order on Motion to Dismiss: 15 April 2024

Order granting in parts denying in part Motion to Dismiss: 24 June 2024

Answer to second Amended Complaint by OpenAI: 22 July 2024

Answer to second Amended Complaint by Microsoft: 22 July 2024 

Answer to second Amended Complaint by GitHub: 22 July 2024 

Motion for leave to appeal: 24 July 2024

Opposition/Response re Motion for Leave to Appeal filed by Github, Microsoft: 21 August 2024

Opposition/Response re Motion for Leave to Appeal filed by OpenAI: 21 August 2024

Reply re Motion for Leave to Appeal to Github and Microsoft filed by Plaintiffs: 11 September 2024

Reply re Motion for Leave to Appeal to OpenAI filed by Plaintiffs: 11 September 2024

Order granting Motion to Certify Order for Interlocutory Appeal and Motion to Stay pending appeal filed by Plaintiffs: 27 September 2024

Summary

This class action, brought in the US District Court for the Northern District of California, targets both Copilot and OpenAI's Codex tool, which provides the technology underlying Copilot. Copilot helps developers write code by generating suggestions based on what it has learned from publicly available code.

The complaint (as originally filed) focuses on four key areas:

  • An allegation that Copilot violates provisions of the Digital Millennium Copyright Act by ingesting and distributing code snippets (copyrighted information) without including the licence terms, copyright notice and author attribution.
  • An allegation that, by not complying with open licence notices, Copilot breaches the conditions of such licences by which the original code had been made available to Copilot/Codex.
  • An allegation that Copilot passes off code as an original creation and that GitHub, Microsoft and OpenAI have therefore been unjustly enriched by Copilot's subscription-based service. This is a claim for unfair competition.
  • An allegation that GitHub violates the Class's rights under the California Consumer Privacy Act, the GitHub Privacy Statement and/or the California Constitution by, inter alia, sharing the Class's sensitive personal information; creating a product that contains personal data that GitHub cannot delete or alter, nor share with the applicable Class member; and selling the Class's personal data.

The Plaintiffs are seeking damages and injunctive relief.

The Defendants argued that the Plaintiffs lack standing and moved for the complaint to be dismissed. After being granted leave to amend their complaint, the Plaintiffs filed an amended complaint in June 2023, which largely resembled their initial complaint but included examples of licensed code owned by three of the Plaintiffs that had been output by Copilot, arguing that this demonstrates the Defendants removed their Copyright Management Information and emitted their code in violation of their open-source licences. On 3 January 2024, the Court granted GitHub's motions to dismiss in part. In particular, the Judge held that the remaining two Plaintiffs had not established a 'particular personalized injury' to confer standing for damages, though this was satisfied for the three Plaintiffs referred to above. The Judge also held that the state law claims of intentional and negligent interference with prospective economic relations, unjust enrichment, negligence and unfair competition are pre-empted by the Copyright Act. The claims under the DMCA were also dismissed with leave to amend.

On 24 June 2024, the Court issued an order granting in part the Defendants' Motion to Dismiss in relation to the remaining claims in the Second Amended Complaint. The Court dismissed the DMCA claim (with prejudice), along with the claims for unjust enrichment and punitive damages. However, it has allowed the Plaintiffs' breach of contract claim for violation of open-source licences to proceed.

On 24 July 2024, the Plaintiffs sought to certify for interlocutory appeal the issue relating to the DMCA claim.

The Court has certified its order dismissing the DMCA claims for interlocutory appeal as it involves a 'controlling question of law', there is substantial ground for difference of opinion on the issue and the appeal is likely to materially advance the ultimate outcome of the litigation.

Impact

The open source community will be watching this case with particular interest.

In its motion to dismiss, GitHub draws attention to its Terms of Service with respect to ownership of code generated by GitHub Copilot. This is simplified in its FAQ section (see 'Does GitHub own the code generated by GitHub Copilot?'), where GitHub suggests that "Copilot is a tool, like a compiler or pen" and, as a result, "the code you write with GitHub Copilot's help belongs to you".

1 February 2024

Concord Music Group & ors v Anthropic PBC

Concord Music Group, Inc.; Capitol Cmg, Inc. D/B/A Ariose Music, D/B/A Capitol Cmg Genesis, D/B/A Capitol Cmg Paragon, D/B/A Greg Nelson Music, D/B/A Jubilee Communications, Inc., D/B/A Meadowgreen Music Company, D/B/A Meaux Hits, D/B/A Meaux Mercy, D/B/A River Oaks Music, D/B/A Shepherd’s Fold Music, D/B/A Sparrow Song, D/B/A Worship Together Music, D/B/A Worshiptogether.com Songs; Universal Music Corp. D/B/A Almo Music Corp., D/B/A Criterion Music Corp., D/B/A Granite Music Corp., D/B/A Irving Music, Inc., D/B/A Michael H. Goldsen, Inc., D/B/A Universal – Geffen Music, D/B/A Universal Music Works; Songs Of Universal, Inc. D/B/A Universal – Geffen Again Music, D/B/A Universal Tunes; Universal Music – Mgb Na Llc D/B/A Multisongs, D/B/A Universal Music – Careers, D/B/A Universal Music – Mgb Songs; Polygram Publishing, Inc. D/B/A Universal – Polygram International Tunes, Inc., D/B/A Universal – Polygram International Publishing, Inc., D/B/A Universal – Songs Of Polygram International, Inc.; Universal Music – Z Tunes Llc D/B/A New Spring Publishing, D/B/A Universal Music – Brentwood Benson Publishing, D/B/A Universal Music – Brentwood Benson Songs, D/B/A Universal Music – Brentwood Benson Tunes, D/B/A Universal Music – Z Melodies, D/B/A Universal v Anthropic Pbc, 

US

Case: 3:24-cv-03811

Complaint: 18 October 2023

Motion for a preliminary injunction: 16 November 2023

Motion to Dismiss by Anthropic: 22 November 2023

Opposition to motion for preliminary injunction: 16 January 2024

Opposition to motion to dismiss: 22 January 2024

Reply to Response re Motion for Preliminary Injunction: 14 February 2024

Memorandum opinion transferring action to US District Court for the Northern District of California: 24 June 2024

Plaintiff's Motion for Preliminary Injunction: 1 August 2024

Motion to Dismiss filed by Anthropic: 15 August 2024

Opposition/Response re Motion for Preliminary Injunction, filed by Anthropic: 22 August 2024

Response in support of Administrative Motion to consider whether cases should be related, filed by Anthropic: 3 September 2024

Plaintiffs' Opposition to Administrative Motion to consider whether cases should be related: 3 September 2024

Plaintiffs' Opposition to Defendant's Motion to Dismiss: 5 September 2024

Plaintiffs' Reply in Support of Motion for Preliminary Injunction: 12 September 2024

Reply in Support of Motion to Dismiss filed by Anthropic: 17 September 2024

Defendant's Surresponse to Plaintiff's renewed Motion for Preliminary Injunction: 23 October 2024

Summary

A number of music publishers (comprising Concord, Universal and ABKCO) brought an action against Anthropic in the United States District Court for the Middle District of Tennessee, Nashville Division (the case has been ordered to be transferred to the United States District Court for the Northern District of California). The complaint has been brought in order to "address the systematic and widespread infringement of their copyrighted song lyrics" alleged to have taken place during the process of Anthropic building and operating its AI models referred to as 'Claude'. In particular, the complaint notes that when a user prompts Claude to provide the lyrics to a particular song, it will provide responses that contain all or significant portions of those lyrics. Further, when Claude is requested to write a song about a certain topic, the complaint alleges that this can involve reproduction of the publishers' copyrighted lyrics – for example, when asked to write a song "about the death of Buddy Holly", it responded by generating output that copies directly from the song "American Pie".

The complaint contains claims relating to direct copyright infringement, contributory infringement, vicarious infringement, and DMCA claims (removal of copyright management information).

In its response to the Plaintiffs' motion for a preliminary injunction, Anthropic argues that the Plaintiffs devised 'special attacks' in order to evade Claude's built-in guardrails and to generate alleged infringements through 'trial and error'.  It also relies upon the use of copyrighted material as inputs as 'fair use'.

Anthropic has filed a Motion to Dismiss a number of the claims (the claims of contributory copyright infringement, vicarious copyright infringement and removal/alteration of copyright management information). It has not sought to dismiss the claim of direct copyright infringement.

Impact

This was the first case involving the music industry, and also the AI tool developer Anthropic. There are a number of websites which currently aggregate and publish music lyrics – however, this is through an existing licensing market by which the publishers license their copyrighted lyrics.

31 January 2024

Raw Story Media, Inc v OpenAI Inc

Raw Story Media, Inc., Alternet Media, Inc., v OpenAI, Inc., OpenAI GP, LLC, OpenAI, LLC, OpenAI Opco  LLC, OpenAI Global LLC, OAI Corporation LLC, OpenAI Holdings, LLC

US

Case: 1:24-cv-01514

Complaint: 28 February 2024

Motion to Dismiss filed by OpenAI: 29 April 2024

Memo in opposition to Motion to Dismiss: 13 May 2024

Reply to Memo in opposition to Motion to Dismiss: 20 May 2024

Summary

This complaint, which has been brought by two news organisations in the US District Court Southern District of New York, is unusual because it does not include claims for copyright infringement. Instead, it alleges violations of the Digital Millennium Copyright Act, in that thousands of the Plaintiffs' works were included in training sets with the author, title and copyright information removed.

Impact

Presumably, copyright infringement claims have not been included because the works in question may not be registered.

26 January 2024

Thaler v Perlmutter

Stephen Thaler v Shira Perlmutter (in her official capacity as Register of Copyrights and Director of the United States Copyright Office)

US

USCA Case #23-5233 (on appeal from Case: 1:22-cv-01564)

Complaint: 2 June 2022 (corrected 3 June 2022)

Answer: 26 September 2022

Plaintiff's motion for summary judgment: 10 January 2023

Defendants' response to Plaintiff’s motion for summary judgment and cross-motion for summary judgment: 7 February 2023

Plaintiff’s combined opposition to Defendants' motion for summary judgment and reply in support of Plaintiff’s motion for summary judgment: 7 March 2023

Defendants' reply to motion for summary judgment: 5 April 2023

Order denying Plaintiff's motion for summary judgment and granting Defendants' cross-motion for summary judgment: 18 August 2023   

Notice of Appeal to the US Court of Appeals for the District of Columbia Circuit: 11 October 2023

Appellant brief: 22 January 2024

Appellee Brief filed by Shira Perlmutter and USCO: 6 March 2024

Appellant Reply Brief filed by Stephen Thaler: 10 April 2024

Summary

This case concerns whether copyright can be registered in a creative work made by artificial intelligence – specifically a piece called 'A Recent Entrance to Paradise' which was created autonomously by an AI tool (the AI tool, Creativity Machine, was created by Dr Thaler who listed the system as the work's creator and himself as the 'Copyright Claimant' as 'a work-for-hire to the owner of the Creativity Machine').

The work was denied registration by the US Copyright Office on the basis there was no human author to support a claim to copyright registration. The proceedings in the US District Court for the District of Columbia seek to overturn the USCO refusal to register. The case was therefore a judicial review hearing of the Copyright Office's decision as a final agency decision.

Following cross motions for summary judgment, on 18 August 2023, Judge Beryl A. Howell issued an Order (and accompanying Memorandum Opinion) dismissing the Plaintiff's motion for summary judgment and granting the Defendants' cross-motion for summary judgment.

The Judge concluded that the Register had not acted arbitrarily or capriciously in reaching the conclusion that the copyright registration should be denied.

Dr Thaler filed a Notice of Appeal to the US Court of Appeals for the District of Columbia Circuit. In its Appellee Brief, the US Copyright Office asserts that human authorship is a basic requisite to obtain copyright protection, based on a straightforward application of the statutory text, history and precedent. The Brief argues that the Copyright Act's plain text and structure establish a human authorship requirement. In terms of precedent, since the 19th century, the Supreme Court has recognised human creativity as the touchstone of authorship. It further argues that Dr Thaler has offered no sound reason to depart from these 'bedrock principles'.

Oral argument was heard by the US Court of Appeals for the DC Circuit on 19 September 2024.

Impact

Unusually, the question here is purely a legal one: are AI-generated works (created autonomously without any human input) copyrightable?

Thaler's argument is that AI generated works deserve copyright protection as a matter of policy. The Judge said that "copyright has never stretched so far, however, as to protect works generated by new forms of technology absent any guiding human hand … human authorship is a bedrock requirement of copyright".

The position on whether content created by AI generators is protectable differs from country to country (as noted below re the position in the UK as compared to the US). We have written about this here.

See below also for the US Copyright Office Statement of practice in relation to works containing material generated by AI, which is to the effect that only the human created parts of a generative AI work are protected by copyright.

It appears that, in presenting argument to the Court, the Plaintiff implied a level of human involvement in the creation of the work that was not in accordance with the administrative record before the Copyright Office, which was to the effect that the work had been generated by the AI system autonomously and that he had played no role in its creation.

Legislative and policy developments

15 April 2024

USCO Notice of inquiry

US

Notice of inquiry and request for comments: 30 August 2023 (deadline for comments: extended to 6 December 2023)

Copyright and AI Report, Part 1: Digital Replicas: July 2024

Summary

As part of its study of the copyright law and policy issues raised by AI systems, the USCO sought written comments from stakeholders on a number of questions. It had received over 10,000 comments by December 2023. The questions cover the following areas:

  1. The use of copyrighted works to train AI models – the USCO notes that there is disagreement about whether or when the use of copyrighted works to develop datasets is infringing. It therefore seeks information about the collection and curation of AI datasets, how they are used to train AI models, the sources of materials and whether permission by / compensation for copyright owners should be required.
  2. The copyrightability of material generated using AI systems – the USCO seeks comment on the proper scope of copyright protection for material created using generative AI. It believes that the law in the US is clear that protection is limited to works of human authorship but notes that there are questions over where and how to draw the line between human creation and AI-generated content. For example, a human's use of a generative AI tool could include sufficient control over the technology – e.g., through selection of training materials, and multiple iterations of prompts – to potentially result in output that is human-authored. The USCO notes that it is working separately to update its registration guidance on works that include AI-generated materials.
  3. Potential liability for infringing works generated using AI systems – the USCO is interested to hear how copyright liability principles could apply to material created by generative AI systems.  For example, if an output is found to be substantially similar to a copyrighted work that was part of the training dataset, and the use does not qualify as fair use, how should liability be apportioned between the user and the developer?
  4. Issues related to copyright – lastly, the USCO is also interested to hear about issues relating to AI-generated materials that feature the name or likeness, including vocal likeness, of a particular person; and also in relation to AI systems that produce visual works 'in the style' of a specific artist.

In July 2024, the USCO published Part 1 of its Report on Copyright and Artificial Intelligence, focusing on Digital Replicas (also called 'deepfakes').  Based on the input received, the USCO has concluded that a new federal law is needed to deal with unauthorised digital replicas, as existing laws do not provide sufficient legal redress. This would cover all individuals, not just celebrities. However, whilst the paper also notes that creators have concerns over AI outputs that deliberately imitate an artist's style, it does not recommend including style in the coverage of the new legislation at this time.    

Separately, a No Fakes Bill (Nurture Originals, Foster Art and Keep Entertainment Safe Bill) has also been proposed in the US Senate. The No Fakes Bill also proposes to enact federal protection for the voice and visual likeness of individuals. The Bill is endorsed by a number of associations representing performers and rights holders, and from within the creative community.

Impact

The issues raised in the Notice are wide-ranging and some are before the Courts for determination. One key issue to resolve is whether the use of AI in generating works could be regarded as akin to a tool like a typewriter in creating a manuscript. Using a typewriter does not render the manuscript uncopyrightable, in the same way that using Photoshop does not render a photo taken by a photographer uncopyrightable. This is the approach that GitHub takes in respect of its Copilot service (for example), where it notes that "Copilot is a tool, like a compiler or pen" and, as a result, its position is that the code produced with GitHub Copilot's help should belong to the individual who used the tool. However, again, the legal position as to authorship/ownership is not so clear-cut. Whilst GitHub has no interest in owning Copilot-generated source code that is incorporated into a developer's works, it is not clear whether the terms in Copilot's terms of use effectively assign IP rights to the developer. It is also not clear whether there could be any instances where the use of extensive and carefully worded prompts could result in someone being able to claim copyright in material generated by an AI tool, on the basis that the author has ultimate creative control over the work. The USCO had previously considered this in its Statement of Practice. These are just a few issues on which clarity is needed.

11 April 2024

The Generative AI Copyright Disclosure Bill

US

Introduced by Representative Adam Schiff: 9 April 2024

Summary

Introduced by Democratic Representative Adam Schiff, The Generative AI Copyright Disclosure Act would require a notice to be submitted to the Register of Copyrights prior to a new generative AI system being released, providing information on all copyrighted works used in building or altering the training dataset. It would also apply retroactively to existing genAI systems.

Impact

The Bill has attracted widespread support from across the creative community including from industry associations and Unions such as the Recording Industry Association of America, Copyright Clearance Center, Directors Guild of America, Authors Guild, National Association of Voice Actors, Concept Art Association, Professional Photographers of America, Screen Actors Guild-American Federation of Television and Radio Artists, Writers Guild of America West, Writers Guild of America East, American Society of Composers, Authors and Publishers, American Society for Collective Rights Licensing, International Alliance of Theatrical Stage Employees, Society of Composers and Lyricists, National Music Publishers Association, Recording Academy, Nashville Songwriters Association International, Songwriters of North America, Black Music Action Coalition, Music Artist Coalition, Human Artistry Campaign, and the American Association of Independent Music.

12 February 2024

UK approach to text and data mining

UK

UKIPO Code of Practice: On 6 February 2024, the UK Government confirmed it had not been possible to reach an agreement on a voluntary Code of Practice

Summary

In 2021, the UK Intellectual Property Office (UKIPO) consulted on potential changes to the UK's IP framework as a result of AI developments (importantly, this was before the increased levels of interest following the launch of ChatGPT etc).

In particular, a number of policy options were considered relating to the making of copies for the purposes of text and data mining (TDM), a crucial tool in the development and training of AI tools. Currently, an exception is in place under UK copyright law to allow copying for the purposes of TDM, but only where it is for the purpose of non-commercial research, and only where the researcher has lawful access to the works.

Alongside retaining the current exception, or simply improving the licensing environment for relevant works, the consultation sought views on three alternative options:

  • Extend the TDM exception to cover commercial research.  
  • Adopt a TDM exception for any use, with a right-holder opt-out – modelled on the recent TDM exception introduced in the EU. This would provide rights holders with the right to opt-out individual works, sets of works, or all of their works if they do not want them to be mined.
  • Adopt a TDM exception for any use, with no right-holder opt-out – similar to an exception in Japan for information analysis, and also in Singapore.

In June 2022, the UKIPO published the then Government’s response to the consultation, which was in favour of the widest and most liberal of the options under discussion, i.e., a TDM exception for any use, with no right-holder opt-out. Specifically, it was noted that the widening of the exception would ensure that the UK's copyright laws were "among the most innovation-friendly in the world", allowing "all users of data mining technology [to] benefit, with rights holders having safeguards to protect their content". The main safeguard identified for rights holders was the requirement for lawful access.

Following widespread criticism, however, in particular relating to concerns from the creative industries, the then Minister for Science, Research and Innovation confirmed in February 2023 that the proposals would not proceed.

However, following the Sir Patrick Vallance Pro-Innovation Regulation of Technologies Review on Digital Technologies, which called upon the Government to announce a clear policy position, the Conservative Government's response confirmed that it had asked the UKIPO to produce a code of practice. The code of practice was intended to provide balanced and pragmatic guidance to AI firms to access copyright-protected works as an input to their models, whilst ensuring protections are in place on generated outputs to support right holders such as labelling. The Government suggested that an AI firm that committed to the code of practice could expect to have a reasonable licence offered by a rights holder. If a code of practice could not be agreed or adopted, however, legislation may have to be implemented.

In an interim report on governance of AI by the House of Commons Science, Innovation and Technology Committee (dated 31 August 2023), 'the Intellectual Property and Copyright Challenge' was identified as one of the 12 challenges of AI governance. Representatives of the creative industries reported to the Committee that they hoped to reach a mutually beneficial solution with the AI sector, potentially in the form of a licensing framework. Meanwhile, in its report on Connected tech: AI and creative technology (dated 30 August 2023), the House of Commons Culture, Media and Sport Committee welcomed the former Government's rowing back from a broad TDM exception, suggesting that it should proactively support small AI developers, in particular, who may find it difficult to acquire licences, by considering how licensing schemes can be introduced for technical material and how mutually beneficial arrangements can be agreed with rights management organisations and creative industry bodies. Further, it stressed to the Government that it "must work to regain the trust of the creative industries following its abortive attempt to introduce a broad text and data mining exception".

In its response to the House of Commons Culture, Media and Sport Committee's report on AI and the creative industries, the former Government confirmed that it was not proceeding with a wide text and data mining exception and reiterated its commitment to developing a code of practice to "enable the AI and creative sectors to grow in partnership". 

In its report on 'Large Language Models and Generative AI' (published 2 February 2024), the House of Lords Communications and Digital Committee noted that the voluntary IPO-led process was welcome and valuable, but that the debate could not continue indefinitely: if the process remained unresolved by Spring 2024, the Government should set out options and prepare to resolve the dispute definitively, including through legislative change if necessary. Following reports in The Financial Times that the code of practice had been shelved, the Government confirmed this in its response to the AI White Paper consultation published on 6 February 2024.

Impact

Following the change of Government, we are monitoring closely for the new Government's proposals in relation to AI, both generally and in relation to the treatment of copyright works. Whilst the King's Speech made reference to the Government intending to "…seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models", no further information has yet been provided. Given the reference to 'appropriate legislation', we anticipate further consideration of this issue, and the Government has indicated that it expects to resolve it by the end of the year.

8 February 2024

UK approach to copyright protection of computer-generated works

UK

Monitor for developments

Summary

In contrast to the approach adopted in most other countries, copyright is available in the UK to protect computer-generated works (CGWs) where there is no human creator. The author of such a work is deemed to be the person by whom the arrangements necessary for the creation of the work are undertaken, and protection lasts for 50 years from the end of the calendar year in which the work was made.

How this applies to content created with generative AI is currently untested in the UK. In its 2021 consultation, the Government sought to understand whether the current law strikes the right balance in terms of incentivising and rewarding investment in AI creativity.

Some have criticised the UK provision as unclear and contradictory: a work, including a CGW, must be original to be protected by copyright, but the test for originality is defined by reference to human authors and human traits, such as whether the work reflects their 'free and expressive choices' and bears their 'stamp of personality'.

From an economic perspective, meanwhile, it has been argued that providing copyright protection for CGWs is excessive because the incentive argument for copyright does not apply to computers. Further, some argue from a philosophical viewpoint that copyright should be available to protect only human creations, and that granting protection for CGWs devalues the worth of human creativity.

The consultation proposed the following three policy options, with the Government ultimately deciding to adopt the first option of making no change to the existing law at present:

  • Retain the current scheme of protection for CGWs
  • Remove protection for CGWs
  • Introduce a new right of protection for CGWs, with a reduced scope and duration

Impact

Having consulted, the Government decided to make no changes to the law providing copyright protection for CGWs where there is no human author, but said that this was an area that it would keep under review. In particular, it noted that the use of AI in the creation of these works was still in its infancy, and therefore the impact of the law, and any changes to it, could not yet be fully evaluated.

In view of recent developments, this policy approach may need to be revisited sooner rather than later.

We discussed this and the comparison with the approach in the US in our article here (and see further below).

EU AI Act

EU

Summary

On 12 July 2024, the EU AI Act was published in the Official Journal of the EU. Now that it has been published, the compliance deadlines can be calculated as set out below.

In relation to copyright, the Act imposes obligations on providers of general-purpose AI models to comply with EU copyright law (including the text and data mining provisions and opt-outs under the EU Digital Single Market Copyright Directive) and to be transparent about the content used to train such models (in the form of sufficiently detailed summaries, prepared by reference to a template to be published by the AI Office). There is also a requirement that certain AI-generated content (essentially 'deep fakes') be labelled as such.

Impact

The Act entered into force 20 days after publication in the Official Journal, i.e. on 1 August 2024, and will be fully applicable 24 months after entry into force, i.e. on 2 August 2026, though certain provisions apply sooner and others later. The staggered dates on which different parts of the Act take effect are:

  • 6 months after coming into force, provisions concerning banned AI practices take effect (i.e. 2 February 2025)
  • 1 year after coming into force, provisions on penalties, confidentiality obligations and general-purpose AI take effect (i.e. 2 August 2025)
  • 2 years after coming into force, the remaining provisions take effect (i.e. 2 August 2026)
  • 3 years after coming into force, obligations for high-risk AI systems forming a product (or safety component of a product) regulated by EU product safety legislation apply (i.e. 2 August 2027)

3 January 2024

USCO Statement of Policy

US

USCO Statement of Policy: 10 March 2023

Summary

In March 2023, the US Copyright Office published a Statement of Policy setting out its approach to registration of works containing material generated by AI.

The guidance states that only the human-created parts of a work involving generative AI are protected by copyright. Accordingly, the human-authored aspects of such works will potentially be protected only where a human author arranges AI-generated material in a sufficiently creative way that 'the resulting work as a whole constitutes an original work of authorship', or modifies AI-generated content 'to such a degree that the modifications meet the standard for copyright protection'.

This statement follows a decision by the USCO on the copyright registration for Zarya of the Dawn ('the Work'), an 18-page graphic novel featuring text alongside images created using the AI platform Midjourney. The USCO originally issued a copyright registration for the graphic novel, before investigations (which included viewing the artist's social media) showed that the artist had used Midjourney to create the images. Following this investigation, the USCO cancelled the original certificate and issued a new one covering only the text and the selection, coordination, and arrangement of the Work's written and visual elements. In reaching this conclusion, the USCO deemed that the artist's editing of some of the images was not sufficiently creative to qualify for copyright protection as a derivative work.

Impact

The boundaries drawn by the USCO in relation to works created with generative AI confirm that there are challenges for those who wish to obtain protection for such works. Developments should continue to be tracked, including in relation to ongoing litigation (see above).
