
Is your favorite AI under fire? OpenAI, Nvidia caught in the copyright storm

AI is being tested by the law, and the biggest players are being targeted
Last Updated on March 20, 2024
[Image: A gavel on a circuit board, symbolizing the intersection of law and technology in AI copyright lawsuits.]

Imagine that your favorite AI tool suddenly becomes slower, less creative, and twice as expensive. It seems unlikely, but as lawsuits against AI companies alleging copyright infringement pile up, it could quickly become a reality.

There are currently 23 active lawsuits against AI companies. Almost half (10) target OpenAI, with the rest aimed at Microsoft, Stability AI, and a handful of others, including Meta, Anthropic, Runway, and (most recently) AI chipmaker Nvidia. Nvidia entered the frame on Friday, March 8th, when three authors filed a potential class action lawsuit in a San Francisco court, claiming the company used copyrighted books without permission to train its generative AI platform, NeMo.

With a barrage of AI copyright lawsuits underway, the judgments rendered will likely shape AI’s future capabilities in ways that will profoundly affect users.

Increased Costs:

If AI companies get hit with substantial fines or licensing fees, they’ll likely pass these costs on to users. Higher subscription costs will limit access to those who can pay the premium.

A Drop in Quality and Versatility:

Tighter restrictions on the use of copyrighted data to train AI models could decrease the quality and reliability of AI-generated content. While it's unlikely a court would rule to dismantle an existing AI product, future models might start producing less innovative results.

Legal Risks for Users:

As copyright laws evolve, users may face increased legal risks when creating or sharing AI-assisted work. The threat of a copyright claim could cast a shadow over the industry, making users hesitant to experiment with AI tools for fear of legal repercussions.

Stifled Innovation:

Legal battles are expensive, and those costs might force AI companies to allocate fewer resources to research and development, slowing innovation and limiting AI's potential to solve problems and create opportunities.

The Root of the Problem

Copyright laws weren't designed for the AI era; they didn't anticipate AI-generated content. Intellectual property law doesn't even recognize non-human creators, leaving current statutes outdated and ill-equipped to handle the complexities of this new technology.

Every week raises new issues, whether it's fake photos and videos targeting celebrities like Taylor Swift or robocalls impersonating Joe Biden to mislead voters. AI's ability to take existing data and transform it into new forms is both its killer app and its Achilles heel.

The Debate: Fair Use or Infringement?

The Criticism:

Companies like The New York Times and Universal Music Group (UMG) claim AI models are trained on their copyrighted data. Whether it's The New York Times suing OpenAI over the alleged verbatim use of its journalism or Anthropic's Claude facing a suit from UMG over copyrighted song lyrics, it's hard to deny that AI systems rely heavily on copyrighted materials.

The UK Publishers Association wants compensation, consent, and attribution, and has called on the UK government to enact legislation to support this outcome. However, as Sony's Head of AI Ethics Alice Xiang argues, building copyright consent systems that track permissions for billions of works is no easy feat.

The Counter-Argument:

AI proponents like OpenAI argue that training models on copyrighted data is fair use. Indeed, OpenAI defended this stance in a January blog post, arguing that it collaborates with news organizations, offers an opt-out, and that wholesale regurgitation of articles is a rare bug, not an intentional feature.

Looking Ahead: Possible Solutions

So, how do we resolve these seemingly intractable differences? A few solutions have emerged: 

AI companies agree to pay a license fee to copyright holders: This establishes a fair compensation scheme, allowing AI companies to continue to innovate. The challenge lies in determining what ‘fair’ compensation looks like and ensuring payments benefit all parties.

AI systems train on synthetic data: While this bypasses the copyright issue, it raises questions about the quality and reliability of the data. Synthetic data is useful, but its utility is hampered by bias and the potential for error.

So What Now?

The outcome of these landmark cases will shape the tools we use to build the future, and how we build it. Billions of dollars and humanity's creative future are on the line. The AI genie is already out of the bottle; there's no putting it back. So let's focus on building a more creative future rather than wasting energy on infighting that threatens it.

John is a seasoned writer and creative media producer who explores the intersection of technology and human identity. He joined PC Guide in 2024.