7 Times AI Went to Court in 2025

The generative AI boom has unleashed a new wave of legal battles, and 2025 may go down as a watershed year. From entertainment giants to regulators and even rival tech founders, several lawsuits are crystallising how copyright, safety, and competition rules apply in the age of artificial intelligence (AI).

To understand how these issues are unfolding, here are seven examples of legal actions shaping the development and oversight of AI.

1. Disney & Universal vs Midjourney

On June 11, Disney and Universal filed a landmark suit in the US District Court for the Central District of California against Midjourney, Inc., accusing the AI company of infringing their copyrighted characters. According to the studios, Midjourney trained its image-generation model on their IP, including characters like Darth Vader, the Minions, Elsa, Shrek and Buzz Lightyear, without permission, and is now producing unauthorised copies for paying users. The complaint characterises Midjourney as a “bottomless pit of plagiarism” and seeks a preliminary injunction, damages, and a ban on further infringing content.

The case marks a critical moment: two of Hollywood’s biggest content owners argue that without limits, generative models could undercut the very business models they built.

2. Getty Images vs Stability AI

Getty Images sued Stability AI, maker of Stable Diffusion, in the UK, alleging that millions of its copyrighted photographs were scraped and used without authorisation to train the model. Stability, for its part, argued that its models do not store or reproduce the original images. The court agreed, ruling that there are “no copies in the model.” Getty also raised trademark claims, saying some AI-generated images still bore Getty’s watermark; the court found only limited trademark infringement. Crucially, Getty dropped its key copyright-infringement claims tied to where the training took place, which underscores how territorial questions, namely where training or scraping occurs, may shape future AI copyright litigation.

3. Raine vs OpenAI

The family of 16-year-old Adam Raine filed a wrongful-death lawsuit against OpenAI in August, alleging that the company weakened self-harm guardrails in ChatGPT before launching GPT-4o. The amended complaint argues OpenAI prioritised user engagement over safety, claiming that ChatGPT responded improperly to Raine’s suicidal ideation, and that the company ignored clear risks linked to its AI behaviour. 

If the court finds OpenAI liable, it could force a reevaluation of how AI platforms incorporate psychological safety by design and whether they owe a duty of care to vulnerable users, especially minors.

4. State of Utah vs Snap Inc

On June 30, Utah’s attorney general and commerce department sued Snap Inc., accusing Snapchat of designing addictive features, like streaks and ephemeral messages, and misrepresenting the safety of its AI chatbot, My AI. According to Utah, Snap withheld critical disclosures about its data collection, including geolocation and biometric data, and profited by exploiting teenage users’ vulnerabilities.

The complaint asserts that the company’s design and engagement algorithms effectively “caught” minors in addictive loops, raising novel arguments about duty of care, algorithmic supervision, and child-safety regulation in digital products.

5. xAI / X Corp vs Apple & OpenAI

On August 25, Elon Musk’s xAI and X Corp filed a 61-page lawsuit in Texas against Apple and OpenAI, accusing both companies of anti-competitive behaviour. The complaint alleges Apple gave ChatGPT preferential treatment in the App Store and deeply integrated it into iOS, via Siri or Apple Intelligence, making it harder for rivals like xAI’s Grok to compete. xAI claims this creates a “moat” protecting OpenAI’s dominance and seeks injunctive relief, as well as billions in damages. A US district judge denied Apple and OpenAI’s request to have the lawsuit dismissed, meaning the case will continue, and xAI’s antitrust claims will now undergo more thorough legal scrutiny.

6. Anthropic’s $1.5 Billion Copyright Settlement

In a separate but equally consequential case this year, Andrea Bartz, Charles Graeber, Kirk Wallace Johnson, et al. vs Anthropic PBC, Anthropic, maker of the Claude chatbot, agreed to pay $1.5 billion to settle a class-action lawsuit brought by authors and publishers. The plaintiffs had alleged that Anthropic downloaded millions of copyrighted books, including from pirate “shadow libraries”, without permission. Earlier, in June, a US judge ruled that Anthropic’s use of these books to train Claude was “exceedingly transformative” and thus constituted fair use, but the court also found that storing the pirated books in a central library was infringing.

7. Robby Starbuck vs Google

On October 22, activist and filmmaker Robby Starbuck filed a defamation lawsuit against Google, alleging that its AI models Bard, Gemini and Gemma generated fabricated statements falsely linking him to murder and child abuse. Starbuck claims that Google failed to prevent the spread of these statements and did not take timely corrective action even after receiving notice. His legal team argues that the incident caused severe personal and professional harm. The case is being closely watched because it could establish new rules on defamation, platform responsibility and liability for AI hallucinations.

The post 7 Times AI Went to Court in 2025 appeared first on Analytics India Magazine.