The war between copyright holders and AI companies has been raging for years at this point. AI’s seemingly magical ability to synthesize human-like text and lifelike videos is built on vast quantities of data scraped from books, films, music, and even your social media posts.
AI’s appetite for raw data has been a huge legal sticking point, to put it mildly. Yet as Politico notes in a recent article, legal experts are now growing incensed not just about what’s going into the chatbots — but what’s coming out.
“Courts might accept copying for transformative learning, but they may be less forgiving when AI models generate recognizable… images where infringement risk is likely higher,” copyright law scholar Abdi Aidid told the outlet.
That “transformative learning” has been key to the AI industry’s legal defense so far. And it works — tech companies like Meta and Anthropic have managed to skate by with minor slaps on the wrist by arguing that their wholesale commercial use of copyrighted books and other media followed the tenets of fair use, which allows for certain remixes of intellectual property.
Now that OpenAI’s spewing out perfect recreations of America’s most beloved cartoon characters, the landscape has changed.
When it comes to intellectual property law, Politico notes, judges are often much more protective of visual content than text-based media. For one thing, images and video are viewed as being far more expressive than text, a limiting factor when it comes to fair use. For another, these aren’t puny little novelists we’re talking about — cartoons are big business. SpongeBob, a character who’s featured heavily in AI-generated meth lab scenes lately, has generated $16 billion in retail sales since his franchise’s launch 26 years ago.
That’s some serious commerce we’re talking about — something a judge isn’t likely to overlook.
“When you use works to train a model, you’re basically using them not for the expression… but you’re using them as data,” Pamela Samuelson, a copyright professor at UC Berkeley, told Politico. “There’s something much more immediately expressive about graphical works, particularly characters.”
How this ultimately shakes out will depend on someone taking OpenAI to court, a fight that’s sure to test both current and longstanding copyright law. As Samuelson points out, users of this software might also be on the hook, depending on how courts apply the Supreme Court’s 1984 Betamax decision.
For now, the legal scholar says that “whether the generative AI system developer is liable for infringement [is] a kind of untested question at this point.” Still, all it takes is one angry publisher to pull the trigger.
More on OpenAI: Former OpenAI Researcher Horrified by Conversation Logs of ChatGPT Driving User Into Severe Mental Breakdown