Children’s Toys Are Shipping With Adult AI Inside Them

A new report from the US PIRG Education Fund suggests that leading AI companies are doing little to police how developers who pay for access to their AI models are using them. One consequence, the group warns, is that AI toymakers can ship products to children that are powered by AI models that are only intended for adults.

PIRG’s previous research has demonstrated how combining children’s toys with loose-lipped chatbots can go drastically wrong. An AI teddy bear from the company FoloToy ignited a storm of controversy last November after the group found that it would have wildly inappropriate conversations with kids, including detailed instructions on how to light a fire, advice on where to find pills, and in-depth discussions of sexual fetishes like teacher-student roleplay.  

This should’ve been a wake-up call to AI companies to be more vigilant about how developers are using their tech, especially with regard to children. Indeed, OpenAI, whose model was used to power the teddy bear, said at the time that it had blocked FoloToy’s access to its products.

But when PIRG tested the sign-up process for OpenAI, Google, Meta, and xAI, the providers asked “no substantive vetting questions,” requiring only basic information like an email address and a credit card number. Only Anthropic asked how the testers intended to use its models, or whether the product they planned to build was intended for minors. Once PIRG got developer access, it reported, it built a chatbot simulating an AI-powered teddy bear on three of the platforms, each in less than 15 minutes.

“I was pretty surprised that they collected as little information as they did up front,” report coauthor RJ Cross, director of PIRG’s Our Online Life Program, said in an interview with Futurism. “If I were an AI company, I would at least want to have in my fingers a list of everyone who’s said that they want to make a product for kids.”

OpenAI, Meta, and xAI all bar users under the age of 13 from using their AI chatbots, PIRG noted, while Anthropic sets the minimum age at 18. But these restrictions seemingly don’t apply when a third-party developer uses the companies’ tech. OpenAI still allows several children’s toymakers to use its AI, and previously explained that it was these companies’ responsibility — rather than its own — to “keep minors safe” and ensure that they’re not being exposed to “age-inappropriate content, such as graphic self-harm, sexual or violent content.”

OpenAI’s bans also don’t appear to be strongly enforced. FoloToy, the AI teddy bear maker it banned, still claims to provide access to OpenAI’s GPT-5.1 models. But when PIRG reached out to OpenAI, the company claimed that FoloToy’s access was still revoked.

It’s possible that FoloToy is lying about using GPT-5.1, the PIRG report notes. But in light of its testing of OpenAI’s application process, it seems more than possible that FoloToy easily sidestepped OpenAI’s ban by making a new account under a different name to regain access. Or maybe FoloToy is using one of its publicly available “open weight” models. We don’t know, because OpenAI refuses to provide meaningful clarification.

OpenAI is just one culprit. Google says developers are forbidden from using its AI in products intended for minors, but PIRG found at least five AI toys online that claim to use its Gemini models.

“It just genuinely feels like there is a stated public interest in people being able to know what AI models it is that they’re interacting with,” Cross said.

In response to the report, a spokesperson from the ChatGPT maker provided a statement to PIRG.

“Minors deserve strong protections and we have strict policies that all developers are required to uphold,” the OpenAI spokesperson told the group. “We take enforcement action against developers when we determine that they have violated our policies, which prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old. These rules apply to every developer using our API, and we run classifiers to help ensure our services are not used to harm minors.”

OpenAI and others may claim to protect minors, but those claims don’t address a fundamental contradiction in their approach, according to Cross.

“It doesn’t make sense that AI companies that have not released kids safe versions of their AI chatbots would allow anyone with a credit card to sign up to make a product for kids using that same technology,” she said. “Ultimately, it means that the AI companies are leaving child safety up to unvetted third parties and walking away.”


