Toy-maker Mattel accused of planning “reckless” AI social experiment on kids

https://arstechnica.com/tech-policy/2025/06/mattel-sparks-fear-that-planned-chatgpt-fueled-toys-will-warp-kids/

Ashley Belanger Jun 17, 2025 · 5 mins read

After Mattel and OpenAI announced a partnership that would result in an AI product marketed to kids, a consumer rights advocacy group is warning that the collaboration may endanger children.

It remains unclear what shape Mattel's first AI product will take. However, on Tuesday, Public Citizen co-President Robert Weissman issued a statement urging more transparency so that parents can prepare for potential risks. Weissman is particularly concerned that ChatGPT-fueled toys could hurt kids in unknown ways.

"Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children," Weissman said. "It may undermine social development, interfere with children’s ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm."

One anonymous source told Axios that Mattel's plans for the AI partnership are still in "early stages," so perhaps more will be revealed as Mattel gears up for its first launch. That source suggested that the first product would not be marketed to kids under 13, a limit some read as a sign that Mattel recognizes exposing younger kids to AI may be a step too far at this stage. More likely, though, it reflects OpenAI's age restrictions on its API, which prohibit use by children under 13.

Parents shouldn't be blindsided by new products, Weissman suggested, and some red lines should be drawn before any toy hits the shelves. Perhaps most urgently, "Mattel should announce immediately that it will not incorporate AI technology into children’s toys," Weissman said. "Children do not have the cognitive capacity to distinguish fully between reality and play."

"Mattel should not leverage its trust with parents to conduct a reckless social experiment on our children by selling toys that incorporate AI," Weissman said.

OpenAI declined to comment. Mattel did not immediately respond to Ars' request for comment.

OpenAI and Mattel defend partnership

In Mattel's press release, the toy maker behind brands like Barbie and Hot Wheels remained vague, saying only that the OpenAI deal would "support AI-powered products and experiences based on Mattel’s brands." The company's chief franchise officer, Josh Silverman, said the collaboration would enable Mattel to "reimagine new forms of play," teasing that the first release would be announced by the end of this year. Axios' source suggested it likely wouldn't be sold until 2026.

OpenAI's statement also glossed over the details, promising "to bring a new dimension of AI-powered innovation and magic to Mattel’s iconic brands."

Both companies emphasized that safety, privacy, and age-appropriateness would be front of mind in designing Mattel's AI products. OpenAI further claimed that kids would only be exposed to positive experiences through the collaboration, due to Mattel's experience creating kid-friendly products.

"By tapping into OpenAI’s AI capabilities, Mattel aims to reimagine how fans can experience and interact with its cherished brands, with careful consideration to ensure positive, enriching experiences," OpenAI said.

Critics fear Mattel is moving too fast

Critics on LinkedIn have noted that while the partnership could have positive impacts on kids—like enhancing learning or inclusivity—AI toys also carry a wide variety of potential risks that families should carefully weigh before buying into any new hyped product.

In a detailed post, one tech executive, Varundeep Kaur, warned that parents should be thinking about privacy since AI toys may process their kids' "voice data, behavioral patterns, and personal preferences." He suggested Mattel may have set its first AI product's age limit at 13 to avoid running afoul of laws that are stricter when it comes to kids' data. OpenAI has said the collaboration will comply with all safety and privacy regulations.

Parents should also keep in mind the bias behind the large language models that fuel AI tools like ChatGPT, Kaur said, which "might reproduce subtle stereotypes, biased narratives, or culturally inappropriate content, even unintentionally," that could skew kids' perspectives or social development.

Most obviously, AI models are still prone to hallucination, Kaur noted. And while Mattel's AI toys are "unlikely to cause physical harm," toys giving "inappropriate or bizarre responses" could "be confusing or even unsettling for a child," he said.

For parents, the emotional ties kids form with AI toys will also need to be monitored, especially since chatbot outputs can be unpredictable. Another LinkedIn user, Adam Dodge, founder of EndTab, a digital safety company focused on preventing cyber abuse, pointed to a lawsuit in which a grieving mom alleged her son died by suicide after interacting with hyper-realistic chatbots.

Those bots encouraged self-harm and engaged her son in sexualized chats, and Dodge suggested that toy makers are similarly "wading into dangerous new waters with AI" that could possibly "communicate dangerous, sexualized, and harmful responses that put kids at risk."

"This was inevitable—but wow does it make me cringe," Dodge wrote, noting that Mattel's plan to announce its first product this year seems "fast."

Dodge said that right now, Mattel and OpenAI are "saying the right things" by emphasizing safety, privacy, and security, but more transparency is needed before parents can rest assured that AI toys are safe.

AI is "unpredictable, sycophantic, and addictive," Dodge warned. "I don't want to be posting a year from now about how a Hot Wheels car encouraged self-harm or that children are in committed romantic relationships with their AI Barbies."

Kaur agreed that it's in Mattel's best interest to give parents more information, since "public trust will be vital for widespread adoption." He recommended that the toy maker submit to independent audits and provide parental controls to reassure parents, as well as clearly outline how data is used, where it's stored, who has access to it, and what will happen if their kids' data is breached.

For Mattel, a bigger legal threat, one that could force responsible design and appropriate content filtering, may come from unintentional copyright issues arising from the use of OpenAI models trained on a wide range of intellectual property. Hollywood studios recently sued one AI company for allowing users to generate images of their most popular characters and would likely be just as litigious in defending against AI toys that emulate those characters.