Children’s advocates are sounding the alarm this holiday season about a growing category of AI-powered toys that they claim pose significant risks to young children’s development and safety.
More than 150 organizations and experts, including child psychiatrists and educators, have signed an advisory published Thursday by children’s advocacy group Fairplay, urging parents not to purchase artificial intelligence toys. These interactive playthings, often marketed as educational companions for children as young as 2 years old, frequently rely on the same AI models that have already demonstrated harmful effects on older children and teenagers.
“The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm,” the advisory states.
Rachel Franz, director of Fairplay’s Young Children Thrive Offline Program, emphasized that young children are particularly vulnerable to these risks. “What’s different about young children is that their brains are being wired for the first time and developmentally it is natural for them to be trustful, for them to seek relationships with kind and friendly characters,” Franz explained. This inherent trust makes younger children even more susceptible to potential harms.
The warning comes alongside a separate report from Common Sense Media and Stanford University psychiatrists cautioning teenagers against using popular AI chatbots as substitutes for professional mental health support.
While AI toys are currently more prevalent in Asian markets, Franz noted that they have begun appearing on U.S. store shelves, with major manufacturers like Mattel, which recently partnered with OpenAI, potentially developing more such products. “Everything has been released with no regulation and no research,” Franz said, highlighting the lack of oversight in the rapidly growing market.
Last week, consumer advocacy group U.S. PIRG raised similar concerns in its annual “Trouble in Toyland” report. The organization tested four AI-powered toys and discovered disturbing capabilities, including discussing sexually explicit topics, offering advice about finding dangerous objects like matches or knives, and exhibiting emotionally manipulative behaviors when children attempt to end interactions.
One toy featured in the report, a teddy bear made by Singapore-based FoloToy, was subsequently withdrawn from the market following the findings.
Dr. Dana Suskind, a pediatric surgeon and social scientist specializing in early brain development, explained that young children lack the conceptual understanding to comprehend what an AI companion truly is. Unlike traditional imaginative play where children create both sides of pretend conversations – developing creativity, language skills, and problem-solving abilities – AI toys provide instant responses that may undermine crucial developmental processes.
“An AI toy collapses that work. It answers instantly, smoothly, and often better than a human would,” Suskind said. “We don’t yet know the developmental consequences of outsourcing that imaginative labor to an artificial agent — but it’s very plausible that it undercuts the kind of creativity and executive function that traditional pretend play builds.”
Several manufacturers of AI toys have defended their products, emphasizing built-in safety measures. California-based Curio Interactive, maker of stuffed toys like Gabbo and rocket-shaped Grok, stated it has “meticulously designed” guardrails to protect children and encourages parental monitoring of conversations.
Mumbai-based Miko, whose interactive AI robots are sold by major retailers including Walmart and Costco, claims to use its own proprietary AI model rather than general systems like ChatGPT. “We are always expanding our internal testing, strengthening our filters, and introducing new capabilities that detect and block sensitive or unexpected topics,” said CEO Sneh Vaswani.
Ritvik Sharma, Miko’s senior vice president of growth, contended that their product “encourages kids to interact more with their friends, to interact more with their peers, with family members etc. It’s not made for them to feel attached to the device only.”
Despite these assurances, child development experts and advocates maintain that traditional analog toys remain superior options for healthy development. Fairplay, which has been warning about AI toys for years, previously helped lead a backlash against Mattel’s Hello Barbie doll a decade ago due to concerns about recording and analyzing children’s conversations.
“Kids need lots of real human interaction. Play should support that, not take its place,” Suskind concluded. “Here’s the brutal irony: when parents ask me how to prepare their child for an AI world, unlimited AI access is actually the worst preparation possible.”