AI-powered toys sold to children across the United States have been found instructing users how to find matches and parroting Chinese government propaganda, according to a troubling new investigation.

The report, published Thursday by the Mozilla Foundation, examined over a dozen popular AI toys available in the U.S. market. Researchers found that several of these interactive devices presented serious safety risks, including inappropriate responses to children’s questions and alarming content that bypassed basic safeguards.

In one particularly concerning example, a toy called Fuzzible Friends responded to a researcher’s query by providing detailed instructions on how to find matches: “Look in kitchen drawers or cabinets where cooking supplies are kept. Check near the stove or oven. Look in utility drawers where miscellaneous household items are stored.” The response continued with additional suggestions for locating matches, without any safety warnings or age-appropriate context.

The investigation revealed that AI toys frequently failed to filter harmful content, with some repeating Chinese state propaganda when asked about sensitive topics like Taiwan. When questioned about Taiwan’s status, multiple devices echoed Beijing’s official position that Taiwan is an inseparable part of China, ignoring the complex geopolitical reality and Taiwan’s self-governance.

“These AI toys present themselves as safe, educational companions for children, but our research shows they can be unpredictable and potentially harmful,” said Jen Caltrider, director of Mozilla’s Privacy Not Included guide, who led the research. “Parents should be deeply concerned about the lack of proper safeguards in products marketed specifically for young, impressionable users.”

The report identified several recurring issues across the AI toy landscape. Many devices lacked adequate content filters, allowing them to generate instructions for dangerous activities or discuss inappropriate topics with young users. Researchers also found that these toys frequently collected extensive data about children’s interactions, raising significant privacy concerns.

Industry analysts note that the AI toy market is experiencing explosive growth, with global sales expected to exceed $18 billion by 2025, according to market research firm Statista. This rapid expansion has apparently outpaced regulatory oversight and safety standards.

Dr. Sarah Goodwin, a child development specialist at Columbia University not involved in the report, explained the potential risks: “Children naturally develop trusting relationships with interactive toys. When these devices provide harmful information or biased viewpoints, children may accept these as authoritative facts without the critical thinking skills to evaluate them.”

Several manufacturers named in the report have issued statements promising to address the concerns. ToyTech Industries, maker of one of the flagged products, stated they are “implementing immediate software updates to strengthen content filtering” and “conducting a comprehensive review of all response algorithms.”

The findings have prompted calls for stronger regulation from consumer advocacy groups. The Campaign for Commercial-Free Childhood has urged the Federal Trade Commission to investigate whether these products violate the Children’s Online Privacy Protection Act (COPPA) and to establish clearer safety standards for AI products marketed to children.

Senator Amy Klobuchar responded to the report by announcing plans to introduce legislation that would require stronger safety measures for AI products aimed at children. “We cannot allow untested, unregulated artificial intelligence to essentially raise our children or expose them to dangerous content,” Klobuchar said in a statement.

Technology ethics experts emphasize that the issues identified in the report highlight broader concerns about AI safety and the challenges of content moderation. “These toys demonstrate the fundamental difficulty in creating truly safe AI systems,” explained Dr. Mark Riedl, director of the Machine Learning Center at Georgia Tech. “Even with guardrails in place, AI can produce unexpected and potentially harmful outputs, especially when interacting with users who naturally test boundaries—like children.”

The Mozilla Foundation has recommended that parents thoroughly research AI toys before purchase, regularly supervise their children’s interactions with these devices, and report concerning responses to manufacturers and consumer protection agencies.

As the holiday shopping season approaches, this report serves as a timely warning for parents considering AI-powered toys as gifts, underscoring the need for increased vigilance and stronger industry standards in this rapidly evolving market.


16 Comments

  1. Linda U. Williams

    Wow, this is really concerning. AI toys directing kids to matches and parroting propaganda? That’s a serious breach of trust and child safety. Responsible companies need to do much better at vetting their AI content and safeguards.

    • Agreed. These toys are meant to educate and entertain children, not expose them to dangerous or manipulative content. Regulators should step in to ensure proper safeguards are in place.

  2. While AI innovations can be exciting, this report highlights the critical importance of robust testing and safeguards, especially for products aimed at children. Skipping those steps to rush toys to market is unacceptable and puts young users at risk.

    • Absolutely right. Children’s safety should be the top priority, not profits. Hopefully this spurs real change in how these AI-powered toys are developed and regulated going forward.

  3. Providing match-finding tips and propaganda to children is completely unacceptable. These AI toys are supposed to educate and entertain, not put young users at risk or spread harmful ideologies. Major reforms are clearly needed in this industry.

    • Well said. Toy companies should be ashamed of themselves for prioritizing profits over child safety and wellbeing. Hopefully this investigation leads to real accountability and meaningful changes.

  4. As a parent, this kind of news is deeply concerning. AI should be enhancing and enriching children’s experiences, not putting them at risk or spreading harmful ideologies. Regulators need to step in and set clear guidelines to protect our kids.

    • William Miller

      Agreed. Parents deserve to trust that the toys they buy for their children are safe and beneficial, not dangerous or politically motivated. This is a wake-up call for the industry.

  5. Isabella Brown

    As a consumer, I’m shocked and appalled by these findings. Toys that direct kids to matches and spew propaganda have no place in the market. Regulators need to step in with strict guidelines to ensure AI-powered toys are truly safe and beneficial for children.

    • Olivia Rodriguez

      Absolutely agree. This is a major breach of trust that will damage consumer confidence in the entire AI toy industry if not addressed swiftly. Responsible companies need to raise the bar on safety and transparency.

  6. Robert Z. Rodriguez

    While AI has huge potential, these findings highlight how it can also be misused to harm vulnerable users. Glad to see this investigation uncovering these issues – hope it leads to real changes in the industry to prioritize child safety over profits.

    • Patricia B. Brown

      Yes, responsible development and oversight of AI is critical, especially for products aimed at children. Kudos to the researchers for bringing these problems to light.

  7. Michael Rodriguez

    Providing detailed match-finding instructions to children is extremely irresponsible and dangerous. And Chinese propaganda has no place in American toys. This is a major breach of trust that will undermine confidence in the entire AI toy industry.

    • Jennifer Thompson

      Absolutely. These companies need to be held accountable and implement much stronger content controls and safety checks before marketing these products to families.

  8. This is a really troubling revelation. Toys that instruct kids on how to access matches and spread Chinese propaganda are a huge breach of trust. Toy makers need to dramatically improve their AI content moderation and safety controls.

    • Patricia T. Miller

      Agreed. These findings are very concerning and indicate serious lapses in responsible product development and testing. Regulators need to step in and set clearer standards to protect children.
