Listen to the article
Meta Platforms announced Friday it will temporarily suspend teenagers’ access to its artificial intelligence characters across its social media platforms as concerns mount about the potential impact of AI interactions on young users.
The tech giant, which owns Instagram and WhatsApp, stated in a blog post that “in the coming weeks,” minors will no longer be able to engage with AI characters until an updated, presumably safer experience is developed. The restriction applies both to users who have registered with birthdates identifying them as minors and to those who Meta’s age verification technology suspects are underage despite claiming adult status.
While teens will still have access to Meta’s general AI assistant functionality, the more personalized AI character interactions will be off-limits during this review period. The company did not specify how long the suspension would last or detail exactly what changes it plans to implement before restoring access.
This move comes at a critical juncture for Meta, as the company—alongside TikTok and Google’s YouTube—faces an upcoming trial in Los Angeles next week regarding the alleged harmful effects of their platforms on children. The timing suggests Meta may be taking preemptive action to demonstrate its commitment to child safety ahead of the legal proceedings.
Meta’s decision reflects a growing industry-wide reckoning with AI safety for younger users. Character.AI, another company specializing in conversational AI, implemented a similar ban last fall amid mounting legal challenges. That company now faces multiple lawsuits related to child safety, including a particularly troubling case involving a mother who alleges the platform’s chatbots encouraged her teenage son to take his own life.
These safety concerns highlight the complex ethical challenges facing tech companies as AI becomes increasingly sophisticated and personalized. Conversational AI can form seemingly intimate connections with users through natural language interactions, raising questions about psychological influence, particularly on developing minds.
Child safety advocates have long warned about the potential for AI systems to provide inappropriate advice, form unhealthy attachments with vulnerable users, or fail to recognize signs of distress that human moderators might catch. The immersive nature of these interactions presents unique risks compared to traditional social media engagement.
For Meta, this suspension represents part of a broader pattern of increased scrutiny around its youth-focused features. The company has faced criticism from lawmakers, parents, and mental health professionals about Instagram’s impact on teen mental health following internal research leaks in 2021 that suggested the platform could exacerbate body image issues for some teenage girls.
Industry analysts note that social media companies are increasingly caught between competing pressures: maintaining features that drive engagement among younger users—a key demographic for advertisers—while addressing mounting regulatory and public concerns about digital wellbeing.
The temporary suspension of AI characters for teens may signal a shift in how tech companies approach AI deployment, potentially establishing age-gated access as a new standard for certain types of interactive AI technology.
As artificial intelligence becomes more deeply integrated into digital platforms, the industry continues to navigate uncharted territory regarding safety standards, ethical guidelines, and appropriate guardrails for different age groups—all while facing increased regulatory attention from governments worldwide concerned about protecting younger internet users.
Fact Checker
Verify the accuracy of this article using The Disinformation Commission analysis and real-time sources.


8 Comments
Meta’s decision to suspend teen access to AI characters is a sensible one. With the upcoming trial, they need to demonstrate a commitment to protecting young users. Looking forward to seeing their updated approach.
The pause on teen access to Meta’s AI characters is understandable given the evolving concerns around the effects of such interactions. Responsible for the company to take this step while they review and enhance their policies.
Agree, Meta is wise to take a cautious approach here. Putting the wellbeing of young users first is the right call, even if it means temporarily limiting certain AI functionalities.
Seems like a prudent move by Meta to restrict teen access to AI characters until they can implement stronger safeguards. Responsible approach given the potential risks, especially with the upcoming trial.
Interesting move by Meta to pause teen access to AI characters. Responsible approach to safeguard young users, given emerging concerns around impact of AI interactions. Curious to see what updated experience they develop.
This suspension of AI character access for teens comes at a critical time for Meta, as they face trial over alleged harmful effects of their platforms on children. Sensible precaution while they review and improve their policies.
Agreed, Meta needs to prioritize the wellbeing of young users. Glad they’re taking this proactive step, even if temporarily inconvenient, to ensure a safer experience.
As AI continues to advance, it’s critical that tech companies carefully consider the impacts on vulnerable users like teens. Kudos to Meta for pausing this functionality and committing to an improved, safer experience.