AI Industry Leaders Must Apply Social Media Lessons to Address Technology Risks
As artificial intelligence increasingly permeates daily life through prompts, pop-ups, and digital assistants, public concern about its potential harms continues to grow. Recent headlines allege that AI has contributed to teen suicides, diminished human intimacy, spread defamation, hindered education, and facilitated cyberattacks. Meanwhile, President Donald Trump has moved to override state-level AI regulation in favor of an as-yet-undefined national approach.
This technological moment echoes the social media revolution that began over two decades ago. Platforms like Facebook, Twitter, Instagram, and TikTok have transformed how we understand friendship, enabled mass movements, and democratized public discourse. However, they’ve simultaneously fueled political polarization, disinformation, harassment, self-harm, alienation, and even violence.
The hard lessons learned from social media’s growth should inform our approach to AI governance, or we risk repeating past mistakes with potentially more severe consequences.
Today’s major platforms employ automated systems that detect and remove, with increasing accuracy, content that incites violence, deceives users, promotes bigotry, or violates privacy. European regulations now require companies to explain algorithmic content recommendations, clearly label political advertisements, and disclose which posts are removed and why.
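To make the mechanics concrete, here is a minimal sketch in Python of how such an automated takedown pipeline can be structured. The classifier score_policy_violations and the per-category thresholds are hypothetical placeholders, not any platform’s actual system; production pipelines pair far larger models with human review and appeals.

    from dataclasses import dataclass

    # Illustrative policy categories named in the article, with hypothetical
    # removal thresholds; no real platform publishes numbers like these.
    THRESHOLDS = {
        "violence_incitement": 0.90,
        "deception": 0.95,
        "bigotry": 0.90,
        "privacy_violation": 0.85,
    }

    @dataclass
    class ModerationDecision:
        remove: bool
        reasons: list[str]  # disclosed to the user, echoing EU transparency rules

    def score_policy_violations(text: str) -> dict[str, float]:
        """Hypothetical stand-in for a trained classifier."""
        # A production system would call an ML model here; this stub flags nothing.
        return {category: 0.0 for category in THRESHOLDS}

    def moderate(text: str) -> ModerationDecision:
        scores = score_policy_violations(text)
        reasons = [cat for cat, s in scores.items() if s >= THRESHOLDS[cat]]
        return ModerationDecision(remove=bool(reasons), reasons=reasons)

The detail worth noticing is that the decision carries its reasons: disclosing which posts are removed and why is exactly what the European rules described above demand.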
While content moderation remains imperfect, it has demonstrated potential to limit harmful content while preserving open discourse. Even self-described “free speech absolutist” Elon Musk maintains moderation policies on X, removing child exploitation material and harassment. Meta, despite scaling back moderation in areas like immigration and transgender topics, still enforces approximately 80 pages of community standards governing issues from nudity to terrorism.
Technology companies often resist content restrictions, preferring minimal limitations. Whistleblowers have exposed instances where profit motives trumped user protection. Following revelations about Meta’s platform being used to incite violence against Myanmar’s Rohingya population and to spread foreign influence campaigns in elections, the company established an independent oversight board to balance free expression with values like human dignity and safety.
Large language models (LLMs) present free expression challenges similar to social media’s, but with key differences. Character.AI is invoking First Amendment protections against a lawsuit filed by the family of a 14-year-old who died by suicide, allegedly hoping to be united with a chatbot in the afterlife. In Minnesota, a solar contractor is suing OpenAI for generating false reports that it engaged in deceptive sales practices.
Unlike social media platforms, which primarily moderate user-generated content, LLM disputes center on information the platform itself provides to users. While social media companies generally enjoy liability protection under Section 230 in the U.S., legal experts largely agree that LLMs will not receive similar shields, creating significant legal exposure for companies with inadequate safeguards.
Current content policies for AI systems remain underdeveloped compared to their social media counterparts. Meta AI’s published “use policy” spans just over three pages, while OpenAI’s guidelines contain roughly 1,000 words—far less comprehensive than Meta’s extensive social media community standards. Despite attempts to avoid content moderation complexities, provisions like Meta’s prohibitions on “impersonation” and “disinformation” by LLMs will inevitably encounter the same interpretive challenges faced by social media platforms.
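The structural difference noted above, that the platform itself produces the text in dispute, is visible in where a safeguard has to sit. A minimal sketch, again with hypothetical helpers (generate standing in for the model call, flagged_categories for a naive policy check keyed to terms like those in Meta’s policy), screens the model’s own draft answer before it reaches the user, rather than reviewing user posts after publication.

    def generate(prompt: str) -> str:
        """Hypothetical stand-in for an LLM completion call."""
        return "Draft answer to: " + prompt

    # Hypothetical categories, echoing terms from Meta AI's use policy.
    BANNED_CATEGORIES = ("impersonation", "disinformation")

    def flagged_categories(text: str) -> list[str]:
        """Hypothetical check; a naive keyword match, for illustration only."""
        return [cat for cat in BANNED_CATEGORIES if cat in text.lower()]

    def answer(prompt: str) -> str:
        draft = generate(prompt)
        violations = flagged_categories(draft)
        if violations:
            # Withhold the platform-generated text and disclose why.
            return "Response withheld. Policy concerns: " + ", ".join(violations)
        return draft

Because the flagged text is the platform’s own output rather than a user’s post, a failure here is the company itself speaking falsely, which is precisely why the Section 230 shield is unlikely to apply.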
Both technologies require global policies informed by local cultural awareness while maintaining cross-border consistency. Meta’s Oversight Board applies international human rights law standards to determine appropriate expression limits, particularly to prevent imminent harms like violence incitement—an approach equally applicable to AI-generated content.
Beyond enforcing specific content rules, companies must anticipate, disclose, and mitigate broader systemic risks, as required by EU regulation. While social media can spread disinformation and extremist content, chatbots pose additional concerns, including replacing human connection with artificial engagement and undermining traditional markers of authenticity.
OpenAI CEO Sam Altman has faced criticism for seemingly dismissive attitudes toward mental health concerns related to AI companions and adult content. Technology executives’ empty assurances about user safety have historically proven problematic, leading companies like Meta to increasingly rely on independent experts, civil society organizations, and researchers to inform policies and identify emerging issues.
Meaningful expert engagement requires diverse representation, substantial investment, transparent data sharing, independence safeguards, and willingness to implement recommendations despite potential commercial impacts. Meta deserves recognition for being the only major social media company to subject itself to meaningful external oversight.
As AI development accelerates, ensuring safety requires constraints that have been stress-tested against the harms they are meant to prevent. Designing safeguards equal to AI’s velocity and destructive potential will require applying every lesson learned from digital content moderation, and developing new approaches to address unprecedented challenges.