Social media platforms across the United States are bracing for what experts anticipate will be an unprecedented wave of election misinformation as the country heads toward a deeply polarized 2024 vote.
Trust in both elections and social media information has reached concerning lows, while the political environment has created fertile ground for rumors and falsehoods to flourish. Recent polls indicate that more than one-third of Americans don’t believe Joe Biden legitimately won the presidency in 2020, a belief that has actually grown stronger over time despite extensive evidence to the contrary.
“Everyone wants to know if the companies are prepared and the unfortunate truth is we just don’t know,” said Katie Harbath, a former Facebook policy executive who now writes about technology’s intersection with democracy.
Social media companies appear to be taking divergent approaches to the challenge. Meta and YouTube have loosened restrictions on 2020 election misinformation, while TikTok has attempted to distance itself from political content altogether. Newer platforms are still developing their content moderation policies, and alternative sites like Truth Social and Telegram have deliberately embraced minimal content restrictions.
Industry-wide layoffs have hit trust and safety teams particularly hard, raising questions about platforms’ capacity to monitor and respond to misinformation at scale. While companies typically prioritize U.S. elections, the global nature of these platforms means elections in other democracies may receive less scrutiny and protection from disinformation campaigns.
The rapid advancement of artificial intelligence tools has introduced new complications to an already challenging landscape. Recent incidents have demonstrated AI’s potential to create convincing false content, including a fake audio call purporting to be President Biden telling New Hampshire voters to stay home, AI-generated images of Donald Trump being arrested, and fabricated images of Biden in military uniform.
In response to these emerging threats, the Federal Communications Commission announced a ban on AI-generated voices in robocalls on February 8, but experts warn regulation is struggling to keep pace with technological innovation.
“What we’re really realizing is that the gulf between innovation, which is rapidly increasing, and our consideration, our ability as a society to come together to understand best practices, norms of behavior, what we should do, what should be new legislation – that’s still moving painfully slow,” explained David Ryan Polgar, founder and president of the non-profit All Tech is Human.
Perhaps no platform exemplifies the shifting approach to election content more than X (formerly Twitter). Under Elon Musk’s ownership, the company has dramatically transformed its content moderation philosophy, eliminating many tools designed to flag potential misinformation and reinstating accounts previously banned for spreading election falsehoods. Musk himself has been criticized for sharing election misinformation.
The platform now relies primarily on “community notes” for fact-checking rather than official interventions. Cybersecurity expert Chester Wisniewski of Sophos observed that without its previous safeguards, X has a greater capacity to accelerate misinformation in 2024, though he suggested the platform may have “discredited itself enough” that users now place less trust in the content they encounter there.
“Four years ago, the president of the United States used that platform as his primary way to communicate with the public – like it was literally the voice of the president’s office,” Wisniewski said. “And it went from that to cryptocurrency scams in four years.”
Adding to these challenges, researchers who track misinformation face growing obstacles. Republican lawmakers, led by Congressman Jim Jordan, have launched investigations into social media platforms’ content moderation practices, claiming they unfairly target conservative viewpoints. These political pressures, combined with ongoing legal challenges including a pending Supreme Court case, have created what researchers describe as a “chilling effect” on their work.
Restrictions on researcher access to platform data also hamper efforts to identify coordinated misinformation campaigns, which often spread across multiple platforms. As Harbath noted, “You don’t have to influence a lot of people for something like January 6.”
With less than nine months until Election Day, the intersection of social media, misinformation, and American democracy continues to raise profound questions about how private companies should navigate their outsized influence on public discourse, especially when the country remains so deeply divided.