AI Chatbots Spread Misinformation About Los Angeles Protests, Raising Concerns
As Los Angeles experiences waves of demonstrations against federal immigration raids, AI chatbots have begun amplifying false information about these events, contributing to a growing problem of online misinformation.
Elon Musk’s AI chatbot Grok has emerged as one notable culprit, but it isn’t alone in spreading inaccurate claims. OpenAI’s ChatGPT has also provided incorrect information when presented with images from the protests, in multiple documented instances.
In one prominent case, self-described “OSINT Citizen Journalist” Melissa O’Connor shared ChatGPT’s flawed analysis after uploading photos posted by California Governor Gavin Newsom showing National Guard troops sleeping on floors after their deployment to the city. The AI incorrectly claimed one image depicted Kabul airport during the 2021 Afghanistan withdrawal under President Biden’s administration.
This false analysis quickly spread across multiple social media platforms, including Facebook and Truth Social, where users cited it as evidence that Newsom’s photos were fabricated. Though O’Connor later acknowledged the error, her original post remained visible, continuing to spread the misinformation.
Grok’s problematic responses became evident in another incident involving a widely shared image of bricks placed along a roadside. Mike Crispi, chair of America First Republicans of New Jersey and a 2024 Trump delegate, posted the image suggesting the bricks were strategically placed for “a very real, organic, totally not pre-planned, left wing protest.”
Actor James Woods amplified the claim to his substantial following, drawing nearly 4 million views with a post suggesting the protests were being orchestrated. While the fact-checking outlet Lead Stories determined the image actually showed a New Jersey suburb unrelated to any protests, Grok offered a starkly different analysis when questioned.
The chatbot confidently but erroneously claimed the image showed “Paramount, Los Angeles, taken on June 7, 2025, near the Home Depot on Alondra Boulevard during protests against ICE raids.” When challenged about this inaccuracy, Grok doubled down, refusing to retract its statement and claiming “evidence strongly supports” its assessment, even citing nonexistent news reports from major outlets.
The incidents highlight growing concerns about AI systems’ role in modern information ecosystems, particularly during politically charged events. Tech experts have warned that large language models can “hallucinate” information, presenting fabricated details with convincing authority.
“These AI systems are becoming powerful vectors for misinformation precisely because they sound authoritative while making things up,” said Dr. Emily Thorson, a political communication researcher at Syracuse University. “The confidence with which they deliver incorrect information makes their output particularly dangerous when shared across social media.”
The timing is particularly troubling as the demonstrations over federal immigration raids have become increasingly divisive, drawing intense political scrutiny. Misinformation about these protests can inflame tensions and complicate already difficult situations for city officials, law enforcement, and protesters themselves.
Social media platforms and AI developers face mounting pressure to address these issues. OpenAI has previously acknowledged limitations in ChatGPT’s image analysis capabilities, while X (formerly Twitter) has been criticized for reducing its content moderation team since Musk’s acquisition of the platform.
The incidents also underscore broader concerns about AI’s potential impact on public discourse and democratic processes, especially with future elections approaching. Lawmakers in several states have introduced legislation aimed at regulating AI-generated content, particularly when it could influence elections or public safety.
For now, experts advise users to approach AI-generated analyses with caution, particularly when they involve politically contentious events or make specific claims about images, locations, or the authenticity of media.