In the chaotic aftermath of conservative activist Charlie Kirk’s killing on Wednesday, social media platforms were flooded with false claims, conspiracy theories, and posts incorrectly identifying individuals connected to the incident. Many of these misleading narratives were amplified by artificial intelligence tools, raising serious concerns about AI’s role in spreading misinformation during breaking news events.

CBS News identified multiple instances where X’s AI chatbot, Grok, spread incorrect information about the suspect before authorities officially named 22-year-old Tyler Robinson from southern Utah. At least ten posts by Grok misidentified the suspect entirely. Though the chatbot eventually acknowledged its error, posts featuring the wrong individual’s name and image had already circulated widely across the platform.

Beyond misidentification, Grok generated manipulated “enhancements” of FBI-released photos that distorted the suspect’s appearance. One such altered image was initially shared by the Washington County Sheriff’s Office in Utah, which later had to clarify that the image appeared to be “AI enhanced” and had significantly changed clothing details and facial features.

In one striking example, an AI-enhanced image portrayed the suspect as much older than Robinson’s actual age. Another AI-generated video that altered the suspect’s features and garbled his shirt design was shared by an X user with more than two million followers, garnering thousands of reposts.

Even after Utah Governor Spencer Cox confirmed Robinson as the suspect in custody, Grok continued providing contradictory information. Some Grok responses claimed Robinson was a registered Republican, while others identified him as a nonpartisan voter. Official voter registration records indicate Robinson has no party affiliation.

The misinformation extended beyond the suspect’s identity. CBS News documented a dozen instances where Grok incorrectly stated that Kirk was still alive the day after his death. Other responses from the AI chatbot provided a false assassination date, labeled the FBI’s reward offer a “hoax,” and claimed reports about Kirk’s death “remain conflicting” despite official confirmation.

S. Shyam Sundar, a professor at Penn State University and director of the university’s Center for Socially Responsible Artificial Intelligence, explained to CBS News why generative AI tools struggle with real-time accuracy: “They look at what is the most likely next word or next passage. It’s not based on fact checking. It’s not based on any kind of reportage on the scene. It’s more based on the likelihood of this event occurring.”

X did not respond to requests for comment regarding the false information Grok was disseminating.

Other AI platforms demonstrated similar problems. Perplexity’s AI-powered search engine bot on X described the shooting as a “hypothetical scenario” in a since-deleted post and suggested that a White House statement on Kirk’s death was fabricated. A Perplexity spokesperson acknowledged to CBS News that while “accurate AI is the core technology we are building,” the company “never claims to be 100% accurate.” The company has since removed the bot from X.

Google’s AI Overview feature also contributed to the spread of misinformation, incorrectly identifying Hunter Kozak – the last person to ask Kirk a question before he was killed – as the FBI’s person of interest. Google later corrected this error, with a spokesperson noting that “given the rapidly evolving nature of this news, it’s possible that our systems misinterpreted web content or missed some context.”

Professor Sundar highlighted a troubling trend in how people perceive information from AI systems: “People tend to perceive AI as being less biased or more reliable than someone online who they don’t know. We don’t think of machines as being partisan or biased or wanting to sow seeds of dissent.”

Utah Governor Cox suggested during a Thursday press briefing that foreign adversaries, including Russia and China, may be using bots to “instill disinformation and encourage violence” related to the incident. He urged citizens to “ignore those and turn off those streams, and to spend a little more time with our families.”

This cascade of AI-generated misinformation surrounding Kirk’s killing illustrates the growing challenges of maintaining information integrity during breaking news events in an era where artificial intelligence tools can rapidly amplify and generate misleading content.

