Digital distortion: How bots are deepening America’s divide
That angry post or inflammatory comment you saw on social media might not have been written by a human at all. According to Senator Katie Britt (R-Ala.), as many as 20 to 30 percent of social media posts are generated by computer programs known as bots, many powered by artificial intelligence.
“We have to get the word out to the American people about what’s actually happening on social media,” Senator Britt told WVTM 13 in an exclusive interview in Washington, D.C.
These digital manipulators are becoming increasingly sophisticated at mimicking human behavior online, making them difficult to spot. They post content, reply to comments, and even engage in arguments with actual users—all designed to provoke emotional responses and deepen societal divisions.
“These bots can actually do everything that humans can do. They can post, they can reply, and they can generate some stuff and they can analyze the comment from other users,” explains Shuya Feng, associate professor of computer science at the University of Alabama at Birmingham. “Sometimes when other people keep replying, they can argue with people and barely someone can notice that this is a bot.”
The integration of artificial intelligence has made these bots more convincing than ever. Asked to demonstrate how easy such content is to find, Feng located an AI-generated image after just 15 seconds of searching.
One concerning example emerged following the assassination attempt on former President Donald Trump in Butler, Pennsylvania. Cyabra, a company using AI to combat disinformation, uncovered a massive influx of fake profiles spreading false claims that Trump had staged the shooting to gain electoral advantage. These accounts shared AI-generated images purporting to show Trump smiling with blood running down his face.
Senator Britt expressed grave concern about the real-world consequences of bot-driven disinformation, pointing to a recent surge in political violence across America. The assassination of conservative activist Charlie Kirk, an arson attack on Pennsylvania Governor Josh Shapiro’s home, the hammer attack on Paul Pelosi, and the shooting of former President Trump all occurred against a backdrop of increasingly hostile online rhetoric.
“I do worry if we continue on the path that we’re on right now, it’s an unsustainable path and political violence is never acceptable. But the rhetoric that leads to it also has to be checked,” Britt said.
Even more alarming is the possibility that foreign adversaries may be funding some of these bot networks as a form of psychological warfare against the United States. Britt revealed she is working with Attorney General Pam Bondi and FBI Director Kash Patel to investigate potential foreign involvement in domestic disinformation campaigns.
“We need to know the role that other governments are playing or other actors in other countries abroad are playing in creating discourse and/or harm to American citizens,” she said.
Identifying bot accounts requires vigilance and critical thinking. Experts suggest examining several factors: account creation date (many bots appear suddenly), posting frequency (one flagged account, “CoffeyTimeNews,” averaged 133 posts daily), lack of personal details, suspicious profile photos, and content patterns that focus exclusively on divisive topics.
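The checklist above can be sketched as a simple scoring heuristic. This is an illustrative sketch only: the field names, thresholds, and cutoff values are assumptions made for the example, not criteria published by the experts cited.

```python
from datetime import date

def bot_suspicion_score(account: dict, today: date = date(2025, 10, 1)) -> int:
    """Illustrative heuristic: count how many bot-like signals an account shows.
    Field names and thresholds are assumptions for this sketch, not a real API."""
    score = 0
    # Account creation date: many bots appear suddenly, as very new accounts.
    age_days = (today - account["created"]).days
    if age_days < 90:
        score += 1
    # Posting frequency: ~133 posts per day is far above typical human use.
    if account["posts_per_day"] > 50:
        score += 1
    # Lack of personal details in the profile.
    if not account["has_bio"]:
        score += 1
    # Suspicious (stock or AI-generated) profile photo.
    if account["stock_profile_photo"]:
        score += 1
    # Content focused almost exclusively on divisive topics.
    if account["divisive_topic_ratio"] > 0.9:
        score += 1
    return score  # higher = more bot-like

# Hypothetical account matching the patterns described in the article.
suspect = {
    "created": date(2025, 9, 20),
    "posts_per_day": 133,
    "has_bio": False,
    "stock_profile_photo": True,
    "divisive_topic_ratio": 0.95,
}
print(bot_suspicion_score(suspect))  # → 5
```

No single signal is conclusive; real detection systems weigh many such features together, but a high count on several at once is a reasonable cue to pause before engaging.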
Feng recommends pausing before engaging with inflammatory content. “When we see something is really suspicious, then we just pause a little bit, use our critical thinking like, can we verify this information?”
Social media companies bear responsibility too, according to Senator Britt. “These are things they could do right now. They’ve created the most brilliant machines ever imaginable. So you cannot tell me that most brilliant machine cannot also be responsible.”
As investigations continue into who funds and operates these digital manipulation campaigns, the message from experts is clear: Not everything you see online is real, even when it looks convincingly human. As Americans navigate an increasingly fractured information landscape, the ability to identify and ignore automated attempts at division has become an essential civic skill.