Artificial intelligence advancements pose a growing threat to political discourse as November’s midterm elections approach, according to cybersecurity expert Torry Crass, who formerly worked for North Carolina’s Department of Information Technology and state Board of Elections.
During a virtual presentation hosted by the Catawba College Center for North Carolina Politics and Public Service, Crass warned that AI technology has transformed how bots operate on social media platforms, making them increasingly sophisticated and harder to detect.
“There is really a kind of transformation that has taken place in the last couple of years related to bots, as well as it pertaining to mis- and disinformation,” Crass explained. “As we get closer to the election, I would expect to see the generated AI content go significantly higher than what it is today.”
The stakes are particularly high in North Carolina, where the U.S. Senate race between Democrat Roy Cooper and Republican Michael Whatley is anticipated to be one of the nation’s most competitive contests. The November elections will determine control of Congress and state legislatures across the country.
The regulatory landscape offers little protection against these threats. Federal laws do not prohibit using AI to create misleading audio or video in political advertisements, leaving voters vulnerable to increasingly sophisticated manipulation attempts.
Political groups have already embraced AI technology in various ways. Some applications have been relatively benign, such as generating footage of military jets flying over campaign events. More troubling uses, however, include cloning candidates’ likenesses or voices and creating social media accounts designed specifically to generate outrage and sway voter perceptions.
What makes today’s AI-powered disinformation campaigns particularly concerning is their improved quality. In previous election cycles, bot-controlled social media accounts often contained obvious flaws that made them easier to identify. These might have included using only a single profile photo or having inconsistencies in biographical information.
Modern AI tools can now generate authentic-looking images and create more consistent, believable social media personas. This advancement significantly raises the bar for voter vigilance.
Despite these improvements, Crass noted that voters can still identify potential bot accounts by looking for certain warning signs. An account created just months before an election may indicate an intent to influence voters. Similarly, accounts posting nearly identical content to other profiles often suggest automated operation.
Other red flags include accounts that post regularly during overnight hours when most Americans are sleeping, potentially indicating automation or foreign operation. Additionally, accounts focused exclusively on political content without any personal posts may suggest a bot designed for a specific purpose.
“That is indicative of how AI functions, because it’s purpose-built,” Crass said. “Each of these things is a breadcrumb and a red flag. It doesn’t necessarily mean something is malicious or bad, but when you start seeing these red flags, and you start adding them up, it starts becoming very clear that something more is going on.”
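The “adding up breadcrumbs” heuristic Crass describes can be sketched as a simple scoring function. This is a hypothetical illustration of the idea only, not any platform’s actual detection logic; the account attributes and threshold are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Account:
    months_old: int          # account age in months
    duplicate_posts: bool    # posts nearly identical to other profiles
    overnight_posting: bool  # posts regularly during U.S. overnight hours
    only_political: bool     # exclusively political, no personal content

def red_flag_score(acct: Account) -> int:
    """Count the warning signs Crass lists. Each is one 'breadcrumb';
    no single flag proves automation, but a high total is suspicious."""
    score = 0
    if acct.months_old < 6:       # newly created right before an election
        score += 1
    if acct.duplicate_posts:
        score += 1
    if acct.overnight_posting:
        score += 1
    if acct.only_political:
        score += 1
    return score

# Example: a three-month-old account reposting identical political
# content around the clock trips all four flags.
suspicious = Account(months_old=3, duplicate_posts=True,
                     overnight_posting=True, only_political=True)
print(red_flag_score(suspicious))  # 4
```

As the quote stresses, a nonzero score is not proof of malice; the point is that the flags compound, so a reader weighing an account should consider them together rather than in isolation.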
The rise of AI-powered influence campaigns represents a new frontier in election security concerns. Unlike traditional cybersecurity threats that target voting infrastructure, these campaigns operate in the social media ecosystem where Americans increasingly form their political opinions.
Technology companies face mounting pressure to address the spread of AI-generated content on their platforms, though their approaches vary widely. Some have implemented labeling systems for AI-generated content, while others rely on users to report suspicious activity.
As the November elections draw closer, cybersecurity experts like Crass advocate for increased digital literacy among voters. Being able to identify potential bot accounts and understand the techniques used to spread misinformation may become as important as traditional civic knowledge in maintaining the integrity of the democratic process.