In a world increasingly dominated by artificial intelligence, ChatGPT has emerged as the public’s first widespread introduction to sophisticated AI systems. Developed by OpenAI, this powerful language model generates text with an uncanny human-like quality, drawing on a massive training dataset of roughly 300 billion words sourced from books, magazines, Wikipedia, and various internet repositories.
While millions have embraced ChatGPT for everyday tasks, from recipe suggestions to speech-writing assistance, cybersecurity experts and researchers are raising alarm bells about its potential misuse. The technology’s ability to generate convincing human-like text has created new concerns about the future of online disinformation campaigns.
A January report co-authored by Josh Goldstein, a research fellow at Georgetown’s Center for Security and Emerging Technology, warned that language models like ChatGPT could provide “distinct advantages to propagandists.” The technology allows bad actors to generate unique, persuasive content at unprecedented scale and quality.
“Generative language models could produce a high volume of content that is original each time,” Goldstein explains. This eliminates the need for propagandists to reuse identical text across platforms, making detection significantly more difficult.
The 2016 US presidential election provided a sobering example of how coordinated disinformation campaigns can influence democratic processes. Russian-linked accounts spread thousands of social media posts designed to disrupt Hillary Clinton’s campaign. But experts warn that future elections may face exponentially more sophisticated attacks.
Gary Marcus, AI specialist and founder of Geometric Intelligence, compares the situation to spam operations on steroids. “People who spread spam around rely on the most gullible people to click on their links, using that spray and pray approach of reaching as many people as possible. But with AI, that squirt gun can become the biggest Super Soaker of all time.”
Even if social platforms manage to take down most AI-generated disinformation, Marcus argues that the sheer volume would still leave “at least 10 times as much content as before that can still aim to mislead people online.”
Vincent Conitzer, professor of computer science at Carnegie Mellon University, highlights another troubling dimension: the creation of convincing fake accounts. “Something like ChatGPT can scale that spread of fake accounts on a level we haven’t seen before,” he says, “and it can become harder to distinguish each of those accounts from human beings.”
Security firm WithSecure Intelligence has also warned about AI’s potential to rapidly generate fake news articles designed to be distributed across social networks. These fabricated stories could target voters immediately before elections, potentially swaying outcomes through coordinated disinformation.
The responsibility of social media platforms in addressing this looming crisis remains a contentious issue. Luís A Nunes Amaral, co-director of the Northwestern Institute on Complex Systems, expresses skepticism about platforms’ willingness to act decisively. “Facebook and other platforms should be flagging phony content, but Facebook has been failing that test spectacularly,” he argues.
Amaral suggests that financial considerations may influence platform policies. “The reasons for that inaction include the expense of monitoring every single post, but also the fact that these fake posts are meant to infuriate and divide people, which drives engagement. That’s beneficial to Facebook.”
As more organizations and governments develop their own advanced language models, experts worry that propagandists’ access to these technologies will expand. Currently, top-tier models remain in the hands of relatively few entities, but this exclusivity is unlikely to last as development accelerates.
The convergence of sophisticated AI language models with social media presents unprecedented challenges for information integrity in democratic societies. Without robust detection systems and responsible platform governance, the already blurry line between authentic human communication and artificial manipulation threatens to disappear entirely.
8 Comments
The potential for AI to generate persuasive, high-quality content at scale is alarming. Regulators and platforms must act quickly to stay ahead of bad actors looking to exploit this technology for disinformation campaigns.
Agreed. Transparency and accountability measures around AI content generation will be crucial to maintain the integrity of online spaces.
Fascinating to see how AI is evolving and the potential societal impacts, both positive and negative. Responsible development and deployment of these powerful language models will be critical going forward.
As an energy investor, I’m particularly concerned about the potential for AI-powered disinformation to impact public perception and policy around critical minerals, fossil fuels, and renewable technologies. Fact-based discourse is essential.
This is a concerning development. AI-powered fake accounts could seriously undermine online discourse and trust. Robust safeguards are needed to prevent misuse while still allowing beneficial AI applications.
I wonder if AI-generated accounts could also be used to artificially inflate engagement metrics or game algorithms on social media platforms. This could have far-reaching consequences for businesses relying on those platforms.
That’s a great point. Algorithmic manipulation through AI-driven fake accounts could significantly distort online trends and behavior, with major implications for marketing, advertising, and more.
As an investor in mining and energy stocks, I’m curious how this issue could impact those industries. Misinformation around commodity prices, regulatory changes, or new technologies could significantly move markets.