AI Chatbots: The New Frontier of Digital Propaganda
The manipulation of public opinion has taken a troubling turn with the emergence of AI chatbots, creating propaganda tools far more sophisticated than anything Edward Bernays or Joseph Goebbels could have imagined. These technologies are rapidly transforming how misinformation spreads, with potentially profound implications for mental health, democracy, and social cohesion.
Edward Bernays, often called the father of public relations and nephew of Sigmund Freud, warned in his 1928 book “Propaganda” about the “conscious and intelligent manipulation of the organized habits and opinions of the masses.” His techniques, which combined psychological principles to influence consumers and voters, laid the groundwork for modern persuasion tactics.
In a recent exploration of AI’s role in propaganda, Dr. Joseph Pierre, an expert on conspiracy theories and misinformation, shared insights on this evolving landscape. He defines propaganda as “misrepresenting the truth for political purposes, deliberately producing and disseminating disinformation to deceive public opinion and to manipulate behavior in the service of a particular agenda.”
Throughout history, propaganda has evolved with technology. The printing press, radio, television, and internet all expanded its reach. But AI chatbots represent a quantum leap in capability, generating convincing content at unprecedented scale and precision.
“Chatbots can produce propaganda on a massive scale,” Dr. Pierre explains. “They are scary good at mimicking humans—the technology has advanced to the point that users cannot tell the difference between AI-chatbot generated and human-generated output.”
The implications are already visible. Russia has deployed chatbots to spread disinformation about the Ukraine war, while China has used them to influence Taiwanese elections. In the United States, Robert F. Kennedy Jr.'s Make America Healthy Again commission report contained fake citations likely generated by AI, and the Trump administration has circulated at least 14 AI-generated images, including manipulated photographs depicting false scenarios.
What makes chatbot propaganda particularly dangerous is its scale, its targeting capability, and the trust users place in AI-generated content. “Users tend to accept as gospel whatever chatbots say,” notes Dr. Pierre, describing this as “deification.” A recent Pew Research Center survey found that most Google users simply accept AI-distilled answers rather than examining the source material.
This deference to AI is concerning because chatbots aren’t designed to be reliable information sources. They can be purposely trained for bias—what experts call “LLM grooming.” Elon Musk’s Grok chatbot, for example, has been criticized for right-wing, neo-Nazi, and antisemitic content.
The phenomenon threatens our shared understanding of reality, potentially undermining democratic discourse. As Hannah Arendt warned: “If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer… And a people that no longer can believe anything cannot make up its mind.”
Protecting ourselves requires becoming more skeptical consumers of information and developing better understanding of how chatbots function. Resources like thebullshitmachines.com, run by University of Washington professors, offer starting points for building digital literacy.
However, individual efforts alone may be insufficient. Dr. Pierre advocates for regulation of the AI industry and greater transparency in AI applications from advertising to military decision-making. Current trends are concerning, with the White House’s AI Action Plan calling for deregulation to “win the race to achieve global dominance in artificial intelligence,” while the State Department’s Global Engagement Center, tasked with countering foreign propaganda, has been shut down.
The stakes couldn’t be higher. As the saying often attributed to Mark Twain goes, “A lie can travel halfway around the world while the truth is still putting on its shoes”—and that was long before AI accelerated the process. George Orwell’s “1984” depicted totalitarian propaganda tools that now seem primitive compared to chatbot-generated deepfakes.
In this environment, truth becomes increasingly fragile and precious. As the tools of persuasion become more sophisticated, our collective ability to distinguish fact from fiction faces unprecedented challenges, with potential consequences for both individual mental health and democratic institutions that we’re only beginning to understand.