Evolutionary Biologist Brings Expertise on Misinformation and AI to New Zealand
For Carl Bergstrom, misinformation behaves like a living organism, traveling across networks and spreading through populations much as pathogens do. The University of Washington theoretical and evolutionary biologist will visit the University of Otago this week, delivering three public talks on his research into how large language models (LLMs) contribute to misinformation and their broader societal impact.
Bergstrom, a founding member of the University of Washington Center for an Informed Public, has dedicated his career to combating strategic misinformation while promoting an informed society and strengthening democratic discourse. His upcoming presentations in New Zealand reflect growing global concern about artificial intelligence’s role in spreading false information.
His research analyzes how misleading information propagates, particularly in scientific communication, academic journals, and preprints. Recently, he has expanded his focus to AI-generated content, including synthetic media such as fake faces, and what he terms the “Russian fire hose” strategy, a technique that overwhelms audiences with false information to obscure the truth.
“The challenge with modern misinformation is not just its volume but its sophistication,” Bergstrom noted in previous publications. His work involves developing practical tools and techniques to help the public identify reliable evidence and combat what he bluntly calls “bulls…”—false or misleading information designed to deceive.
During the COVID-19 pandemic, Bergstrom took an active role in countering false claims about the virus, vaccines, and treatments. His expertise in evolutionary biology provided him with a unique perspective on how misinformation mutates and adapts as it spreads through social networks.
At the University of Washington, Bergstrom created a popular course examining how data, statistics, and graphics can be manipulated to mislead audiences. This work culminated in a book on the subject. More recently, he developed a second course focused on helping people navigate a world where LLMs like ChatGPT have become ubiquitous.
His first Otago talk, “Modern-day oracles or bulls… machines? How to thrive in a ChatGPT world,” scheduled for Tuesday at noon, will explore this territory. The presentation aims to provide practical guidance for living in an era where AI-generated content is increasingly difficult to distinguish from human-created material.
Also on Tuesday, Bergstrom will deliver a second talk examining social media and information technology’s impact on society. This presentation will trace how current information ecosystems evolved and explore potential future trajectories as technology continues to advance.
His final presentation on Wednesday will address challenges in academic publishing. Scholarly publishing relies heavily on peer review to validate scientific work, but finding qualified reviewers has become increasingly difficult. Bergstrom will discuss how this threatens the long-term viability of peer review as an institution and explore potential solutions.
The issue holds particular relevance as AI tools now make producing convincing but potentially flawed academic papers easier than ever before. Several academic journals have already reported detecting and rejecting AI-generated submissions that contained fabricated data or references.
Bergstrom’s visit comes at a critical time when New Zealand, like many nations, grapples with the double-edged nature of AI technologies. While tools like ChatGPT and other LLMs offer tremendous benefits for education, research, and productivity, they simultaneously create new vectors for misinformation that can undermine public trust in institutions.
His talks at Otago represent an opportunity for researchers, students, and the public to engage with one of the leading voices on information literacy in the age of artificial intelligence and learn strategies for navigating an increasingly complex information landscape.
10 Comments
As someone with a background in science communication, I’m particularly interested in the expert’s insights on misinformation in academic and scientific settings. That’s a concerning trend that needs to be addressed.
Absolutely. Maintaining the integrity of scientific discourse is vital for an informed society. I hope the expert provides practical guidance on how to identify and counter misinformation in those spaces.
Investigating the role of AI in the propagation of misinformation is a crucial area of research. I look forward to learning more about the specific strategies and techniques the expert will discuss.
Very interesting topic. I’m curious to learn more about how misinformation spreads and what can be done to combat it, especially with the rise of AI-generated content. Looking forward to hearing the expert’s insights.
Agreed, the role of AI in the spread of misinformation is a critical issue that deserves close examination. Fact-checking and digital literacy will be key to building societal resilience.
This is an important and timely topic. I’m curious to hear the expert’s views on how we can build societal resilience against the spread of misinformation, both online and offline.
Agreed. Strengthening digital literacy and critical thinking skills will be key, along with developing more robust systems for verifying information and fact-checking claims.
I’m glad to see an academic expert tackling this issue head-on. Misinformation erodes public trust, so it’s critical that we find ways to identify and debunk false narratives, especially those amplified by AI.
Combating misinformation is such an important challenge, especially with the speed and scale at which false narratives can now travel online. This expert’s research should provide valuable perspective.
Absolutely. Understanding the mechanisms behind how misinformation spreads, and developing effective countermeasures, will be crucial for maintaining an informed citizenry.