Brandeis experts gathered last week to address the growing influence of artificial intelligence on political systems, highlighting concerns about misinformation and the changing landscape of global competition in the digital age.

The panel discussion, hosted by the Brandeis Society for International Affairs and the Alexander Hamilton Society, brought together three faculty members with expertise spanning political science, international relations, and computational linguistics to examine how AI technologies are reshaping democracy and international relations.

Professor Steven Wilson, whose research focuses on the intersection of the internet and politics, emphasized the fundamental threat AI-generated misinformation poses to democratic processes. “Democracy doesn’t function without communication,” Wilson explained during the panel. He expressed particular concern about what some call “dead internet theory” – the idea that online spaces are increasingly dominated by automated bots and mass-produced content rather than genuine human interaction.

When communication channels become flooded with misleading or false information, Wilson warned, the consequences for democratic governance are severe. “It’s horrible for democracy,” he stated, pointing to the essential role reliable information plays in citizen decision-making.

Professor Constantine Lignos, an expert in natural language processing, expanded on these concerns by highlighting the invisible influence of recommendation algorithms that already shape what political content users encounter online. “Almost everything that you see is in some way affected by a recommendation system,” Lignos noted in a follow-up interview with The Justice, the university’s student newspaper.

The computational linguist pointed to recent high-profile examples of deepfakes, including a fabricated video that falsely showed President Biden instructing voters not to participate in an election. Such incidents demonstrate how AI technology enables the rapid production and dissemination of convincing but entirely fabricated political content.

A particularly troubling aspect of AI development, according to Lignos, is the increasing difficulty in distinguishing between human-created and AI-generated content. Current detection tools remain unreliable, creating a situation where AI outputs are inadvertently incorporated into training data for newer AI systems. “As these models become better at hiding the fact that they are models, their output becomes less distinguishable from people,” he explained, describing this as an “open problem” that researchers have yet to solve.

The global dimensions of AI development featured prominently in the discussion, with Professor Ayumi Teraoka addressing the intensifying technological competition between the United States and China. She noted that less expensive AI models like DeepSeek are gaining traction in countries that cannot afford more sophisticated systems, potentially expanding the global impact of these technologies.

Teraoka also highlighted how military applications of AI are advancing rapidly, with armed forces increasingly dependent on AI-driven data analysis. The substantial energy requirements of new data centers supporting these technologies raise additional national security questions that policymakers must address.

Despite these challenges, the panel recognized AI’s potential benefits. Wilson suggested that digital tools can empower citizens by providing new ways to organize, expose corruption, or challenge official narratives. “Being able to leverage that tool is democratizing in the sense that it puts power in the hands of anybody who’s willing to use it,” he observed.

The academic research landscape is similarly being transformed. Political scientists can now analyze vast text datasets and identify online patterns that would have been impossible to study through manual methods. “There’s not enough time in the universe for you to do that by hand,” Wilson remarked, explaining how automated systems enable entirely new research approaches.
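To illustrate the kind of large-scale text analysis Wilson alludes to, here is a minimal sketch (not drawn from the panel) of counting topic-related terms across a collection of posts; the sample posts and keyword list are hypothetical, and a real study would run this over millions of scraped documents.

```python
# Minimal sketch of automated text analysis: count how often election-related
# terms appear across a collection of posts. Data and keywords are hypothetical.
from collections import Counter
import re

posts = [
    "The election results were certified today.",
    "Bots flooded the thread with claims of voter fraud.",
    "Turnout in the primary broke records.",
]  # in practice, millions of collected posts

keywords = {"election", "voter", "fraud", "turnout", "primary"}

counts = Counter()
for post in posts:
    for token in re.findall(r"[a-z']+", post.lower()):
        if token in keywords:
            counts[token] += 1

print(counts.most_common())
```

Even this toy version shows why manual coding does not scale: the same loop runs unchanged whether the list holds three posts or three hundred million.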

All three professors emphasized that effective governance of AI technology will require thoughtful collaboration between government regulators, private companies, and academic researchers. “We have a responsibility to try to guide it in ways that it can be socially constructive,” Wilson concluded.

The event organizers viewed the discussion as an opportunity to foster interdisciplinary dialogue on campus. Stephen Gaughan, who moderated the panel, highlighted the importance of bringing together diverse perspectives from across the university community. “I think this event was a really great opportunity to think about things in different ways, to apply different perspectives and to learn about how important things can relate to each other,” he said.

As AI technologies continue to advance and integrate further into political systems and everyday life, such cross-disciplinary conversations will become increasingly vital for understanding and addressing the complex challenges they present.
