Professor Calls for Global Credibility Institute to Counter AI Disinformation Threat

As artificial intelligence transforms the landscape of digital content creation, the line between real and fake information is rapidly disappearing, warns Associate Professor Jieun Shin of the University of Florida. In a comprehensive analysis published by the Stimson Center’s Korea Program, Shin argues that AI represents a critical inflection point in the global information landscape that requires urgent international collaboration.

The research, part of a project examining AI disinformation implications for the US-ROK alliance, highlights how generative AI tools have democratized the creation of sophisticated fake content, enabling anyone to produce convincing misinformation with minimal effort and cost.

“If social media transformed how misinformation is consumed and redistributed, the recent wave of AI technology is now disrupting the very nature of content creation,” Shin writes, pointing to alarming statistics that forecast deepfake videos could increase sixteen-fold from 500,000 in 2023 to 8 million by 2025.

The consequences extend far beyond technological novelty. A recent NewsGuard report found that leading AI chatbots spread false information 35% of the time when prompted with questions about controversial news topics—nearly double the rate observed a year earlier.

The political implications are already evident worldwide. According to cybersecurity firm Recorded Future, at least 38 countries experienced deepfake incidents targeting public figures within a single year, with most cases linked to elections. One notable example occurred during the 2024 U.S. primaries, when a deepfake robocall imitating President Biden urged New Hampshire voters not to cast ballots.

Beyond politics, AI is fueling a surge in non-consensual deepfake pornography, with women disproportionately targeted. Watchdog organizations have documented an exponential rise in such content since 2023, with reports of AI-generated sexual abuse materials escalating into the millions, while legal protections struggle to keep pace.

The psychological and social consequences are profound. Studies from the Reuters Institute and the University of Michigan indicate that exposure to hyperrealistic misinformation undermines confidence in distinguishing fact from fiction, breeding what scholars describe as “truth fatigue.” This growing skepticism contributes to news avoidance and social disengagement, with particularly concerning impacts on youth and vulnerable populations.

While tech platforms have implemented some measures to combat AI-generated misinformation, Shin notes that their response has been less coordinated and less explicit than during earlier crises such as the COVID-19 pandemic. “Platforms are operating in an early stage of regulatory and technical uncertainty, where the boundaries between creative innovation and harmful manipulation remain difficult to define,” she explains.

The regulatory landscape remains fragmented globally. The United States has developed incremental regulation primarily through state initiatives, while the European Union has introduced a more comprehensive framework through its AI Act requiring clear disclosure of artificially generated content. China has enforced “Deep Synthesis” provisions since 2023 requiring clear labeling of AI-generated content, and South Korea has established a multi-layered approach including the AI Basic Act and amendments to its Sexual Crime Act.

Shin argues that neither industry safeguards nor national regulations alone can address this borderless threat. She proposes the creation of a global credibility institute bringing together journalists, researchers, technologists, and policymakers to establish common standards for content disclosure and authenticity.

“The United States and South Korea are well-positioned to lead in shaping the global architecture for AI governance and information integrity,” Shin suggests, citing recent bilateral agreements including the Seoul Declaration and the US-Korea Technology Prosperity Deal as potential foundations.

The stakes could not be higher, according to Shin, who previously worked as a journalist for South Korea’s largest newspaper. “Without swift, coordinated action, the erosion of trust may soon exceed our capacity to restore it,” she warns. “What societies choose now will determine whether AI becomes a force for deeper deception or helps prevent the collapse of shared reality.”

As AI continues to advance, Shin’s research underscores that protecting truth has become a global public good problem requiring international cooperation and shared responsibility to establish credible information systems in the AI era.


16 Comments

  1. As an investor following the mining and metals sector, I’m concerned about the potential for AI-driven disinformation to impact commodity markets and company valuations. Careful scrutiny of digital content will be essential.

    • Good point. Investors will need to be extra vigilant in verifying the credibility of news and analysis, to avoid being misled by synthetic content.

  2. Patricia Martinez on

    This article highlights the complex challenges posed by the rise of AI-generated content. While the technology holds promise, the risks of weaponized disinformation are very real and must be addressed proactively.

    • Agreed. A global framework to authenticate online content is crucial to preserving the integrity of digital information and public discourse.

  3. Olivia Hernandez on

    The statistics on deepfake videos are quite alarming. If left unchecked, this technology could seriously undermine trust in digital media. Proactive steps are urgently needed to address this emerging threat.

    • Absolutely. Developing robust verification systems and public education campaigns will be critical to maintaining the integrity of online information.

  4. Fascinating article on the growing threat of AI-generated disinformation. It’s worrying how quickly the technology is advancing and democratizing content creation. We’ll need strong global collaboration to counter this challenge effectively.

    • Agreed. A credibility institute to authenticate online content could be a valuable solution, but it would require international cooperation and buy-in.

  5. The article highlights the growing threat of AI-powered disinformation and the need for a coordinated global response. Establishing a credibility institute could be an important step, but the details of its implementation would be critical.

    • Agreed. Any such initiative would need to be truly international in scope, with robust verification processes and the trust of governments, industry, and the public.

  6. Linda Williams on

    As someone with a keen interest in the mining industry, I’m troubled by the prospect of AI-powered misinformation campaigns targeting commodity markets and companies. Robust verification measures will be essential going forward.

    • Elizabeth Rodriguez on

      Definitely a valid concern. The mining sector’s reliance on timely, accurate information makes it particularly vulnerable to the spread of synthetic content.

  7. Jennifer Hernandez on

    The article raises some important points about the need for international collaboration to counter the threat of AI-driven disinformation. Establishing a credibility institute could be a step in the right direction, but the details would be critical.

    • Linda Y. Davis on

      Absolutely. Any such institute would need to have global reach, robust verification processes, and the trust of both governments and the public to be truly effective.

  8. As a follower of the energy and mining sectors, I’m deeply concerned about the potential impact of AI-generated misinformation on commodity markets and company valuations. Rigorous content authentication will be crucial going forward.

    • James R. Lopez on

      Well said. Investors and industry stakeholders will need to be extremely vigilant in verifying the credibility of online information to avoid being misled.

