Recent public polling in Japan reveals growing concerns among citizens about the potential for AI-generated videos to spread disinformation about China, underscoring broader anxieties about artificial intelligence’s role in international relations.

A survey conducted last month by the Tokyo Institute for Public Opinion Research found that 68 percent of Japanese respondents expressed worry about the proliferation of deepfake videos specifically targeting China-Japan relations. The poll, which sampled 1,500 adults across Japan’s major metropolitan areas, highlights how technological advancements in AI are creating new diplomatic challenges in East Asia.

“The ability to create convincing fake videos has evolved dramatically in just the past year,” said Dr. Kenji Tanaka, a digital media researcher at Waseda University who analyzed the survey results. “What’s particularly concerning is how these technologies could be weaponized to inflame existing tensions between neighboring countries.”

The concerns come amid already strained relations between Japan and China, which have faced diplomatic challenges over territorial disputes in the East China Sea and disagreements over historical issues. Experts warn that AI-generated content could exacerbate these tensions by creating false narratives that appear authentic to viewers.

Japan’s Ministry of Internal Affairs and Communications has taken note of these concerns, recently establishing a task force to monitor the spread of AI-generated content that could potentially harm international relations. The group includes representatives from technology firms, academic institutions, and government agencies.

“We’re seeing sophisticated AI tools become increasingly accessible to the general public,” said Yoshiko Hara, a ministry spokesperson. “This democratization of technology brings benefits but also significant risks if misused to create convincing disinformation.”

The survey indicated that older Japanese citizens expressed the most concern, with 76 percent of respondents over 60 saying they were “very worried” about AI-generated disinformation targeting China. Among respondents aged 18-29, the concern was present but less pronounced at 54 percent.

Technology experts note that Japan’s concerns mirror global trends, as countries worldwide grapple with how to regulate and respond to increasingly sophisticated AI-generated media. The European Union has recently proposed regulations requiring clear labeling of AI-generated content, while the United States continues to debate similar measures.

“What makes this particularly challenging in the East Asian context is the complex history between these nations,” explained Dr. Hiroshi Nakamura, professor of international relations at Tokyo University. “There’s fertile ground for disinformation to take root when historical wounds remain unhealed.”

Japanese media outlets have reported several incidents in recent months where manipulated videos appearing to show Chinese officials making provocative statements went viral before being debunked. These incidents have prompted calls from civil society organizations for greater media literacy education.

The Japan AI Ethics Association, a nonprofit organization promoting responsible AI use, has launched a nationwide campaign to help citizens identify AI-generated content. “The technology to create these videos is evolving faster than our ability to detect them,” said Tomoko Ishida, the association’s director. “Education is our first line of defense.”

Japanese and Chinese diplomatic channels have quietly begun discussing cooperation on combating cross-border disinformation, according to sources familiar with the talks. These discussions represent a rare area of potential collaboration amid otherwise frosty relations.

Technology companies operating in Japan have also responded to public concerns. LINE Corporation, which operates one of Japan’s most popular messaging platforms, announced last week new features to help identify potentially manipulated media shared through its service.

As the technology continues to advance, the Japanese government is expected to introduce legislation later this year establishing penalties for creating and distributing harmful deepfake content. The proposed law would be among the first of its kind in Asia.

“What we’re witnessing is just the beginning,” warned Dr. Tanaka. “As these AI tools become more sophisticated and accessible, the potential for misuse grows exponentially. The time to establish guardrails is now, before the technology outpaces our ability to contain its harmful effects.”

7 Comments

  1. Jennifer Brown

    The survey results underscore the need for robust digital media literacy and fact-checking initiatives to help the public identify and resist AI-generated propaganda. Strengthening international cooperation on this issue could also be an important step.

    • Amelia T. Johnson

      You’re right, global coordination will be key to addressing this transnational challenge. Sharing best practices and developing common standards could be productive avenues to explore.

  2. Ava Q. White

    Deepfakes and other AI-powered disinformation pose serious threats to stability and trust between nations. Maintaining an objective, evidence-based dialogue is critical, even as the technological landscape evolves rapidly.

    • Well said. Vigilance and a commitment to facts over fiction will be essential going forward. Governments, tech companies, and civil society all have important roles to play in this effort.

  3. Michael Garcia

    It’s concerning to see Japan’s citizens express such worries about AI being used to stoke tensions with China. This speaks to the urgent need for robust regulations and ethical frameworks to govern the development and deployment of these powerful technologies.

  4. Michael Lopez

    Interesting to see public concern in Japan about AI-generated disinformation targeting China. This highlights the growing challenge of combating AI-powered manipulation in international relations. Careful monitoring and proactive measures will be crucial to mitigate these risks.

  5. The survey results underscore the geopolitical implications of AI-generated disinformation. As these technologies advance, policymakers will face increasing pressure to address their potential weaponization in international conflicts. A proactive, multilateral approach seems prudent.


A professional organization dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.