Artificial intelligence-generated disinformation emerged as a significant concern during Japan’s recent general election, with fabricated videos and images circulating widely across social media platforms and messaging apps.
Election officials and media watchdogs noted a marked increase in the sophistication and volume of AI-generated content compared to previous electoral cycles. These fabricated materials included deepfake videos of candidates making inflammatory statements they never uttered and digitally altered images showing politicians at controversial events they never attended.
“What we’re seeing represents a new frontier in electoral manipulation,” said political analyst Takeshi Yamamoto from Waseda University. “The technology has advanced to the point where even relatively skilled observers may have difficulty distinguishing genuine content from sophisticated fakes.”
According to NHK World’s Junya Yabuuchi, who has been tracking this phenomenon, the motivations behind creating and spreading such disinformation vary widely. Some actors appear politically motivated, seeking to damage specific candidates or parties, while others operate from financial incentives, generating controversial content that attracts clicks and engagement.
“There’s also evidence suggesting some disinformation campaigns are designed simply to undermine trust in the electoral process itself,” Yabuuchi explained. “The goal isn’t always to promote a particular candidate but to create general confusion and skepticism about all political information.”
Japan’s experience mirrors a growing global trend. Electoral authorities worldwide have struggled to adapt to the rapid evolution of generative AI technologies, such as diffusion-based image models and large language models, which have dramatically lowered the barriers to creating convincing fake content.
The Electoral Management Committee of Japan reported identifying over 200 instances of AI-generated disinformation during the three-week campaign period, a figure they believe represents only a fraction of the total volume in circulation.
Technology companies have implemented measures to combat the spread of false information, including content labeling systems and detection algorithms. However, these efforts have proven only partially effective, particularly as disinformation often migrates to less regulated platforms or private messaging services like LINE, which is widely used in Japan.
Media literacy experts emphasize the importance of public awareness as a critical defense against disinformation. Professor Naoko Tanaka of the University of Tokyo’s Center for Digital Society Studies has been leading workshops that teach voters how to identify potentially manipulated content.
“We encourage people to verify information through multiple sources before sharing it,” Tanaka said. “Simple practices like checking whether news appears on established media outlets, looking for unusual visual artifacts in images, or questioning content that provokes strong emotional reactions can help limit the spread of disinformation.”
Political parties across Japan’s spectrum have expressed concern about the trend. The ruling Liberal Democratic Party and opposition Constitutional Democratic Party both reported being targeted by fake content during the campaign.
International observers note that Japan’s experience offers valuable lessons for other democracies. The Organization for Economic Cooperation and Development (OECD) has included Japan’s case in its ongoing study of digital threats to electoral integrity.
Looking ahead, Japanese lawmakers are considering legislative responses to address AI-generated disinformation. Potential measures include requiring clearer labeling of AI-generated content and establishing faster mechanisms for removing demonstrably false election-related information.
Cybersecurity experts warn that the technology will continue to advance, making detection increasingly difficult. “What we’re witnessing is likely just the beginning,” said Hiroshi Nakamura of the Japan Cybersecurity Innovation Hub. “As we develop better detection tools, those creating disinformation will adapt their techniques.”
The challenge extends beyond technology alone. Building societal resilience through education and institutional trust may ultimately prove more effective than technological solutions in preserving electoral integrity in the AI era.
As Japan processes the lessons from this election cycle, the implications extend far beyond its borders, offering insights for democracies worldwide grappling with similar challenges at the intersection of technology, information, and democratic processes.