Fake Election Content Surges Online Ahead of Japan’s House Vote
False photos and videos related to Japan’s upcoming House of Representatives election on February 8 are proliferating across social media platforms, raising concerns about voter manipulation and misinformation. Experts attribute the surge to advancements in generative AI technology, which has made creating sophisticated fake content increasingly accessible.
In one notable case from January 20, just days before the dissolution of the lower house, a social media user on X (formerly Twitter) shared an image showing a prospective candidate in a tank top waving on a snowy street with the caption “Energetic despite the cold.” The image contained a marker indicating it was AI-generated. Investigation revealed the image was an altered version of the candidate’s authentic photo posted earlier that day, in which the candidate wore a jacket.
Such manipulations represent a growing trend in election-related disinformation. Rather than creating entirely fictional content, bad actors are now modifying authentic media to convey misleading messages or suggest false contexts.
Yuichiro Tamaki, leader of the Democratic Party for the People (DPFP), has become a prominent target of such disinformation. A widely circulated YouTube video falsely claimed Tamaki had been dismissed from his position, had his assets seized, and would face exile from Japan. Tamaki publicly refuted these baseless claims on X, stating he was considering legal action for defamation.
The newly formed Centrist Reform Alliance (CRA), created through a merger of the Constitutional Democratic Party of Japan and Komeito, has also faced coordinated disinformation campaigns. In one instance, a photo showing former party leaders Yoshihiko Noda and Tetsuo Saito holding a board with the new party logo was altered to replace the legitimate logo with a Chinese party-style emblem. Accompanying posts claiming “it looks like a Chinese organization” garnered over 6.7 million views.
The CRA has issued warnings about such manipulated images and threatened legal action against those spreading false information. The problem has become so widespread that Japan’s Ministry of Internal Affairs and Communications has officially requested social media operators to implement measures for the swift removal of misleading election content.
Daisuke Furuta, editor-in-chief of the Japan Fact-check Center, advises voters to critically evaluate information using three key questions: “Who is the source?”, “What is the basis for the statement?”, and “How does it compare with related information?”
When assessing sources, Furuta recommends verifying whether the individual is positioned to possess such information and checking their track record for reliability. For statements, he stresses the importance of clearly identified, reliable sources and accurate quotations. He notes instances in which Japanese subtitles on videos of foreign figures completely misrepresented their actual statements.
Cross-referencing information with established news organizations and public institutions is another essential verification method. Furuta points out that legitimate news is rarely known by only one entity, making comparison across multiple credible sources vital.
Advances in AI technology have democratized image manipulation, allowing individuals without specialized knowledge to create convincing fakes quickly. Even so, some AI-generated content contains identifying markers or subtle distortions in Japanese text that careful observers can detect.
“There is a vast amount of incorrect and unreliable information in the world,” Furuta emphasizes. “Just as you would thoroughly research a restaurant or travel destination before going, you should carefully research information before voting in the House of Representatives election, which only occurs once every few years.”
As Japan approaches this critical election, the challenge of distinguishing fact from fiction has never been more important—or more difficult—for voters seeking to make informed decisions at the ballot box.