Tokyo residents have expressed significant concern over the emergence of AI-generated disinformation targeting China, following an exposé by the Japanese newspaper Asahi Shimbun that revealed an organized effort to flood Japan’s social media with fabricated anti-China content.
The investigative report uncovered a systematic operation in which creators on CrowdWorks, a Japanese online staffing agency, were specifically recruited to produce fictional narratives depicting Chinese individuals engaging in disruptive behavior. The videos were designed to inflame tensions between the two neighboring countries.
The revelation has sparked alarm across age groups in Tokyo, with citizens worried about how easily such content could manipulate public opinion and damage international relations.
“The fact that a Japanese person asked a company to make an AI video about China in a bad way is truly unbelievable and phenomenal,” said Miyuka Tsuchiya, a junior high school student. “As a Japanese student, when my friends and I watch this video, since AI is truly believable and realistic, we might believe it. And in fact, not only us, but even older people might believe this, and this could cause troubles within countries that weren’t meant to happen.”
What makes these AI-generated videos especially concerning is how authentic they can appear, particularly to viewers with little firsthand exposure to China or its citizens, whose perceptions the content can readily shape.
Seiya Koyama, a working Tokyo resident, echoed these concerns: “People casually browsing YouTube may encounter such content unexpectedly. For people who know nothing about China, or really about any other country, I think if that is the only information they have, they may end up feeling afraid of that country.”
Koyama, who has traveled extensively to China and has Chinese friends, added that he personally avoids such content, recognizing its harmful nature.
University student Sota Sakaguchi offered perspective on the generational aspects of anti-China sentiment in Japan: “An anti-China sentiment persists among older generations, and I do recognize that there is an anti-China bias. But among younger people, there are quite a few Chinese international students at universities and so on, so there is not really much of an anti-China atmosphere.”
Sakaguchi also suggested the videos might be driven by market demand rather than purely ideological motives, noting that there appears to be enough audience interest to make such content financially viable.
The Asahi Shimbun’s report included interviews with content creators involved in the scheme. One former civil servant admitted to producing anti-China videos purely for financial gain, despite having never visited China or interacted with Chinese people. Another part-time creator revealed they initially made positive content about Japan before noticing a significant increase in orders specifically requesting negative narratives about China.
Media experts interviewed for the report pointed to a troubling economic incentive driving the phenomenon: content that generates strong negative emotions typically attracts more engagement and therefore more revenue in today’s attention-driven digital economy.
The experts warned that using AI technology to mass-produce misleading videos represents a dangerous evolution in propaganda techniques that could systematically stigmatize another nation. The long-term consequence could be a deterioration in Japanese society’s understanding of China, further complicating already strained bilateral relations between the two economic powerhouses.
The case highlights the emerging global challenge of combating AI-generated disinformation and raises urgent questions about platform responsibility, media literacy, and international cooperation in addressing this technological threat to peaceful relations between nations.