In the shadow of global conflicts, artificial intelligence has emerged as a potent weapon in information warfare, experts warn. The proliferation of deepfake videos and manipulated media has created unprecedented challenges for those attempting to separate truth from fiction in conflict reporting.

“We were all expecting that physical, traditional combat would be complemented by disinformation warfare – we’ve seen that in previous conflicts as well,” said Professor Edson Tandoc from Nanyang Technological University (NTU). “It was just a matter of time.”

The rise of generative AI technology has dramatically lowered the barriers to creating convincing fake content. Assistant Professor Ke Ping Fan from Singapore Management University’s (SMU) computing school explained that anyone can now produce text, images, videos, or audio in multiple languages with minimal effort and technical knowledge.

“Even if the quality of deepfakes is not good enough, they can be used to fuel rumors by prompting debates,” he said. The technology has essentially democratized the ability to spread misinformation by reducing production costs and complexity.

This technological shift has created a verification challenge for the average person consuming news about global conflicts. Associate Professor Saifuddin recommends that individuals verify sources, look for technical inconsistencies such as facial glitches or unnatural lighting and audio, and cross-reference with credible news outlets.

“If a dramatic video appears only on random social accounts, that’s a red flag,” he cautioned.

However, experts acknowledge that thorough verification faces practical obstacles. With the overwhelming volume of content about conflicts flooding social media platforms, people rarely have the time or skills to verify each video they encounter.

“A better approach is to verify the source and chain of custody – asking where the video originated and whether reputable news organizations have verified it,” suggested Asst Prof Ke, noting that sophisticated deepfakes may have fewer obvious technical flaws to spot.

Professor Tandoc highlighted another troubling dynamic: confirmation bias. When faced with uncertainty, “people just rely on their biases,” he explained. “If this video supports what I believe in, then I want it to be true, then it must be true.” This psychological tendency makes combating misinformation particularly challenging during emotionally charged conflicts where people may have strong pre-existing views.

Some governments have responded to this threat with legislative measures. Dr. Carol Soon, deputy head of the National University of Singapore’s (NUS) communications and new media department, noted that countries like Singapore and Australia have enacted laws empowering authorities to act against disinformation.

While such laws can mandate content removal, Dr. Soon emphasized that many people will have already viewed and shared false information before any official intervention. She advocates for both “upstream and downstream efforts” to combat disinformation, including community outreach and timely debunking of false claims.

Simon Chesterman, Dean of NUS College and senior director of AI Governance for AI Singapore, acknowledged the limitations of legal approaches. Singapore’s Protection from Online Falsehoods and Manipulation Act can attach corrections to false information and, in severe cases, restrict access, but no legal framework can completely eliminate misinformation.

“The difficulty is that misinformation can often be corrected early, but it can rarely be entirely erased,” Chesterman said.

As AI technology continues to advance, the challenges of identifying and combating synthetic media in conflict reporting will likely intensify. The experts agreed that building public resilience and media literacy represents the most sustainable defense.

“In the end, the most durable defence is public resilience: citizens who are neither so gullible that they believe everything, nor so cynical that they don’t believe anything,” Chesterman concluded.


