As tensions escalate across multiple fronts in the Middle East, a parallel battle is unfolding in the digital realm, where distinguishing fact from fiction has become increasingly challenging amid a flood of information and misinformation circulating online.
Social media platforms have emerged as primary sources for breaking news about alleged attacks and military developments. However, cybersecurity experts warn that alongside legitimate information, manipulated videos and deepfakes present a growing threat to public understanding during periods of geopolitical instability.
Maher Yamout, Lead Security Researcher at Kaspersky, told Asharq Al-Awsat that the ability to identify reliable information becomes particularly crucial during emergencies, when heightened emotions lead people to share content without verification.
“With developments unfolding in the Middle East, government authorities in Gulf Cooperation Council countries have warned against publishing or circulating information from unknown sources,” Yamout explained. “Fake news becomes more dangerous during emergencies.”
While misinformation is not a new phenomenon, its reach and velocity have transformed dramatically with the rise of social media and artificial intelligence tools. During conflicts, unverified reports or manipulated videos can reach millions within minutes, outpacing fact-checkers and sowing confusion.
Experts categorize fake news into two primary types: completely fabricated content designed to influence opinion or drive traffic, and partially accurate information that misrepresents reality through exaggeration or lack of proper verification. Both types can confuse audiences, especially when users rely on social media rather than established news organizations for updates.
The emergence of deepfake technologies has added a sophisticated new dimension to the problem. These AI-powered techniques can create highly convincing fabricated videos through face swapping or synthetic visual generation, making it possible to present events that never occurred with disturbing realism.
“Artificial intelligence makes it possible to combine different video clips to produce new scenes showing events or actions that never happened in reality, often with highly realistic results,” Yamout noted. Even when later debunked, such content can trigger immediate confusion or anxiety.
Several governments have warned that sharing inaccurate information, even unknowingly, may expose users to legal consequences. This highlights the growing importance of digital literacy and responsibility when consuming and sharing information during sensitive periods.
Cybersecurity experts emphasize that individual users play a crucial role in stemming the spread of misinformation. They recommend several verification strategies: checking source credibility by examining website domains for irregularities; verifying author identity and expertise; comparing reports across multiple reputable outlets; and examining dates to ensure content is not recycling old events as current news.
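The first of these checks, examining a website domain for irregularities, can even be approximated programmatically. The sketch below is a minimal illustration of that idea in Python; the trusted-domain allowlist and the lookalike heuristics are assumptions chosen for the example, not a production fact-checking tool or anything endorsed by the experts quoted here.

```python
# Minimal sketch of automated source-credibility heuristics.
# The trusted-domain list and the lookalike checks below are
# illustrative assumptions, not an authoritative verification tool.

from urllib.parse import urlparse

TRUSTED_DOMAINS = {"reuters.com", "apnews.com", "bbc.co.uk"}  # hypothetical allowlist


def domain_of(url: str) -> str:
    """Extract the host from a URL, stripping a leading 'www.'."""
    host = urlparse(url).hostname or ""
    return host[4:] if host.startswith("www.") else host


def looks_suspicious(url: str) -> list:
    """Return a list of red flags for a news URL (empty list = none found)."""
    flags = []
    host = domain_of(url)
    if host not in TRUSTED_DOMAINS:
        flags.append("domain not on the trusted list")
    # Typosquatting heuristic: a trusted brand name embedded in a different domain.
    for trusted in TRUSTED_DOMAINS:
        brand = trusted.split(".")[0]
        if brand in host and host != trusted:
            flags.append("possible lookalike of " + trusted)
    # Crude irregularity check: multiple hyphens or digits in the host.
    if host.count("-") >= 2 or any(ch.isdigit() for ch in host):
        flags.append("unusual characters in domain")
    return flags


if __name__ == "__main__":
    print(looks_suspicious("https://www.reuters.com/world/"))       # → []
    print(looks_suspicious("https://reuters-news24.com/breaking"))  # several flags
```

A real system would go much further (certificate checks, domain-age lookups, curated reputation databases), but even this toy version captures the spirit of the advice: an unfamiliar or lookalike domain is a reason to pause before sharing.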
The emotional component of misinformation presents another challenge. “Many fake news stories are written in a clever way to provoke strong emotional reactions,” Yamout explained. Content designed to trigger fear, anger, or shock is more likely to be shared quickly without scrutiny.
This dynamic is amplified by social media algorithms that promote high-engagement content, allowing emotionally charged posts to spread faster than balanced reporting. Social platforms can also create “echo chambers” that reinforce existing beliefs, making it essential to seek diverse information sources.
Visual manipulation often leaves detectable traces. Edited images may contain distorted background lines, unnatural shadows, or inconsistent lighting. While these signs can be difficult to spot, particularly on mobile devices, they can provide clues about authenticity.
Experts agree that addressing misinformation during crises requires coordinated efforts among governments, technology companies, media organizations, and users. Yamout offers a straightforward guideline: “If you are not sure the content is accurate, do not share it.”
As digital platforms continue to shape how information flows across borders during conflicts, critical evaluation skills have become essential for navigating the complex information landscape. The challenge extends beyond cybersecurity to protecting the integrity of information itself during periods when clarity is most needed.