In times of geopolitical tension, misinformation spreads rapidly, and experts warn that AI is making the problem worse. The generative AI market is expected to grow by 560% between 2025 and 2031, reaching a staggering $442 billion, according to recent data. This growth coincides with alarming trends in fraud: 46% of experts report encounters with synthetic identity fraud, 37% have observed voice deepfakes, and 29% have documented video deepfake incidents.
Maher Yamout, Lead Security Researcher at Kaspersky, explains that periods of conflict typically trigger increased cyber threats across multiple fronts. “State-sponsored cyber operations—including advanced persistent threat groups and hacktivists—often become more active, targeting government networks, critical infrastructure, and high-value industrial or financial systems to conduct cyber espionage or cause disruption,” Yamout notes.
The instability created during these periods provides fertile ground for cybercriminals looking to capitalize on heightened public anxiety. These actors frequently launch conflict-themed phishing campaigns and scams designed to steal sensitive information such as login credentials and financial details.
Artificial intelligence has dramatically amplified these threats, according to Yamout. “AI tools enable attackers to quickly generate highly convincing phishing content across multiple formats, including fraudulent emails, SMS and messaging app scams, voice-based scams using AI-based voice cloning, fake customer-support calls, and deceptive social media messages.”
What makes the current situation particularly concerning is the accessibility of these tools. “Generative AI is dramatically increasing the scale and sophistication of scams and fake news because these tools are now accessible to a wide audience, including cybercriminals,” Yamout explains. “AI enables attackers to quickly craft grammatically perfect phishing emails, create convincing fake websites, and generate deepfake audio or video to impersonate trusted individuals.”
This democratization of technology means even less technically skilled attackers can launch highly effective campaigns, significantly amplifying the reach and potential financial and reputational damage of these activities. The technology allows scammers to personalize attacks at scale, making them more believable and harder to detect.
The growing prevalence of fake news articles, clickbait headlines, and AI-generated images, audio, and videos can manipulate public opinion and spread misleading narratives, often driving traffic to fraudulent websites. As these tools become more sophisticated, the ability to distinguish between authentic and manipulated content becomes increasingly difficult.
Yamout emphasizes that both individuals and organizations must adopt a cautious, structured approach when assessing online information, particularly during periods of heightened geopolitical tension when misinformation campaigns intensify.
“Verifying content before sharing it can significantly reduce the spread of false or manipulated information,” he advises. Users should check the source and author of content, cross-verify claims with multiple reputable outlets, look for signs of manipulation, verify the authenticity of images and videos, use reliable security solutions, and practice strong digital literacy.
For governments, collaboration with the private sector is essential to ensure accountability and disseminate accurate information. This includes enhancing citizen digital literacy, strengthening cybersecurity solutions, working closely with technology platforms, and launching public awareness campaigns.
Media organizations and social media platforms also play a crucial role in limiting the spread of misinformation. “By collaborating with trusted cybersecurity partners and leveraging expertise in content, technology, and threat intelligence, governments and organizations can better detect and mitigate malicious campaigns,” Yamout explains.
Threat intelligence sharing is particularly important, as coordinated efforts can help track emerging scam tactics, identify malicious domains, and detect AI-generated content used in fraudulent campaigns. Journalists and platform teams should be equipped with the skills needed to recognize and respond to scams and misinformation, supported by staff training, rapid reporting mechanisms, and automated monitoring systems.
As the world navigates increasingly complex geopolitical tensions, the battle against AI-enhanced misinformation will require a coordinated approach involving individuals, organizations, governments, and technology platforms to effectively combat the growing sophistication of these threats.