The global race for artificial intelligence supremacy has transformed from a scientific breakthrough into a complex security threat, with experts increasingly concerned about its unregulated advancement and potential for misuse.

In May 2023, AI pioneer Geoffrey Hinton left his position at Google after a decade, warning that the technology he helped develop now poses serious dangers. Hinton, a winner of the Nobel Prize in Physics whose research on neural networks forms the foundation of modern AI, expressed alarm at how rapidly AI systems are advancing toward human-level intelligence, a milestone he once believed was decades away.

“People are already so flooded with false photos, videos, and texts that they risk losing the ability to distinguish between what is real and imaginary,” Hinton cautioned. He highlighted even greater concerns about widespread job losses and automated warfare systems.

Hinton’s warnings have been echoed by tech leaders including Elon Musk, Stuart Russell, and Apple co-founder Steve Wozniak, who joined nearly 34,000 others in signing an open letter calling for a minimum six-month pause in training advanced AI systems. The letter urged that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Despite these warnings, major technology companies have accelerated rather than slowed their AI development, driven by intense competition and the absence of binding international regulations. The transformative power of AI has been compared to humanity’s transition from hunter-gatherers to agriculture or the invention of the printing press—changes that fundamentally reshaped civilization.

The spread of AI-generated disinformation has already become a critical concern. Recent research contradicts claims that AI systems can reliably identify and correct misinformation. In fact, studies show these systems are nearly twice as likely to repeat false claims about current events as they were a year ago, with 35 percent of AI-generated responses containing falsehoods.

More troublingly, AI systems increasingly provide answers even when they lack sufficient data, rather than acknowledging their limitations. The share of queries that AI systems declined to answer fell from 31 percent in August 2024 to zero, meaning these systems now generate responses regardless of whether they have accurate information.

Disinformation actors have recognized this vulnerability, flooding the internet with fabricated material through obscure websites, social media, and AI-generated content farms. AI chatbots often fail to distinguish these sources from credible outlets. The technology allows bad actors to achieve far greater reach with minimal resources compared to traditional propaganda tools.

Russia has refined these tactics over years, deploying large-scale disinformation campaigns during conflicts in Georgia in 2008 and more extensively throughout the Ukraine war. In 2013, the late Wagner Group founder Yevgeny Prigozhin established the Internet Research Agency in St. Petersburg, which has since flooded social networks with bots, trolls, and fake websites to spread Russian narratives.

According to a U.S. Congressional report from January, Russian intelligence services aim to “undermine trust in democratic institutions, exacerbate socio-political divisions, and weaken Western support for Ukraine.” Moscow’s hybrid warfare strategy includes cyber operations that can now be conducted from standard computers, reducing the need for physical presence in target countries.

German intelligence reports indicate that Russia has used messaging platforms like Telegram to recruit young, pro-Russian individuals in Germany for arson and sabotage. Both Russian and Chinese intelligence services have reportedly penetrated critical infrastructure systems, potentially allowing them to disrupt power grids and transportation networks.

In the global AI race, the United States and China maintain significant leads over other nations. China has invested heavily in AI research and infrastructure, while the U.S. has implemented trade restrictions and export controls to protect its technological advantage and limit China’s access to critical components.

The military applications of AI have already transformed modern warfare. On September 7, Russia deployed more than 800 combat drones and 13 missiles against Ukraine in a single night—the largest drone attack in history. Between 70 and 80 percent of daily combat losses on both sides of the Russia-Ukraine conflict are now caused by drones.

NATO’s preparedness for this new era of warfare has been questioned. When 19 Russian kamikaze drones entered Polish airspace on September 10, the alliance shot down only four of them despite scrambling fighter jets and activating anti-aircraft systems, an expensive effort that yielded limited results.

Even more concerning is the deployment of autonomous weapons systems. In March 2020, a Turkish-made Kargu-2 drone, capable of operating autonomously to identify and kill targets, was used in the Libyan civil war—marking the first documented use of a killer robot in combat. Several countries in the Global Majority now oppose restricting or regulating such systems.

As AI development continues unabated, the most likely scenarios involve a widening technological divide between nations, the erosion of privacy and democratic freedoms, and increased surveillance capabilities that could lead to what experts call a “techno-dictatorship.” A global ban on autonomous weapons remains unlikely under current conditions, even as their potential for misuse grows more apparent.

The transformative promise of AI now comes with mounting security, ethical, and societal concerns that demand urgent attention from policymakers worldwide.
