From Digital Underground to AI: How Far-Right Extremism Leverages Technology

How can society police the global spread of online far-right extremism while still protecting free speech? That’s a question policymakers and watchdog organizations confronted as early as the 1980s and ’90s – and it hasn’t gone away.

Decades before artificial intelligence, Telegram and white nationalist Nick Fuentes’ livestreams, far-right extremists embraced the early days of home computing and the internet. These new technologies offered them a bastion of free speech and a global platform to share propaganda, spew hatred, incite violence and gain international followers like never before.

Before the digital era, far-right extremists radicalized each other primarily using print propaganda. They wrote newsletters and reprinted far-right tracts such as Adolf Hitler’s “Mein Kampf” and American neo-Nazi William Pierce’s “The Turner Diaries,” a dystopian work of fiction describing a race war. This material was mailed to supporters domestically and internationally.

Historical research shows that most neo-Nazi propaganda confiscated in Germany from the 1970s through the 1990s originated in the United States. American neo-Nazis exploited First Amendment protections to bypass German censorship laws. Once the material reached Germany, local neo-Nazis would distribute it throughout the country.

This strategy wasn’t foolproof, however. Print propaganda could get lost in the mail or be confiscated, especially when crossing international borders. Production and shipping were expensive and time-consuming for organizations that were chronically understaffed and financially strapped.

The digital revolution that began in the late 1970s promised to solve these problems. In 1981, Matt Koehl, head of the National Socialist White People’s Party in the United States, solicited donations to “Help the Party Enter The Computer Age.” American neo-Nazi Harold Covington pleaded for a printer, scanner and “serious PC” capable of running WordPerfect, noting that their “multifarious enemies already possess this technology.”

Soon, far-right extremists discovered how to connect their computers using online bulletin board systems (BBSes), a precursor to the internet. BBSes allowed users to dial in to a host computer via modem to exchange messages, documents and software.

The first far-right bulletin board system, the Aryan Nations Liberty Net, was established in 1984 by Louis Beam, a high-ranking member of the Ku Klux Klan and Aryan Nations. Beam envisioned a network where “all leaders and strategists of the patriotic movement are connected” and “any patriot in the country is able to tap into this computer at will.”

BBSes facilitated the spread of neo-Nazi computer games, which could be uploaded, downloaded, copied onto disks, and distributed widely, especially to schoolchildren. One notorious example was the German game KZ Manager, where players role-played as Nazi concentration camp commandants overseeing the murder of Jews, Sinti and Roma, and Turkish immigrants. A poll in the early 1990s found that 39% of Austrian high schoolers knew of such games and 22% had seen them.

By the mid-1990s, with the introduction of the more user-friendly World Wide Web, bulletin boards fell out of favor. The first major racial hate website on the internet, Stormfront, was founded in 1995 by American white supremacist Don Black. According to the Southern Poverty Law Center, nearly 100 murders have been linked to Stormfront users.

The digital hate landscape expanded rapidly. By 2000, German authorities had identified and banned over 300 German websites with right-wing extremist content – a tenfold increase in just four years.

In response, American white supremacists again exploited their free speech protections to bypass German censorship, offering international far-right extremists the opportunity to host websites safely and anonymously on unregulated American servers – a strategy that continues today.

The latest technological frontier exploited by far-right extremists is artificial intelligence. These groups use AI tools to create targeted propaganda, manipulate media and evade detection. The far-right social network Gab created a Hitler chatbot for users to interact with, while AI chatbots on other platforms have adopted extremist viewpoints. Recently, Grok, the chatbot on Elon Musk’s X platform, referred to itself as “MechaHitler,” spread antisemitic hate speech and denied the Holocaust.

Combating online hate requires comprehensive international cooperation among governments, non-governmental organizations, watchdog groups, communities and technology companies. The challenge lies in staying ahead of extremists who have consistently pioneered innovative ways to exploit technological progress and free speech protections for radicalization purposes.

As technology continues to evolve, so too does the need for more sophisticated approaches to counter extremism while preserving legitimate free expression – a balance that has proven elusive since the earliest days of digital communication.

