In South Korea, authorities race to combat AI-generated election misinformation as the nation prepares for local polls on June 3. Inside the National Election Commission (NEC) headquarters in Gwacheon, teams of specialists work diligently to identify and counter increasingly sophisticated fake content circulating online.
“We can literally see how fast this technology evolves — like how each new version of AI makes videos and audio look and sound even more convincing,” says disinformation monitor Choi Ji-hee. “Our job keeps getting harder and harder.”
On a typical workday, Choi and her 18 colleagues meticulously examine social media platforms, including Instagram and YouTube, as well as online chatrooms and political fan clubs. Recent discoveries include a fabricated TV news report claiming a mayoral candidate made Time magazine’s list of rising political leaders and an AI-produced K-pop song praising one politician while mocking others.
South Korea has emerged as a global hotspot for AI adoption and related challenges. Government figures show more than 45 percent of South Koreans use generative AI, with the country boasting the highest number of paid ChatGPT subscribers outside the United States. This rapid embrace of technology has come with drawbacks—South Koreans consume more low-quality AI-generated content (often called “AI slop”) than any other nation.
The upcoming local elections mark the third major ballot since South Korea strengthened its election laws in 2023 specifically to address AI-fueled misinformation. The amendments ban realistic AI-generated content depicting candidates that could mislead voters, applying during the three months before an election.
Penalties for violations are severe. Repeat offenders or creators of particularly harmful content face up to seven years in prison or fines reaching 50 million won (approximately $34,000).
“The rules may seem excessive to those outside South Korea, especially in places like the US that highly prioritize freedom of expression,” explains Kim Myuhng-joo, director of the Korea AI Safety Institute. However, public support for these measures is strong, with a recent survey showing 75 percent of South Koreans believe AI-generated content could influence election results, and nearly 80 percent favor stronger detection and punishment efforts.
The NEC employs a multi-faceted approach to identify manipulated content. Data analyst Kim Ma-ru maps distribution patterns—tracking where, when, and by whom suspicious materials are shared—to help teams detect dubious content more efficiently. In another section, specialists use state-developed software tools that claim 92 percent accuracy in detecting AI imagery, with human experts reviewing the most sophisticated cases.
“It’s an exhausting job that can feel like a game of whack-a-mole,” Kim acknowledges. “But it’s important work—there’s a sense of civic duty in it.”
The stakes are particularly high given South Korea’s recent political turbulence. Conspiracy theories about vote-rigging have damaged public trust in elections, culminating in a dramatic incident when jailed former president Yoon Suk Yeol sent armed troops to the NEC during his short-lived attempt to impose martial law in late 2024.
The political climate has made election workers targets for harassment, with both Choi and Kim declining to be photographed due to growing threats and online bullying. Outside the commission’s office, protesters still display banners demanding investigations into allegedly rigged elections.
Digital forensic specialist Jung Hui-hun notes the alarming speed at which the situation has deteriorated: “In such a short time, it has become so difficult for voters to tell what is real and what is not.”
Despite the challenges, officials remain cautiously optimistic. “We’re still trying to figure out what is the best solution… but I think we are moving forward—slowly but surely,” Jung says.
As AI technology continues to evolve, South Korea’s aggressive approach to combating election misinformation could provide valuable lessons for other democracies facing similar challenges in preserving electoral integrity in the digital age.