In an airy office in Gwacheon, South Korea, a team of dedicated workers meticulously scans social media platforms for AI-generated content that could undermine the integrity of upcoming local elections. The increasing sophistication of artificial intelligence is putting South Korea’s recently toughened election laws to the test as the June 3 polls approach.

“We can literally see how fast this technology evolves – like how each new version of AI makes videos and audio look and sound even more convincing,” says Choi Ji-hee, a disinformation monitor at the National Election Commission (NEC). “Our job keeps getting harder and harder.”

South Korea faces particular challenges in this domain as one of the world’s most digitally connected societies. The country has embraced AI at an exceptional rate, with over 45 percent of South Koreans using generative AI tools. According to OpenAI, South Korea has more ChatGPT paid subscribers than any other country outside the United States.

The rapid adoption of this technology has created fertile ground for election-related disinformation. On a typical workday, Choi and her 18 colleagues methodically examine Instagram, YouTube, online chatrooms, and “fan clubs” for local politicians, searching for AI-fabricated content. Their recent discoveries include a counterfeit TV news report falsely claiming a mayoral candidate had been featured in Time magazine, and a sophisticated AI-produced K-pop song praising a politician while mocking opponents.

In 2023, the South Korean government strengthened laws specifically targeting AI misuse in elections and hired hundreds of staff to monitor and counter manipulated content. The amended legislation prohibits AI material involving candidates that appears realistic enough to confuse voters during the three months before an election.

The penalties for violations are severe. Repeat offenders or creators of particularly harmful content can face up to seven years in prison or fines of up to 50 million won (approximately $43,600).

“It’s an exhausting job that can feel like a game of whack-a-mole,” says data analyst Kim Ma-ru, who maps distribution patterns of fake materials to help the team detect suspicious content more efficiently. “But it’s important work – there’s a sense of civic duty in it.”

The upcoming local elections will be the third major ballot since the new legislation was enacted. During this period, reports of false AI-created content have surged dramatically – increasing 27-fold between the general election in 2024 and the following year’s presidential campaign.

Beyond fabricated content targeting candidates, conspiracy theories about election fraud have damaged public trust in recent years. The situation grew particularly tense when former president Yoon Suk Yeol, citing widely disproven claims of vote hacking, dispatched armed troops to the NEC during his attempted imposition of martial law in 2024.

The contentious political climate has created safety concerns for election monitors. Both Choi and Kim declined to be photographed or filmed, citing increasing threats and online harassment targeting election workers. Outside their office, protesters supporting the former president have displayed banners demanding investigations into alleged rigged elections.

At the NEC’s cyber investigations unit, digital forensic specialist Jung Hui-hun uses state-developed software to detect AI imagery. “In such a short time, it has become so difficult for voters to tell what is real and what is not,” he says. Officials report that their detection programs achieve about 92 percent accuracy, with human experts reviewing the most sophisticated materials.
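The workflow described above amounts to a triage pipeline: an automated classifier makes confident calls on its own, and borderline cases are escalated to human experts. The NEC's actual software is not public, so the following is only an illustrative sketch of that routing logic; the function names, the `Verdict` type, and the 0.92 threshold (echoing the reported accuracy figure) are assumptions for demonstration.

```python
# Illustrative sketch only: the NEC's detection software is not public.
# Models the triage described in the article: an automated classifier
# handles confident cases, and ambiguous ("sophisticated") material is
# escalated to human experts for review.

from dataclasses import dataclass


@dataclass
class Verdict:
    label: str    # "ai-generated", "authentic", or "human-review"
    score: float  # confidence associated with the assigned label


def triage(ai_probability: float, threshold: float = 0.92) -> Verdict:
    """Route a detection score: confident calls are automated,
    borderline ones are queued for a human expert."""
    if ai_probability >= threshold:
        return Verdict("ai-generated", ai_probability)
    if ai_probability <= 1 - threshold:
        return Verdict("authentic", 1 - ai_probability)
    # Neither clearly fake nor clearly real: send to a reviewer.
    return Verdict("human-review", max(ai_probability, 1 - ai_probability))
```

The design choice here is deliberate: rather than forcing a binary call, the middle band of scores becomes a review queue, which is one common way to combine an imperfect detector with scarce human expertise.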

Dr. Kim Myuhng-joo, director of the Korea AI Safety Institute, acknowledges that South Korea’s stringent rules might seem excessive to outsiders, particularly in countries like the United States that prioritize freedom of expression. However, he notes that South Koreans’ rapid embrace of AI was accompanied by awareness of its potential dangers, citing election conspiracy theories and a public scandal involving deepfake pornography targeting women and girls.

“Public consensus has formed that we need tough regulations over the use of AI when it comes to election transparency,” Dr. Kim explains. A survey last year indicated that 75 percent of South Koreans believe AI-generated content could influence election results, with nearly 80 percent supporting stronger detection and punishment efforts.

Despite these challenges, Jung remains cautiously optimistic about South Korea’s evolving approach to combating AI-fueled disinformation. “We’re still trying to figure out what is the best solution… but I think we are moving forward – slowly but surely.”



© 2026 Disinformation Commission LLC. All rights reserved.