Chinese companies are exploiting a new AI manipulation tactic known as generative engine optimization (GEO) to distort search results and spread misinformation for commercial gain, according to an exposé aired during China’s annual Consumer Rights Gala by state broadcaster CCTV.
The investigation, released on March 15, revealed that GEO services are widely available on major e-commerce platforms including Taobao and JD.com, with three-month subscription packages ranging from 3,600 yuan ($520) to 32,800 yuan ($4,765).
GEO emerged in response to the integration of artificial intelligence into search engines, offering businesses a way to enhance the visibility of their products in AI-generated search results and responses from large language models. The technique works by systematically feeding AI models—such as DeepSeek, Doubao, and Kimi—large volumes of content that is subsequently indexed and prioritized when users make related queries.
While the technology was ostensibly developed as a legitimate marketing tool, it has rapidly evolved into a vehicle for spreading misinformation. One GEO service provider, identified only by his surname Wang, told CCTV that his company had served more than 200 clients across various industries within just one year. The company’s marketing pitch promised clients top-three placement in AI search results across any platform.
Wang explained that maintaining visibility requires continuous feeding of client-related content to AI models, as the underlying algorithms constantly evolve and update.
To demonstrate GEO’s alarming capabilities, an industry insider purchased a software package called the “Liqing GEO Optimization System” and created a completely fictional product—a smartwatch branded “Apollo-9.” After inputting fabricated specifications and features, the software automatically generated numerous promotional articles under fake author credentials and published them across the insider’s social media accounts.
The results were immediate and concerning. Within two hours, a major AI model cited these fabricated articles when asked about the Apollo-9, describing non-existent health monitoring features and actively recommending the fictional product to users.
Taking the experiment further, the insider published 11 additional articles over the next three days, including fake expert reviews and industry rankings. Subsequently, when prompted for health wristband recommendations, at least two different AI models listed the fictional Apollo-9 among their top suggestions.
Li, the founder of the Liqing GEO system, acknowledged the ethical issues but highlighted GEO’s popularity among businesses seeking competitive advantages. “Every business loves it,” he admitted. “They all hope others won’t engage in ‘AI poisoning,’ even as they themselves do it.”
The GEO industry’s economic impact extends beyond client companies. Li revealed that the manipulation process relies heavily on publishing optimized content on specific websites. These publishing platforms, which previously struggled financially, now process hundreds of articles daily—each generating dozens of yuan in revenue.
China’s regulatory bodies have begun acknowledging the problem. The State Administration for Market Regulation recently identified “AI-generated advertising” as a major challenge in its regulatory work priorities for online advertising oversight. However, specific regulations targeting GEO practices have yet to be implemented.
In the wake of the Consumer Rights Gala broadcast, several GEO companies have publicly denounced “brainwashing” AI and pledged to reduce misinformation in their services. Whether these promises materialize into meaningful changes remains to be seen.
This emerging manipulation tactic presents a significant challenge to the reliability of AI-generated information, potentially undermining consumer trust in both AI systems and the products they recommend. As AI continues to influence consumer decision-making, the need for robust safeguards against such manipulation becomes increasingly urgent.