
Companies Increasingly Using Bots for Smear Campaigns Against Competitors

A growing number of unscrupulous businesses are deploying sophisticated bot networks and fake social media accounts to launch damaging smear campaigns against their competitors, according to industry experts monitoring online disinformation.

Lyric Jain, chief executive of UK-based monitoring firm Logically, warns that corporate disinformation tactics once associated primarily with nation-state actors are now being adopted by companies looking to gain competitive advantages. “We seem to be on the cusp of an era of disinformation against competitors,” Jain says.

Founded in 2017, Logically uses artificial intelligence to scan millions of social media posts daily across platforms like Twitter, Facebook, Instagram, and TikTok. While the firm’s main clients are the British, American, and Indian governments, Jain reports increasing interest from major global retail brands seeking protection from coordinated digital attacks.

“We are seeing that some of the same practices deployed by nation-state actors like Russia and China in social media influence operations are now being adopted by some more unscrupulous competitors of major Fortune 500 and FTSE 100 companies,” Jain explains. “[Attackers] are trying to use similar tactics to essentially go to war against them on social media.”

The primary tactics involve using networks of fake accounts to amplify and spread negative product reviews—both genuine and fabricated—while also exaggerating competitors’ financial or operational problems. For instance, if a retailer reports disappointing quarterly results, rivals might orchestrate campaigns to magnify and distort these struggles.

Jain suggests these attacks are often initiated by “foreign competitors,” particularly Chinese firms targeting Western brands. However, he acknowledges that the problem extends beyond international competition, with smaller domestic companies employing similar tactics against larger established players. “It is usually an emerging company that goes after an incumbent using these means,” he notes, though adding he “wouldn’t be surprised if some established [Western] brands are also employing these tactics.”

Logically’s approach combines AI-driven monitoring with human verification. The company’s technology flags suspicious content from among the roughly 20 million social media posts it scans each day; flagged items are then reviewed by fact-checkers and experts among its 175 employees across the UK, US, and India. When disinformation is confirmed, Logically contacts the relevant platforms to have it addressed.

Response times vary by platform, but corporate-targeted disinformation is typically removed within two hours, according to Jain. This contrasts with more urgent content like violent threats, which platforms remove within minutes.

UK competitor Factmata takes a different approach. Founded in 2016, the company employs 19 different algorithms to identify problematic content while screening out “false positives” such as satire or legitimate criticism. CEO Antony Cousins emphasizes Factmata’s preference for minimizing human involvement in the verification process: “Our true aim is not to put any humans in the middle of the AI and the results, or else we risk applying our own biases to the findings.”

Rather than simply flagging individual posts, Factmata’s technology attempts to identify the original source accounts that initiated disinformation campaigns. Cousins argues that more brands need to recognize the growing reputational risks they face from social media disinformation. “If a brand is falsely accused of racism or sexism, it can really damage it. People, especially Generation Z, can choose to not buy from it.”

Professor Sandra Wachter, a senior research fellow in AI at Oxford University, cautions that using technology to combat online falsehoods presents significant challenges. “AI can be a feasible solution if we have agreement over what constitutes fake information that deserves removal from the web. Unfortunately, we could not be further away from finding alignment on this,” she says.

Wachter points to the inherent difficulties in distinguishing between falsehoods, opinions, and humor. “Human language has many subtleties and nuances that algorithms—and in many cases humans—might not be able to detect,” she explains, noting that both AI and humans can struggle to recognize sarcasm and satire.

Cousins emphasizes that Factmata doesn’t position itself as an arbiter of truth. “Our role is not to decide what is true or false, but to identify the content we think could be fake, or could be harmful, to a degree of certainty.”


12 Comments

  1. Patricia Martin

    This is a concerning trend. Businesses leveraging bots and fake accounts to attack competitors is a dangerous escalation of corporate rivalries. I hope regulators can step in to curb these deceptive tactics before they cause more harm.

    • Isabella Lopez

      Agreed, this kind of corporate disinformation is worrying. Regulators need to act quickly to establish clear boundaries and enforce penalties for these manipulative practices.

  2. While competition is healthy, these aggressive social media smear campaigns cross an ethical line. Businesses should focus on improving their own products and services rather than trying to undermine rivals through underhanded tactics.

    • Absolutely. Healthy competition is good, but this is just dirty tricks. Regulators need to step in before it gets even more out of hand.

  3. This is a troubling development. I hope the industry can self-regulate and agree on a code of ethics to prevent these kinds of predatory actions. Consumers deserve accurate information, not coordinated disinformation campaigns.

    • Michael Thompson

      Self-regulation would be ideal, but you’re right that clear rules and enforcement from regulators may be necessary here. The integrity of online discourse is at stake.

  4. Wow, I can’t believe companies are stooping to this level. Using bots and fake accounts to smear rivals is a new low. Regulators need to step in quickly before this gets even more out of hand.

    • Agreed, this is a very concerning trend. Corporate disinformation campaigns threaten to undermine the credibility of online discourse. Stronger oversight is urgently needed.

  5. Isabella Williams

    I’m not surprised to hear that some unscrupulous businesses are resorting to these tactics. The competitive pressures must be intense, but that doesn’t excuse the use of deception and manipulation. Regulators should crack down on this behavior.

    • You’re right, the competitive pressures don’t justify these unethical tactics. Regulators need to send a clear message that this kind of corporate disinformation will not be tolerated.

  6. This news is deeply troubling. Businesses using bots and fake accounts to attack their rivals is a new low. It undermines the integrity of online discourse and harms consumers who deserve accurate information. Regulators must intervene to curb these deceptive practices.

    • William Y. Martin

      Exactly. The integrity of online information is at stake here. Regulators need to step in quickly with clear rules and strong enforcement to prevent this kind of manipulative behavior from spreading further.



Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2025 Disinformation Commission LLC. All rights reserved.