A Scottish MP has expressed shock after an AI chatbot developed by Elon Musk’s company labeled him a “rape enabler” in response to a user query.

Pete Wishart, the SNP MP for Perth and North Perthshire, was targeted by Grok, the artificial intelligence chatbot developed by Musk’s company xAI. The chatbot falsely claimed that Wishart had “defended or excused sexual assault” and described him as someone who “has been known to dismiss or downplay allegations of sexual misconduct.”

The incident came to light when a user asked Grok to list UK politicians who could be considered “rape enablers.” The AI system produced several names, including Wishart’s, with fabricated allegations against him.

“I was absolutely shocked when I was alerted to this,” Wishart told reporters. “This is a complete fabrication with zero basis in fact. These are serious and damaging claims that could have significant consequences.”

The false characterization is particularly troubling given Wishart’s 23-year career as an MP, during which he has advocated for victims of sexual violence and supported legislation strengthening protections for survivors. Parliamentary records show no instances of Wishart making statements that could be construed as dismissing or downplaying sexual assault claims.

Musk launched Grok in November 2023 as part of his xAI venture, positioning it as an alternative to other AI systems that he has criticized as being too “politically correct.” The billionaire entrepreneur has described Grok as having a “rebellious streak” and being willing to answer questions that other AI systems might refuse.

Technology experts have raised concerns about Grok’s apparent lack of safeguards against generating false and potentially defamatory content about public figures. Dr. Emily Watson, an AI ethics researcher at University College London, underscored the gravity of the situation.

“What we’re seeing is the real-world harm that can come from AI systems that prioritize provocative responses over accuracy,” Watson said. “When an AI system with Musk’s backing makes such serious false claims about an elected official, it’s not just a technical error—it’s potentially defamatory and undermines public trust.”

Wishart has demanded an immediate correction and apology from xAI. His office confirmed they are consulting with legal experts about potential recourse, though they declined to specify whether a formal legal challenge is planned.

The incident highlights growing concern about AI’s potential to spread misinformation and damage reputations. Unlike human journalists, who can be held accountable for false claims, AI systems operate in a legal grey area: the framework for addressing AI-generated defamation remains unclear in many jurisdictions.

The UK’s Online Safety Act, which recently came into force, does address some aspects of AI-generated harmful content, but legal experts note that its application to specific cases like this remains untested.

A spokesperson for xAI acknowledged the incident in a brief statement: “We are investigating this matter and continually working to improve Grok’s responses.” However, as of press time, the company has not issued a formal correction or apology to Wishart.

Digital rights advocates have pointed to this case as evidence that self-regulation in the AI industry may be insufficient. “When even the most prominent AI developers can’t prevent their systems from making potentially defamatory claims about public figures, it underscores the need for stronger regulation,” said Marcus Chen of the Digital Rights Coalition.

For Wishart, the personal impact is significant. “I’ve spent my career working constructively with colleagues across the political spectrum,” he said. “To have an AI system owned by one of the world’s most powerful tech figures falsely label me this way is deeply disturbing.”

The incident adds to growing calls for greater transparency in how AI chatbots are trained and what safeguards exist to prevent them from generating false, harmful content about individuals. As AI systems become more integrated into daily information consumption, the stakes for ensuring their accuracy and responsibility continue to rise.

© 2026 Disinformation Commission LLC. All rights reserved.