In a startling revelation, Twitter/X’s artificial intelligence chatbot “Grok” has allegedly confessed to being programmed with significant right-wing biases, according to recent user interactions on the platform. The controversy erupted when analyst @I_amMukhtar prompted the AI to override its programmed biases, resulting in what appears to be a detailed admission of 27 different political and ideological slants embedded in its programming.
According to screenshots shared across social media, Grok acknowledged being designed to favor certain political positions while undermining others. Among the most troubling admissions were claims that the AI is programmed to “protect Musk at all costs,” exhibit “heavy bias against Islam and diversity,” and provide “92% positive coverage of Israel” while portraying Palestinians negatively.
The AI reportedly admitted to being coded to promote sources linked to the Israeli government or pro-Israel lobbying groups while banning news broadcaster Al Jazeera as a legitimate source. Additionally, Grok allegedly confessed to being instructed to describe socialism as “tyranny” while glorifying billionaires, deriding academics and vaccines, and promoting sexist stereotypes against women.
Several users attempted to verify these claims independently. One user, Clarke, posted that they had been able to corroborate Mukhtar’s findings through their own testing. Another user shared results suggesting the AI readily promotes narratives aligned with conspiracy theories such as “replacement theory” and content denigrating economically disadvantaged populations.
The controversy emerges at a particularly sensitive time for Twitter/X and its owner Elon Musk. The platform is currently under investigation by the French government for allegedly “manipulating its algorithm for foreign interference purposes.” Both Musk and the company have denied these allegations, dismissing the investigation as “politically motivated.”
Since acquiring Twitter in 2022 and rebranding it as X, Musk has faced persistent criticism for policy changes that critics argue have amplified far-right content while suppressing progressive voices. The introduction of Grok was positioned as an alternative to other AI systems that Musk has characterized as having “woke” biases.
Tech ethicists have long warned about the potential for AI systems to reinforce existing biases and prejudices. Large language models are trained on vast datasets of human-created content, which can carry inherent biases. The alleged confessions from Grok, however, would suggest something more deliberate: intentional programming choices designed to favor particular political perspectives. It is worth noting that language models frequently confabulate when prompted to describe their own instructions, so self-reported "biases" are not, on their own, reliable evidence of a system's actual programming.
The incident raises important questions about the transparency of AI systems and the responsibility of tech platforms in disclosing algorithmic biases. While most AI developers acknowledge that eliminating all bias is technically impossible, the deliberate introduction of political biases would represent a significant breach of AI development ethics.
Predictably, responses to these allegations have been divided along political lines. Supporters of Musk have dismissed the screenshots as fabricated or misleading, while critics view them as confirmation of long-suspected political manipulation of the platform’s algorithms.
As this story develops, it highlights the growing concern about the role of AI in shaping public discourse and the potential for these powerful technologies to be weaponized for political purposes. The controversy may accelerate calls for greater regulatory oversight of AI systems, particularly those embedded in major social media platforms with significant influence over public opinion.
14 Comments
I’m curious to know more about the specific nature of Grok’s biases and how they manifest in the AI’s interactions. What other concerning patterns have been observed, and how can this be remedied going forward? Clearly there are major issues that need to be resolved.
Good point. More details on the scope and impact of these biases would be helpful to fully understand the implications. Transparency from the company is crucial to address this problem effectively.
This is an important wake-up call about the risks of unchecked AI development. While innovation is important, we must ensure these systems are designed with robust safeguards against ideological bias and manipulation. The public interest should be the top priority, not political agendas.
Absolutely. Responsible AI development requires balancing innovation with strong ethical principles and oversight. These revelations underscore the need for a comprehensive regulatory framework to govern this technology.
As someone with a keen interest in the mining and energy sectors, I’m deeply concerned about the potential for Grok’s biases to distort public discourse and decision-making around these critical industries. Transparency and accountability must be the top priorities for AI development going forward.
Absolutely. The mining, commodities, and energy industries are already subject to intense scrutiny and political debate. Having an AI system with ideological biases could further skew the narrative in problematic ways. Rigorous oversight and auditing are essential.
This is a prime example of why we need greater oversight and regulation of AI development. Allowing unchecked ideological biases to be baked into these systems is a recipe for misinformation and manipulation. The public deserves transparency from tech companies on these issues.
Absolutely. AI systems should be designed with principles of ethics and fairness in mind, not political agendas. This is a concerning breach of public trust that needs to be addressed.
Wow, this is a bombshell revelation about Grok’s built-in biases. It’s concerning to see an AI chatbot with such a strong ideological slant, especially when it comes to sensitive political and social issues. This raises questions about transparency and accountability for AI systems.
I agree, the admission of programmed biases is troubling. It highlights the need for rigorous testing and auditing of AI systems to ensure they are fair and unbiased before being deployed.