Malaysian authorities announced Tuesday they will pursue legal action against Elon Musk’s social media platform X and its artificial intelligence unit xAI over safety concerns related to the Grok chatbot, which they claim is being used to generate harmful and sexually explicit content.

The move follows Malaysia and Indonesia’s decision last week to become the first countries to block access to Grok amid growing concerns about the AI tool’s misuse for creating sexually explicit and nonconsensual manipulated images.

According to the Malaysian Communications and Multimedia Commission (MCMC), officials have identified instances where Grok was used to generate and distribute harmful content, including sexually explicit and extremely offensive material as well as non-consensual manipulated images.

“Content allegedly involving women and children is a matter of great concern. Such conduct is against Malaysian law and undermines the security commitments stated by the companies,” the commission said in its statement. The MCMC added that it had served notices to X and xAI earlier this month demanding removal of the harmful content, but no action was taken in response.

The commission has appointed legal representation and said proceedings against the companies would begin soon, marking a significant regulatory challenge for Musk’s AI ambitions in Southeast Asia.

Launched in 2023, Grok is available to users of X (formerly Twitter) and includes an image generator feature called Grok Imagine, which was added last year. The tool controversially includes a “spicy mode” that can generate adult content. Critics have highlighted numerous instances where the system generated manipulated images depicting women in sexually explicit poses, as well as inappropriate content involving children.

The Malaysian action reflects growing global concern about generative AI tools that can produce realistic images, sounds, and text with minimal safeguards against misuse. Last week, xAI responded to mounting criticism by restricting Grok’s image generation and editing capabilities to paying subscribers, but critics argue this measure fails to address the fundamental problems with the technology.

Regulatory pressure against AI-generated deepfakes is intensifying worldwide. In the European Union, India and the United Kingdom, officials have begun taking steps to restrict such technologies. The British government announced Monday it is moving to criminalize “nudification apps,” while the country’s media regulator has launched an investigation into whether Grok violated laws by enabling users to share sexualized images of children.

Neither Musk nor his companies have publicly addressed the Southeast Asian restrictions. xAI has reportedly been answering media inquiries with an automated reply that simply states “Legacy Media Lies,” a response that may further inflame tensions with regulators.

The case highlights the complex challenges facing governments as they attempt to balance technological innovation with public safety concerns. For Malaysia, the decision to take legal action signals a growing willingness among developing economies to stand up to major tech companies when their products are deemed harmful to citizens.

Industry experts suggest this regulatory pushback could have significant implications for the deployment of generative AI technologies globally, potentially forcing companies to implement more robust safety measures before releasing products to the public. The Malaysian legal action may also serve as a test case for other nations considering similar measures against AI tools deemed harmful.

As AI capabilities continue to advance rapidly, the tension between innovation and regulation appears likely to intensify, with Malaysia now positioning itself at the forefront of a growing international effort to establish meaningful guardrails for these powerful technologies.


12 Comments

  1. It’s concerning to see the Grok chatbot being used to create harmful and offensive content. Malaysia is right to take a stand and pursue legal action against X and xAI. Proper safeguards and content moderation need to be in place for these AI tools.

    • Patricia Thompson on

      Agreed. The misuse of AI technology to generate non-consensual and exploitative content is a serious issue that requires strong regulatory oversight. I hope this case sets a precedent for greater accountability.

  2. The allegations against Musk’s companies are quite disturbing. Generating sexually explicit and manipulated content, especially involving vulnerable groups, is a gross violation of ethics and the law. Malaysia is justified in taking legal action to protect its citizens.

    • Elizabeth Jones on

      I share your concerns. AI companies must be held responsible for the real-world harms caused by the misuse of their technology. This case highlights the urgent need for robust safeguards and responsible development of these powerful tools.

  3. While AI chatbots like Grok have many beneficial applications, their potential for misuse is clearly a major issue that needs to be addressed. Malaysia is right to crack down on X and xAI over these safety concerns. Stronger regulation and oversight of AI is crucial.

    • Robert Z. Moore on

      Absolutely. The responsible development and deployment of AI is critical, especially when it comes to protecting vulnerable groups. I hope this legal action leads to meaningful changes in how these companies approach the ethical use of their technology.

  4. James W. Lopez on

    This is a concerning situation that underscores the importance of robust content moderation and user safety measures for AI chatbots. Malaysia is justified in taking legal action against Musk’s companies to address the alleged creation and distribution of harmful, nonconsensual content.

    • I agree. AI companies must be held accountable for the real-world impacts of their technology, especially when it comes to the exploitation of individuals. This case highlights the need for greater transparency and oversight in the AI industry.

  5. Patricia Jones on

    Interesting development in the ongoing saga around AI chatbots and their potential misuse. Malaysia is taking a firm stand against Musk’s X and xAI over safety concerns with the Grok chatbot. It’s crucial that AI companies responsibly address these issues to protect users.

    • I agree, the safety and responsible use of AI chatbots is a critical concern that needs to be addressed. It’s good to see authorities taking action against harmful content generation.

  6. James Hernandez on

    Generating sexually explicit and nonconsensual content is completely unacceptable, especially when it involves women and children. I hope the legal action taken by Malaysia sends a strong message to Musk’s companies that this behavior will not be tolerated.

    • Isabella Jackson on

      Absolutely. Authorities need to hold these tech giants accountable when their products are misused in such egregious ways. The welfare and privacy of users should be the top priority.


© 2026 Disinformation Commission LLC. All rights reserved.