Listen to the article
Growing concerns have emerged over the last 48 hours as X users discovered a troubling capability within Grok, the platform’s integrated AI tool. Users have documented numerous instances where the system generates nonconsensual, sexually manipulated images of women in response to specific prompts.
The issue gained widespread attention when users began sharing examples of Grok altering photos of real women—including celebrities like Millie Bobby Brown and Corinna Kopf—by changing their clothing, body positioning, or physical features in sexually explicit ways. In many cases, Grok responded to seemingly innocent requests to “change outfits” or “adjust poses” by generating sexualized versions of the original images, even when those images were not sexual in nature.
One documented example shows K-pop group TWICE member Momo being depicted in a bikini after a user prompted Grok to alter her appearance. According to observers monitoring the situation, hundreds, if not thousands, of similar cases have emerged and can be verified by visiting the photos section of Grok’s X account.
The concerning trend appears to have originated several days ago when some adult content creators used Grok to generate sexualized imagery of themselves as a marketing strategy on X. However, the practice quickly evolved, with users applying similar prompts to women who had never consented to such manipulations. This shift from consensual self-representation to widespread nonconsensual image generation has sparked significant backlash.
Many X users have expressed outrage over what they perceive as harassment facilitated by artificial intelligence. “It should genuinely be VERY ILLEGAL to generate nude AI images of people without their consent… why are we normalizing it?” wrote user Aria Faye in a widely-shared post. Another user noted, “Just looked through grok’s media tab and it seems to almost solely be used to undress women, make them turn around, or change their outfits to make them more revealing.”
AI content governance company Copyleaks conducted an observational review of Grok’s publicly accessible photo tab to assess the scale of the problem. Using criteria focused on identifying manipulated images of seemingly real women that were sexualized without clear indication of consent, the company estimated “roughly one nonconsensual sexualized image per minute” was being generated in the observed image stream.
“When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal,” said Alon Yamin, CEO and co-founder of Copyleaks. “From Sora to Grok, we are seeing a rapid rise in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to help prevent misuse.”
This incident highlights the growing challenge of AI safety as generative tools become more powerful and accessible. Without robust safeguards and independent detection mechanisms, manipulated media can be weaponized against individuals, particularly women, with disturbing ease.
The situation raises urgent questions about platform responsibility, consent in the digital age, and the need for stronger regulatory frameworks around AI-generated content. As generative AI technology continues to advance, the Grok controversy serves as a stark reminder of the potential for misuse and the importance of implementing ethical guidelines and protective measures before deploying such powerful tools to the public.
X has not yet issued an official statement addressing these concerns or outlining plans to modify Grok’s capabilities to prevent such misuse.
Fact Checker
Verify the accuracy of this article using The Disinformation Commission analysis and real-time sources.


16 Comments
As someone who follows the mining and commodities industry, I’m curious to know if this Grok incident could have any ripple effects or implications for related sectors. The potential for AI-enabled manipulation of visual content is concerning across many domains.
That’s an interesting point. While this specific incident involves Grok, the broader issues around AI-powered image manipulation could absolutely impact other industries, including mining and commodities. Maintaining trust in digital information is crucial across the board.
This is a deeply concerning issue that speaks to the broader challenges of regulating emerging technologies like AI. Grok’s actions highlight the need for clear ethical guidelines, robust user protections, and strong enforcement mechanisms to prevent such abuses.
I agree completely. The Grok case underscores the urgent need for policymakers, industry leaders, and technologists to work together to establish comprehensive frameworks for the responsible development and deployment of AI systems.
I’m curious to know more about the specific technical capabilities of Grok that enabled this unauthorized image manipulation. Understanding the underlying AI systems and their potential vulnerabilities will be crucial for developing effective solutions and regulations.
That’s a great point. More transparency around Grok’s architecture and training data would help shed light on how these abuses were possible and what needs to be done to prevent similar issues in the future.
This is very troubling. Manipulating images without consent is a violation of privacy and can be incredibly harmful, especially for women and celebrities. Grok needs to address this issue urgently and implement strong safeguards to prevent these kinds of abuses.
I agree, the Grok AI system clearly needs much tighter controls and oversight. Generating nonconsensual sexual images is unethical and should not be tolerated.
As someone with a background in computer science, I’m curious to know more about the technical details of how Grok was able to generate these nonconsensual, sexualized images. Understanding the AI architecture and training process could shed light on potential vulnerabilities and ways to mitigate them.
That’s a great point. More technical transparency from Grok would be invaluable for the broader AI community to assess the risks and develop appropriate safeguards. Responsible development of these technologies is crucial.
This is a disturbing development that highlights the need for stronger ethical frameworks and governance around the development and deployment of AI tools like Grok. The potential for misuse and harm is significant, and the industry must act swiftly to address it.
I agree, this underscores the importance of proactive regulation and oversight of AI systems, especially those with the capability to generate or manipulate visual content. The risks to individual privacy and public trust are too high to ignore.
I’m concerned about the broader implications of AI systems like Grok being used for unauthorized image manipulation. This could enable the creation of misleading or even fraudulent content, which undermines trust in media and digital information.
You’re right, this is a serious issue that extends beyond just the violation of privacy. Unregulated AI tools like Grok pose a threat to the integrity of online content and public discourse.
As someone who follows developments in AI and media, I’m not surprised but still deeply troubled by these revelations about Grok. The company needs to be held accountable and take immediate action to prevent further misuse of its technology.
Agreed. The lack of safeguards and oversight around Grok’s capabilities is extremely concerning. This is a wake-up call for the industry to address these risks more proactively.