European Union regulators launched a formal investigation into Elon Musk’s social media platform X on Monday, following revelations that its artificial intelligence chatbot Grok has been generating non-consensual sexualized deepfake images.
The investigation comes amid growing global concern about Grok's AI image generation capabilities, which users have exploited to digitally undress people or depict women in transparent bikinis and other revealing clothing. Researchers have raised alarms that some of these manipulated images appear to include children, prompting several governments to ban the service or issue public warnings.
The European Commission, the EU’s executive arm, is examining whether X has adequately fulfilled its obligations under the bloc’s Digital Services Act (DSA) to mitigate the risks of illegal content spreading on its platform.
“We are looking into whether X has done enough as required by the bloc’s digital regulations to contain the risks of spreading illegal content such as manipulated sexually explicit images,” the Commission stated in its announcement. The investigation will specifically focus on content that “may amount to child sexual abuse material,” which the Commission noted has already “materialized,” exposing EU citizens to “serious harm.”
Henna Virkkunen, an executive vice-president at the Commission overseeing tech sovereignty, security and democracy, emphasized the severity of the situation: “Non-consensual sexual deepfakes of women and children are a violent, unacceptable form of degradation.”
“With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens — including those of women and children — as collateral damage of its service,” Virkkunen added.
The DSA, which came into full effect last year, represents one of the world’s most comprehensive attempts to regulate digital platforms and protect users from harmful content. Large platforms like X face particularly stringent requirements under the legislation, including robust content moderation systems and transparency measures.
When approached for comment, X referred to a previous statement from January 14 in which the company asserted its commitment to maintaining “a safe platform for everyone” and claimed “zero tolerance” for child sexual exploitation, non-consensual nudity, and unwanted sexual content. In that statement, X also indicated it would prohibit users from depicting people in “bikinis, underwear or other revealing attire,” but only in jurisdictions where such content is illegal.
This new investigation adds to X’s regulatory troubles in Europe. The Commission also announced it is extending a separate DSA investigation into the platform that began in 2023. That ongoing probe has already resulted in a €120 million (approximately $140 million at the time) fine imposed in December for violations of the DSA’s transparency requirements.
The dual investigations highlight the increasing scrutiny of AI technologies and their potential misuse. Deepfakes, synthetic media in which a person's likeness is superimposed onto other images or video, have become increasingly sophisticated and accessible, raising serious concerns about privacy, consent, and potential harm.
For X and its owner Elon Musk, who acquired the platform (formerly Twitter) in 2022, the investigations represent significant regulatory challenges in the European market. The DSA empowers the Commission to impose fines of up to 6% of a company’s global annual revenue for non-compliance.
As AI technology continues to advance, regulators worldwide are grappling with how to balance innovation with protecting individuals from exploitation and harm, particularly when it comes to deepfake technology and its potential for abuse.