Indonesian Police Launch Investigation into AI-Generated Deepfakes on X Platform

Indonesia’s National Police Criminal Investigation Department (Bareskrim) has initiated investigations into multiple cases involving deepfakes and digital media manipulation created through artificial intelligence tools, officials confirmed Wednesday.

The investigation specifically targets potentially illegal content created using Grok, an AI service available on the social media platform X, formerly known as Twitter. Authorities are examining whether these manipulations constitute criminal violations under Indonesia’s electronic data laws.

“We are currently investigating in that direction. This has become a focus of our Cyber Crime Directorate,” said Brigadier General Himawan Bayu Aji, Director of Cyber Crime at Bareskrim, during a press briefing on January 7.

The police probe comes amid growing concerns about the misuse of AI tools to create deceptive or harmful content. In recent weeks, manipulated images generated using Grok have proliferated across X, with users prompting the AI to alter photographs in inappropriate ways, including attempts to create non-consensual nude imagery.

“When it comes to AI, as long as it can be established that there is electronic data manipulation, it can be treated as a criminal offense,” Himawan explained. He added that several cases are currently under review, with potential criminal charges looming if investigators determine the actions violate Indonesia’s laws on electronic information and transactions.

Indonesia has one of the largest social media user bases in the world, with an estimated 170 million active users across various platforms. The country has previously shown willingness to regulate digital platforms, including temporarily blocking access to certain services that failed to comply with local regulations.

Digital rights experts note that this case reflects a broader trend of nations grappling with the legal and ethical implications of generative AI technology. Indonesia’s Information and Electronic Transactions Law (UU ITE) contains provisions that could potentially apply to deepfakes, particularly those involving defamation or non-consensual intimate imagery.

The investigation into Grok highlights the challenges facing regulators worldwide as AI tools become increasingly accessible to the general public. Grok, developed by xAI, was launched by X owner Elon Musk in November 2023 as a competitor to other AI chatbots. The service was initially available only to premium subscribers of X.

This isn’t the first time Indonesia has expressed concerns about AI services on the X platform. Government officials previously warned of a possible ban on Grok if the service failed to implement adequate safeguards against misuse.

Digital rights advocates have called for a balanced approach that protects individuals from harm while avoiding overly broad restrictions that could impede legitimate technological development. They emphasize the importance of clear guidelines for AI developers and platforms regarding content moderation and safety features.

The Indonesian police investigation comes as other countries, including the United States and European Union members, are also developing frameworks to address AI-generated deepfakes and misinformation.

For social media platforms operating in Indonesia, this investigation signals increased scrutiny of AI features and potential liability for harmful content generated through their services. Companies may face pressure to implement stronger content filters and verification systems or risk regulatory consequences in one of Southeast Asia’s largest digital markets.

Authorities have not yet announced a timeline for the conclusion of their investigation or potential charges against specific individuals or entities.

16 Comments

  1. Jennifer Z. Rodriguez

    This is a concerning development that highlights the need for robust oversight and governance frameworks around AI-powered media manipulation tools. I commend the Indonesian authorities for taking proactive steps to investigate potential criminal violations.

    • Absolutely. As these technologies continue to advance, coordinated global efforts will be essential to mitigate the risks and ensure they are used responsibly.

  2. Olivia Thomas

    It’s good to see the Indonesian authorities taking proactive steps to combat the misuse of AI for manipulative purposes. Deepfakes can be extremely damaging, especially when used to create non-consensual content. This investigation is an important step.

    • Yes, and I hope other countries will follow suit in cracking down on these types of abuses. Coordinated global efforts may be needed to effectively address this challenge.

  3. John Martinez

    While AI can be a powerful tool, it’s also ripe for abuse. I hope the investigation can shed light on the specific challenges posed by AI-generated deepfakes and determine appropriate legal and regulatory frameworks to address this emerging threat.

    • Absolutely. Striking the right balance between innovation and oversight will be critical as these technologies continue to evolve.

  4. Jennifer Jones

    While AI-powered photo manipulation can have benign applications, the potential for abuse is clear. I’m curious to learn more about the specific legal frameworks the Indonesian authorities are exploring to combat this issue.

    • Elizabeth Davis

      Agreed. Developing appropriate legal and regulatory safeguards will be critical to ensuring these technologies are not exploited for nefarious purposes.

  5. Elizabeth White

    This is a complex issue without easy solutions. On one hand, AI and digital manipulation tools can have legitimate and valuable applications. But the risks of misuse are also substantial. Careful policymaking will be needed to address this challenge.

    • Agreed. Striking the right balance between enabling innovation and protecting the public will require nuanced, evidence-based approaches.

  6. The proliferation of AI-generated deepfakes is a troubling trend that demands a robust response from authorities. I’m glad to see the Indonesian police taking this issue seriously and launching a comprehensive investigation.

    • Yes, this is an important step. I hope the findings of this investigation can inform policy responses not just in Indonesia, but globally, to address this emerging threat.

  7. Oliver Thompson

    This is a concerning development. Deepfakes and AI-generated media manipulation can be extremely damaging if misused. I’m glad the Indonesian authorities are taking this issue seriously and investigating potential criminal violations.

    • Patricia Hernandez

      Agreed. With the rapid advancement of AI technology, it’s crucial that governments stay vigilant and crack down on any illegal or unethical applications.

  8. Amelia Taylor

    The potential for AI-generated deepfakes to be used for malicious ends is quite concerning. I’m curious to see what specific legal and regulatory measures the Indonesian authorities develop in response to this investigation.

    • Ava C. Martinez

      Me too. Establishing clear guidelines and enforcement mechanisms will be crucial to deter and punish any unlawful use of these technologies.
