A Punjab court has ordered Meta Platforms Inc. to immediately remove and block AI-generated content impersonating Punjab Chief Minister Bhagwant Mann, describing the material as “indecent” and “sordid” with potential to disturb public order.

Judicial Magistrate Sarveesha Sharma, who reviewed the objectionable content, concluded it was “prima facie indecent and sordid at the very least.” The court expressed particular concern over the material targeting a high-profile public official, noting that “the tendency of the material to incite public disorder cannot be absolutely ruled out with certitude.”

The order comes in response to an application filed by the State Cyber Cell in Punjab, which sought urgent blocking of the content under the Information Technology Act, 2000 and the Information Technology Rules of 2021. Law enforcement had registered an FIR under multiple sections of the Bharatiya Nyaya Sanhita and Section 67 of the IT Act after discovering vulgar AI-generated videos and images of the Chief Minister circulating online.

According to authorities, a Facebook account named “Jagman Samra,” operated from Canada, was identified as the source of the synthetic media. Punjab’s cyber monitoring cell determined the content was created with “malicious intent to mislead the public” and damage Mann’s reputation.

The court’s directive highlights growing concerns about the misuse of artificial intelligence tools to create deceptive or defamatory content targeting public figures. As sophisticated AI tools become more accessible, the ease of producing convincing fake videos and images presents significant challenges for authorities worldwide.

“If the rights of an individual are not protected at the earliest and the competent court were to finally conclude that the material was in fact AI-generated or defamatory, no purpose would be served as the right would become infructuous,” Magistrate Sharma observed in the order.

The judge emphasized that while freedom of expression remains a fundamental right, it cannot extend to creating or distributing synthetic, defamatory material designed to deceive the public. The court acknowledged that expert analysis would be required during the trial to conclusively determine whether the content was AI-generated.

In recent years, India has strengthened its legal framework to combat digital crimes, particularly those involving deepfakes and synthetic media. This case represents one of the more high-profile applications of these laws to protect public officials from AI-generated impersonations.

Meta, formerly known as Facebook, faces increasing pressure globally to better monitor and remove synthetic content that could cause harm or spread misinformation. The company has invested in detection tools for identifying manipulated media but continues to face challenges keeping pace with rapidly evolving AI capabilities.

In addition to ordering Meta to remove the specified content, the court directed the company to preserve all associated data and records as evidence. Meta must also remove any “identical, mirror, or derivative versions” of the content when notified by Punjab’s Cyber Crime Department. Similarly, Google has been instructed to de-index the offensive material to ensure it remains unsearchable.

The court warned both tech giants that failure to comply with these directives would result in loss of immunity protections under Section 79(3)(b) and Section 85 of the IT Act, potentially exposing them to legal liability.

This case follows several similar incidents worldwide where AI-generated content has been used to impersonate politicians and public figures, creating concerns about election integrity and public trust. Legal experts suggest this ruling could set an important precedent for how Indian courts address AI-generated defamatory content moving forward, particularly as the country approaches several important regional elections.



© 2025 Disinformation Commission LLC. All rights reserved.