In a significant legal move highlighting the growing threat of AI-generated misinformation in politics, Congress MP Shashi Tharoor has petitioned the Delhi High Court to combat what he describes as a “sophisticated and malicious” deepfake campaign targeting his public image.
Justice Mini Pushkarna has issued summons to social media platforms X (formerly Twitter) and Meta Platforms, along with the Union government, in response to Tharoor’s lawsuit. During Friday’s hearing, the court indicated it would issue an interim order to protect the MP’s personality and publicity rights.
The Thiruvananthapuram MP, who currently chairs the Parliamentary Standing Committee on External Affairs and previously served as Minister of State for External Affairs, contends that the fabricated videos directly undermine his patriotic credentials and professional reputation.
According to senior advocate Amit Sibal, who represents Tharoor, unknown entities have repeatedly misappropriated the MP’s face, voice, and distinctive mannerisms to create deceptively realistic content that falsely portrays him praising Pakistan and making other politically sensitive statements.
“I am a former external affairs minister. It matters to India’s standing as well,” Sibal emphasized during proceedings. “It is liable to be misused by foreign states.”
The legal team noted that despite mainstream media outlets like India Today publicly identifying these videos as fakes, the fabricated content continues to circulate widely, leaving many viewers with the impression that the statements are authentic.
The lawsuit traces the disinformation campaign’s origins to approximately March 2026, a particularly sensitive period when Tharoor was actively campaigning during Kerala Legislative Assembly elections. This timing suggests a deliberate attempt to influence voters and interfere with democratic processes.
The court filing characterizes the deepfakes as “unauthorized cloning and exploitation” of Tharoor’s likeness, arguing that sophisticated machine learning techniques were deployed to mimic his distinctive vocabulary and speech patterns, making the disinformation especially convincing and therefore damaging.
While some offending content had been removed following earlier police complaints and IT Rules grievances, Sibal informed the court that the material frequently reappears through new links, creating an ongoing challenge for containment. During Friday’s proceedings, Meta’s legal representative confirmed that the offending Instagram content had been made inaccessible earlier that morning.
This case highlights the emerging legal frontier of personality rights in the age of artificial intelligence. Tharoor joins a growing cohort of prominent public figures who have sought similar judicial intervention against AI impersonation.
The Delhi High Court has recently granted comparable interim relief to protect the personality rights of numerous celebrities, including actors Aishwarya Rai Bachchan, Abhishek Bachchan, Salman Khan, Sonakshi Sinha, Allu Arjun, and Vivek Oberoi. Others receiving such protections include cricketer Gautam Gambhir, Andhra Pradesh Deputy Chief Minister Pawan Kalyan, Art of Living founder Sri Sri Ravi Shankar, and various prominent journalists and digital content creators.
The prevalence of such cases reflects the rapidly evolving challenge posed by synthetic media. As AI tools become more accessible and sophisticated, their potential misuse in political contexts poses a particular threat to democratic institutions and processes.
The court’s forthcoming interim order is expected to establish parameters for the expedited removal of deepfakes targeting Tharoor across major digital platforms. The case may establish important precedents for how India’s legal system addresses the intersection of technology, free speech, and personal rights in an era where the line between authentic and artificial content continues to blur.
Legal experts note that such cases underscore the need for comprehensive regulatory frameworks capable of addressing AI-generated content while balancing free expression concerns. The question will only grow more pressing as India approaches future election cycles in which such technologies could significantly shape voter perceptions and democratic outcomes.
12 Comments
Deepfakes are a serious threat that demands urgent attention. I’m glad to see Tharoor taking legal action to combat this sophisticated disinformation campaign targeting his image and reputation.
The use of deepfakes for political attacks is a concerning trend that must be addressed. I commend Tharoor for taking legal action and hope the courts can provide a robust framework for combating this form of digital deception.
This case highlights the critical importance of developing effective countermeasures against deepfakes. I’m hopeful the courts will provide clear guidance on how to protect individuals’ rights and ensure the integrity of online content.
Deepfake technology has the potential to cause immense damage to public figures and democratic discourse. I applaud Tharoor for taking legal action and hope the courts can set a strong precedent in this case.
Tharoor’s lawsuit is an important step in holding social media platforms accountable for the proliferation of AI-generated misinformation. Tackling deepfakes should be a top priority for policymakers and tech companies.
Protecting political leaders’ public image and personality rights is crucial in the era of deepfakes. I’m glad to see Tharoor taking legal action to combat these malicious attempts to smear his reputation and patriotic credentials.
Agreed. Deepfakes pose a serious threat to democracy and free speech. Robust legal frameworks are needed to deter their misuse and safeguard the integrity of public discourse.
It’s alarming to see how deepfake technology can be weaponized for political attacks. I hope the courts can set a strong precedent in this case to discourage such malicious uses of AI and protect public figures’ rights.
This case underscores the need for robust safeguards and regulations around the use of deepfake technology. I hope the courts can provide clear guidance on how to address these emerging challenges to digital integrity.
This is a concerning case of AI-powered misinformation targeting a public figure. Deepfakes can be incredibly realistic and damaging, undermining trust and distorting the truth. I hope the courts can swiftly address this issue and hold the responsible parties accountable.
Tharoor’s case highlights the growing challenge of verifying online content authenticity. As AI technologies advance, the risks of misinformation and political manipulation will only increase. This is an important test case for the courts.
Tharoor’s lawsuit is a timely and necessary response to the growing threat of AI-powered misinformation. Deepfakes pose a serious challenge to democracy, and I hope this case leads to meaningful progress in addressing the issue.