Pakistan’s AI-Driven Misinformation Reaching Critical Levels, Study Finds

The Centre of Excellence in Journalism (CEJ) at the Institute of Business Administration (IBA) has issued a stark warning about the escalating threat of AI-driven misinformation in Pakistan. According to a comprehensive report released Monday, 2025 marked a watershed moment in the proliferation of sophisticated deepfakes and misleading content across the country’s information ecosystem.

The two-year study, conducted between December 2023 and November 2025, analyzed 1,026 potentially false claims, with 513 subjected to rigorous verification. The investigation examined misinformation across several domains including politics, religion, conflicts, and social issues, revealing disturbing patterns in how falsehoods spread and gain traction.

Azhar Abbas, Chairperson of the CEJ Advisory Board, highlighted a troubling correlation between media restrictions and misinformation during the report launch. “When mainstream media is silenced, the vacuum is quickly filled by unverified social media content, anonymous platforms, and AI-driven networks,” Abbas said, emphasizing how constraints on traditional journalism have created fertile ground for disinformation to flourish.

The report identifies politics as the most exploited arena for misinformation. False narratives were systematically deployed to undermine electoral confidence during Pakistan’s 2024 general elections, discredit political opponents, and erode public trust in state institutions.

Notably, the study traced significant misinformation to supporters of the opposition Pakistan Tehreek-e-Insaf (PTI) and Indian disinformation networks. However, it also implicated the ruling Pakistan Muslim League-Nawaz (PML-N) and government officials in propagating misleading content, suggesting the problem cuts across political divisions.

Shahzeb Jillani, CEJ Director, described the double-edged nature of artificial intelligence technology. While acknowledging AI’s beneficial applications in medicine, commerce, and agriculture, he emphasized its troubling capacity to generate and amplify misinformation at unprecedented scale and sophistication.

“The challenge we face isn’t just technological—it’s societal,” Jillani explained. “To effectively counter this growing tide of misinformation, we must expand verification initiatives through partnerships with media organizations, academic institutions, and public bodies.”

The iVerify Pakistan project, launched in partnership with the United Nations Development Programme (UNDP) ahead of the 2024 elections, has emerged as a critical bulwark against digital misinformation. The platform’s work underscores the sophisticated tactics deployed by misinformation networks, including half-truths, distorted context, deliberate mistranslations, impersonation of journalists, and increasingly sophisticated AI-generated content.

Federal Information Minister Attaullah Tarar acknowledged in video remarks that the complexity and scale of disinformation now pose existential challenges to national stability. “Misinformation cannot be addressed by government alone,” Tarar stated, expressing the government’s readiness to support credible, impartial fact-checking initiatives.

IBA Karachi Director S. Akbar Zaidi highlighted how regional tensions and civil unrest create particularly vulnerable moments for misinformation spread. Referencing developments in neighboring Iran, he emphasized the critical importance of reliable verification mechanisms during periods of heightened uncertainty.

The report identifies social media platform X (formerly Twitter) as the primary vector for misinformation amplification, while noting the invisible yet significant role of encrypted messaging services like WhatsApp in shaping public opinion. These closed networks remain largely impervious to oversight, creating blind spots in efforts to track and counter false information.

As Pakistan grapples with this evolving threat landscape, the CEJ report serves as both warning and call to action. With AI technology becoming increasingly accessible and sophisticated, the study suggests that collaborative, multi-sector approaches to verification and media literacy will be essential to preserving information integrity in Pakistan’s digital public square.


