Misinformation Reached “Unprecedented Levels” in 2025, CEJ Study Reveals
AI-powered misinformation reached unprecedented levels of sophistication and scale in 2025, according to a comprehensive report released Monday by the Centre of Excellence in Journalism (CEJ) at the Institute of Business Administration (IBA) in Karachi.
The two-year study, spanning December 2023 to November 2025, analyzed more than 1,000 potentially false claims circulating across Pakistani media platforms, with particular focus on the dangers of deepfakes and AI-generated content.
Researchers from CEJ’s fact-checking initiative, iVerify Pakistan, conducted detailed verification on 513 claims covering sensitive areas including politics, religion, armed conflicts, and contentious social issues. The findings paint a concerning picture of how artificial intelligence is transforming the misinformation landscape.
“We’ve observed a significant evolution in both the quality and quantity of false information,” said Azhar Abbas, Chairperson of the CEJ Advisory Board, during the report launch. “What makes this particularly troubling is the concurrent tightening of restrictions on mainstream media.”
Abbas highlighted a dangerous feedback loop: as legitimate news outlets face increasing constraints on reporting, information vacuums are quickly filled by unverified social media content, anonymous platforms, and sophisticated AI-driven networks.
“When journalists are prevented from presenting verified facts, combating misinformation becomes exponentially more difficult,” Abbas noted in his keynote address. “This creates perfect conditions for artificial intelligence tools to spread false narratives that appear increasingly realistic.”
The report identified political misinformation as the most prevalent category, with fabricated content routinely deployed to undermine electoral confidence, discredit opponents, and erode public trust in state institutions. The finding reflects the project's origins: iVerify Pakistan was launched ahead of Pakistan's 2024 general elections in partnership with the United Nations Development Programme (UNDP).
Media analysts suggest the findings align with global trends showing deepfakes and AI-generated content becoming more sophisticated and harder to detect. Unlike earlier generations of false content that contained obvious flaws, newer AI tools produce material that can convincingly mimic legitimate news sources in both style and presentation.
The CEJ report comes amid growing international concern about AI’s role in information manipulation. Several countries have begun implementing regulatory frameworks to address AI-generated disinformation, though Pakistan has yet to develop comprehensive legislation in this area.
For Pakistan’s media landscape, already contending with significant regulatory challenges and political pressures, the findings present a sobering reality check. The country has witnessed increasing restrictions on traditional press freedoms over recent years, according to international press freedom monitors, creating conditions where unverified information can flourish.
Media literacy experts point out that the combination of advanced AI tools and restricted information environments creates particularly challenging circumstances for consumers attempting to distinguish fact from fiction.
“When people can’t access reliable information from traditional sources, they’re more vulnerable to believing what they encounter online, especially when that content has been engineered to appear credible,” noted one media scholar familiar with the Pakistani information ecosystem.
The CEJ, established as Pakistan’s premier journalism training institution, has expanded its focus to include monitoring and combating digital misinformation through initiatives like iVerify Pakistan. Their work reflects growing recognition that journalism education must evolve to address emerging technological threats to information integrity.
As Pakistan approaches future electoral cycles and navigates complex political transitions, the report's findings point to an urgent need for multi-stakeholder approaches to combating AI-powered misinformation: enhanced media literacy programs, technological tools for detecting deepfakes, and regulatory frameworks that balance free expression with protections against harmful content.
The complete report is expected to be made available on the CEJ’s website for public access, providing detailed breakdowns of misinformation trends across different sectors of Pakistani society.
16 Comments
This report highlights the urgent need for a coordinated, multistakeholder response to the AI-driven misinformation crisis. Collaboration between governments, tech companies, media outlets, and civil society will be key.
Well said. Developing shared norms, standards, and best practices across sectors will be crucial to effectively tackling this global challenge.
The findings on the sophistication and scale of AI-driven misinformation are quite alarming. This is a global issue that will require international cooperation to address effectively.
Absolutely. Regulatory frameworks and collaborative fact-checking initiatives across borders will be crucial to curb the spread of this type of disinformation.
This is a complex and multifaceted challenge that will require a multi-pronged approach. Strengthening digital literacy, enhancing fact-checking, and developing robust policies are all crucial components.
The findings on the scale and sophistication of AI-powered misinformation are deeply troubling. We must invest in cutting-edge research and technological solutions to stay ahead of bad actors exploiting these emerging technologies.
The report’s emphasis on the dangers of deepfakes and AI-generated content is well-founded. These technologies have the potential to erode public trust in media and undermine democratic processes.
Absolutely. Developing technical solutions to detect and mitigate these types of synthetic media will be critical to maintaining information integrity.
The report’s finding that misinformation has reached “unprecedented levels” is deeply concerning. We must redouble efforts to promote media literacy and critical thinking to empower the public.
Agreed. Educating people on how to spot and verify information online should be a top priority for policymakers and tech platforms alike.
This report highlights the concerning rise of AI-powered misinformation and the challenges it poses for media and the public. It’s critical that we find ways to combat this growing threat to information integrity.
Agreed. Strengthening media literacy and fact-checking efforts will be key to equipping people to navigate the complex digital landscape.
I’m curious to learn more about the specific tactics and techniques used by bad actors leveraging AI to generate misinformation. Understanding the methods will help develop better countermeasures.
That’s a great point. Detailed case studies on the AI models and techniques employed would provide valuable insights for the fact-checking community.
The concurrent tightening of restrictions on mainstream media is an alarming trend that further exacerbates the misinformation challenge. Ensuring a free and vibrant press is essential for a healthy information ecosystem.
Agreed. Policymakers must strike a careful balance between addressing misinformation and safeguarding press freedoms, which are vital to a functioning democracy.