In the shadowy corners of Pakistan’s social media, a disturbing trend is emerging that threatens to blur the lines between reality and fiction. Artificial intelligence-generated deepfakes are increasingly being weaponized to target journalists, spread misinformation, and influence public opinion in what experts describe as a dangerous new frontier in the country’s information wars.
The recent case of journalist Benazir Shah highlights the alarming sophistication of these digital deceptions. On November 8, an account called ‘PakVocals’ posted a video purporting to show Shah dancing in a nightclub, accompanied by derogatory comments questioning her professional credibility. The video quickly garnered over half a million views.
For those with a trained eye, however, the forgery contained telltale signs. Frame-by-frame analysis revealed momentary glitches—a flickering skin tone and rippling face outline—that betrayed its artificial origin. Further investigation using Google Lens identified the original footage featuring Indian actress Jannat Zubair Rahmani, with only Shah’s face digitally superimposed.
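The frame-by-frame technique used here can be illustrated in a few lines of code. The sketch below is a minimal demonstration, not a forensic tool: it scores the average pixel change between consecutive frames and flags statistical outliers, the kind of abrupt flicker in skin tone or face outline that betrayed the forgery. The synthetic frames, threshold, and function names are assumptions for illustration only.

```python
import numpy as np

def flicker_scores(frames):
    """Mean absolute luminance change between consecutive frames.

    Sudden spikes can indicate the momentary glitches (skin-tone
    flicker, rippling face outlines) that frame-by-frame review
    looks for in suspected deepfakes.
    """
    return [
        float(np.abs(cur.astype(np.int16) - prev.astype(np.int16)).mean())
        for prev, cur in zip(frames, frames[1:])
    ]

def flag_glitches(scores, z=2.0):
    """Indices of transitions whose change score exceeds mean + z * std."""
    arr = np.array(scores)
    thresh = arr.mean() + z * arr.std()
    return [i for i, s in enumerate(scores) if s > thresh]

# Synthetic demo: 20 near-identical 8x8 frames with one abrupt
# flicker inserted at frame 10 (an assumed stand-in for real video).
rng = np.random.default_rng(0)
frames = [
    np.full((8, 8), 120, dtype=np.uint8)
    + rng.integers(0, 2, (8, 8), dtype=np.uint8)
    for _ in range(20)
]
frames[10] = np.full((8, 8), 200, dtype=np.uint8)  # the glitch frame
print(flag_glitches(flicker_scores(frames)))
```

In practice the frames would come from decoding the video itself, and real detection combines many such signals; this only shows why a single anomalous frame stands out so clearly under per-frame inspection.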
The incident took a troubling turn when Shah publicly noted that the account responsible was followed by a government minister. Though ‘PakVocals’ subsequently issued an apology citing religious concerns about slander, the damaging video remained online, suggesting the true intent was reputational harm.
Just ten days later, on November 18, another account named ‘Princess Hania Shah’ posted a new deepfake of the journalist, labeling her a “traitor.” Despite the crude execution of this second attempt, it still amassed more than 180,000 views before being identified as fraudulent.
These targeted attacks against journalists represent just one facet of a broader problem. During this year’s Israel-Iran conflict, Pakistani news outlets broadcast an AI-generated video supposedly showing Israeli analysts fleeing a studio during an Iranian strike. Close inspection revealed the hallmarks of synthetic media—unnatural movements, suspiciously perfect camera work, and flat audio—yet the clip was widely presented as authentic footage.
In another disturbing example, a fabricated video circulated showing the alleged abuse of a Baloch woman by Pakistani soldiers. Technical analysis exposed multiple inconsistencies: uniform name tags displaying nonsensical text like “PRMRACCH,” anatomical impossibilities such as merged hands, and an American-accented narration incongruous with the supposed desert setting. Audio analysis even detected 0.21 seconds of complete silence at the start of the clip, an impossibility in a genuine outdoor recording, where ambient noise is always present.
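The leading-silence check described above is simple to reproduce. The sketch below is a hypothetical illustration, assuming the audio has already been decoded to a sample array: it measures how long the signal stays below a small amplitude threshold at the head of the clip. The sample rate, threshold, and synthetic clip are assumptions for demonstration.

```python
import numpy as np

def leading_silence_seconds(samples, rate, threshold=1e-4):
    """Duration of dead silence at the start of a recording.

    Genuine outdoor audio always carries ambient noise, so a stretch
    of exactly-zero (or near-zero) samples at the head of a clip is
    a red flag for synthetically generated audio.
    """
    above = np.abs(samples) > threshold
    if not above.any():
        return len(samples) / rate  # entire clip is silent
    return int(np.argmax(above)) / rate  # first sample over the floor

# Synthetic demo at an assumed 48 kHz: 0.21 s of digital silence,
# then one second of low-level "ambient" noise.
rate = 48_000
rng = np.random.default_rng(1)
silence = np.zeros(int(0.21 * rate))
ambience = rng.normal(0, 0.01, rate)
clip = np.concatenate([silence, ambience])
print(round(leading_silence_seconds(clip, rate), 2))  # expect ~0.21
```

A real analysis would decode the audio track from the video file first (for example with ffmpeg) and also examine the noise floor throughout the clip, but the principle is the same: true silence is a signature of synthesis, not of a microphone in the field.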
The proliferation of these sophisticated fakes comes as no surprise to media researchers. A recent BBC investigation identified Pakistan as an emerging global hub for what they term “AI Slop”—mass-produced fake content leveraging real-world settings to spread misinformation for algorithmic engagement and profit. The economic incentive is straightforward: viral content generates revenue regardless of its veracity.
The technology landscape has evolved rapidly since early 2024, when AI videos were still largely recognizable as artificial. The release of OpenAI’s Sora video generator sparked intense competition among tech giants like X (formerly Twitter) and Google, dramatically improving the quality and accessibility of synthetic media creation tools.
Social media platforms have struggled to address the challenge effectively. X’s “Community Notes” system, designed to flag misinformation, has repeatedly failed to identify and label deepfakes. Even more concerning, X’s own AI chatbot, Grok, has been unable to reliably identify AI-generated content when asked, exposing critical limitations in automated detection systems.
Some progress is being made on the technical front. Google recently announced that its Gemini chatbot can now detect invisible SynthID watermarks embedded in AI-generated images created with Google’s own tools—a partial solution that doesn’t address content created using other platforms.
For now, the burden of verification falls heavily on individuals and independent fact-checkers. While sophisticated analysis tools exist and are freely available, they require time and expertise to use effectively. The asymmetry is stark: creating convincing fakes is becoming increasingly simple, while identifying them demands vigilance and technical knowledge that most casual consumers of online content lack.
As Pakistan grapples with this new dimension of digital disinformation, the challenge extends beyond technology to questions of media literacy, platform responsibility, and the fundamental nature of truth in the digital age. Without robust safeguards and greater public awareness, the risk grows that synthetic media will further polarize discourse and undermine trust in legitimate journalism at a time when factual reporting is more crucial than ever.