In a sweeping assessment of the digital information landscape, experts at the 6th Asma Jahangir Conference warned that artificial intelligence and social media are transforming how information spreads—potentially deepening disinformation and censorship without proper guardrails in place.

At the session “AI and Social Media: The New Face of Disinformation,” panelists emphasized the urgent need for rights-based regulation, digital literacy initiatives, and greater platform accountability to counter growing concerns around synthetic content and coordinated misinformation campaigns.

H.E. Wim Geerts, Human Rights Ambassador for the Netherlands, highlighted the balance between countering harmful content and protecting free expression. “We are concerned about Peca,” he said, referencing Pakistan’s Prevention of Electronic Crimes Act, which contains controversial provisions criminalizing certain types of speech.

Geerts pointed to the European Union’s Digital Services Act as an alternative model that places enforcement responsibilities on tech companies while maintaining human rights standards. He noted that while AI offers transformative potential, it simultaneously enables unprecedented spread of false information, creating regulatory challenges for governments worldwide.

French lawyer Olivier Mailloux argued for an evolution of fundamental rights to encompass the digital sphere. “Fundamental rights should be extended to include digital rights. New rights should be introduced. We face excessive censorship,” he warned, cautioning against regulatory overreach in response to online harms.

Mailloux stressed the importance of removing harmful content while simultaneously building resilience through education. “We should teach in schools to raise awareness about propaganda. AI education is necessary to help children understand, use and responsibly apply technology, and to identify fake content,” he said.

Dr. Toyosi Akerele-Ogunsiji, an AI professional from Nigeria, highlighted a critical gap in many Global South contexts: widespread technology adoption without corresponding literacy. “There is a lack of media literacy. I will give the example of Nigeria, where people are using AI but do not fully understand it,” she explained, emphasizing that many users interact with AI-powered systems without recognizing it.

Her comments underscored a global digital divide not just in access, but in comprehension—particularly relevant for countries like Pakistan with young, increasingly connected populations who may lack critical assessment skills for AI-generated content.

Asad Baig, co-founder of Pakistan’s Media Matters for Democracy, provided a stark assessment of AI’s potential for real-world harm. “AI is now being used for disinformation; the recent elections in Bangladesh are a major example,” he stated, highlighting how synthetic content can undermine democratic processes.

When academic Iram Sultanah raised concerns about data theft, Baig emphasized that criminalizing speech or banning platforms proves “extremely counterproductive,” advocating instead for platform accountability and responsible media reporting.

The media perspective came from Niha Dagia, Executive Producer and Editor at Dawn News English, who detailed the growing pressure on newsrooms from AI-generated content. “There are noticeable spikes in AI-generated content reaching newsrooms, typically coinciding with major events and almost always associated with political parties,” she observed.

Dagia pointed to a troubling asymmetry: verification processes require time and resources, while fake content spreads instantly, particularly through encrypted platforms like WhatsApp. She noted that AI-generated content often receives “insane” engagement—reaching millions of views—because algorithms craft it to align with audience preferences and expectations.

The session’s recommendations called for Pakistan to reform several sections of the Prevention of Electronic Crimes Act, particularly provisions relating to “false information” under Sections 9, 10, 11, 20 and 26-A. Panelists advocated for a multi-stakeholder approach to regulation that respects fundamental rights rather than criminalizing speech.

Experts also emphasized the critical importance of incorporating media and digital literacy into educational curricula nationwide. As AI technologies become more sophisticated and accessible, they argued, the ability to identify synthetic content and propaganda becomes not just a technical skill but a civic necessity.

The discussion illuminated a pivotal moment in information governance, where policy choices about AI and social media regulation will significantly impact democratic discourse, public trust, and information integrity in the years ahead.

