In a technological era where truth battles manipulation, artificial intelligence stands at a crossroads as both an instigator of disinformation and a potential defense against it, experts warned at a recent academic forum in Manila.
Speaking at the 24th Jaime V. Ongpin Annual Memorial Lecture at the Ateneo Professional Schools, Professor Maria Mercedes Rodrigo highlighted the dual nature of AI technologies in today’s information landscape.
“AI has accelerated and democratized the capacity to create deepfakes — synthetic text, images, audio, video of events that never took place,” said Rodrigo, who heads the Ateneo Laboratory for the Learning Sciences.
She noted the Philippines’ particular vulnerability to digital misinformation. “The Philippines has been designated patient zero in the context of global disinformation, largely attributable to the significant prevalence of misinformation during the 2016 Philippine elections,” Rodrigo explained.
The observations stem from a comprehensive study based on interviews with 14 experts across various sectors, including industry professionals, academics, civil society members, and media representatives. Within the Philippine business sector, AI applications currently focus predominantly on fraud detection, anti-money laundering initiatives, and personalized marketing campaigns.
Rodrigo emphasized that while the Philippines possesses “internal capacity and expertise” regarding AI technologies, the country lacks scale in its approach to governance and application. “What we seem to lack is scale, and what we need is scale,” she stated.
The study proposes several remedies to combat AI-enabled disinformation, including establishing a national AI governance framework aligned with the Department of Trade and Industry’s National AI Strategy Roadmap 2.0. Researchers also advocate for enhanced media literacy programs to educate the public about AI’s capabilities and potential misuses.
A panel of experts followed Rodrigo’s presentation, each offering perspectives on AI’s complex relationship with truth and misinformation. Lawyer Jamael Jacob, director of the university’s data protection office, expressed skepticism about using AI to combat AI-generated falsehoods.
“Critics and proponents of AI both readily admit that AI isn’t perfect, and makes mistakes or ‘hallucinates,’” Jacob said. He stressed that transparency remains crucial for establishing responsibility in AI applications, though major technology companies often resist such transparency by classifying their algorithms as trade secrets.
Dominic Ligot, founder of Cirrolytix and chair of AI ethics and safety at the Philippine AI Business Association, took a more measured position, describing AI as “neither a savior nor a villain.” He advocated for approaches targeting the amplification mechanisms of misinformation rather than policing expression itself.
“We must treat deepfakes as a present operational threat rather than a distant one,” Ligot warned, noting that paid disinformation campaigns significantly outpace factual information in reach and virality.
Gemma Mendoza, Rappler’s head of digital services, focused on platform accountability and the proliferation of synthetic content. She described how “AI slop” — particularly hyperrealistic deepfake videos — increasingly drowns out authentic content on social media platforms.
Mendoza cited a Reuters investigation revealing that platforms like Meta profit from fraudulent content, charging suspected bad actors higher rates for promotion rather than investigating potential fraud. “While innovation should be welcomed, guard rails should be in place and platforms should not be allowed to profit from content theft,” she argued.
The forum underscored a growing recognition that AI regulations must balance innovation with ethical constraints and public safety. Experts agreed that without proper governance frameworks, AI’s potential benefits could be overshadowed by its capacity to undermine information integrity and democratic processes.
The full study, authored by Rodrigo, Rommel Jude Ong, Karen Claire Garcia, Charisse Erinn Flores, and Johanna Marion Torres, was funded by the Konrad Adenauer Stiftung Foundation and is available for free public access.