A deepfake video circulating on social media falsely depicts former Ako Bicol representative Zaldy Co criticizing corrupt officials, according to digital forensic analysis. The manipulated clip has garnered substantial engagement, with over 112,000 views and thousands of reactions since being posted on Facebook on December 8, 2025.
In the fabricated video, Co appears to say, “They have no real concern for the people. They have turned the people’s money into a business and the public treasury into their personal bank. What’s worse, the powerful families themselves have no real concern for the people.”
The video has deceived numerous users who believed it to be authentic, with some commenters expressing agreement with the supposed statement. However, technical analysis reveals clear signs of manipulation.
Deepfake detector Hive Moderation flagged the video as 70% likely to be AI-generated, while Sight Engine identified a screenshot of Co’s face as 93% likely created by artificial intelligence. The post contained no disclaimer about its AI-generated nature, contributing to public confusion.
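Detector scores like the 70% and 93% figures above are typically treated as independent signals, with media flagged when any detector's confidence clears a threshold. A minimal illustrative sketch of that decision logic follows; the threshold value and the any-detector rule are assumptions for illustration, not the actual methodology of Hive Moderation or Sight Engine.

```python
# Hypothetical sketch of combining AI-detector confidence scores.
# The 0.65 threshold and "flag if any detector exceeds it" rule are
# illustrative assumptions, not either tool's real decision process.

def flag_as_likely_ai(scores, threshold=0.65):
    """Flag media as likely AI-generated if any detector's confidence
    meets the threshold; also report the highest score seen."""
    top = max(scores.values())
    return top >= threshold, top

# Scores reported in this case: 70% for the video, 93% for the face.
scores = {"hive_moderation_video": 0.70, "sight_engine_face": 0.93}
flagged, confidence = flag_as_likely_ai(scores)
print(flagged, confidence)  # True 0.93
```

In practice a single high score is suggestive rather than conclusive, which is why the analysis above also compares the clip against Co's authentic recordings.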
While Co did release a series of genuine “tell-all” videos in November 2025 making serious allegations against government officials, the statement in the circulating video does not appear in any of his authentic recordings from November 24-26, 2025.
The closest genuine statement from Co came in his November 26 video, where he said, “They have no concern for the people, especially the poor. Power is being abused, and the people’s money is being turned into a business. It may be painful to accept, but this is the truth: even the most powerful families are involved in this system.”
In this authentic clip, Co accused First Lady Liza Araneta-Marcos and her brother Martin Araneta of improperly influencing agricultural imports, allegedly causing price increases for products like round scad (galunggong). These claims were later denied by Agriculture Secretary Francisco Tiu Laurel Jr., who stated the individuals mentioned had never interfered with departmental operations.
Visual analysis also reveals discrepancies between the genuine and AI-generated videos. In authentic footage, Co typically reads from a script and doesn’t maintain consistent eye contact with the camera, unlike in the fabricated clip.
The emergence of this deepfake comes during a politically charged period following Co’s significant allegations about budget improprieties. In November 2025, Co claimed that President Ferdinand Marcos Jr. and former House Speaker Martin Romualdez orchestrated the insertion of P100 billion worth of projects in the 2025 national budget, with the President allegedly receiving P25 billion in kickbacks.
Co further alleged that presidential son and Ilocos Norte Representative Sandro Marcos was involved in P50.9 billion budget insertions. “The President is not the only one who has insertions in the budget. Even Congressman Sandro Marcos puts something in every year,” Co stated in authentic footage.
Co previously headed the House appropriations committee under the 20th Congress until January 2025, when he was removed amid what has been described as a “budget mess.” The circulation of this deepfake video represents another instance in a pattern of digital misinformation surrounding Co and his allegations against high-ranking government officials.
As AI-generated content becomes increasingly sophisticated, this case highlights the growing challenge of distinguishing between authentic and manipulated media in political discourse.