Researchers Launch Project to Combat Fake News with AI and Watermark Technology
Social media has become a breeding ground for fake news and disinformation, and recent technological advances have made the problem increasingly difficult to address. Sophisticated photo and video editing tools, and especially AI-powered deepfakes, which combine and superimpose media elements to produce convincing fake footage, have made identifying authentic content harder than ever.
In response to this growing threat, researchers from the Universitat Oberta de Catalunya (UOC) in Spain have initiated an international project aimed at developing new technology to help users differentiate between original and manipulated multimedia content. The project, called DISSIMILAR, brings together experts from the UOC’s K-riptography and Information Security for Open Networks (KISON) and Communication Networks & Social Change (CNSC) research groups.
“The project has two objectives: firstly, to provide content creators with tools to watermark their creations, thus making any modification easily detectable; and secondly, to offer social media users tools based on latest-generation signal processing and machine learning methods to detect fake digital content,” explained Professor David Megías, KISON lead researcher and director of the Internet Interdisciplinary Institute (IN3) at UOC.
The international collaboration includes researchers from the Warsaw University of Technology in Poland and Okayama University in Japan, emphasizing the global nature of the disinformation challenge. Unlike many technology initiatives, DISSIMILAR plans to incorporate cultural dimensions and user perspectives throughout all stages of development, from initial design to usability testing.
Currently, fake news detection tools fall into two main categories. The first involves automatic detection through machine learning, though only a few prototypes exist. The second relies on human verification systems, such as those run by Facebook and Twitter, in which reviewers judge whether content is authentic.
According to Megías, these centralized human-intervention systems risk introducing biases and potential censorship. “We believe that an objective assessment based on technological tools might be a better option, provided that users have the last word on deciding, on the basis of a pre-evaluation, whether they can trust certain content or not,” he said.
The project takes a multi-faceted approach, recognizing that no single technology can effectively combat all forms of fake news. “That’s why we’ve opted to explore the concealment of information (watermarks), digital content forensics analysis techniques (to a great extent based on signal processing) and, it goes without saying, machine learning,” Megías noted.
Digital watermarking represents a crucial component of their approach. These techniques embed imperceptible information within original files to enable automatic verification of multimedia content. This technology can serve multiple purposes: confirming that content comes from an official news agency, functioning as an authentication mark that disappears if content is modified, or tracing information back to its source to identify accounts spreading fake content.
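To make the "authentication mark that disappears" idea concrete, the sketch below shows one classic fragile-watermark technique in Python: a least-significant-bit (LSB) mark that hides a hash of the image in its own pixels. This is an illustrative assumption, not DISSIMILAR's actual scheme, and the helper names (`embed_fragile_mark`, `verify_fragile_mark`) are hypothetical. A toy scheme like this also has known blind spots (edits confined to the LSBs themselves can slip through), which is one reason the project combines watermarking with other tools.

```python
# Minimal sketch of a fragile LSB watermark. Assumption: this is NOT the
# DISSIMILAR scheme, which is not described at this level of detail; it only
# illustrates how a mark can "disappear" when content is modified.
import hashlib
import numpy as np

def embed_fragile_mark(pixels: np.ndarray) -> np.ndarray:
    """Embed a SHA-256 digest of the pixel data into the first 256 LSBs."""
    flat = pixels.flatten().astype(np.uint8)  # flatten() returns a copy
    flat &= 0xFE                              # clear every least-significant bit
    digest = hashlib.sha256(flat.tobytes()).digest()             # hash fixed bits
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))  # 256 bits
    flat[:256] |= bits                        # hide the digest in the LSBs
    return flat.reshape(pixels.shape)

def verify_fragile_mark(pixels: np.ndarray) -> bool:
    """Return True only if the hidden digest still matches the pixel data."""
    flat = pixels.flatten().astype(np.uint8)
    stored = np.packbits(flat[:256] & 1).tobytes()
    flat &= 0xFE                              # recompute over the same fixed bits
    return stored == hashlib.sha256(flat.tobytes()).digest()

if __name__ == "__main__":
    image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    marked = embed_fragile_mark(image)
    print(verify_fragile_mark(marked))   # True: content is intact
    marked[10, 10, 0] ^= 0x80            # a single-pixel edit...
    print(verify_fragile_mark(marked))   # False: the mark "disappears"
```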
The researchers will complement watermarking with digital forensics analysis techniques that leverage signal processing technology to detect intrinsic distortions created during the production or modification of audiovisual files. These alterations, which include sensor noise and optical distortion, can be detected through machine learning models, creating a more robust verification system.
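As one illustration of that signal-processing step, here is a simplified sketch of the widely used PRNU-style pipeline: denoise the image, subtract to obtain a noise residual, and correlate the residual with a known camera fingerprint. This reflects the general class of sensor-noise techniques rather than the project's published method, and `noise_residual` and `fingerprint_correlation` are hypothetical helpers.

```python
# Simplified sensor-noise (PRNU-style) forensics sketch. Assumption: the
# project's forensics tools are not specified in this detail; this only
# illustrates the general "denoise, subtract, correlate" approach.
import numpy as np
from scipy.ndimage import median_filter

def noise_residual(gray: np.ndarray) -> np.ndarray:
    """Estimate the sensor-noise residual: the image minus a denoised copy."""
    denoised = median_filter(gray, size=3).astype(np.float64)
    return gray.astype(np.float64) - denoised

def fingerprint_correlation(residual: np.ndarray, fingerprint: np.ndarray) -> float:
    """Normalized correlation between a residual and a known camera fingerprint.

    A region whose residual no longer correlates with the expected fingerprint
    is one cue that the region was spliced or otherwise altered.
    """
    r = residual - residual.mean()
    f = fingerprint - fingerprint.mean()
    denom = np.linalg.norm(r) * np.linalg.norm(f) + 1e-12
    return float((r * f).sum() / denom)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A stand-in "camera fingerprint" over a smooth synthetic scene.
    fingerprint = rng.normal(0, 1, (128, 128))
    scene = np.add.outer(np.linspace(0, 100, 128), np.linspace(0, 100, 128))
    genuine = scene + fingerprint                    # from the "known" camera
    foreign = scene + rng.normal(0, 1, (128, 128))   # from a different source
    print(fingerprint_correlation(noise_residual(genuine), fingerprint))  # high
    print(fingerprint_correlation(noise_residual(foreign), fingerprint))  # near 0
```

In a full system, correlation scores like these would be one of several features fed to the machine learning models the researchers describe.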
“The idea is that the combination of all these tools improves outcomes when compared with the use of single solutions,” Megías emphasized.
One distinctive aspect of DISSIMILAR is its holistic approach that incorporates user perceptions and cultural components related to fake news. The project will conduct user studies across Spain, Poland, and Japan, acknowledging that approaches to media literacy and fake news detection may vary significantly across cultures.
As deepfakes and other forms of manipulated media grow more sophisticated, this research could give social media users critical tools for navigating an increasingly complex information landscape. It could also help curb the viral spread of misleading content, which carries significant social and political consequences.
7 Comments
Watermarking content seems like a logical solution, but I wonder how effective it will be against increasingly advanced deepfake technology. Will be interesting to see the project’s findings.
That’s a fair point. The arms race between forgery and detection is an ongoing challenge. Rigorous testing and continuous adaptation will be key.
As someone who relies on online news and information, I’m glad to see efforts to combat misinformation. Hope this research leads to practical tools for verifying media authenticity.
This is an important initiative to combat the growing threat of fake news videos. Leveraging AI and watermarking tech could be a game-changer in verifying authenticity of online content.
Curious to learn more about the DISSIMILAR project and its specific methods for detecting manipulated multimedia. Sounds like a promising step towards a more transparent digital landscape.
Glad to see researchers tackling this critical issue. Deepfakes are becoming increasingly sophisticated, so developing robust detection methods is crucial for preserving truth and trust online.
Agreed. Empowering both creators and consumers with verification tools is a smart approach.