In an era where artificial intelligence can create deceptively realistic content, the battle against digital misinformation is intensifying. As we approach 2025, experts are increasingly concerned about the sophisticated tools available to those seeking to spread false information online.
Columbia Business School Professor Gita Johar has emerged as a leading voice on this issue, conducting extensive research on how misinformation spreads and what can be done to combat it. According to Johar, the fight against AI-generated falsehoods requires a coordinated approach involving publishers, platforms, and individual users.
“The technology has evolved faster than our defenses,” Johar explains. “Today’s AI systems can create text, images, audio, and video that are increasingly difficult to distinguish from authentic content, making traditional verification methods inadequate.”
Recent studies show that the proliferation of generative AI tools has led to a 300% increase in synthetic media online since 2021. These tools, which once required technical expertise, are now accessible to anyone with an internet connection, dramatically lowering the barriers to creating convincing false content.
The consequences of this trend extend far beyond simple pranks or clickbait. Misinformation campaigns have targeted democratic processes, financial markets, and public health initiatives. During the COVID-19 pandemic, for instance, AI-generated content promoting false cures and conspiracy theories reached millions, demonstrating the real-world impact of digital falsehoods.
To address these challenges, Johar advocates for a three-pronged approach she calls the “3P Framework” – focusing on publishers, platforms, and people.
Publishers, including news organizations and content creators, must implement rigorous verification systems and clearly label AI-generated content. Several major news outlets have already established dedicated teams to authenticate digital evidence and identify synthetic media. The Associated Press and Reuters, for example, have developed protocols specifically designed to detect AI manipulations in submitted materials.
“Transparency is crucial,” Johar notes. “Publishers need to be explicit about what’s human-created, what’s AI-assisted, and what’s entirely AI-generated.”
Technology platforms face perhaps the greatest responsibility. Companies like Meta, Alphabet, and Twitter (now X) have invested billions in content moderation systems, but these efforts often lag behind increasingly sophisticated deception techniques. Johar suggests platforms need to develop more proactive approaches, including collaborative industry standards and improved detection algorithms.
“Content watermarking shows promise,” she says, referring to embedded digital signatures that can identify AI-generated material. “But we need uniform implementation across the industry to make it effective.”
Several promising initiatives are already underway. The Coalition for Content Provenance and Authenticity (C2PA), which includes Adobe, Microsoft, and other tech giants, is developing open technical standards to certify the source and history of media content. Meanwhile, companies such as Truepic and nonprofit organizations like WITNESS are building verification tools designed specifically for journalists and human rights groups.
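For readers curious about the mechanics, the sketch below is a deliberately simplified, hypothetical illustration of how a publisher might bind a content hash and an "AI-generated" label to a verifiable signature that platforms could check downstream. It is not the C2PA specification, which relies on certificate-based signing and richer manifests; the function names, the symmetric key, and the metadata fields are assumptions made purely for brevity.

```python
import hashlib
import hmac
import json

# Hypothetical illustration only: a simplified provenance record in the spirit
# of C2PA-style manifests, NOT the actual C2PA specification or its API.
SIGNING_KEY = b"publisher-secret-key"  # real systems would use asymmetric key pairs

def sign_content(content: bytes, metadata: dict) -> dict:
    """Attach a provenance record binding the content hash to its metadata."""
    record = {"content_sha256": hashlib.sha256(content).hexdigest(), **metadata}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_content(content: bytes, record: dict) -> bool:
    """Check that the content matches the record and the signature is intact."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    content_ok = hashlib.sha256(content).hexdigest() == unsigned.get("content_sha256")
    return content_ok and hmac.compare_digest(record.get("signature", ""), expected)

# Example: an image byte stream labeled as AI-generated at publication time.
image_bytes = b"...raw image data..."
provenance = sign_content(image_bytes, {"source": "Example Newsroom", "ai_generated": True})
print(verify_content(image_bytes, provenance))         # True: content and label are intact
print(verify_content(image_bytes + b"x", provenance))  # False: content altered after signing
```

In this toy setup, a platform holding the corresponding key could confirm both that the file is unchanged since publication and that the publisher's AI-generated label still travels with it.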
The third component of Johar’s framework focuses on individual users. Digital literacy programs have proven effective at helping people identify questionable content and verify information before sharing it. Research indicates that even brief training sessions can significantly improve a person’s ability to spot potential misinformation.
“We can’t rely solely on technical solutions,” Johar emphasizes. “People need to develop a healthy skepticism and verification habits when consuming online content.”
Educational institutions are increasingly incorporating media literacy into curricula, while nonprofit organizations like the News Literacy Project provide resources for educators and communities. Some countries, including Finland and Taiwan, have implemented nationwide digital literacy programs aimed at building societal resilience against misinformation.
As we move toward 2025, Johar believes the challenge will only intensify with advances in AI technology. However, she remains cautiously optimistic that coordinated efforts across her 3P framework can mitigate the worst impacts of synthetic misinformation.
“This isn’t a problem any single entity can solve,” she concludes. “It requires collaboration between technology companies, media organizations, educational institutions, and individual users. The technology that creates these problems can also help solve them, but only if we’re strategic and proactive.”
13 Comments
Interesting to see academia taking a leading role in tackling this challenge. Professor Johar’s research sounds like it could yield valuable insights. I look forward to seeing the proposed strategies take shape.
As someone working in the mining/commodities space, I’m particularly worried about how this could impact public perception and policy decisions related to our industry. Fact-based reporting will be more important than ever.
This is an issue I’ve been closely following. The proliferation of synthetic media is alarming and could have serious consequences if left unchecked. I’m curious to see what innovative solutions emerge in the coming years.
Same here. Technological advancements often come with unintended side effects, and this is a prime example. Proactive, multi-stakeholder approaches will be crucial to curbing the spread of AI-generated falsehoods.
As an advocate for responsible mining practices, I’m concerned about how this could affect public discourse around issues like environmental regulations and resource extraction. Fact-based, trustworthy information must prevail.
As a concerned citizen, I’m glad to see this issue getting the attention it deserves. The potential for AI-generated misinformation to erode public trust and sow division is deeply troubling. Robust safeguards are urgently needed.
From a commodities investor’s perspective, I worry about how this could impact the information landscape and decision-making around critical resources like minerals and energy. Maintaining a clear, factual understanding will be crucial.
Fascinating to see the rapid evolution of these generative AI tools and their potential implications. Effective counter-measures will need to stay ahead of the curve. I look forward to following the developments in this space.
This is a complex challenge that will require a multi-faceted approach. I’m curious to learn more about the specific strategies being proposed to combat AI-driven misinformation. Transparency and collaboration will be key.
The 300% increase in synthetic media since 2021 is staggering. Clearly, the current tools and methods for verifying content are not keeping pace with technological progress. Innovative solutions can’t come soon enough.
The rapid advancement of generative AI tools is certainly concerning, as they make it far too easy to create convincing yet false content. Robust verification methods will be essential going forward.
Agreed, the democratization of these tools is a troubling trend that enables bad actors to spread misinformation at scale. Maintaining public trust will require diligent fact-checking.
Tackling AI-driven misinformation is a critical challenge for the digital age. Coordinated efforts between platforms, publishers, and users will be key to staying ahead of increasingly sophisticated synthetic content.