Disinformation Growing More Sophisticated as AI Amplifies Threats to Public Trust
Disinformation is evolving at an alarming pace, becoming increasingly sophisticated, emotion-driven and difficult to contain, experts warned during a recent conference in Malta. The rapid advancement of artificial intelligence tools, combined with algorithmic amplification, presents growing risks to democratic processes and public trust.
The warnings emerged during the “Identifying Misinformation” conference organized by the European Parliament Liaison Office in Malta, in collaboration with the 3CL Foundation, University of Malta, and MCAST. The event brought together academics, journalists, media specialists and Members of the European Parliament to examine how false narratives are reshaping public discourse.
Participants highlighted that today’s misleading content has moved far beyond crude fake stories or anonymous online rumors. Modern disinformation now encompasses AI-generated images, cloned voices, manipulated videos, and emotionally charged narratives specifically designed to provoke anger, fear or outrage.
Recent Eurobarometer data underscored widespread concern among Maltese citizens, with 78% reporting increased worry about disinformation. Nearly four in five respondents (79%) expressed concern about fake content created through artificial intelligence, while 76% worried about hate speech and data privacy issues.
Dr. Paula Gori from the European Digital Media Observatory emphasized how public discourse has migrated online, transforming the nature of civic engagement. “We are getting informed online, we are discussing online, so public space has moved online,” she explained.
Gori cautioned that many users no longer actively choose the information they consume. Instead, they rely on algorithmic systems that determine what appears in their feeds. Content triggering strong emotional reactions typically performs better, making it more visible and profitable for platforms.
“Information manipulation plays with emotion,” Gori noted, adding that even obviously false content can still influence public perception, confuse audiences, or contaminate machine-learning systems.
Professor Ġorġ Mallia observed that while society now carries vast amounts of information “in our pocket,” people increasingly struggle to distinguish credible sources from misinformation. He pointed to click-driven business models that prioritize sensationalist content, with attention becoming more valuable than accuracy.
Several speakers expressed particular concern for younger generations who increasingly rely on social media for news, growing up in an environment where false and factual information often appear side by side without clear distinction.
Times of Malta editor Herman Grech described professional journalism as fighting a two-front battle: reporting facts while also struggling to remain visible in an online ecosystem dominated by constant distraction. “It has become a struggle not to try and report the facts, but to try and keep audiences engaged,” he said.
Grech warned that the continued weakening of trusted media outlets could leave societies more vulnerable to propaganda, coordinated influence campaigns and politically motivated falsehoods.
The conference also examined the European Union’s response to these challenges. Dr. Mario Sammut, Head of the European Parliament Office in Malta, outlined the EU’s multi-pronged approach.
“The European Union is acting on several fronts through stronger legislation, closer monitoring and greater awareness,” Sammut explained. “Citizens must have more control over what they see online and greater transparency on why certain content is being shown to them, which is exactly what measures such as the Digital Services Act are designed to achieve.”
Sammut emphasized that alongside regulatory measures, the EU is supporting independent journalism, fact-checkers and media literacy initiatives to help citizens make informed decisions.
MEP Alex Agius Saliba highlighted Europe’s progress in establishing rules to limit the spread of disinformation, but stressed that enforcement must match the ambition of the laws. “As European lawmakers we set clear rules for the Big Tech platforms to stop the spread of disinformation,” he said, adding that companies operating in the EU market must respect democratic standards and the bloc’s legal framework.
Fellow MEP Peter Agius noted that while regulation is important, tackling disinformation also requires action at the community level through education, awareness and stronger institutions. “We must fight misinformation first and foremost on the ground in Malta,” he said, calling for more initiatives promoting local content in media.
As AI technologies continue to advance, the conference highlighted the urgent need for a coordinated response involving regulators, technology platforms, media organizations and citizens to protect the integrity of public discourse in an increasingly complex information environment.