In a major step to combat online misinformation, the European Commission has released detailed guidance for strengthening the Code of Practice on Disinformation, outlining specific measures digital platforms should implement to address existing shortcomings.
The guidance comes amid growing concerns about the proliferation of false information across social media and news aggregation sites, which has intensified during the COVID-19 pandemic and recent election cycles throughout Europe.
According to Commission officials, the revised framework aims to transform the current self-regulatory Code into a more robust co-regulatory instrument with clear obligations for tech companies. This move represents a significant shift in the EU’s approach to digital content regulation, placing greater responsibility on platforms to monitor and remove harmful content.
“The time of big online platforms behaving like they are ‘too big to care’ is coming to an end,” said Věra Jourová, Vice President for Values and Transparency, during the announcement. “The strengthened Code will provide users with more tools to detect and report false information, while ensuring greater transparency and accountability from platforms.”
Key recommendations in the guidance include developing stronger tools for users to recognize and report disinformation, implementing improved content moderation practices, reducing monetization opportunities for purveyors of disinformation, and increasing data access for researchers studying the phenomenon.
Tech giants including Facebook (Meta), Google, Twitter, Microsoft, and TikTok, which are signatories to the original Code established in 2018, will need to adapt their practices to comply with the enhanced requirements. The Commission has indicated that these changes align with the broader Digital Services Act, which introduces comprehensive regulation for digital platforms operating in the European Union.
Industry analysts suggest that compliance could require substantial operational changes for these companies, potentially affecting their content moderation policies globally, given the difficulty of implementing measures for a single region in isolation.
In parallel with the guidance release, the Commission announced the launch of national hubs under the European Digital Media Observatory (EDMO) framework. These hubs will serve as regional centers of excellence for detecting, analyzing, and exposing disinformation campaigns across member states.
The EDMO initiative, funded with €11 million from the EU budget, will establish eight hubs covering all 27 member states. These centers will bring together academic researchers, fact-checkers, media organizations, and other stakeholders to create a coordinated response to disinformation threats.
“These national hubs represent a crucial step in building resilience against disinformation across Europe,” explained Thierry Breton, Commissioner for Internal Market. “By combining expertise at both national and European levels, we’re creating an ecosystem that can respond quickly to emerging threats while respecting fundamental rights and freedoms.”
The first wave of hubs will be established in Belgium, the Czech Republic, France, Italy, Poland, Spain, and the Nordic countries, with operations expected to begin by the end of the quarter.
Media literacy experts have welcomed these developments, highlighting the importance of both regulatory frameworks and educational initiatives in addressing the complex challenge of disinformation.
“Regulation alone cannot solve the problem,” said Clara Jiménez, co-founder of Maldita.es, a Spanish fact-checking organization. “These national hubs will play a vital role in connecting academic research with practical fact-checking work and media literacy programs, creating a more comprehensive approach.”
The Commission’s dual approach reflects growing recognition that disinformation requires coordinated action across multiple fronts, from platform governance to public awareness campaigns. With the revised Code expected to be finalized later this year and EDMO hubs becoming operational in the coming months, 2023 could mark a turning point in Europe’s battle against false information online.
The initiatives also come as other regions, including the United States and Australia, consider similar regulatory frameworks, suggesting a global trend toward more assertive approaches to platform accountability and digital information integrity.
7 Comments
Combating online misinformation is a complex challenge, but this EU Code of Conduct seems like a step in the right direction. Hopefully, it will inspire similar initiatives in other regions as well.
Disinformation has become a major threat to democratic discourse, fueling polarization and undermining public trust. This EU initiative could set an important global precedent if executed well.
While I appreciate the intent, I’m a bit skeptical about the ability of platforms to effectively self-regulate. Stronger government oversight and independent auditing may be needed to ensure real accountability.
That’s a fair point. Relying solely on self-regulation has proven insufficient in the past. A co-regulatory approach with clear government guidelines could be more effective.
I’m curious to see how the strengthened Code of Practice will work in practice. Empowering users to detect and report false information seems like a positive move, but the details on enforcement will be crucial.
Agreed, the devil will be in the details. Effective implementation and consistent enforcement across platforms will be critical for success.
This is an important step to address the growing problem of online disinformation. Platforms need to take more responsibility in monitoring and removing harmful content. Transparency and accountability will be key to rebuilding trust.