European Union Crafts Unique Approach to Combat Disinformation in the Digital Age

Growing concerns about disinformation have intensified in recent years, particularly surrounding the 2024 elections. These fears have been fueled by various cases including “disinformation for hire” about vaccinations, information warfare strategies highlighted by the Ukrainian conflict, political propaganda, populist narratives, and the rising use of generative artificial intelligence applications like Sora.

The connection between disinformation and democratic discourse has prompted leading constitutional democracies to address the spread of online falsehoods, especially after watershed moments like the Brexit referendum and the 2016 U.S. presidential elections. While countries like France and Brazil have adopted specific regulatory measures, and judicial responses have followed in places like Romania, other constitutional systems—particularly the United States—have largely stepped aside, even as generative AI applications continue to proliferate.

This fragmented response primarily stems from differing views about the constitutional relevance of disinformation. Even when AI technologies are involved in producing fabricated content, addressing disinformation fundamentally comes down to understanding freedom of expression’s role in a democratic society—a right that receives varying degrees of protection across different constitutional systems.

These constitutional perspectives are also reflected in responses to the transformation of the marketplace of ideas, which appears far from free in the digital age. The situation becomes even more complex when considering the power of transnational private actors, primarily online platforms, to make decisions about digital content. By relying on automated systems for content moderation, these entities effectively govern digital spaces by determining what content—including disinformation—remains visible online.

The European Union has demonstrated awareness of this intertwined scenario of manipulated content and private governance through the adoption of the Digital Services Act (DSA), the Strengthened Code of Practice on Disinformation, and the Artificial Intelligence Act (AI Act). Combined with other initiatives like the Regulation on Transparency of Political Advertising and the European Media Freedom Act, these tools aim to address disinformation not by regulating speech directly, but by targeting the dynamics affecting its circulation, primarily focusing on online platforms and strengthening public-private cooperation.

This amounts to a distinctive constitutional strategy. Rather than pursuing either a purely self-regulatory or an illiberal approach, the Union proposes a hybrid model built on procedural safeguards, risk regulation, and co-regulation through public-private collaboration, with online platforms and AI systems as its primary targets.

“The Union is providing a model to address disinformation which does not focus on content regulation but on dealing with the dynamics characterizing the spread of disinformation,” notes one analysis of the European approach. This method represents a significant departure from both the American model, which broadly protects speech under the First Amendment, and the more restrictive approaches seen in some Asian countries.

The European strategy’s “hard way” includes procedural safeguards and risk regulation through legal instruments. The DSA, while maintaining liability rules for online intermediaries, increases transparency and accountability requirements for platforms to mitigate societal risks. It requires online platforms to consider fundamental rights when enforcing their terms of service, introduces substantive and procedural safeguards in content moderation, and implements crisis protocols for extraordinary circumstances affecting public security or health.

For very large platforms, the DSA mandates annual risk assessments of systemic issues, including disinformation, and requires the implementation of reasonable mitigation measures. This risk-based approach leads to a more flexible enforcement system that prioritizes action based on actual hazards rather than prescriptive rules.

The AI Act introduces additional protections by banning AI systems designed to manipulate individuals and imposing obligations on “high-risk” systems, including those used to influence election outcomes. It also requires transparency for deepfakes, mandating that AI-generated content be clearly marked and detectable as artificial.

Complementing this regulatory framework is the European “soft way,” which focuses on co-regulation and trust-building. The Strengthened Code of Practice on Disinformation represents a shift from the Commission’s earlier self-regulatory approach, which proved disappointing due to vague obligations and lack of verification criteria.

The new code emphasizes constitutional values, transparency in content monetization, security against hidden disinformation tactics, user empowerment, and researcher access to platform data. It involves a diverse range of stakeholders—not just platforms but also civil society representatives, fact-checkers, advertising companies, and regulatory bodies—creating a dialogue-based mechanism for collaboration between signatories and the Commission.

This dual approach—combining hard regulation with co-regulatory instruments—represents a uniquely European solution that balances free speech with other democratic values. By focusing on the ecosystem that drives disinformation rather than the content itself, the EU has created an enforcement system based on public-private collaboration that neither relies solely on self-regulation nor imposes overly restrictive measures.

However, challenges remain. The extensive reliance on risk regulation and co-regulation raises questions about accountability and transparency in decision-making. The broad nature of risk-based obligations may create uncertainty, while the collaborative approach could potentially blur the lines between public and private responsibilities.

Nevertheless, the European strategy demonstrates a sophisticated understanding that fighting disinformation requires neither complete market freedom nor oppressive measures, but rather a relationship of trust and cooperation between public and private actors within a clear regulatory framework. As disinformation continues to evolve alongside AI technologies, this balanced approach may provide valuable lessons for constitutional democracies worldwide.
