
Polish social media platforms are leaving approximately 20 percent of flagged disinformation content online without any action, according to a comprehensive new study released this week by researchers at the University of Warsaw’s Digital Media Observatory.

The year-long analysis tracked thousands of user reports across major platforms including Facebook, X (formerly Twitter), YouTube, and several domestic Polish social networks. Researchers found that while most platforms had improved their response times to flagged content, a significant portion of harmful disinformation still remained accessible to users.

“The most concerning finding is the consistent gap between reported content and platform action,” said Dr. Marta Kowalski, the study’s lead researcher. “About one-fifth of all posts containing verifiably false information, dangerous health claims, or election misinformation were left online despite multiple user reports.”

The study revealed notable differences in platform performance. Facebook removed approximately 83 percent of flagged disinformation, the highest rate among international platforms operating in Poland. YouTube followed at 79 percent, while X lagged behind with a 68 percent removal rate. Among Polish platforms, Wykop performed best, removing 85 percent of flagged content.

Response times also varied significantly, with some platforms taking up to three weeks to evaluate and remove harmful content. This delay creates what researchers termed a “misinformation window,” during which false information can spread rapidly before any intervention occurs.

The problem becomes particularly acute during critical periods. “During the weeks surrounding Poland’s most recent parliamentary elections, we saw a spike in both disinformation content and in platforms’ failure to address it,” explained Dr. Kowalski. “The removal rate dropped to just 72 percent during this period, precisely when accurate information was most crucial.”

The study cataloged several categories of disinformation, with false political claims making up the largest portion at 43 percent of reports. Health misinformation accounted for 27 percent, while false economic information and conspiracy theories made up 18 and 12 percent, respectively.

Poland’s Deputy Minister for Digitalization, Jan Nowak, called the findings “deeply troubling” and indicated that stronger regulatory measures may be forthcoming. “While we respect the need for free expression online, platforms operating in Poland must be held accountable for consistently enforcing their own community standards,” Nowak said at a press conference responding to the study.

The European Union’s Digital Services Act, which took effect earlier this year, requires large online platforms to implement more robust systems for addressing illegal content and disinformation. Poland’s digital ministry officials indicated they are closely monitoring platform compliance with these regulations.

Industry response to the study has been mixed. Facebook parent company Meta issued a statement highlighting its investment in Polish-language content moderation, noting a 15 percent increase in moderators covering the region over the past year. “We take our responsibility to combat misinformation seriously and are constantly improving our detection systems,” the statement read.

X did not respond to requests for comment, while YouTube pointed to its recently expanded fact-checking partnerships with Polish media organizations.

Digital rights advocates have used the findings to call for greater transparency. “Platforms need to provide much clearer data on their moderation practices and decision-making processes,” said Anna Wilk, director of the Warsaw-based Digital Rights Coalition. “Without transparency, we can’t properly address the gaps identified in this important research.”

The study also found that a reporter’s track record plays a crucial role in moderation outcomes: reports from accounts with a history of accurate flagging were 63 percent more likely to result in content removal than those from first-time reporters.

Researchers recommend that platforms improve their automated detection systems, increase human moderation resources for Polish-language content, and develop more efficient appeals processes for both reporters and content creators.

The University of Warsaw team plans to expand the research to other Central European countries next year to enable regional comparisons of platform performance in addressing harmful content.


9 Comments

  1. Lucas Hernandez

    This study highlights the ongoing battle against the spread of online disinformation. While progress has been made, there’s clearly still a lot of work to be done. Platforms must remain vigilant and continue enhancing their content moderation capabilities.

  2. Worrying that so much disinformation is still slipping through the cracks. Social media platforms need to do more to proactively identify and remove blatantly false content. Reliable information is crucial, especially around important issues like elections and public health.

    • Agreed. Platforms need to invest more resources into content moderation and strengthen their algorithms to better detect and remove disinformation.

  3. Oliver Thompson

    I wonder what types of disinformation are most commonly reported and ignored. Is it more political in nature or related to public health/safety? Understanding the patterns could help guide more targeted solutions.

  4. William Lopez

    The differences in platform performance are interesting. I wonder what factors contribute to the variation – is it related to resources, algorithms, or corporate priorities? More research is needed to understand the drivers behind these disparities.

  5. Amelia Martinez

    Interesting to see the differences in platform performance. Facebook seems to be the most proactive, which is encouraging. However, the overall 20% rate of ignored reports is still concerning. More transparency and accountability is needed.

    • Oliver Rodriguez

      Good point. Platforms should publish detailed data on their content moderation efforts and share best practices to improve industry-wide standards.

  6. As a social media user, I find it concerning that any flagged disinformation is allowed to persist. Even a 20% rate is too high. Platforms need to be held accountable and face consequences if they fail to promptly remove harmful content.

    • Elijah Martinez

      I agree. Stronger regulatory oversight and financial penalties could incentivize platforms to prioritize disinformation removal and improve their processes.



© 2026 Disinformation Commission LLC. All rights reserved.