Brazil’s Supreme Court ruled on Thursday that digital platforms must immediately remove hate speech and content promoting serious crimes, a landmark decision on Big Tech’s liability for illegal posts.
The ruling, supported by eight of the 11 justices, partially overturns the 2014 Internet Civil Framework, which had shielded platforms from liability unless they refused to comply with a court order to remove questionable content. Now, platforms face immediate responsibility for certain categories of harmful material.
This decision represents one of the most aggressive regulatory stances against social media companies in Latin America. Under the new interpretation, platforms must proactively remove content promoting anti-democratic actions, terrorism, hate speech, child pornography and other serious crimes without waiting for court orders. For other illegal content, companies may still be held liable for damages if they fail to remove it after being notified by users or third parties.
“We preserve freedom of expression as much as possible, without, however, allowing the world to fall into an abyss of incivility, legitimizing hate speech or crimes indiscriminately committed online,” wrote Justice Luis Roberto Barroso, the court’s president, in defense of the majority opinion.
The decision is expected to intensify already strained relations between Brazil’s judiciary and major technology companies. Brazil made international headlines last year when Supreme Court Justice Alexandre de Moraes ordered Elon Musk’s platform X (formerly Twitter) offline for 40 days over disinformation concerns. That unprecedented action sparked heated debate about the boundaries between combating online harm and protecting free expression.
Not all justices agreed with the new ruling. Justice Kassio Nunes, among three dissenting votes, argued that “civil liability rests primarily with those who caused the harm” rather than with the platforms themselves. This position aligns more closely with the tech industry’s traditional stance that platforms should serve as neutral intermediaries.
Brazil’s approach stands in stark contrast to many other jurisdictions, including the United States, where Section 230 of the Communications Decency Act broadly shields platforms from liability for user content. The European Union has taken a middle ground with its Digital Services Act, which increases platform responsibility without removing all safe harbors.
For social media giants like Meta (parent company of Facebook and Instagram), Google (owner of YouTube), and X, this ruling creates significant operational challenges. Companies will now need to dedicate additional resources to content moderation in Brazil and may need to implement more aggressive automated detection systems specifically for this market.
Digital rights advocates have expressed concern that the ruling could lead to over-censorship, as platforms may remove borderline content to avoid potential liability. Meanwhile, supporters of stronger regulation argue that social media companies have historically failed to adequately address harmful content without legal pressure.
The decision comes amid a global trend of increased regulatory scrutiny of social media platforms. Countries worldwide are grappling with how to balance free expression against the real-world harms caused by online content, including election interference, violence, and discrimination.
Brazil’s position as Latin America’s largest internet market means this ruling could influence regulatory approaches throughout the region. Other Latin American nations have been closely watching Brazil’s confrontations with tech companies as they develop their own digital governance frameworks.
The full implementation details of the ruling remain to be clarified, including specific timelines for content removal and how “immediate” action will be defined in practice. Nevertheless, this decision marks a significant shift in Brazil’s approach to holding digital platforms accountable for the content they host.
9 Comments
While I understand the intent behind this decision, I’m a bit concerned about the potential for abuse and over-censorship if the rules are too broad or not properly defined. Striking the right balance will be crucial.
It’s good to see Brazil taking a strong stance against harmful online content. However, the success of this policy will depend on clear guidelines, consistent enforcement, and effective appeals processes.
While freedom of expression is important, it should not come at the expense of public safety. This decision strikes a reasonable balance by requiring platforms to remove specific categories of clearly illegal content.
Agreed. Platforms can’t hide behind the ‘free speech’ argument when it comes to enabling serious crimes and violence.
I’m curious to see how this new liability model will impact social media companies’ content policies and moderation practices in Brazil. Will they err on the side of over-removal to avoid legal risks?
That’s a good question. Platforms may have to significantly expand their moderation teams and use more automated tools to comply with the new rules.
This is an important ruling to hold social media platforms accountable for harmful content on their sites. Proactive content moderation is crucial to prevent the spread of extremism, hate speech, and other illegal activities online.
From a business perspective, this new liability could significantly increase compliance costs for social media platforms operating in Brazil. It may force them to re-evaluate their investment and strategy in the market.
This ruling could set an important precedent for other countries looking to rein in the power of Big Tech. It will be interesting to see if similar liability laws are adopted elsewhere.