Britain’s Online Safety Act: A Comprehensive Approach to Tackling Digital Harms
Britain’s landmark Online Safety Act, which passed into law in October 2023, is now being implemented in stages as part of an ambitious effort to create a safer internet environment for all users, with special protections for children.
The comprehensive legislation imposes new responsibilities on social media platforms, search engines, and other online services that allow users to post content or interact with each other. These duties apply regardless of where companies are headquartered, provided they have links to the UK, whether through a significant number of UK users or by targeting the UK market.
At the heart of the Act is a framework requiring platforms to implement systems that reduce the risk of illegal content appearing on their services and to remove such content promptly when it does appear. The regulations cast a wide net, addressing criminal content including child sexual abuse, terrorism, fraud, hate crimes, and intimate image abuse.
Ofcom, Britain’s communications regulator, has been appointed as the enforcement authority with substantial powers to ensure compliance. Companies failing to meet their obligations face fines of up to £18 million or 10 percent of their global revenue, whichever is greater. In extreme cases, Ofcom can require payment providers, advertisers, and internet service providers to sever ties with non-compliant platforms.
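To make the "whichever is greater" penalty rule concrete, the short Python sketch below works through the arithmetic. The function name and the revenue figure are illustrative, not drawn from Ofcom guidance.

```python
# Illustrative sketch of the Act's penalty ceiling: the greater of a
# fixed GBP 18 million or 10% of worldwide revenue.
# The function name and example figure are hypothetical.

FIXED_CAP_GBP = 18_000_000
REVENUE_SHARE = 0.10

def maximum_fine(worldwide_revenue_gbp: float) -> float:
    """Return the statutory ceiling: whichever of the two limits is greater."""
    return max(FIXED_CAP_GBP, REVENUE_SHARE * worldwide_revenue_gbp)

# A platform with GBP 500m in worldwide revenue: 10% is GBP 50m,
# which exceeds the fixed cap, so GBP 50m is the ceiling.
print(f"£{maximum_fine(500_000_000):,.0f}")  # £50,000,000
```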
The implementation timeline follows a phased approach. Duties regarding illegal content are already in effect as of March 2025, with platforms required to have completed risk assessments. Protections for children are being rolled out next, with the child safety regime expected to be fully operational by summer 2025.
“The strongest protections in the Act have been designed for children,” explains a government spokesperson. “Platforms will be required to prevent children from accessing harmful and age-inappropriate content and provide parents and children with clear ways to report problems online.”
The Act takes a particularly firm stance on protecting children from harmful but not necessarily illegal content. Services likely to be accessed by children must prevent young users from encountering primary priority content such as pornography, suicide encouragement, self-harm material, and content promoting eating disorders. They must also give children age-appropriate protection from priority content, including bullying, abusive material, violent imagery, and content depicting dangerous challenges.
To enforce age restrictions consistently, platforms must specify which measures they are using and apply them uniformly, a significant shift from the current landscape, in which many services impose nominal age restrictions but enforce them weakly.
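As a rough illustration of what "specify and apply uniformly" could look like in practice, here is a minimal Python sketch. The measure names, the 18+ threshold, and the gate function are all assumptions made for illustration, not requirements taken from Ofcom's codes of practice.

```python
# Minimal sketch: a service publishes the age-assurance measures it
# relies on and routes every account through the same gate.
# Measure names and logic are hypothetical illustrations.

from dataclasses import dataclass

# The measures the service has publicly declared it uses.
DECLARED_MEASURES = frozenset({"photo_id_match", "facial_age_estimation"})

@dataclass
class AgeCheck:
    measure: str        # which declared measure produced this result
    passed: bool        # did the check complete successfully?
    estimated_age: int  # age the measure established

def may_view_adult_content(check: AgeCheck) -> bool:
    """Apply the declared measures uniformly, with no undeclared bypass."""
    if check.measure not in DECLARED_MEASURES:
        return False  # an undeclared method cannot satisfy the published policy
    return check.passed and check.estimated_age >= 18

# Example: a user verified via photo ID at age 17 is still gated out.
print(may_view_adult_content(AgeCheck("photo_id_match", True, 17)))  # False
```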
For adult users, the legislation balances safety with freedom of expression. Large platforms (designated as Category 1 services) must provide adults with optional tools to reduce exposure to certain types of legal but potentially harmful content, including material promoting self-harm or containing abusive or hateful speech.
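One way to picture these opt-in tools is as user-held preference flags that the platform consults before showing a post, with everything visible by default. The category labels and default-off behaviour in this sketch are illustrative assumptions, not the Act's wording.

```python
# Sketch of opt-in filters for legal-but-harmful material on a
# Category 1 service: content is visible by default, and adult users
# can switch individual filters on. Labels are hypothetical.

DEFAULT_FILTERS = {"self_harm_promotion": False, "abusive_or_hateful": False}

def visible_to(user_filters: dict[str, bool], post_labels: set[str]) -> bool:
    """Hide a post only if the user has opted into a filter matching its labels."""
    return not any(user_filters.get(label, False) for label in post_labels)

prefs = DEFAULT_FILTERS | {"abusive_or_hateful": True}  # the user opts in
print(visible_to(prefs, {"abusive_or_hateful"}))            # False: filtered out
print(visible_to(DEFAULT_FILTERS, {"abusive_or_hateful"}))  # True: default is off
```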
The Act also introduces several new criminal offenses that came into effect in January 2024, including encouraging serious self-harm, cyberflashing, sending false information intended to cause harm, threatening communications, intimate image abuse, and epilepsy trolling. Convictions have already been secured under some of these new provisions.
Market analysts suggest the Act could reshape how major tech platforms operate in Britain, with potential impacts on content moderation practices, recommendation algorithms, and user experience design. Smaller platforms may face particular challenges in meeting compliance requirements, though the legislation includes provisions to ensure proportionate enforcement.
Industry experts note that the UK’s approach represents one of the most comprehensive attempts globally to regulate online harms. Unlike the EU’s Digital Services Act, which focuses primarily on illegal content and transparency, the UK legislation directly addresses legal but harmful material, particularly for children.
The Act also tackles algorithm-driven harm by requiring platforms to consider how their recommendation systems might expose users to illegal content or children to harmful material. Categorized services must publish annual transparency reports detailing the algorithms they use and their effects on user experiences.
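One way to read the recommender duty is as an extra safety pass over a ranked feed: before results are served, items flagged as likely illegal are dropped for everyone, and items flagged as harmful to children are dropped when the viewer is a child. The flag names and the filtering step below are a hypothetical sketch, not a design the Act prescribes.

```python
# Hypothetical safety pass over a ranked feed. Flag names are
# illustrative; a real system would draw on moderation signals.

from typing import NamedTuple

class Item(NamedTuple):
    item_id: str
    flags: frozenset  # e.g. {"suspected_illegal"}, {"harmful_to_children"}

def safe_feed(ranked: list[Item], viewer_is_child: bool) -> list[Item]:
    """Filter a recommender's output before it reaches the user."""
    blocked = {"suspected_illegal"}
    if viewer_is_child:
        blocked.add("harmful_to_children")
    return [item for item in ranked if not (item.flags & blocked)]

feed = [Item("a", frozenset()), Item("b", frozenset({"harmful_to_children"}))]
print([i.item_id for i in safe_feed(feed, viewer_is_child=True)])  # ['a']
```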
As Ofcom continues to develop codes of practice and guidance documents throughout 2025 and 2026, online platforms face a period of significant adjustment to meet their new legal obligations. The full impact of this pioneering legislation will become clearer as implementation progresses across all sectors of the digital economy.
13 Comments
This Act casts a wide net, covering a range of illegal and harmful content. The powers given to Ofcom as the enforcement authority are substantial, so it will be interesting to see how they are applied in practice.
Yes, the scale and scope of the regulations are ambitious. Careful monitoring of how the Act is implemented and its real-world impact will be crucial.
Tackling issues like child sexual abuse, terrorism, and intimate image abuse online is critical. However, the potential for overreach or censorship is a valid concern that will need close scrutiny.
Absolutely, striking the right balance between safety and civil liberties is the key challenge. Transparency and accountability from Ofcom and the platforms will be essential.
The UK’s Online Safety Act sounds like a comprehensive approach to addressing digital harms and protecting vulnerable users. It’s important to strike the right balance between online safety and free expression.
Agreed, this legislation aims to create a safer internet environment while still preserving core democratic principles. Enforcement and implementation will be key.
As a global tech hub, the UK’s approach to online safety regulation will be closely watched. Getting the implementation right, while respecting human rights, will be no easy task.
Indeed, the UK’s leadership on this issue could set an important precedent for other countries grappling with similar challenges. Careful, evidence-based policymaking will be crucial.
The broad scope of the Act, covering everything from child abuse to fraud, demonstrates the scale of the online harms challenge. Effective enforcement and oversight will be critical to its success.
Imposing obligations on platforms regardless of their location is an interesting approach. This could set a new global standard, but may also raise jurisdictional and enforcement challenges.
The requirement for platforms to implement systems to reduce illegal content is ambitious. Ensuring these systems are effective and don’t infringe on legitimate expression will be a delicate balancing act.
Giving Ofcom substantial powers to ensure compliance is a bold move. Their ability to effectively wield these powers will determine whether the Act achieves its intended goals.
The inclusion of intimate image abuse in the Act’s scope is a welcome step. Addressing the non-consensual sharing of private images is crucial for protecting vulnerable individuals.