The Online Safety Bill is returning to Parliament after a long pause, with stronger protections for children and vulnerable groups but without the controversial provisions relating to "legal but harmful" content, which the Government believes would have threatened free speech. Those provisions have been replaced with a "triple shield" of protections for online users: social media firms will be legally required to remove illegal content; they will have to remove material in breach of their own terms of service; and they must provide adults with greater choice over the content they see and engage with.
Via a series of amendments – some already tabled, others to be made later – the Government has made changes in the following key areas:
Child safety duties
- Social media platforms will have to publish their risk assessments on the dangers their sites pose to children. Previously, the Bill required platforms to carry out these assessments but not to publish them proactively.
- The new internet regulator, Ofcom, will be able to compel companies to publish details of any enforcement notices it issues against them for breaching their safety duties under the Bill.
- Platforms will have more clearly defined responsibilities to provide age-appropriate protections for children. For example, where platforms specify a minimum age for users, they will now have to set out and explain clearly in their terms of service the measures they use to enforce it, such as age verification technology.
"Legal but harmful" provisions
In response to concerns that targeting "legal but harmful" content would create a broad category of quasi-legal content and so curb lawful free speech, the Government has redrawn the Bill to impose overarching transparency, accountability and free speech duties on Category 1 services (the highest-risk, highest-reach services):
- Companies will not be able to remove or restrict legal content, or suspend or ban a user, unless the content or conduct breaches their terms of service or the law. As private companies, tech platforms will remain free to set "any terms of service they wish" but must now "keep their promises to users and consistently enforce their user safety policies once and for all". Where a business fails to act in line with its terms of service, Ofcom will have the power to levy unprecedented fines of up to 10% of turnover.
- The Government has added a new criminal offence of assisting or encouraging self-harm online to the Bill's existing list of criminal activity and illegal content – which already includes harassment and stalking, the sale of illegal drugs or weapons, and revenge pornography.
- Category 1 services will have to provide users with tools to tailor their online experience, for example to block anonymous trolls as well as certain types of content. They will also be required to provide better reporting mechanisms, and to process and resolve complaints more quickly.
- A future set of amendments will boost protections for women and girls online by adding the criminal offence of controlling or coercive behaviour to the Bill's list of priority offences.
Harmful communications offence
Similarly, the Government has taken on board concerns that the harmful communications offence in the previous draft of the Bill could inadvertently criminalise lawful and legitimate speech merely because it caused offence.
The offence has therefore been removed, and the Government will instead retain elements of the Malicious Communications Act 1988 and s.127 of the Communications Act 2003 that were due to be repealed, so that the criminal law continues to protect people from harmful communications, including racist, sexist and misogynistic abuse.
Comment
Unsurprisingly, this new version of the Bill has pleased neither those pushing to keep as much harmful content as possible in scope, nor those concerned to protect free speech. What is clear is that the Bill has been retargeted to address specific, illegal harms. The onus of addressing legal but harmful content has shifted from tech companies to users, who will be expected to manage their own online experience more actively. Tech companies are to be held more closely to account for policing their terms of service, but they will still set those terms themselves. How well they will mark their own homework, and how far these new duties will prompt platforms to monitor and clean up content more broadly, remains to be seen.