Often a force for good, the internet can also be misused; we cannot ignore the very real harms people face online every day. The Online Safety Bill will introduce ground-breaking laws to protect children online and tackle the worst abuses on social media, including racist hate crimes.
Notably, the Government has named additional priority offences on the face of the Bill. This removes the need for them to be set out in secondary legislation and allows faster enforcement action against tech firms that fail to remove the listed illegal content. These offences include, but are not limited to: revenge porn; hate crime; fraud; weapons offences; the promotion or facilitation of suicide; people smuggling; and incitement to and threats of violence. It is particularly encouraging that the Government will seek to amend the Bill to list controlling or coercive behaviour, in recognition of the specific challenges women and girls face online.
Ofcom, the UK’s independent communications regulator, will oversee the regulatory regime, backed by mandatory reporting requirements and strong enforcement powers to deal with non-compliance. These powers include fining non-compliant sites up to ten per cent of their annual worldwide turnover or blocking them from being accessible in the UK.
The Government has been clear that protecting children and vulnerable people online must not come at the expense of free speech. Many of my colleagues, stakeholders and members of the public have been particularly concerned about provisions that would result in the over-removal of legitimate legal content by creating a new category of ‘legal but harmful’ speech. However admirable the goal, I do not believe it is morally right to censor speech online that is legal to say in person. Ministers have therefore quite rightly announced that they will remove the ‘legal but harmful’ provisions from the Bill in relation to adults and replace them with a fairer, simpler and more effective mechanism called the Triple Shield. This will focus on user choice, consumer rights and accountability whilst protecting freedom of expression.
Under the Triple Shield, three important rules apply: first, content that is illegal should be removed; second, legal content that a platform prohibits in its own terms of service should be removed, while legal content it permits should not be; and third, adults should be empowered to choose whether or not to engage with legal forms of abuse and hatred if the platform they are using allows such content. The ‘Third Shield’ therefore puts a duty on platforms to provide their users with the functionality to control their exposure to unsolicited content. Crucially, these functions will under no circumstances limit discussion, robust debate or support groups’ ability to speak freely about issues.
These changes will ensure the Bill protects free speech whilst holding social media companies to account for their promises, guaranteeing that users can make informed choices about the services they use and the interactions they have on those sites.