Claire Gill reviews the long-awaited Online Safety Act, which has now received Royal Assent. The government has heralded it as a flagship piece of legislation, declaring:
“Britain makes internet safer…This major milestone means the government is within touching distance of delivering the most powerful child protection laws in a generation, while ensuring adults are better empowered to take control of their online lives, while protecting their mental health.”
The primary objective of the Act is to shore up protection against illegal and harmful content that individuals may encounter online, in particular on sites that publish user-generated content. Content that is legally published can be caught by the Act if it is harmful to children – for example, content that promotes bullying or encourages eating disorders. The Act introduces duties on social media platforms and internet search services to remove illegal and harmful content quickly and to take pre-emptive measures to prevent children from accessing harmful content in the first place. It creates a series of new regulatory powers vested in OFCOM and new criminal offences. OFCOM decisions will be subject to appeal to the Upper Tribunal.
There is no doubt that the introduction of the Act will be a major headache for the social media giants. But it will also affect much smaller services, such as forums and blogs. These companies will need to implement new processes to give effect to the requirements of the Act.
It will not be long before the Act takes full effect, but we await secondary legislation on the categorisation and register of the high-risk services that will be bound by more stringent rules and duties.
The Act creates a raft of new regulatory powers vested in OFCOM
The provisions of the Online Safety Act apply not only to UK providers of content but also to providers based outside the UK whose services are targeted towards, or used by, UK users. So the US social media giants stand to have their processes scrutinised by the UK regulator and to be liable to fines of up to £18m or 10% of global annual turnover. The extra-territorial effect of the Act, akin to that of the General Data Protection Regulation (GDPR), aims to overcome the international digital fragmentation that currently exists in relation to content moderation online.
OFCOM is steeling itself for a dramatic upturn in its workload as it becomes the enforcer of these new rules. It will hold the power to issue these swingeing fines, and even to make service cessation orders. Its duties will include maintaining registers of certain regulated service providers and producing codes of practice and guidance, some of which are expected to be published soon after commencement of the Act.
It is not yet clear how exactly the new laws will be enforced, and the government is taking a “phased approach” to bringing the Act into force, to allow service providers time to get to grips with the guidance, conduct the necessary risk assessments and make the required changes.
Protecting Free Expression
Following concerns about the risk of censorship, the Act includes provisions to protect content published by recognised news publishers and wider journalistic content. Platforms will need to notify news publishers and offer a right of appeal before removing or moderating their content. News publishers’ own content is not itself covered by the Act.
The Act requires services to have particular regard to the importance of protecting users’ rights to freedom of expression and privacy, and to the rules relating to the processing of personal data, when deciding on and implementing safety measures and policies; this essentially puts the onus on service providers to balance the rights of users who post content against those of users who access it.
Quicker and more efficient removal of illegal or harmful content
Although ‘adequate’ measures will have to be taken by service providers to reduce the risk of exposure to harmful content, the Act imposes no new obligation on services to monitor content to ensure that only lawful content is published; instead, the obligation is a reactive one, triggered by receipt of a complaint. The risk of incurring liability for hosting or indexing illegal content ought to make platforms more efficient in how they handle removal requests: potential liability kicks in once the platform is on notice of illegal content and then fails to remove it expeditiously. The Act itself requires regulated services to put in place systems that allow people to report content easily. However, it remains to be seen how ‘adequacy’ will be determined, and the Act does not affect the prohibition on general monitoring obligations established by the E-Commerce Directive 2000/31/EC.
Will the Act help prevent the spread of disinformation – that is, the deliberate dissemination of false information which may be harmful to the rights of individuals?
The extent of the obligations means that there will be an overhaul within the platforms and a renewed focus on the processes in place to protect users generally, including their “take-down” procedures and a revision of their Terms of Service. Where disinformation is spread as part of a malicious hate campaign and amounts to harassment, it may be caught by the Act, as may fraudulent advertising. But the line can blur between disinformation and misinformation – that is, information that is simply wrong – and the Act does not impose new obligations on platforms to remove inaccurate content.
Will the Act help adults remove material about them online that infringes their rights – for example to remove sexually graphic pictures of them that have been uploaded without their consent?
The Act will apply to illegal content, for example, ‘revenge porn’ (the offence of disclosing private sexual photographs and films). The protections against ‘legal but harmful’ content for adults have, however, been weakened as the Act has developed.
Will the Act counter any of the concerns about the increased availability of AI-created content?
While the Act is described as making Britain’s internet safer, it does not specifically address AI-created content. Only content which falls within the remit of the Act because it is illegal or harmful will be affected.
A new dawn?
This legislation has the potential radically to change the landscape of access to harmful online content, especially for children in the UK. It will add another layer of protection and give those affected by harmful content the option of complaining to OFCOM if service providers do not grant requests to remove that content.
The Act looks likely to raise various legal questions, including regulatory challenges and difficulties in balancing censorship and privacy against freedom of expression.
Whether the Act succeeds in its laudable aim remains to be seen, and much will depend on its operation in practice.