Watching what we say: Online content regulation in the UK

November 2020
By Digitalis

In April 2020, BT chief executive Phil Jansen reported that 39 of his engineers had recently been assaulted by conspiracy theorists who imagined a connection between 5G networks and the spread of Covid-19. Over the same period, 80 separate attacks were reported on UK mobile network infrastructure. Of the 2,000 respondents to a 2020 Ofcom survey, half claimed that, in the past week alone, they had encountered disinformation online making fantastical assertions about the origins of Covid-19. Just as the suicide of 14-year-old Molly Russell served as a powerful reminder of the vulnerability of young people on online platforms that accommodate graphic content, recent events have exposed how easily perpetrators can exploit these platforms to tap into our natural propensity for wishful thinking, confirmation bias, inattention and groupthink.

There is growing popular support for tighter regulation of content on social media platforms. An Ofcom report from 2019, aptly titled ‘Online Nation’, found that as many as 70% of adults in the UK would support additional regulation. The sheer scale of the FANG companies and others – Facebook has 45 million users in the UK, Instagram 24 million and Twitter 14 million – has only strengthened calls for more robust controls over what we see and read.

In contrast to the relatively stringent regime that governs radio and television broadcasting, the regulation of digital content platforms in the UK is remarkably sparse. During an address to the Royal Television Society in September 2018, former Ofcom chief Sharon White (now chairman of the John Lewis Partnership) lamented a ‘standards lottery’. Although statutory law covers anticompetitive behaviour, advertising practices and data protection, there is minimal legislative oversight of user-generated online content. At present, the EU e-Commerce Directive of 2000 is the central authority. Mirroring the much-maligned Section 230 of America’s Communications Decency Act of 1996, the directive classes user-generated content platforms as ‘neutral, merely technical and passive’ intermediaries, rather than direct publishers. As a consequence, their only obligation under the law is ‘to act expeditiously to remove or to disable access to’ illegal content that has been brought to their attention. Article 15 of the e-Commerce Directive explicitly prohibits the UK government from introducing measures that compel such platforms ‘to monitor the information which they transmit or store’. In other words, current legislation enforces a ‘notice and take down’ approach – and solely in the context of illegal content. As for activities that are legal but harmful, such as intimidation, cyberbullying and disinformation, online content platforms are, effectively, at liberty to regulate themselves.

Although Brexit may make it easier for the UK government to depart from the law of the European Union, the government has made clear that there are ‘no current plans to change the UK’s intermediary liability regime or its approach to prohibition on general monitoring requirements’. At least for the time being, the UK is set to continue operating under decades-old legislation originally introduced to facilitate the expansion of the fledgling internet during the dot-com boom.

But movement, whilst slow, is happening. In April 2019, the ‘Online Harms White Paper’ (OHWP) was published by the Department for Digital, Culture, Media & Sport and the Home Office. Spurred by the aspiration to make the UK the ‘safest place in the world to go online’, the OHWP advocated a new regulator to supervise the activities of ‘social media platforms, file hosting sites, public discussion forums, messaging services and search engines’. Alongside proposals for transparency reports and stiffer penalties for non-compliance, the OHWP devoted much of its attention to a new statutory duty of care that would ensure online content platforms ‘take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services’. If, as is likely, the scope of the new measures bears any resemblance to Germany’s NetzDG, only internet users based in the UK will be able to take advantage of the additional protections; it is also likely that the measures will apply exclusively to platforms that command a significant usership.

Public consultation on the OHWP, which ran from 8 April 2019 to 1 July 2019, attracted 2,400 submissions, and the government’s response to the consultation, spearheaded by Baroness Morgan and Priti Patel, was published in February 2020. Concern in the media, and among some of the public, that the proposals would stifle free speech led the government to pare back the detail of the new duty of care. Proposals now state that relevant companies will need to ensure that ‘illegal content is removed expeditiously’, but they will retain full autonomy as to ‘what type of legal content or behaviour is acceptable on their services’. Plans for the online harms regulator – i.e. Ofcom – to draw up ‘codes of practice relating to all forms of abusive behaviour online’ have since been moderated. Proposals now state that ‘we do not expect there to be a code of practice for each category of harmful content’. Diluted ambitions aside, much of the detail remains unclear (not least, which organisations will actually be caught by the regulatory changes), and the interim response to the OHWP is consciously framed as an ‘iterative step’ in an ongoing debate.

Writing in the Evening Standard in 2018, Matt Hancock declared that ‘the days of an unregulated Wild West are over’. Hancock’s move to the Department of Health, and his absorption in the pandemic response, has understandably diverted his attention. In May 2020, Minister Caroline Dinenage indicated to the Home Affairs Committee that a full response to the OHWP, to supplement the interim findings from February, would ‘probably’ emerge in the autumn. It has yet to do so. Wary of the optimism trap, the Chair of the Lords Democracy and Digital Technologies Committee, Lord Puttnam, has suggested that a functional bill may not enter the fray until 2023 or 2024.

Legislators will have to negotiate the delicate balance between freedom of expression and online safety. There will be further calls for assurance that the cure is not worse than the disease – that the greater regulatory burden will not incentivise risk-averse behaviour from online platforms, in particular frequent and heavy-handed content takedowns. Not only will stricter time pressures invite human error, but some perceive a danger that online content platforms will leverage imperfect AI and machine-learning algorithms to intervene at the scale that new laws demand. In March 2020, Facebook’s head of safety, Guy Rosen, attributed the false flagging of credible posts about Covid-19 to a ‘bug in an anti-spam system’. For all the public pressure for change, the problem is that no regulatory or technical solution has yet commanded consensus.
