
Jacob Ohrvik-Stott

Researcher, Doteveryone

Will the government’s plan to tackle “online harms” work?

These are worrying times for the tech giants – and for the people who use their products.

In January, the father of the teenager Molly Russell put a face to social media harms when he blamed her suicide on the stream of self-harm content she was exposed to on Instagram. Two months later, the Christchurch massacre was live-streamed for 17 minutes before Facebook took it down – too late to stop the footage spreading irretrievably to all corners of the web. Last week, Google’s AI ethics council, formed to act as a check on the company’s decisions, fell apart in under a week following backlash over the appointment of a hard-right climate-change sceptic to the group.

This is only part of a dismal track record that has prompted the tech sector to acknowledge it must do more to protect society from its products’ social side-effects.

Mark Zuckerberg raised eyebrows (in both surprise and cynicism) when he pleaded for stronger global regulation of internet content. And new research from Doteveryone has found nearly half of those working in the UK tech sector (45 per cent) think the industry is regulated too little, with only 14 per cent saying that it’s regulated too much.

Coming from an industry that prides itself on disruption, this shift feels significant.

There may be agreement on the need for regulation, but there’s little sign of action. The US shows no inclination to shackle Silicon Valley and China is focused on developing its own unique brand of hyper-surveillance technology. So the UK’s policymakers are not waiting around for the global consensus advocated by Zuckerberg to emerge.

In Monday’s online harms white paper, the government announced that large digital platforms will be subject to a legal “duty of care”, funded through a crowd-pleasing social media levy and potentially backed up by a new regulator.

The proposal is simple and sensible: platforms won’t be responsible for every harm that occurs on their services, but they must commit to improving the overall safety of the digital realms they preside over. If they don’t, they’ll face hefty fines, and even potential prison sentences for executives.

The pragmatic appeal of a duty of care has led politicians of all stripes to back it, with others including the Children’s Commissioner, the College of Policing and the NSPCC also throwing their weight behind it over the past few months. But with the finer details still to be decided, there are plenty of potential pitfalls, and the government must take care with a digital duty of care.

Faced with the threat of tough penalties for not removing enough harmful content, platforms may become a little too eager to remove people’s posts. From a safety perspective, over-removal is no great loss; seen through the lens of censorship, it is. Individuals – and communities – must be able to appeal take-down decisions to a trusted independent body to avoid social media companies becoming the arbiters of freedom of speech.

Though tech companies can’t be shown too much leniency (and today’s white paper is certainly formulated to look as though it’s tough on big tech), regulators will need to use carrots as well as sticks. What a “good” level of online safety looks like will be unclear in the early days. The new regulator must make sure it doesn’t foster competition between companies to top a harm reduction league table. Instead it must incentivise them to design their products responsibly to avoid harm in the first place, and to share what they learn with others. If Twitter creates a new tool to curb hate speech, for example, Reddit should be able to use it too.

A digital duty of care is unprecedented and unfamiliar, and a new regulator will need a credible and relatable leader to reassure an anxious public that their concerns are being met. The Information Commissioner Elizabeth Denham – who recently called out Facebook’s hypocrisy in continuing to appeal its Cambridge Analytica fine while pivoting to support stronger data privacy regulation – is an excellent example to follow.

And though a well-implemented duty of care could be a great step towards tackling harmful content, policymakers must not assume that their work on digital regulation ends there. As the House of Lords Communications Committee reported last month, the need for regulation goes beyond online harms. We need to move from reacting to the latest tech outrage in an ad-hoc and piecemeal way, towards a forward-looking regulatory approach that asserts the public’s values in a digital age.

Preparing society for mass automation, working out how to govern facial recognition technology and developing a digital tax system that actually works are just a few things that need urgent attention. The tech sector has set its own rules for too long. Now it’s time for the government to lead the way. A duty of care is a good start – but only the first step on that road.

Jacob Ohrvik-Stott is a researcher at Doteveryone, an independent think tank that champions responsible technology