
Roger Taylor

Chair, Centre for Data Ethics and Innovation

Let harmful content run loose, or rein in free speech online? There’s another way forward

Online targeting systems – which promote content in social media feeds, recommend videos, target adverts, and personalise search engine results – are immensely powerful. With 500 hours of video uploaded to YouTube and over 350,000 tweets posted every minute, algorithmic systems have been developed to understand and predict our preferences and behaviours, and use that insight to show us the content that is most likely to get and keep our attention.
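
At their core, these systems rank candidate items by a prediction of how likely each one is to hold a particular user's attention. The short sketch below is purely illustrative of that engagement-driven ranking, assuming a toy predict_engagement scoring function and simple Item records; it is not how any real platform implements recommendation.

```python
# Purely illustrative sketch of engagement-driven ranking; real recommender
# systems use learned models over vastly richer signals than this toy example.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    topic: str

def predict_engagement(history: list[str], item: Item) -> float:
    # Stand-in for a learned model: estimate how likely the user is to engage,
    # here simply by how often the item's topic appears in their viewing history.
    if not history:
        return 0.5
    return history.count(item.topic) / len(history)

def rank_feed(history: list[str], candidates: list[Item], k: int = 10) -> list[Item]:
    # Surface the k items predicted to hold the user's attention best.
    return sorted(candidates, key=lambda it: predict_engagement(history, it), reverse=True)[:k]

# A user who mostly engages with one topic is shown more of the same.
history = ["gaming", "gaming", "news", "gaming"]
feed = rank_feed(history, [Item("a", "gaming"), Item("b", "news"), Item("c", "cooking")], k=3)
print([it.item_id for it in feed])  # ['a', 'b', 'c']
```

Because a ranking like this optimises only for predicted engagement, the feedback loop described below (more of whatever you already respond to) falls directly out of the objective.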

That can be good. Targeting systems make the seemingly bottomless internet navigable. And our research shows that people genuinely value them. But they also give the major online platforms like Google and Facebook the power to shape what we see and do on the internet.

If an algorithm understands what makes you tick, it can use that information to influence you. It can keep you online by showing you just one more thing that hooks you in for longer. In the race to keep your attention, it could nudge you towards ever more sensational and extreme content. In doing so, it might increase the spread of harmful content across a large population: in 2018, Daesh/ISIS content was regularly seen tens of thousands of times before it was removed from YouTube. It could also encourage social fragmentation: my online experience is not the same as yours. I see the things I like, you see the things you like, and we may each be pushed further in opposite directions.

Very little is known about the scale and scope of the issues caused by these algorithms, and most studies are contested. But if they pose such significant risks, shouldn’t we make sure we understand what’s really going on, and how our society is being affected? How else can we, as a society, hope to navigate the tricky line between free speech and protection from harm online?

This tension is a key part of what the Centre for Data Ethics and Innovation has been exploring this year. Our recommendations, published this week, set out proportionate measures to improve online safety while protecting freedom of expression and privacy, and enabling innovation.

First, platforms should be made accountable not for the content they host, but for how they recommend and target it. The government’s proposed online harms regulator can help to do this by addressing systemic risks: ensuring that platforms have proper processes in place to understand the risks their products create, and that they act to minimise them. The world will be looking at the UK’s approach, and it is vital that the regulator has duties to protect human rights such as freedom of expression and privacy.

This won’t work without transparency. The major online platforms should host publicly accessible archives of high-risk adverts, such as political advertising, and of adverts for opportunities such as jobs, where targeting could lead to discrimination, so that civil society and regulatory bodies can scrutinise who is being targeted with which content.

But we also need to be able to find out what online targeting systems are doing to society. Internet addiction, negative impacts on mental health, political polarisation and echo chambers are all risks, but how significant are they? Are the hazards becoming harms? The data needed to answer these questions, and to develop proportionate policy responses, is locked away within platforms’ walled gardens. The regulator should facilitate independent academic research into these and other issues of significant public interest and, to do so, it should have the power to require online platforms to give independent researchers secure access to their data. This must be done in a way that preserves users’ privacy and platforms’ commercial confidentiality.

Regulation won’t solve everything. People have been promised control over their data and their online experiences. But at the moment, they don’t have it.

That is why we believe equal attention must be paid to encouraging the development of different types of targeting system. Government can play an important role here. For many years, innovators have attempted to redesign the way online targeting works so that it really serves the interests of users. The public like this idea: they can see that targeting technology, used well, could help them manage their health, their finances and their time. Ideas of this sort have met with limited success in the market so far. But over the longer term, with the right policies, that could change.

With these changes, we can make targeting safer and more transparent, and put people in control.

Roger Taylor is chair of the Centre for Data Ethics and Innovation