Jim Killock

Executive director, Open Rights Group

Why do politicians still trust Facebook with our fundamental rights?

Watching Mark Zuckerberg’s hearing in the Senate, one thing seemed obvious. While the Cambridge Analytica scandal has shocked the public, many elected officials feel that the problem with Facebook is that it is not spying on people enough.

Facebook has been criticised in the past for failing to help law enforcement get at WhatsApp messages; for allowing criminal content on its platform; and for failing to deal with bullying and abuse. These criticisms, which often lead to demands for Facebook to control content, have been merged with the current concerns about privacy.

For some, the message is clear: the opportunity from the Cambridge Analytica scandal is to push Facebook into patrolling its platform with automated tools, to detect and remove material that worries politicians. In other words, to erode our privacy even more.

After all, as Facebook has done such a great job with our privacy, why shouldn’t we trust them with policing our free speech as well?

The truth is that corporations should not be trusted with defining our fundamental rights, but that is precisely what most politicians want to do. Politicians are giving up on the courts. While they claim that they want “the same rules online as offline”, that does not apply to your right to get a judge or magistrate to decide whether you are breaking the law.

Instead, they complain that there is “too much bad material” and that getting the courts or the police involved would be too much work. In their view, Mark Zuckerberg and some clever bots should be rooting out the problems and censoring content as it appears. Or else face fines.

Of course platforms should ensure they keep within the law, and that problematic content is removed. They should invest in moderation and kick off people who abuse their platforms for criminal purposes.

But we should be wary of the “internet exceptionalism” that some politicians apply to Facebook, by which they absolve themselves of responsibility. Facebook does not operate courts or jails; if we place the burden of law enforcement largely on these platforms, then we risk allowing real criminals to escape prosecution and conviction. Instead, the best claim that Zuckerberg can make is that “AI” or machine learning can identify and defend against abusive content.

By and large, abusers of any kind can simply create a new account, post new content and carry on their abuse. They too can automate their posting and learn how the algorithms work, or encourage others to help out; or they can work out where the lines are drawn, so as to abuse without immediately falling foul of the rules. Simply speeding up discovery and removal of content is not a solution; it is a cat and mouse game.

Worse still, automated removal risks legal and legitimate content being swept up in the takedowns. We know already that Facebook has arbitrary rules, for instance, removing anything featuring a female nipple as adult content. BT, Sky, Virgin and TalkTalk employ adult content filters which routinely block charities, drugs helplines and church websites.

Computers aren’t perfect and they aren’t magic. Above all they are not a court.

If the lesson politicians learn from Cambridge Analytica is that we should hand more power to Facebook, then they really have not been listening.