A High Court ruling that the South Wales Police’s use of facial recognition technology is lawful is being challenged today in the Court of Appeal by human rights group Liberty. The organisation is arguing that the technology breaches human rights laws and is discriminatory.
The organisation is acting on behalf of Cardiff resident Ed Bridges, whose face was scanned in early trials of the surveillance technology. Liberty argues that the force failed to meet its obligations under the Equality Act 2010 because it did not take into account that the technology is more likely to misidentify women and people of colour; that the technology conflicts with a person’s right to a private life under Article 8 of the European Convention on Human Rights; that the scale of the intrusion – more than 500,000 faces scanned to date – is not proportionate to the task at hand; and that South Wales Police has no adequate policy document in place governing its use of facial recognition technology.
“We should all be able to use our public spaces without being subjected to oppressive surveillance. For three years now South Wales Police has been using facial recognition against thousands of people, without our consent and often without our knowledge,” said Bridges.
“This technology is an intrusive and discriminatory mass surveillance tool and I’m optimistic that the court will agree that it clearly threatens our rights.”
Bridges was scanned on a busy high street in December 2017 and while attending a protest in March 2018. South Wales Police is a prolific user of the technology – having deployed it more than 70 times to date – as part of an official Home Office-sponsored trial.
The case is the world’s first legal challenge to the use of facial recognition as a mass police surveillance tool. The September 2019 High Court ruling found that although South Wales Police’s use of facial recognition technology interfered with the privacy rights of everyone scanned, that use was nonetheless lawful.
Campaigners like Liberty warn that the technology should never be deployed on a mass scale, because it would usher in the end of anonymity in public spaces – something that is intimately tied to an erosion in privacy, rights and freedoms. Liberty has previously denounced automatic facial recognition as “arsenic in the water supply of democracy”.
There’s also the issue of discrimination. The facial recognition technology that the Metropolitan Police began to deploy at the start of the year is accurate only 19 per cent of the time, according to a study carried out by surveillance expert Pete Fussey at the University of Essex. Facial recognition technology is far more likely to be biased against people of colour, leading to the misidentification and potentially wrongful criminalisation of those affected.
In the period between the High Court case and today’s appeal, awareness of the risks of the technology has grown. Prompted by the ongoing Black Lives Matter protests, some technology companies have stepped back from facial recognition: IBM announced it would shelve plans for general-use facial recognition technology (but not specific-use); Amazon announced it would stop supplying police with the technology for a year; and Microsoft said it would ban police use of its technology until stronger regulation is in place.
“Around the world people are waking up to how dangerous this technology is,” said Liberty lawyer Megan Goulding. “While the big tech firms are addressing some concerns with moratoriums, these are short-term solutions.
“We should be able to walk our streets and other public spaces without the threat of being watched, tracked and monitored, and the police should not be using any technology that is discriminatory and intrusive. Facial recognition technology is an oppressive surveillance tool that clearly threatens our rights.”