The Met Police has communicated – via a quietly appended paragraph on its website – its intention to continue using facial recognition technology in the wake of a landmark court case that found South Wales Police’s use of the tech was unlawful.
New text on the Met Police’s site says that the force is aware of the Court of Appeal’s judgment handed down on 11 August, but that “the Met’s approach to live facial recognition is different to the South Wales Police cases which were appealed.”
It claims that this is due to the “different crime issues” that affect London compared to South Wales. “Differences include that the Met has been clear that our use of this technology is intelligence-led, and focused on helping tackle serious crime, including serious violence, gun and knife crime, child sexual exploitation and helping protect the vulnerable,” the site reads. “We tell the public about each use of facial recognition technology in advance of, during and afterwards.”
The site claims that the Met uses “the latest accurate algorithm” and its own policy documents (as opposed to those used by South Wales Police).
“We will carefully consider the judgment and act on any relevant points to ensure that we maintain our commitment to use facial recognition in a legal, ethical and proportionate way,” it says.
The case against South Wales Police, brought on behalf of Ed Bridges by civil rights group Liberty, was the first successful legal challenge against police use of facial recognition tech. The case was at first thrown out by the High Court in September 2019, but the appeal succeeded on three out of five counts.
It succeeded on the grounds that there was insufficient guidance on where the automated facial recognition system AFR Locate could be deployed and who could be placed on a watch list.
It also found that South Wales Police failed to comply with the Public Sector Equality Duty (PSED) under section 149 of the Equality Act 2010, because it did not carry out due diligence on whether the technology exhibited racial or gender bias.
Liberty has rejected these claims. “Intelligence led? Met has 100s on watch lists. Only one arrest. Arrest openly celebrated in the street and arrested woman was immediately bailed the next day,” the group tweeted. “Serious violence? Met has used [facial recognition] to track people with mental health problems. It’s time to ban it.”
In its judgment, the Court of Appeal wrote: “We would hope that, as AFR is a novel and controversial technology, all police forces that intend to use it in the future would wish to satisfy themselves that everything reasonable which could be done had been done in order to make sure that the software used does not have a racial or gender bias.”
An independent analysis of the Met Police’s facial recognition trials, carried out by surveillance expert Pete Fussey at the University of Essex, found that the technology was accurate in only 19 per cent of cases.
A study published late last year by the US National Institute of Standards and Technology (NIST) found that facial recognition systems used globally, including in the UK, exhibit racial and gender bias and can lead to the wrong people being arrested.
Both South Wales Police and the Met use NEC’s NeoFace Live Facial Recognition technology. South Wales Police began trialling it first; the Met followed suit in early 2020, scanning the faces of passers-by on Oxford Street.
The privacy campaign group Big Brother Watch has said that its research shows Met Police’s watch lists contain “not only wanted suspects, but also political protestors, football fans, and innocent people with mental health problems who have no criminal history”.
Responding to the Met Police’s continued commitment to facial recognition technology “in a legal, ethical and proportionate way”, Big Brother Watch said it “maintains [its] commitment to BAN live facial recognition surveillance in a legal, ethical and proportionate way”.