The head of the Met Police, Cressida Dick, today said that concerns about live facial recognition tech are ill-informed and inaccurate, sparking a row with privacy campaigners.
The Met began to deploy the tech operationally earlier this month in the capital.
During a speech at the Royal United Services Institute, Dick declared herself a champion of “proportionate use of tech in policing”, and decried the fact that the “loudest voices” in this debate at present are critics who are “sometimes, highly inaccurate and/or highly ill-informed.”
Dick took it upon herself to “bust some pervasive myths” about the Met’s use of live facial recognition (LFR) tech. She said that the tech doesn’t store your biometric data and that human officers will make the final decision on whether or not to intervene.
Dick denied the tech had an ethnic bias, saying its only bias was that it was harder to identify a woman than a man.
The Met Police has claimed that the live facial recognition tech is reliable in 70 per cent of cases, but an independent study by surveillance expert Professor Pete Fussey at the University of Essex put that figure at just 19 per cent. Fussey had been hired by the Met to scrutinise two years' worth of trial data. Dick glossed over this finding, making no reference to the accuracy of the tech in her speech.
Dick noted that to date, the technology has been used to arrest eight people, who she claimed probably would not have been intercepted otherwise.
On treading the line between security and privacy, Dick said it wasn’t up to the Met to decide where this should fall. However, she added: “Speaking as a member of the public, I’ll be frank. In the age of Twitter, Instagram and Facebook, concern about my image and those of my fellow law abiding citizens passing through LFR, and not being stored, feels much much smaller than my and the public’s vital expectation to be kept safe from a knife through the chest.”
Dick’s comments have triggered a backlash from human rights and privacy organisations.
Hannah Couchman, Policy and Campaigns Officer at Liberty, countered Dick’s claims that only people accused of serious crimes would be included in the database: “Anyone can be included on a facial recognition watch list – you do not have to be suspected of any wrongdoing, but could be put on a list for the ludicrously broad purpose of police ‘intelligence interest’. And even if you’re not on a watch list, your personal data is still being captured and processed without your consent – and often without you knowing it’s happening.”
Monitors of a trial deployment of the technology in 2018 said that a 14-year-old black schoolboy was fingerprinted after being misidentified outside the Westfield shopping centre in Stratford.
There are also concerns about the potential for wrongful targeting of those involved in activism. In January, it was reported that counter-terrorism police put the non-violent group Extinction Rebellion (XR) on a list of extremist ideologies that should be reported to the authorities running the Prevent programme, which aims to flag up those at risk of committing atrocities. “Gaffes” such as these stoke concerns that overzealous policing might be putting law-abiding citizens at risk of criminalisation.
Big Brother Watch’s director Silkie Carlo said: “The Commissioner is right that the loudest voices in this debate are the critics; it’s just that she’s not willing to listen to them. Her attempt to dismiss serious human rights concerns with life or death equations and to depict critics as ill-informed without basis only cheapens the debate.”
Parliament has yet to introduce guidance weighing the potential security benefits of live facial recognition against safeguards of the kind already in place for police use of fingerprints and DNA.
Dick said that the Met would welcome government guidance on how the police can be lawfully empowered to use new tech.