
EU softens plans for facial recognition regulation, sparks privacy backlash

The European Commission has come under fire from privacy experts after withdrawing proposals to impose a moratorium on facial recognition technology, and instead mooting plans to force firms to train their algorithms on European user data.

The initiative is detailed in a series of proposals designed to encourage the adoption of “responsible” artificial intelligence across the EU. But critics have warned that the measure, aimed at reducing bias and error, fails to address fundamental concerns about the threat the technology poses to citizens’ privacy.

Under president Ursula von der Leyen, the Commission is expected to publish proposals on Wednesday morning (19 February) for restoring Europe’s technological sovereignty. They will cover a range of issues, from algorithmic bias to data privacy and the EU’s perceived lack of competitiveness on AI. But it appears that officials are softening their approach to facial recognition regulation.

Last month, the Commission revealed it was considering a five-year moratorium on the use of facial recognition in public spaces. But amid scepticism about how the move would be received by member states, many of whom have already rolled out live facial recognition technology, it dropped the plans less than two weeks later.

Weighing in on the debate earlier this week, the Commission’s digital czar Margrethe Vestager said that live facial recognition may be unlawful under GDPR, given that citizens do not consent to its use. But she added that there are exemptions covering public security, meaning law enforcement agencies can roll it out legally.

Now the EU has dropped plans to impose a ban, although many had already cast doubt on the sincerity of the proposal, given that the Commission has limited competence over issues such as national security and policing.

Instead, it is taking a different tack. “According to a recent draft of the EU document, companies could have to retrain their systems with European data sets if they can’t guarantee the facial recognition or other risky technology was developed in accordance with European values,” Bloomberg reported.

One of the key criticisms of facial recognition in recent years is that it misidentifies women and people of colour at a higher rate, and it appears the plans could be designed to reduce the risk of false positives in European deployments.

But, speaking to NS Tech, Dr Michael Veale, a lecturer in Digital Rights and Regulation at UCL’s Faculty of Laws, said “the core problem with facial recognition is not bias, but power”. He added: “Society accepted — reluctantly at times — CCTV on the grounds that the recordings were generally low resolution, not retained, and certainly not interlinked. Facial recognition and similar technologies like automated lipreading transform existing infrastructures into ones which identify individuals and listen to conversations.

“Turning the conversation about facial recognition to one about bias in accuracy implies that the aim is a perfect and complete surveillance regime, which can be co-opted to identify and persecute anybody, with all the bias imaginable.”

Speculating on why the Commission might be softening its approach, Veale said: “A moratorium on facial recognition would only be feasible and additional to the GDPR if it applied to the police, and laws that apply to domestic policing are much more difficult to negotiate and pass at European level. I suspect there was significant pushback from member states.”

Law enforcement agencies in a number of EU member states have already rolled out automated facial recognition (AFR). Cardiff High Court ruled last year that South Wales Police was justified in rolling out AFR across the Welsh capital in 2017 and had complied with data protection and human rights laws.

Veale added: “A moratorium on use by law enforcement wouldn’t be a bad thing; the GDPR, interpreted properly without any questionable derogations by member states, already forbids the use of facial recognition in private sector surveillance.

“However we should ask bigger questions: does policing really need facial recognition? There are so many other ways to reduce crime that do not threaten privacy and dignity. Is this really about reducing crime, or being seduced by a vendor promising a magic button to fix austerity at the cost of dignity and human rights?”

Anna Bacciarelli, a researcher at Amnesty International, said: “This is an opportunity for the EU to take the lead on protecting rights in response to rapid technological developments, as it did with the GDPR. The apparent recommendation to retrain existing systems on European datasets seems to fundamentally misunderstand the serious threats to privacy and other rights posed by facial recognition systems. We need to stop, pause use of facial recognition tech and address the huge underlying human rights risks rather than tinkering with training sets.”
