Microsoft CEO Satya Nadella has warned that if governments fail to regulate facial recognition technology, they will pave the way for a “race to the bottom” in the marketplace.
The remarks, made in a speech at the World Economic Forum in Davos yesterday, mark the latest intervention by executives of the software giant, which sells the technology around the world.
Nadella said that Microsoft has “principles to build it and make sure [there are] fair and robust uses of the technology”, but that legislators have so far failed to play a part in its adoption.
Last year, the UK government published its 27-page biometrics strategy, more than four years after it was first promised. But, speaking to NS Tech, the chair of parliament’s science and technology select committee, Norman Lamb, said that it “in no way did justice to the fundamental issues involved, particularly around loss of privacy.”
“We urgently need clear regulations in place to ensure that biometric technologies are well-governed, yet the Government’s biometric strategy does not propose any legislation to provide rules for the use and oversight of new biometrics,” he added.
One of the biggest concerns about the technology is that governments will use it to surveil citizens’ movements. Last month, Microsoft’s president Brad Smith – one of the most high-profile pro-privacy advocates in the US – warned that “a government could use facial recognition technology to enable continuous surveillance of specific individuals”.
“It could follow anyone anywhere, or for that matter, everyone everywhere,” he added. “It could do this at any time or even all the time. This use of facial recognition technology could unleash mass surveillance on an unprecedented scale.”
Another major concern about the technology is that it is much more likely to misidentify women and people of colour, increasing the chances of minority groups being unduly targeted by law enforcement authorities.
“This [issue] makes it especially important that Microsoft and other tech companies continue the work needed to identify and reduce these errors and improve the accuracy and quality of facial recognition tools and services,” Smith said. “This work is underway, and we’re making important progress. It’s equally critical that we work with customers closely to ensure that facial recognition services are deployed properly in ways that will reduce these risks.”
The solution to these issues, Smith claims, is to ensure that any deployment of the technology is fair, transparent, accountable, non-discriminatory, consensual and lawful. “Given the early stage of facial recognition technology, we don’t even know all the questions,” said Smith. “But we believe that taking a principled approach will provide valuable experience that will enable us to learn faster.”
Last May, the UK’s data protection regulator threatened to take legal action over the use of facial recognition technology by police forces across the UK. In a blog post, the information commissioner, Elizabeth Denham, challenged the legality and effectiveness of the technology, which was rolled out at last year’s Notting Hill Carnival and the Champions League final in Cardiff.
“For the use of FRT to be legal, the police forces must have clear evidence to demonstrate that the use of FRT in public spaces is effective in resolving the problem that it aims to address, and that no less intrusive technology or methods are available to address that problem,” she wrote.
The Information Commissioner has identified the issue as a priority area for her team, writing to the Home Office and the National Police Chiefs’ Council to set out her concerns. “Should my concerns not be addressed I will consider what legal action is needed to ensure the right protections are in place for the public.”