The California metropolis’ Board of Supervisors recently voted in favour of a ban on the use of facial recognition technology by all government agencies operating in the city of San Francisco, including police and airport security.
According to the San Francisco authorities, facial recognition technology just isn’t ready yet. But neither is the politics.
The move has been widely praised by civil liberties watchdogs nationwide and may be the first of many similar bans in the US. San Francisco’s neighbouring Bay Area city of Oakland has a similar ban up for review and several other US cities are considering their options.
In an almost unanimous vote earlier this week, the Board agreed that the technology, as it currently stands, is too inaccurate and too lightly regulated to guarantee the citizens of San Francisco an adequate level of protection from error and bias.
Facial recognition technology was designed for surveillance. It maps human facial features to an existing database of photographic records to identify and track the movements of individuals.
Regulators worldwide have already voiced their concerns about the use of facial recognition as a means of building a tech-driven totalitarian state, hard-coded to override basic freedoms and data privacy. In the US, two Senators recently introduced bipartisan legislation to prohibit businesses from using facial recognition technology without the consent of the individuals concerned.
Several leading figures in facial recognition development, such as Microsoft’s President Brad Smith, have recently gone on record saying they believe the technology should be regulated to uphold democratic freedoms and to safeguard against potential harms such as inaccuracy and discrimination. Last December, Google said it would not sell a facial recognition surveillance product until the concerns and dangers surrounding the technology were properly addressed.
Facial recognition market researchers agree the technology has problems. Last January, a joint study by the Massachusetts Institute of Technology and the University of Toronto found that Amazon’s facial recognition software – which has been on sale to police agencies for about a year – has a tendency to mistake darker-skinned women for men, drawing heavy criticism and accusations of in-built bias.
The study found that Amazon’s technology misidentified light-skinned women seven per cent of the time, but darker-skinned women 31 per cent of the time. The technology had a far easier time identifying men, it seemed.
Nevertheless, facial recognition technology has already been put to use in the US and further afield, with “successful results”, depending on one’s point of view.
In June 2018, police agencies in the US used the technology to help identify a suspect after the mass shooting in Annapolis, Maryland.
On the flipside, it has been reported that the technology is also in use on a 24-hour basis in China’s Xinjiang region, where almost 2 million Chinese Muslims have reportedly been detained and relocated to so-called “re-education camps” since 2017.
This article first appeared on Verdict, which is part of the same group as NS Tech and GlobalData