The debate around the ethical use of facial recognition (FR) continues to gather momentum. A temporary ban on the use of FR in public spaces is described in a draft EU White Paper on Artificial Intelligence (AI) as one option for a future regulatory framework. Such a moratorium is unlikely to be put in place any time soon, but Brussels is leading a discussion around FR that goes beyond the technology itself, focusing on the need to empower individuals in the digital era and to limit the potential for AI systems to make inappropriate decisions.
FR, which makes decisions based on image interpretation, poses risks to fundamental rights, including discrimination and invasions of privacy. FR training data is often incomplete or unrepresentative of the general population, so systems trained on that data inherit its biases. While human decision-making is also prone to bias, AI systems operate at scale and are not subject to the same social control mechanisms. A study from the MIT Media Lab showed that the accuracy of FR technology differs across gender and skin tone: the darker the skin, the higher the error rate, reaching up to 35 per cent for images of darker-skinned women. There are significant risks that FR used in law enforcement and in border, airport, and retail security will be unreliable.
The use of live FR technology involves the processing of personal data, specifically biometric data, to accurately identify an individual. As such, data protection law, like the General Data Protection Regulation (GDPR), applies whenever FR is used. Under the GDPR, biometric data may, as a rule, only be processed where the individual whose data is collected has given explicit consent, subject to limited exceptions. However, it is unlikely that individuals, including those not on a watchlist, will ever be asked to provide consent where FR is used for law enforcement purposes. Data watchdogs, like the Information Commissioner’s Office (ICO), note that law enforcement bodies should only process data when it is necessary for a specific task and proportionate to that aim.
Advances in processing power, edge computing, 3D technology, and machine learning have allowed the leading software companies to expand their FR offerings, reducing costs and accelerating mass adoption. The technology is everywhere and it is relatively cheap, to the extent that use cases are moving from security and law enforcement to businesses. A growing number of shops, for example, are using FR technology not only to identify thieves but also to track customers. For traditional retailers, FR provides a useful tool to track customer behaviour in-store, as well as a powerful, albeit intrusive, way to protect their merchandise.
Another issue is the black-box nature of AI systems, and specifically of machine learning algorithms, which are inherently complex and generate results that can be difficult to explain. In particular, the way machines learn, focusing on specific signals and information, has become increasingly opaque. This can create safety issues and make it difficult to attribute liability. For example, if an autonomous vehicle fails to recognise an object on the road and causes an accident, enforcement agencies might be unable to determine why the automated decision was taken.
At stake in FR discussions is also the ability to influence global standards in adopting new technology. China’s clout in international forums has grown in recent years, as shown in the negotiations around 5G specifications. Chinese companies are lobbying for their standards to be adopted for FR, video monitoring, and city and vehicle surveillance. Beijing’s efforts clash with Brussels’ aspiration to lead the way in regulating AI and ensuring that technology “is developed and used in a way that respects European values and principles”. Europe’s rationale is that the risks of rushing the roll-out outweigh the potential cost of stifling innovation.
That said, the EU’s proposed ban wouldn’t come into effect overnight, given the broad agreement required among member states. Some of them, including France and Germany, are already planning an FR roll-out in public spaces in the short term, which suggests that they wouldn’t be keen to limit applications. London’s Metropolitan Police has announced that it will deploy live FR across the capital, despite concerns voiced by regulators and human rights campaigners. It is also worth noting that, in the draft white paper, a moratorium is only one of a range of options considered. Others include a voluntary ethical code and mandatory risk assessment for high-risk applications, like healthcare and transport.
Laura Petrone is a senior analyst at GlobalData Thematic research. GlobalData’s computer vision thematic report can be found here.
NS Tech and GlobalData are part of the same group.