Technological developments can break down all sorts of boundaries, distinctions and separations. The most recent of these ruptures has led to a battle over who owns our faces.
It has been reported that Twitter has demanded that the AI company Clearview stop harvesting faces from user profiles. The reports suggest that these openly available images have been captured and stored for use in the development and deployment of facial recognition technology. Twitter wants this to stop and the harvested pictures to be deleted.
Understandably, facial recognition has been attracting quite a bit of coverage in recent months. From its use in CCTV and street monitoring through to its inclusion in internet search functions, its capabilities are expanding and provoking new concerns.
When we think of data we tend to think directly of numbers, symbols and the cloud. But images and sound can be data too, or they can at least be turned into data. This is why we are seeing tech companies seeking to expand into increasingly popular speech-controlled units for the home. Sonic data is an area of expansion: it creates new possibilities for insight and targeting.
Like the sonic, the visual also provides a range of opportunities for data harvesting. Concerns over facial recognition are tied into this. Facial data seem to provoke a stronger response than other types of data. Perhaps this is because they feel more surveillant.
Surveillance is most often understood in visual terms – it provokes a fear of being watched. Facial data is most easily associated with an oppressive vision of surveillance. Facial recognition has panoptic qualities. It evokes the accounts of surveillance in Michel Foucault’s adaptation of the panopticon prison – his book Discipline and Punish famously sought to understand the way we react to the potential for being watched. It also clearly has Orwellian properties – Big Brother is watching. More broadly, the face is a visual thing and so, as a form of data, it more readily fits into and stimulates the worries we might have about being under surveillance. This is possibly why there appears to be greater sensitivity to our faces being turned into data for use in facial recognition than to other data forms.
Facial recognition itself is an extension of biometrics. These are measures of the body that are often used for identification but also appear in the other ways we measure our bodies for different purposes. Wearable devices and mobile apps, for instance, have facilitated new developments in biometrics.
The sociologist Btihaj Ajana has explored the many ways that power is exercised over our bodies through data in her excellent book Governing Through Biometrics. Stretching from things like fingerprinting to DNA analysis, facial recognition is part of this history of governing through biometrics. Amongst the types of data-facilitated surveillance, facial recognition, as a biometric, is more obviously about control over what Nikolas Rose, in different circumstances and influenced by Foucault, has called “the power over life itself”. Facial recognition technology feels like a particularly invasive and direct biometric.
There are probably two reasons why Twitter has reacted strongly to the potential use of its profile images for facial recognition.
The first is fairly obvious. As with all tech and social media companies, data are a key asset, so they don’t want their data to slip out of their control. It would undermine any social media company’s business model to give away data.
Second is fear. Social media companies are concerned that their brands will be tarnished if the surveillance potential of their data is made too obvious to users. Such obvious potential for surveillance is likely to deter users. The changing social media ecology could mean that platforms become more concerned about losing users to their rivals.
As a result of this fear, social media companies are likely to be attentive to how we feel about surveillance and to how obvious that surveillance is to their populations of participants. They will inevitably try to keep it within the realms of acceptability – whilst also trying to change what we see as acceptable. When it comes to what Shoshana Zuboff calls ‘surveillance capitalism’, the management of perceptions is important in maximising data extraction. If people feel there is too much going on, or too much of the wrong type of surveillance, then data extraction will diminish.
Plus, of course, in this case Twitter will see it as doing its image no harm to appear to be defending our right to privacy.
Dave Beer is a professor of sociology at the University of York