
Risking everything: where the EU’s white paper on AI falls short

In late February, the European Commission released its long-awaited white paper on artificial intelligence (AI). The paper is seen as the world’s first pan-national attempt to regulate AI and forms part of the European Union’s grand plan for regulating technology over the next decade.

As such, the paper speaks volumes about the future of AI regulation, as the EU often sets the tone for global tech-regulation debates. Yet it also misses a number of crucial opportunities.

The EU wants to position itself as the responsible alternative to corporate surveillance capitalism and authoritarian state control. However, instead of addressing the existential challenges that AI raises – from injustice to new imbalances of power – the document suggests that years of tech lobbying and pressure from member states have paid off. A truly progressive technology policy looks different.

It begins with the “high-risk” approach to regulation, which limits new rules to specific use cases within a small number of sectors deemed “high-risk”. This approach, like the data-driven systems it aims to regulate, assumes that risk can be finitely calculated.

As a result, many AI applications with far-reaching societal consequences fall outside the scope of the regulatory proposal. For example, data brokers that use AI to predict people’s identities and interests, and the many ways in which AI is used to target advertising today – furthering corporate surveillance and entrenching racist and sexist stereotypes – remain unaddressed.


Not every application of AI needs in-depth scrutiny. But the current approach assumes that high risks can simply be mitigated, and that people are already protected from low-risk applications. This ignores that some applications and uses of AI systems are simply incompatible with fundamental rights, and so merit moratoriums or bans (we just learnt that EU police are pushing for a pan-European facial recognition network). It also overlooks that what is low risk for many can be very risky for some, as those who are already marginalised are often disproportionately affected by whatever harms technology amplifies.

A progressive approach to regulating AI would recognise that government uses of new technologies – be it for mass surveillance, policing, or the welfare state – inherently risk violating human rights and undermining civil liberties. Last year the UN rapporteur on extreme poverty produced a devastating account of the “digital welfare state”, arguing that new digital technologies are degrading the relationship between governments and the most vulnerable in society. A recent court judgement in the Netherlands ruled that an automated surveillance system for detecting welfare fraud violated basic human rights. Even in low-risk situations, encouraging governments to spend tax money on proprietary systems that are hard to audit, whether for fairness or accuracy, is simply irresponsible.


What makes the current white paper disappointing is that earlier, leaked drafts contained a number of sensible proposals, including a moratorium on facial recognition and special rules for the public sector. In the final proposal, much of this ground seems to have been pre-emptively ceded. The high-risk approach, in particular, is overly optimistic: as long as existing rules like the GDPR are not properly enforced, and crucial laws like the ePrivacy Regulation remain stuck in legislative limbo, people will not be protected.

The elephant in the room is the context in which AI is being developed and applied. This context is a tech industry dominated by a handful of companies, and a business model premised on the exploitation of people’s data. A truly progressive tech policy needs to tackle the root causes of surveillance capitalism, ensure that those who are most vulnerable are truly protected, and pave the way towards real alternatives. That takes more than calculated risk reduction.

Corinne Cath-Speth is a PhD researcher at the Oxford Internet Institute and the Alan Turing Institute. Frederike Kaltheuner is a public interest tech policy fellow with the Mozilla Foundation. 
