
Corinne Cath

Doctoral student at the Oxford Internet Institute.

Who is driving the AI agenda and what do they stand to gain?

From the critical, like law enforcement, healthcare, and humanitarian aid, to the mundane, like dating and shopping, artificial intelligence (AI) seems to be the answer to all our problems. AI is a catch-all phrase for a wide-ranging set of technologies, most of which apply statistical learning techniques to find patterns in large datasets and make predictions based on those patterns.
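
To make that definition concrete, here is a minimal, illustrative sketch of such statistical pattern-learning in Python, using the scikit-learn library; the bundled dataset and the choice of logistic regression are assumptions made for illustration only, not anything described in this article.

```python
# A minimal sketch of "statistical pattern-learning": fit a model on
# labelled examples, then predict labels for data it has not yet seen.
# Dataset and model choice are illustrative assumptions, not from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Tabular data (features) and labels, bundled with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)  # a simple statistical learner
model.fit(X_train, y_train)                # "find patterns" in the data
print(model.predict(X_test[:5]))           # "make predictions" on new data
print(f"accuracy: {model.score(X_test, y_test):.2f}")
```

The point of the sketch is only that the "intelligence" here is a fitted statistical model: everything it predicts derives from patterns in the data it was given, which is why the question of who holds the data matters throughout this piece.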

Meetings organised by representatives from industry, government, academia, and civil society seem to take place every other week to address the perils of AI and formulate solutions to harness its potential.

But who is driving the regulatory agenda and what do they stand to gain? Cui bono? Who benefits?

This question needs answering because letting industry needs drive the AI agenda presents real risks. With so many digital giants like Amazon and Facebook housed in the US, one particular concern is AI's potential to remake societies in the image of US culture and the preferences of large US companies, even more than is currently the case.

AI programming does not necessarily require massive resources; much of AI's value derives from the data a company holds. Because a handful of American companies sit on the largest troves of such data, most of the technical innovation is led by them. As these same companies are at the forefront of many regulatory initiatives, including those in Europe, it is essential to ensure this concern is not exacerbated.

AI systems are presented as very complex and difficult to explain, even for the technically trained. The merits of those arguments aside, companies and governments alike use this reasoning to justify the deep involvement of the AI industry in policy making and regulation. And it is not just any industry players that are involved, but the same select group that leads the business of online marketing and data collection.

This is no coincidence. Companies like Google, Facebook, and Amazon sit on troves of data that can be turned into the raw material for new AI-based services. The 'turn to AI' thus both further consolidates their market position and lends legitimacy to their inclusion in regulatory processes.

A related concern is how much influence these companies have over AI regulation. In some instances, they are invited to act as co-regulators. For example, after the Cambridge Analytica scandal, Facebook CEO Mark Zuckerberg testified before a joint hearing of the US Senate Commerce and Judiciary Committees about his company's role in the massive data breach. During the hearing, multiple Senators explicitly asked him to provide examples of what regulation for his company should look like.

Closer to home, the European Commission recently appointed a High-Level Expert Group (HLEG) on AI, mandated to work with the Commission on the implementation of a European AI strategy. The group's 52 members hail from various backgrounds and, while not all affiliations are apparent, almost half appear to come from industry; only four come from civil society. Similarly, the UK government recently appointed three experts, one academic and two industry representatives, to advise the newly created Office for Artificial Intelligence.

The influence that corporations have to set the agenda for AI regulation is also visible in the creation of various industry-initiated initiatives on AI and ethics. Recent examples are the 'Partnership on AI' (PAI), set up by seven large tech companies (Amazon, Apple, DeepMind, Google, Facebook, IBM, and Microsoft), and the Institute of Electrical and Electronics Engineers' (IEEE) 'Global Initiative on Ethics of Autonomous and Intelligent Systems'.

Much can be said in favour of such open norm-setting venues, which aim to address AI regulation by developing technical standards, ethical principles, and professional codes of conduct outside the slow grind of formal regulatory processes. But again, we must ask: cui bono? The solutions presented by these initiatives are often framed in terms of ethical frameworks or narrow technical fixes that lead to fair, accountable, and transparent AI. But they do not address questions of hard regulation or the internet's business model of advertising and attention. If we are serious about AI regulation, then we must discuss those topics in these forums as well.

How AI systems function, and by extension what regulatory problems they raise, is highly context-dependent. A US-led, commercially driven agenda is naturally going to be an awkward fit for much of the rest of the world. For instance, the EU has very different privacy regulations from the US.

It is hard to find anyone who will openly argue against the need for fair, accountable, and transparent AI. Yet we must remain cognisant of the concerns that are not, or only partially, covered by these catchphrases. In focusing on narrowly defined conceptualisations of fairness, accountability, and transparency, what are we leaving behind?

Are we assuming that issues around AI and equity, social justice, or human rights are automatically caught by these popular acronyms? Or are these concerns out of scope for the organisations pushing the agenda? Asking these hard questions matters because these concepts are increasingly making their way into regulatory initiatives outside of the US.

So, how can we address these concerns? One way would be to ensure equitable stakeholder representation when regulating AI. Another would be to start more Europe-based AI initiatives, as is being done by AI4People and the Council of Europe's Expert Committee on AI and Human Rights. Yet even with more Europe-led initiatives, we would still not be hearing enough from the Global South. Those voices are especially relevant, as their countries are often used as 'test-beds' for technology that will be rolled out across the rest of the world.

Similarly, it is important to go beyond the fairness rhetoric and start to formulate what other fundamental values should be included. The 2018 AI report of the European Group on Ethics in Science and New Technologies (EGE) is a step in the right direction, as it explicitly includes concepts like justice, equity, and solidarity as well as the rule of law.

It should be evident, however, that simply putting a civil society representative or academic in the room for every industry representative is not enough. Even when various stakeholders are invited to join the regulatory process in equal numbers, there is no equality of influence: corporations simply have more resources to dedicate to such processes. To ensure we all benefit from these technologies, we need to guarantee that a diverse set of concerns and values is represented – equitably – when setting the regulatory agenda for AI.

Corinne Cath is a doctoral student at the Oxford Internet Institute. Her research focuses on the politics and ethics of internet governance and the management of the internet's infrastructure.