The Office for AI has published its draft guidelines for the procurement of AI systems across the public sector. This is the latest in a series of moves by the UK government, such as the Digital Ethics Framework and Public Sector AI Guide, to set out how AI should be used in the public sector. Many, including the Oxford Internet Institute’s Philip Howard, had called for procurement rules around AI to be strengthened.
The UK is the first government to pilot these guidelines, but they are intended to be rolled out globally, having been developed in collaboration with the World Economic Forum. Bahrain is also expected to trial the guidelines in the coming months.
Why do we need special guidelines for AI anyway? Government procures goods and services all the time, so what makes AI systems so different? There are three good reasons why current procurement guidelines don’t sufficiently address all the challenges in acquiring AI systems:
1. The market is fast-moving and immature with technology that is uncertain and rapidly iterating.
2. There is a lack of standards, both for accrediting products and in draft contracts that balance risk and innovation.
3. Extra consideration is required for the ethical use and technical robustness of the AI systems.
So how do the guidelines stack up? They are a self-admitted work in progress but there’s already a lot to admire.
One important point emphasised at the start of the guide is a warning against technological solutionism: the view that, given the right code, technology can fix any political or social problem, no matter how complex. The guidelines instead emphasise specifying the problem rather than the solution, and encourage users to set out why AI is relevant to the problem and to remain open to non-AI solutions.
Another issue the guidelines cover, and one that is often missed, is life-cycle management: they ask users to recognise during procurement that acquiring an AI tool is not a one-time decision. AI isn’t just for Christmas, and the real world is a moving target.
Instead, effectiveness needs to be monitored continually over time. A system that was highly accurate and unbiased at procurement is unlikely to remain entirely so, not least because those encountering the AI-augmented system will invariably optimise for its little quirks as much as for the goal intended by those procuring it. Given the black-box nature of many current AI systems, such drift may be especially opaque at deployment.
Finally, the guidelines require a focus on mechanisms of accountability and transparency throughout the procurement process. The recent controversies over the use of facial recognition, especially by the police and in public places like London’s King’s Cross, have highlighted how important transparency is to retaining trust.
However, as good as the guidelines are, I’m sceptical they will be as effective as hoped. Clear instructions will go some way to help the genuinely confused or uncertain. But other reasons driving a lack of accountability, transparency, and maintenance in some government applications of AI haven’t gone away.
The Bureau of Investigative Journalism has found that the development of algorithmic and data-driven systems in the public sector is frequently driven by the pressures of austerity to do more with less. Many applications of algorithmic tools in the public sector come from local councils, who’ve used them to inform health commissioning decisions or predict which children are at risk of neglect or abuse. The force of austerity is particularly strong for local authorities, who’ve had their budgets slashed in recent years; English councils have seen a 21 per cent fall in spending between 2010 and 2018.
Many procuring and deploying AI systems in the public sector simply do not have the time, the money or the incentives to properly oversee, audit or maintain these systems beyond the immediate goals of saving money and delivering in the here and now. Guidelines are all well and good, but they won’t be effective if the teams deploying AI are under-resourced and don’t have the capacity to comply.
The Office for AI and the World Economic Forum also hope that setting guidelines for AI procurement will have a significant impact on setting norms across the AI provider industry. Government is among the biggest buyers in many markets, and the cost of maintaining different standards for government and non-government services is likely to prove more burdensome than raising ethical and technical standards across the board. Ultimately, this might be where the guidelines have their greatest impact in the long run.
Elliot Jones is a researcher at Demos. He is interested in how people communicate about long-term risks online and how technology can improve and individualise economic policy. Follow him on Twitter @Elliot_M_Jones