Governments that fail to strengthen procurement rules for AI-powered systems risk eroding public trust in the technology and in their own institutions, one of the UK’s leading internet experts has warned.
Speaking at the AI@Oxford conference on Tuesday, Professor Philip Howard, the director of the Oxford Internet Institute, claimed that AI could transform the way some public services are delivered, but that suppliers must be held to strict rules.
The warning comes as the government is reviewing prospective applications of AI across Whitehall and the public sector. Ministers hope the technology could be used to partially automate some areas of bureaucracy, such as inspections of schools and hospitals.
But Howard warned that public trust in governments will increasingly be tied to how they use artificial intelligence, and that those that do not strengthen the rules governing how the technology is procured risk eroding faith in the state itself.
“At the moment, industry can hide [to some extent]; we don’t fully understand the outcomes [AI reaches] and we don’t fully know where the data comes from,” he told delegates. “If there were procurement guidelines that said you must explain where all the data comes from and be able to account for how you purchased it and why […] that would clean up an enormous amount of the process that goes into AI.”
Howard clarified that he is not in favour of AI-specific legislation; he would prefer governments to assess how the technology complies with existing industry regulations, as well as human rights and data protection laws. “My instinct is not to legislate in innovation domains like this, mostly because in most democracies we already have the guidelines that industry and government should be following.”
The government is expected to publish the findings of its AI review in the coming months.