
Oscar Williams

News editor

Nesta has unveiled a code of standards for deploying AI in the public sector

The innovation foundation Nesta has unveiled a new code of standards to ensure the public sector’s use of AI and algorithmic decision making is fair and transparent.

Nesta’s director of government innovation, Eddie Copeland, sets out in the standard 10 principles that public sector organisations should follow if they want to automate critical decisions.

“The application of AI that seems likely to cause citizens most concern is where machine learning is used to create algorithms that automate or assist with decision making and assessments by public sector staff,” Copeland said in a blog post.

“While some such decisions and assessments are minor in their impact, such as whether to issue a parking fine, others have potentially life-changing consequences, like whether to offer an individual council housing or give them probation.”

The government has already set out a framework for ethical data science, which includes six principles for responsible applications. As Copeland notes, the EU’s upcoming General Data Protection Regulation will also give individuals the right to ask for an explanation of how an algorithmic decision about them was made.

But he asks whether the existing guidance and regulation go far enough, especially when it comes to public sector data: “While debate may continue on the pros and cons of creating more robust codes of practice for the private sector, a stronger case can surely be made for governments and the public sector.

“After all, an individual can opt out of using a corporate service whose approach to data they do not trust. They do not have that same luxury with services and functions where the state is the monopoly provider.”

An abridged version of Copeland’s 10 principles follows, along with a brief illustrative sketch after the list. The original version is available to read on Nesta’s website.

  1. Every algorithm used by a public sector organisation should be accompanied by a description of its function, objectives and intended impact, made available to those who use it.
  2. Public sector organisations should publish details describing the data on which an algorithm was (or is continuously) trained, and the assumptions used in its creation, together with a risk assessment for mitigating potential biases.
  3. Algorithms should be categorised on an Algorithmic Risk Scale of 1-5, with 5 indicating those whose impact on an individual could be very high, and 1 those whose impact is very minor.
  4. A list of all the inputs used by an algorithm to make a decision should be published.
  5. Citizens must be informed when their treatment has been informed wholly or in part by an algorithm.
  6. Every algorithm should have an identical sandbox version for auditors to test the impact of different input conditions.
  7. When using third parties to create or run algorithms on their behalf, public sector organisations should only procure from organisations able to meet Principles 1-6.
  8. A named member of senior staff (or their job role) should be held formally responsible for any actions taken as a result of an algorithmic decision.
  9. Public sector organisations wishing to adopt algorithmic decision making in high risk areas should sign up to a dedicated insurance scheme that provides compensation to individuals negatively impacted by a mistaken decision made by an algorithm.
  10. Public sector organisations should commit to evaluating the impact of the algorithms they use in decision making, and publishing the results.
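
To illustrate how the first four principles might work in practice, here is a minimal sketch in Python of a published register entry for a single algorithm. The field names, the publish() helper and the fictional parking-fine model are assumptions for illustration only; they are not part of Nesta’s standard.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AlgorithmRegisterEntry:
    """Hypothetical published record for one public sector algorithm."""
    name: str
    function: str                # Principle 1: what the algorithm does
    objectives: str              # Principle 1: objectives and intended impact
    training_data: str           # Principle 2: data the algorithm was trained on
    assumptions: list[str]       # Principle 2: assumptions used in its creation
    bias_mitigations: list[str]  # Principle 2: risk assessment for potential biases
    risk_level: int              # Principle 3: Algorithmic Risk Scale, 1 (minor) to 5 (very high)
    inputs: list[str]            # Principle 4: every input used to make a decision

    def __post_init__(self):
        if not 1 <= self.risk_level <= 5:
            raise ValueError("risk_level must sit on the 1-5 Algorithmic Risk Scale")

    def publish(self) -> str:
        """Serialise the entry so it can be published alongside the algorithm."""
        return json.dumps(asdict(self), indent=2)

# Fictional low-impact example: a parking-fine appeal triage model (risk level 1)
entry = AlgorithmRegisterEntry(
    name="parking-fine-triage",
    function="Flags contested parking fines for manual review",
    objectives="Reduce staff time spent on clearly valid appeals",
    training_data="Three years of anonymised appeal outcomes",
    assumptions=["Past appeal outcomes reflect current policy"],
    bias_mitigations=["Appeal outcome rates monitored across wards and vehicle types"],
    risk_level=1,
    inputs=["fine_amount", "appeal_reason", "prior_appeals"],
)
print(entry.publish())
```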