The ‘independent’ Centre for Data Ethics and Innovation (CDEI) – a body set up in 2018 by the government to advise on technology policy – has published a new report analysing how artificial intelligence is being deployed across five sectors.
The report – the result of consultation with more than 100 stakeholders across industry, academia, civil society and government – concludes that a number of “harder to achieve” AI opportunities carry huge potential benefits, but are unlikely to be realised without coordinated government support, a national policy and large, complex data sets to train models on. Another barrier flagged by the report is a lack of transparency around the use of AI – a similar conclusion to that reached earlier this year by the Committee on Standards in Public Life, whose chair, Lord Evans, is a former spy chief who controversially defended MI5’s work with foreign agencies that tortured suspects.
Panellists rated the inherent risks of AI’s impact across different sectors. Unsurprisingly, “bias leading to discrimination” was rated highly across criminal justice, financial services, health and social care, and digital and social media; energy and utilities was the only sector where it was considered a “medium” risk. “Lack of explainability” was also rated high risk in all but one sector, while “low accuracy” and “loss of trust in AI” were generally rated as low to middling risks.
However, experts have critiqued the report’s utility – and particularly its professed independence. “Business interests are well represented; but this report also doesn’t really say anything new or radical,” said Sam Smith, policy lead at Medconfidential, an advocacy group for health data privacy.
“In the digital services session for this many of the participants […] who I think this survey comes from were industry,” tweeted Michael Veale, lecturer in digital rights and regulation at UCL, who said his table consisted of representatives from Nesta, the Cabinet Office, Facebook and the IAB.
Smith said he considered the part of the report covering facial recognition – which includes an extensive analysis of harms, such as loss of privacy in public spaces – the most interesting, but that CDEI is nevertheless “much more pro-use than the big tech companies at this point”. Civil liberties activists argue that the potential human impact of facial recognition technology is so catastrophic that it should simply be shelved forever.
Meanwhile Medconfidential’s Phil Booth tweeted that “barriers” was “a terribly negative framing, which makes #protections & #GoodPractice sound like deliberately erected impediments”.
The government refers to the CDEI as an independent body, but Smith disputes this, pointing out that the group is now based in an office run by Public, a venture capital fund that seeks to help startups “transform the public sector”.
CDEI also faced criticism this week for its controversial embrace of immunity passports. The group published an industry-friendly report concluding the technology could “prove valuable” in the months ahead, directly contradicting advice from the WHO.
“It’s basically a fig leaf for whatever government thinks it might want to do with data,” says Smith.