Staff at London’s Met Police and the National Crime Agency (NCA) have carried out hundreds of searches on Clearview AI’s controversial facial recognition platform in recent months, according to documents seen by BuzzFeed News.
The news organisation, which has obtained a hacked copy of Clearview’s user list, reported that half a dozen British police forces have used the service.
Clearview faced a fierce backlash last month after the New York Times revealed it had scraped billions of photos from the web, including from Facebook and Instagram, to create a vast facial recognition database. Clearview said at the time that 600 law enforcement agencies around the world were using the platform.
London’s Met Police had told the Metro newspaper that it hadn’t used Clearview’s services, but BuzzFeed reported that the hacked documents showed “a number of [Met] users” have performed “more than 170 searches” since December. The National Crime Agency is reported to have carried out more than 500.
In a statement, the Met told BuzzFeed it was not using the platform to power its recent facial recognition roll-out in the capital. The technology, which has sparked a row with privacy experts, has been supplied by the Japanese firm NEC, according to the Met. The NCA said it could not comment on which tools it uses for investigations.
Clearview has said there are “numerous inaccuracies” in the document, but has refused to provide further details, citing an ongoing investigation into the hack.
Anna Bacciarelli, a researcher at Amnesty International, told NS Tech that the revelations were “extremely worrying”. She added: “According to media reports, the Metropolitan police used Clearview tech but denied it – and yet expect us to trust their use of facial recognition technology on London streets.
“The Met needs to pause all use of facial recognition technologies until the significant human rights risks are addressed.”
The Met and Clearview did not respond to NS Tech’s request for comment in time for publication.