Photo: Eric Piermont / AFP / Getty Images

Oscar Williams, News editor

Google reveals four-step approach to curbing terrorist propaganda

Google has unveiled a four-step approach to curbing the spread of extremist content in the wake of a spate of terror attacks in the UK.

The move comes as tech companies face mounting pressure from governments and advertisers to combat extremism on their platforms.

Under plans drawn up with France, Theresa May has vowed to introduce fines for firms that fail to remove offending content from their platforms quickly.

In a blogpost, Google’s general counsel Kent Walker wrote that the firm already has thousands of employees reviewing content, but acknowledged that more needs to be done.

“First, we are increasing our use of technology to help identify extremist and terrorism-related videos,” said Walker.

The legal chief pledged to task more engineers with developing ways to swiftly remove extremist content uploaded to YouTube.

“We have used video analysis models to find and assess more than 50 per cent of the terrorism-related content we have removed over the past six months,” said Walker.

“We will now devote more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove extremist and terrorism-related content.”

The tech giant is also inviting more independent experts from organisations working on issues such as hate speech, self-harm and terrorism to join YouTube’s Trusted Flagger programme.

“While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 per cent of the time and help us scale our efforts and identify emerging areas of concern,” said Walker.

Over the coming months, the programme will expand from 63 expert organisations to 113, all of which will be supported by operational grants.

Google is also seeking to make it harder to find videos that contain inflammatory religious or supremacist content but do not violate its policies.

Viewers will be presented with warnings before viewing such content on YouTube. It will not be monetised, recommended or eligible for comment or user endorsements.

“We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints,” said Walker.

Finally, YouTube will also ramp up its counter-radicalisation efforts, working with Jigsaw to redirect potential Isis recruits to anti-terrorist videos via online advertising.

“In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages,” said Walker.

Google’s announcement comes days after Facebook revealed how it uses artificial intelligence to curb the spread of extremist content.

Last month, the social media site vowed to hire 3,000 more staff to tackle extreme and distressing content, particularly in videos.

Both Facebook and Google have said that the majority of extremist content is now identified by automated analysis systems before it is flagged by users. But while artificial intelligence can assist with the identification of terrorist propaganda, the technology isn't yet able to reliably distinguish between the contexts of different videos.

“This can be challenging: a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user,” said Walker, adding: “Technology alone is not a silver bullet.”