
Oscar Williams

News editor

Facebook reveals how it uses artificial intelligence to tackle extremist content

Facebook has revealed how it uses artificial intelligence to remove extremist content from its platforms.

The announcement comes as tech companies face mounting pressure over their handling of terrorist propaganda.

In a series of blog posts, senior executives at Facebook explained their attempts to remove extremist content before users see it.

“Already, the majority of accounts we remove for terrorism we find ourselves,” wrote Facebook policy chief Monika Bickert and policy manager Brian Fishman.

“But we know we can do better at using technology — and specifically artificial intelligence — to stop the spread of terrorist content on Facebook.”

Theresa May announced earlier this week that the government is drawing up proposals to fine sites that fail to swiftly remove extremist content.

Facebook revealed a number of strategies for targeting terrorist propaganda using artificial intelligence.

The first is image matching, which lets Facebook flag images or videos at the moment they are uploaded if they have previously been identified as extremist.
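
Facebook has not published the details of its matching system, but the general technique is well known. The sketch below, which uses the open-source Python libraries Pillow and imagehash (assumptions of this illustration, not tools the company has named), shows how a perceptual hash can flag an upload that closely matches a previously removed image, even after re-encoding or resizing.

```python
# Illustrative sketch only: Facebook has not disclosed its actual algorithm.
# Requires the open-source libraries Pillow and imagehash
# (pip install pillow imagehash).
import imagehash
from PIL import Image

# Hypothetical perceptual hashes of images already removed as extremist.
known_hashes = {imagehash.hex_to_hash("d1d1b9b98989c5c5")}

def matches_known_image(path: str, max_distance: int = 5) -> bool:
    """Flag an upload whose perceptual hash is near a known hash.

    Perceptual hashes change little when an image is resized or
    re-encoded, so a small Hamming distance suggests the same picture.
    """
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - known <= max_distance for known in known_hashes)
```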

In December last year, Facebook, Twitter, YouTube and Microsoft launched a shared database to crack down on the spread of extremist content.

It lists digital fingerprints of extremist content that each site has removed, letting the others swiftly identify and remove the same material on their own platforms.
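
In outline, the database shares fingerprints rather than the content itself. The following sketch shows how one platform's removal can let another block the same file at upload; the names and structure here are hypothetical, as the consortium's real interface has not been made public.

```python
import hashlib

# Hypothetical stand-in for the industry's shared fingerprint database.
shared_hash_db: set[str] = set()

def fingerprint(content: bytes) -> str:
    # A cryptographic hash only catches byte-identical copies; in practice
    # perceptual hashes (as sketched above) let edited copies match too.
    return hashlib.sha256(content).hexdigest()

def report_removal(content: bytes) -> None:
    """A platform contributes the fingerprint of content it has removed."""
    shared_hash_db.add(fingerprint(content))

def screen_upload(content: bytes) -> bool:
    """Any participating platform can check new uploads against the list."""
    return fingerprint(content) in shared_hash_db
```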

Facebook announced that it was also starting to experiment with using artificial intelligence to analyse text that may advocate terrorism.

“That analysis goes into an algorithm that is in the early stages of learning how to detect similar posts,” wrote Ms Bickert and Mr Fishman. “The machine learning algorithms work on a feedback loop and get better over time.”
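Facebook has not said which model it uses, but the pattern the executives describe, a classifier that improves as human decisions feed back into its training data, can be sketched with standard tools. The example below uses scikit-learn and placeholder data; every name in it is illustrative rather than part of Facebook's system.

```python
# Minimal sketch of the feedback-loop pattern described in the blog posts,
# not Facebook's actual system. Requires scikit-learn
# (pip install scikit-learn). All data here is placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: posts that human reviewers have already labelled.
texts = ["example post praising a terrorist group",
         "example post about a football match"]
labels = [1, 0]  # 1 = flagged as advocating terrorism, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def review_and_retrain(post: str, reviewer_label: int) -> None:
    """The feedback loop: each human decision becomes new training data,
    so the classifier gets better over time, as the blog post describes."""
    texts.append(post)
    labels.append(reviewer_label)
    model.fit(texts, labels)  # retrain on the expanded dataset

# A new post is scored before users see it; high scores go to reviewers.
score = model.predict_proba(["new suspicious post"])[0][1]
```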

The policy managers said artificial intelligence also analyses material related to accounts that have been identified as supporting terrorism, and locks down fake accounts created by repeat offenders.
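
One simple version of that account-level analysis, offered here as a hypothetical sketch rather than a description of Facebook's tooling, is to fan out from a flagged account across its connections and queue related accounts and their material for human review:

```python
# Illustrative only: fan out from a known terrorist-supporting account to
# nearby accounts. The friend graph and account names are hypothetical.
from collections import deque

friends = {"acct_a": ["acct_b", "acct_c"],
           "acct_b": [],
           "acct_c": ["acct_d"],
           "acct_d": []}

def related_accounts(seed: str, depth: int = 2) -> set[str]:
    """Breadth-first walk over the friend graph from a flagged account."""
    seen, queue = {seed}, deque([(seed, 0)])
    while queue:
        acct, d = queue.popleft()
        if d == depth:
            continue
        for nxt in friends.get(acct, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return seen - {seed}

# Accounts surfaced this way would be queued for review, not auto-banned.
for account in related_accounts("acct_a"):
    print(f"queue for review: {account}")
```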

Last month, the social media site vowed to hire 3,000 more staff to tackle extreme and distressing content, particularly in videos.

The site is facing pressure over its handling of violent videos, as well as extremist content.

In May, footage of a Thai man hanging his 11-month-old daughter before killing himself was broadcast on the site, and remained online for about 24 hours before Facebook’s moderators removed it.

A Facebook spokesperson said at the time: “This is an appalling incident and our hearts go out to the family of the victim. There is absolutely no place for acts of this kind on Facebook and the footage has now been removed.”
