Facebook says artificial intelligence has sped up removal of terrorist content
Facebook says advances in artificial intelligence are helping it remove 99% of Islamic State and Al Qaeda terrorist content before it's flagged by users.
Source: Jessica Guynn
Once Facebook is aware of the terrorist content, it removes 83% of it within an hour of it being uploaded, the company said Tuesday.
"We are encouraged by these numbers but we know we have more work to do and we are working on getting better and faster," said Monika Bickert, who runs global policy management at Facebook. She would not say how much terrorist content — how many posts, images, videos and the like — is removed from Facebook.
"The 99% and 83% that are removed are pretty impressive on the surface but the question is: Are we talking about 1,000 videos or 10,000, or 100,000?" says Seamus Hughes, deputy director of the program on extremism at George Washington University. "What's glaringly missing are the actual hard numbers."
Attacks on Western targets and sharp criticism from European officials have intensified pressure on Facebook to crack down on terrorist activity. European officials are calling on technology companies to take more responsibility for the content on their networks and to provide more information to authorities that could help them foil or investigate attacks.
Terrorist groups use popular Internet services such as YouTube, Twitter and Facebook to spread propaganda, attract and train new recruits, celebrate terrorist attacks and publicize executions. They also use messaging services to communicate.
Facebook says it's rooting out and removing extremists' propaganda and messages by using sophisticated algorithms to mine words, images and videos.
Artificial intelligence can't do the job alone, so Facebook has a team of more than 150, including counterterrorism experts, who are dedicated to tracking and taking down propaganda and other materials. It's also collaborating with fellow technology companies and consulting with researchers to keep pace with the ever-changing social media tactics of the Islamic State and other terror groups.
Facebook is part of the Global Internet Forum to Counter Terrorism, which was formed in June with companies such as Google, Microsoft and Twitter. The companies share a database of "hashes" — essentially digital fingerprints — to track and take down videos and images that appear on their services.
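The shared-database approach described above can be sketched in a few lines. This is a minimal illustration only, assuming a set of known-bad file hashes shared between companies; the consortium's actual hashing scheme is not public, so SHA-256 is used here as a stand-in and all function and variable names are hypothetical.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Compute a digital fingerprint ("hash") of an uploaded file.
    SHA-256 is a stand-in; the real scheme is not public."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical shared database of fingerprints of previously removed material.
shared_hash_db = {fingerprint(b"previously removed extremist video")}

def is_known_terrorist_content(upload: bytes) -> bool:
    """Check an upload against the shared database before it goes live."""
    return fingerprint(upload) in shared_hash_db
```

One caveat: an exact cryptographic hash changes completely if a file is re-encoded or trimmed, so systems like this typically rely on perceptual hashes that tolerate small alterations; the principle of matching uploads against a shared blocklist is the same.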
Facebook says it has prioritized going after the Islamic State and Al Qaeda because it believes they pose the greatest threat globally, but it plans to expand to terrorist content posted by other extremist groups.
The challenge: When chased from Facebook, terrorists set up shop elsewhere. Terrorism experts say extremists have decamped to smaller outfits such as messaging service Telegram, which uses end-to-end encryption.
Some of the techniques being developed by Facebook to combat terrorism can be deployed in other areas including Russian propaganda, Bickert said.
A Russian organization linked to the Kremlin targeted unsuspecting Americans with Facebook posts and ads to stoke outrage over polarizing issues from gay rights to gun rights in the tense political climate surrounding the 2016 presidential election.
Executives from Facebook, Twitter and Google were recently summoned to a series of hearings on Capitol Hill to answer questions about election interference by Russians on their platforms. Russia has denied any meddling in the election.
"Facebook is having conversations with Capitol Hill at a higher clip than before," Hughes said. "It's going to raise the question: If you can do this on terrorist content, then why can't you do this on fake news and Russian propaganda?"