Shortly after Muslim suicide bomber Salman Abedi blew himself to smithereens at a concert in Manchester, killing 22 people, British Prime Minister Theresa May advised internet providers that they were duty-bound to monitor and stop the spread of jihadist propaganda.
“We cannot allow this ideology the safe space it needs to breed,” she declared in a stinging critique. “Yet that is precisely what the internet — and the big companies that provide internet-based services — provide.”
Frustration probably impelled the prime minister to unleash this scathing comment. For the past few months, Britain has been hit by a spate of terrorist incidents. After Abedi detonated his suicide vest in Manchester, three knife-wielding jihadists killed seven people on London Bridge and at an adjacent market.
Tellingly enough, one of the killers, Khuram Shazad Butt, was radicalized by a YouTube video extolling Islamic fundamentalism. But Butt was not an anomaly. Cheaply produced videos on the internet have influenced a generation of Muslim extremists. Islamic State, the jihadist organization, and Al Qaeda, its ideological bedfellow, have used such videos to brainwash and recruit tens of thousands of followers around the world.
Neo-Nazis have also taken to social media to advance their cause.
According to one estimate, 400 hours of video content are uploaded to YouTube every minute, far more than YouTube has had the capacity to police.
As Theresa May suggested, this is an urgent problem that cannot be left to fester. The status quo is untenable.
Two recent announcements by American internet companies suggest that extremists of all stripes will have a harder time exploiting social media platforms to disseminate their vile material.
On June 15, Facebook announced that inappropriate and offensive content, such as videos of Islamic State beheadings, will be removed by means of artificial intelligence technology and human moderators.
To its credit, Facebook tacitly admits it has been lax until now in enforcing its guidelines. “Tragically, we have seen more terror attacks recently,” said Monika Bickert, Facebook’s head of global policy management. “As we see more attacks, we see more people asking what social media companies are doing to keep this content offline.”
On June 18, YouTube’s parent company, Google, acknowledged its shortcomings as well. Announcing a series of policies to curb the dissemination of objectionable videos on its platform, Google promised to identify and remove them as quickly as possible.
Videos that promote and encourage hate speech and terrorism, for example, will be removed expeditiously. They will be replaced by videos aimed at persuading potential Islamic State recruits to rethink their views. This counter-radicalization campaign is long overdue, but if it sways minds and deprives Islamic State of its foot soldiers and propagandists, a new chapter in the important and unceasing battle against Islamic extremism will have been opened.
Islamic State is already under duress, with Kurdish, Arab and Western forces fighting to liberate the Iraqi and Syrian cities of Mosul and Raqqa from its grasp. If Facebook and YouTube can inflict further blows on Islamic State by depriving it of new followers, all the better.
Whenever possible, Facebook and YouTube should strive to strike a proper balance between free expression and access to information. But at the end of the day, the emphasis must be on identifying and deleting videos that strengthen extremism and empower groups like Islamic State.
As Theresa May correctly said, Facebook and YouTube cannot allow jihadists and their fellow travellers to win their battles on the backs of Western democracies. This would be the height of folly.