From execution videos to cats of the Mujahideen: How do social media companies regulate terrorist content?
MacKenzie F. Common is a fourth-year PhD student in Law at the London School of Economics (LSE). Her research focuses on the content moderation processes at social media companies and argues that many of their practices are problematic from a human rights law and rule of law standpoint. Her work on social media recently won the Google Prize at the BILETA (British and Irish Law Education and Technology Association) conference. MacKenzie has an LLM in International Law from the University of Cambridge, where she completed a thesis on the challenges of regulating hate speech on social media. She also holds an LLB (Graduate Entry) from City University of London and an Honours B.A. in Political Science from the University of Guelph (Canada). At Guelph, MacKenzie wrote a senior-year research dissertation on how hate groups in Canada and the US use the Internet.
A fully referenced article is available here.
Until 2014, most discussions about social media focussed on its positive effects on democracy and human rights. This was exemplified by the Arab Spring, when Peter Beaumont, a journalist for The Guardian, opined that ‘[t]he barricades today do not bristle with bayonets and rifles, but with phones.’ Then, in August 2014, the popular narrative changed when the upstart terrorist group ISIS posted a video on social media showing the beheading of the journalist James Foley. The group continued to use social media for publicity, recruitment, and intimidation, prompting a global reappraisal of the merits of social media and its lack of regulation.
Now, politicians and users alike demand that social media platforms identify and remove terrorist content as quickly as possible. Theresa May, for example, stated in a speech at the United Nations General Assembly that tech companies must go ‘further and faster’ in removing terrorist content. In our collective rush to respond to this new threat, however, we have failed to ask important questions about how terrorist content is regulated on social media. This ignorance has resulted in a reliance on private-sector censorship without any of the safeguards that would be available in a public institution.
One major issue is that social media companies do not provide the public with enough information about how they define a terrorist group. There is no universally accepted definition of terrorism, and even experts in the field acknowledge the semantic difficulty of creating one. I suspect that social media companies have often employed US Supreme Court Justice Potter Stewart’s approach to defining hard-core pornography in Jacobellis v Ohio: ‘I know it when I see it.’
Creating a definition, however, is only half the battle (and arguably the easier half). The challenge comes when moderators have to evaluate the activities of real groups against this broad set of parameters. These platforms do not state whom they consider to be terrorists, although it is clear that they maintain a master list, since they require their moderators to memorise the faces of those on it. We should all be concerned about how these platforms define terrorist groups, because inclusion on or exclusion from social media affects the legitimacy, publicity, and political power of any group. These decisions must be made in a reasoned, accountable way, and there must be an opportunity for individuals or groups to appeal this categorisation, just as there are mechanisms in EU law to apply for removal from the EU terrorist list.
Another interesting question is whether it is appropriate to ban all content posted by members of a terrorist group, or only content that violates other terms and conditions on the platform (such as the prohibition of violent content or hate speech). Social media companies were spurred into action by ISIS’s use of their platforms to post execution videos, so most prohibitions of terrorist material can be found in the rules banning graphic content. These prohibitions, however, are often enforced against all content emanating from a terrorist group, whether it’s the famous ‘Cats of the Mujahideen’ or dating profiles created by violent white supremacists. Monika Bickert, Facebook’s head of global policy management, has said that terrorists are not allowed on Facebook even if they don’t post about terrorism: ‘[i]f it’s the leader of Boko Haram and he wants to post pictures of his two-year-old and some kittens, that would not be allowed.’ While it is perfectly permissible for social media companies to ban any individual or group, there is a great deal of confusion over whether these are lifetime bans and whether banning individuals could ever be considered a violation of their right to freedom of expression and to receive information. After all, the US Supreme Court in Packingham v North Carolina held that a law prohibiting convicted sex offenders from using social media violated the First Amendment. These platforms must provide more detailed rules that offer clarity and certainty to users.
Finally, the effects of outsourcing censorship to private companies must be addressed. Traditionally, private companies have been legally permitted to take actions that would be considered human rights violations if carried out by a public institution. There are now growing concerns that Western governments who espouse a strong commitment to human rights are trying to introduce censorship through the back door by requiring companies to remove content that would be the subject of judicial hearings if the government itself took action. A good example is the Network Enforcement Act in Germany, under which platforms can face fines of up to 50 million euros if illegal content is not removed within 24 hours. Russia, Singapore and the Philippines have announced that they will be drafting similar laws. This is problematic because Germany’s focus has narrowed to a single time-frame without considering the implications of demanding that companies make difficult decisions about free expression with no legal oversight. This is exactly the wrong direction in which to move, deprioritising social obligations and human rights law.
In conclusion, governments and NGOs must ask more of social media companies than simply demanding that terrorist content be reported and removed. Platforms must be clear about whom they consider to be a terrorist and what content is deemed terroristic, and there must be transparency with users and concerned parties alike about which groups they have designated as terrorist organisations. There should also be an option to appeal this designation, just as there is at the governmental level, as it has a huge impact on a group’s ability to participate in our social-media-centric world.
Yours respectfully,
MacKenzie F. Common