Human rights defenders are not terrorists, and their content is not propaganda
By Dia Kayyali
Counter-terrorism is not a field that is known for its respect for human rights. The word terrorism, though it has no commonly agreed-upon legal or societal definition, has often led to the implementation of repressive measures, not only in authoritarian regimes but also in countries recognized as democracies. The United Nations Office of Counter-Terrorism notes “the shrinking space for human rights defenders and civil society actors to exercise their freedoms as a consequence of counter-terrorism measures that are not human rights-compliant.” That space is not only shrinking in the physical world; it is rapidly disappearing online as well.
It is never popular to argue against measures that promise (even untruthfully) to make people safer, but WITNESS and our allies are here to remind policymakers that human rights cannot be a victim of security theater.
Efforts to fight “terrorist and violent extremist content”
Last year, a trickle of bad online counter-terrorism policies turned into a flood. Policymakers in government and at companies are working at a fever pitch to push these policies forward, and WITNESS has tried to serve as a voice of reason in this conversation. Our video op-ed in the New York Times with Hadi al-Khatib of Syrian Archive explains why. As we have been pointing out since 2017, the growing rush to “eradicate” so-called “terrorist and violent extremist content” (TVEC) from the Internet is leading to the deletion of hundreds of thousands, perhaps even millions, of videos documenting protests, conflicts, and more.
The following efforts around “TVEC” are currently underway:
- The Organisation for Economic Co-operation and Development is developing standardized reporting protocols on the removal of “terrorist content”. While this sounds positive, as the Australian Associated Press points out, the reporting will be used “to give governments a better point to start from in getting social media companies to do better and learn how to make sure terror and extremist content is taken down as quickly as possible.” Like other initiatives aimed at “TVEC,” the OECD reporting mechanism appears to focus on quantity and speed of removal as a marker for “doing better.”
- Under the auspices of the Christchurch Call to eliminate terrorist and violent content online, the New Zealand government and other stakeholders have developed a crisis response protocol that is meant to deal with attacks that have an online component. The Protocol is not publicly available, and the governments in the Christchurch Call haven’t given WITNESS or other members of civil society significant opportunities to provide feedback. (Update 24 January: It’s also worth noting the EU Crisis Protocol, which is meant “to contribute to efforts undertaken at global level in the context of the Christchurch call, in particular the Crisis Response Protocol…”)
- The European Union’s “Dissemination of terrorist content online” proposal is currently in trilogue negotiations between the European Parliament, the Council of the European Union, and the European Commission. The proposal was somewhat improved in Parliament thanks to civil society critiques, but it is likely that the worst provisions will return in trilogues. Those provisions include the requirements that platforms remove content within one hour of receiving a removal order and that platforms use “automated upload filters” (machine-learning algorithms) to detect and remove suspected terrorist content. The original version of the proposal also allowed states to authorize any body to issue removal orders, regardless of the need for due process and independence.
- The Global Internet Forum to Counter Terrorism (GIFCT) is a tech company body that is “dedicated to disrupting terrorist abuse of members’ digital platforms.” Until now it has mainly served as a way for companies to share information about content they have removed, and it has been critiqued for lack of transparency. It released its first transparency report last year, but the report contains very little information. Now it is being restructured into its own organization that will take on a much larger role. It has also created a “content incident protocol” (CIP) to deal with an ongoing situation like the Christchurch attack. The CIP is not publicly available.
- Tech Against Terrorism, a public-private partnership launched by the United Nations, is developing a Terrorist Content Analytics Platform (TCAP) with the support of Public Safety Canada that will enable researchers and small companies to analyze “terrorist content.” This platform could have an influence on what measures small companies use to detect and remove content.
A reckless timeline and closed conversations
At the end of 2018, the EU took on the dubious honor of leading the charge to recklessly take down so-called TVEC with its “Dissemination of terrorist content online” proposal. WITNESS’ policy advocacy has traditionally focused on companies, but we knew we had to get involved. The policy was being considered without even a mention of how it could threaten free expression around the globe. What’s worse, the voices of people whose content could be taken down, as well as those affected by terrorist attacks, were not part of the conversation. We led a civil society letter pushing back on the dangerous policy.
A few months later, in March, the livestreaming of the murder of 51 people at a mosque in Christchurch, New Zealand added fuel to the fire of reactive and poorly thought-out regulations. The New Zealand and French governments came together to issue the Christchurch Call to eliminate terrorist and violent content online. In addition to many countries, initial supporters of the Call included Facebook, YouTube, and Twitter. Despite the fact that the attack was carried out by a white supremacist, policymakers are pushing the Call forward with tools that were developed only to find “Islamic extremist” content, such as the Global Internet Forum to Counter Terrorism. Again, the voices of people who have been affected by white supremacism, as well as by Hindu and Buddhist extremism, have barely been included in the conversation. Policymakers didn’t even know about the erasure of human rights content. We co-wrote a white paper with the Electronic Frontier Foundation and Syrian Archive about the danger to human rights content that we sent to the New Zealand government, and we joined the Call’s “Advisory Network” over the summer.
In September, as part of the Advisory Network, WITNESS spoke to world leaders at the United Nations about the need to center the human rights both of those most affected by such content, such as the New Zealand Muslim community, and of those affected by attempts to remove such content, such as Syrians and Yemenis. And in December, WITNESS attended an “Incident Response Workshop” in Wellington, New Zealand, to ensure that the human rights perspective is not ignored as the Christchurch Call and other efforts move forward.
Our work on the Christchurch Call, the restructuring of the Global Internet Forum to Counter Terrorism, and European policy isn’t finished, and we expect to see even more dangerous proposals this year. Being embedded in this work provides us with many opportunities to highlight missing voices and suggest measures to make the conversation more inclusive. It also, unfortunately, provides us with countless opportunities to point out the bias embedded in this work. 2020 will undoubtedly bring more proposals that are often well-intentioned but incredibly dangerous, wrongly associating all human rights content from the Arabic-speaking and Muslim world with terrorism and violent extremism. And as protests spread across the globe from Chile to Iran, we are worried that even greater swathes of content will be caught up and improperly deleted. After all, it wouldn’t be the first time in history that people fighting for basic human rights are labeled as terrorists by those in power.
We won’t stop until these biases are recognized and addressed, and companies and policymakers view human rights content as essential, instead of acceptable collateral damage in the “fight against terrorism.”
This work was originally published by WITNESS and is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.