
X’s policy changes stifle human rights work and investigations in Syria, Palestine, elsewhere

By Maria Mingo, Ahmed Zidan, and Itxaso Domínguez de Olazábal

Blog post by Mnemonic/Syrian Archive and 7amleh.

21 November 2023

X needs to assess the impact of its platform policy, design, and functionality changes on vulnerable communities, including activists and human rights defenders in conflict zones documenting atrocity crimes. Failing to do so can hamper human rights work and investigations in Syria, Palestine, and other areas.

Erratic decisions by X, formerly Twitter, have unfortunately become commonplace since Elon Musk’s takeover in October 2022. Between dismantling its human rights and communications teams and its Trust & Safety Council, upending the platform’s verification system, reducing the number of content moderators, and suspending journalists who report on Musk himself, to name a few changes, X has sent a clear signal to the world that human rights and transparency standards have taken a back seat on the platform.

Given the widespread use of social media as a platform for reporting human rights abuses and international crimes, X’s policies can cause measurable harm. Disregard for the real-life consequences of new or amended policies has a tangible impact on vulnerable communities, especially those bravely documenting egregious human rights violations and atrocity crimes, and on those using the platform for awareness-raising and justice efforts. If the risks of policy changes are not thoroughly assessed, they can affect whether and how the information these communities collect can be used for these aims.

Security policy changes are particularly impactful, as activists using the platform to report on human rights violations do so at great personal risk. For example, in early 2023, X restricted SMS two-factor authentication to verified users, effectively removing this security feature for activists in countries like Syria, where verification is not possible with a Syrian phone number due to U.S. economic sanctions. While authenticator apps can help circumvent this problem, not everyone is aware of this option or able to use it.

Moreover, even less visible policy changes can have a significant human rights impact. To underscore the need for regular, effective, and meaningful human rights risk assessments for all policy changes, this article examines four X policy changes from 2023 whose human rights impact may not be immediately apparent:

  1. Introducing a policy to remove accounts after 30 days of inactivity. 
  2. Promoting community notes as the primary fact-checking mechanism and prioritising warning labels over content deletion.
  3. Requiring verification to use basic features and to access research tools.
  4. Charging for the full use of X’s API.

Even these types of policy changes can have profound implications for human rights activists and investigators in Syria, Palestine, and elsewhere. Here is how:

Increased security risks for activists documenting abuses

Activists, journalists, and everyday people in countries like Syria and Palestine who witness and courageously document airstrikes on civilian infrastructure, extra-judicial executions, violence against civilians, or other war crimes and human rights violations often need to delete sensitive information from their phones to avoid arrest when crossing checkpoints or in other situations of surveillance and control. Posting information on social media is often a way to minimise the risk of losing that documentation and to raise awareness of the hardships they face.

The new inactivity policy is particularly problematic in contexts such as Syria and Palestine, where activists and journalists are at risk of detention. Accounts belonging to activists who have been detained for several years are at risk of deletion, and prolonged internet shutdowns, as seen currently in Palestine, could further result in inactivity. On top of the physical and digital risks already involved in their work, their accounts could now be removed under the new policy, and the information they took great risks to collect would likely be lost.

The new verification requirement also creates security concerns for Syrian, Palestinian, and other researchers working to identify and preserve potential evidence posted on social media platforms. Verification means providing personal information, such as a phone number, to X. Since Musk took over, X has reportedly complied with almost all government requests for user information, and it has since stopped disclosing its responses to such requests. In the wrong hands – like those of authoritarian regimes – this information can be used to target human rights activists or their families. Syrian and Palestinian researchers, for example, risk receiving threats from alleged perpetrators or others because of their investigative work. The verification requirement is therefore a direct affront to their privacy and security, especially now that X has announced it will collect additional data from its users. With this change, researchers are forced to choose between their security and being able to do their human rights work effectively.

Increased inflammatory speech, disinformation, and risk of violence

As mentioned above, X’s recent decisions have demonstrably failed to ensure the security of users and other rights-holders. The changes to the verification requirements, combined with the firing of content moderators, have made the platform rife with hate speech and incitement to violence, leading to real-world attacks against Palestinians and other at-risk populations. The tools the platform deploys to mitigate the impact of harmful content do not appear to have the intended effects and risk fuelling real-world violence, as outlined below.

In line with a concept of “soft moderation”, X has opted to apply warning labels instead of deleting content that violates its policies, including inflammatory language seemingly inciting violence. Under X’s guiding principle of “freedom of speech, not reach”, the platform is supposed to limit the visibility of harmful posts. However, warning labels often seem to have the opposite effect, exacerbating the risk of offline violence and threatening the work of human rights defenders. For example:

  • When issued or reposted by high-profile individuals with a substantial following, messages allegedly inciting violence are often widely disseminated despite warning labels.
  • To get around the labels, users can take and disseminate screenshots of the content, avoiding future labelling and magnifying the posts’ reach.
  • The warning label system is sometimes used against human rights initiatives, labelling them as “potentially containing sensitive content”. 

The prioritisation of community notes as a crowdsourced fact-checking tool to combat disinformation also falls short of its goal. Displaying a note publicly beneath a post means fact-checking hinges on subjective judgments made by contributors, leaving users to determine the validity of these comments. Enabling non-experts to assess the accuracy of content risks further censorship and suppression, including of human rights organisations and activists. This is particularly problematic in Palestine, where the recent violence has been accompanied by a surge in inflammatory speech and language that may amount to incitement to violence. The accompanying community notes often seem to validate acute disinformation and misinformation, heightening the risk of real-world violence against Palestinians.

For example, Agnes Callamard, the Secretary General of Amnesty International, expressed the organisation’s apprehension about civilian casualties in Palestine. A subsequent community note accused her of employing a “twisted logic” to blame civilians, inappropriately eroding the author’s credibility by generating the false impression of factual error. In this instance, the note was removed after several alerts from users, suggesting that X is quicker to act on its volunteer-run features than to address inflammatory speech.

Hampering human rights investigations

Policy changes can affect not only those on the ground collecting and posting human rights content, but also those working to use that information for accountability. X’s initially proposed limit on the number of posts users can view per day undermined the most basic open source research and investigative tasks conducted on X. Under the announced policy, allegedly issued to “address extreme levels of data scraping and system manipulation”, verified accounts would initially be able to view up to 6,000 posts per day, unverified accounts up to 600 posts per day, and newly registered unverified accounts up to 300 posts per day. Shortly afterwards, the limits were raised to 10,000, 1,000, and 500 respectively.

Such policy changes are problematic because online platforms have become accidental and unstable archives of millions of human rights photos and videos with important evidentiary potential for justice efforts. Research conducted by civil society organisations and academia is crucial to ensuring this information can still be used for investigations. While X has since quietly walked back this controversial policy altogether, and it is now nowhere to be found on the platform, the short period of enforcement hindered human rights work during that time. The fact that the retraction was never openly acknowledged also creates uncertainty and risk for future human rights work, in particular for preservation and investigation efforts.

The use of X’s free API, fundamental to research activities and to monitoring content at scale, is now limited to viewing 1,500 posts per month. The API dictates researchers’ access to viewing posts, as well as the number of posts they can download and how often they can download them. While the API can still be used for free, an individual researcher’s daily searches to identify new potential evidence of atrocity crimes may not show all results under the free tier, even when spread across multiple days. This causes investigative delays and limits access to vital information about ongoing events as they occur. Even if probative footage can be found, there is an increased risk that it will not be accessed and archived before it is taken down.

Restricting tools like XPro, formerly TweetDeck, to verified subscribers has only added to this problem. Tools that display multiple accounts at the same time are important for real-time monitoring of human rights incidents. They allow researchers to follow events as they unfold through posts as they are updated, giving a better understanding of contexts and sources and enabling the quick discovery and archiving of posts. Losing access to such tools limits researchers’ access to potentially relevant information and slows their research.

Recommendations

Even seemingly mundane platform policy changes can have a severe impact on human rights reporting, research, preservation, and investigation into conflicts in Syria, Palestine, and elsewhere. The European Union (EU)’s Digital Services Act (DSA), which came into effect for X on 25 August 2023, is an important tool for regulating social media platforms’ responsibilities within and beyond the EU. To comply with the DSA, X needs to take concrete steps to prevent and remedy the negative impact of its policies, such as:

  • Conducting thorough, meaningful, and periodic risk assessments in line with article 34 of the DSA before introducing new, or changing existing, policies, with special consideration for the documentation of human rights violations and open source investigation. This requires involving civil society and other stakeholders in all stages of product development, and refraining from announcing upcoming policies until such assessments are completed. This would also be in X’s interest, as it would avoid the need to backtrack such decisions later.
  • Mitigating the risks of potentially harmful policies in line with article 35 of the DSA. Examples in this case could include creating an inactivity policy exemption for accounts located in conflict zones such as Syria and Palestine, introducing verification exceptions for human rights organisations conducting open source investigations and for media outlets, and reassessing the functionality of community notes so they do not inadvertently undermine the credibility and significant contributions of activists and human rights defenders.
  • Granting free API access to the public, or exemptions for human rights researchers, journalists, and academics. This may be particularly relevant for granting data access to researchers in accordance with article 40 of the DSA to identify systemic risks to the EU (and beyond), including conflict-related propaganda, disinformation, incitement to violence, and online recruitment to armed forces or terrorist organisations conducted via platform services.
  • Transparently devoting substantial resources and effort to applying a human rights-centred approach when crafting and implementing content moderation policies and procedures. This includes increasing diversity among content moderators and providing them with mental health support, both of which are crucial to effectively combating censorship, hate speech, incitement to violence, and the spread of disinformation.

Failing to adopt and implement these recommendations will continue to increase security risks and tangible harms for Syrian, Palestinian, and other activists and communities courageously documenting and reporting human rights violations. X needs to take concrete action to ensure these efforts are not in vain and to allow potential evidence posted from these conflicts to be used for accountability.

***

About the authors

Maria Mingo is the Policy and Advocacy Manager at Mnemonic and Ahmed Zidan is the former Social Media and Communication Manager at Mnemonic’s Syrian Archive. Syrian Archive preserves, verifies, and investigates human rights social media content for conflict memorialisation and accountability. Mnemonic also hosts archives related to Sudan, Ukraine, and Yemen.

Itxaso Domínguez de Olazábal is the EU Advocacy Officer at 7amleh - The Arab Center for the Advancement of Social Media, which aims to create a safe, fair, and free digital space by monitoring and documenting digital rights violations against Palestinians in online spaces.


Mnemonic is an NGO dedicated to archiving, investigating and memorialising digital information documenting human rights violations and international crimes. Mnemonic also provides trainings, conducts research, engages in content moderation advocacy, and develops tools to support advocacy, justice and accountability.
