Why a Small Change at Twitter Could Have Big Consequences for Deadly Conflict

Commentary / Future of Conflict

Elon Musk, Chief Executive Officer (CEO) of SpaceX, Tesla and Twitter, speaks during the POSSIBLE conference, in Miami Beach, Florida, U.S. April 18, 2023. REUTERS / Marco Bello

As Twitter limits access to a tool to analyse conversations on the platform, researchers will be deprived of information that sheds light on political hate speech and incitement to violence. That will have real-world implications for tracking election meddling, disinformation campaigns and human rights abuses.

The fight to counter disinformation just got tougher. In April, Twitter limited access to its “application programming interface”, or API, the computer protocol that lets outside software communicate with the platform efficiently and accurately. Since Twitter was founded in 2006, its free API policy has allowed millions of users to extract data from the platform. Access to these massive public data sets enabled open-source intelligence researchers to share information and analyse social media’s influence on politics, including its role in election meddling, in harassment campaigns against opposition figures and civil society activists, and in inciting violence and stoking conflict. The new access restrictions, if they stand, could obscure these critical insights into social media’s influence.
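By way of illustration, the snippet below sketches the kind of collection the free API made routine: a single authenticated request to Twitter’s v2 recent-search endpoint returning structured tweet data. It is a minimal sketch, not a full research pipeline; the bearer token and the search query are placeholders, while the endpoint and parameters follow Twitter’s public v2 documentation.

```python
# Minimal sketch of pulling public tweets through the (formerly free)
# Twitter v2 API. BEARER_TOKEN and the query below are placeholders.
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # hypothetical credential

def search_recent_tweets(query: str, max_results: int = 100) -> list[dict]:
    """Fetch recent public tweets matching a query via the v2 search endpoint."""
    resp = requests.get(
        "https://api.twitter.com/2/tweets/search/recent",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={
            "query": query,
            "max_results": max_results,  # v2 allows 10-100 per request
            "tweet.fields": "created_at,public_metrics,lang",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

# e.g. collect posts around a hashtag tied to a disinformation narrative
tweets = search_recent_tweets("#SyriaHoax -is:retweet")
```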

After seventeen years of free access to millions of tweets per month, the fees Twitter recently began charging will price out most users. Twitter’s Simple Package costs $42,000 per month for access to 50 million tweets – about 0.3 per cent of what is posted on the platform each month, with some 375 million tweets sent daily – whereas the old, free API provided access to a 1 per cent sample. More expensive packages offer access to 100 million or 200 million tweets for $125,000-210,000 per month. Some private firms and wealthy institutions might be able to afford the $0.5-2.5 million annual tab, but not most policy shops, NGOs, independent researchers or students outside well-funded universities.
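For readers keeping score, the annual range cited above follows directly from the monthly fees. The snippet below simply annualises the quoted figures; the mapping of the $125,000-210,000 range to the two larger packages is an assumption, not an official rate card.

```python
# Annualising the monthly API fees quoted in this piece. The mapping of
# prices to the 100M and 200M packages is assumed for illustration.
tiers = {
    "Simple Package (50M tweets/month)": 42_000,
    "Larger package (100M tweets/month)": 125_000,
    "Largest package (200M tweets/month)": 210_000,
}
for name, monthly_fee in tiers.items():
    print(f"{name}: ${monthly_fee * 12:,} per year")
# Simple Package: $504,000/year; largest package: $2,520,000/year --
# i.e. roughly the $0.5-2.5 million annual tab mentioned above.
```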

Limiting API access to the wealthy will also reinforce the Western bias of disinformation research. Although research on disinformation in the Global South is growing, it is dwarfed by the analytical attention Europe and the U.S. receive. When research on the online sphere does examine the Global South, it often focuses on Russian and Chinese influence operations rather than local conflict dynamics. Small internet observatories such as AfricaCheck and DoubleThink Lab, which play an outsized role in regions such as Africa and Asia, will be among those most affected by the change at Twitter. Twitter has said it will find an alternative for academics, but it has not provided details, nor has it announced whether it will make provisions for other kinds of researchers. The only exception is for verified public services that disseminate weather, transport and emergency notifications. The new policy will significantly reduce information about the impact of election-meddling campaigns, the online harassment of activists and the effects of disinformation on violence in countries where the rule of law is fragile and independent media outlets are weak or absent. It is, of course, precisely in these places that identifying and discrediting disinformation campaigns is most imperative.

The API’s Importance in Conflict

Twitter’s free API policy has facilitated the discovery of some of the most infamous influence operations. For example, the activities of the Kremlin-backed Internet Research Agency (IRA), the notorious “troll factory” that meddled in the 2016 U.S. elections, would have been much harder to expose without it. The IRA was also responsible for organising and coordinating misinformation campaigns in Africa and the Middle East. On 4 April 2017, according to UN investigators, the Syrian air force attacked the rebel-held village of Khan Sheikhoun in northern Syria with sarin gas, killing approximately 100 people and injuring at least 200. To obscure the horrible cost of Russia’s support for President Bashar al-Assad’s regime, the Russians promoted falsehoods about the Khan Sheikhoun attack and attempted to delegitimise the Syrian White Helmets and other humanitarian organisations working in rebel-held areas of Syria. Between 2016 and 2017, Russian disinformation campaigns attacking the White Helmets are estimated to have been viewed 56 million times on Twitter.

This research better prepared civil society organisations such as the Centre for Democracy and Development-West Africa in Nigeria, Verificado in Mexico, Correctiv in Germany and Zašto Ne in Bosnia and Herzegovina to identify and combat disinformation campaigns. Twitter’s API helped researchers discover IRA-linked coordinated disinformation activities aimed at supporting certain candidates in elections or sowing distrust in electoral processes in several African countries. In Mozambique, the Russian operation supported the president and downplayed electoral fraud claims by the opposition. In the Democratic Republic of Congo, after a contentious election, it published content attacking the president and other major political figures. These insights are invaluable in determining the scope and nature of disinformation campaigns, as well as in developing appropriate and effective countermeasures.


Journalists and researchers using the Twitter API also have exposed and determined responsibility for manipulation campaigns in conflict settings and during peace talks. During the 2019 battle for Tripoli, Libya, accounts linked to both Khalifa Haftar’s Libyan National Army and the Government of National Accord ran aggressive disinformation campaigns to discredit their opponents and push their favoured narratives. Online campaigns during successive rounds of negotiation in Libya helped doom a power-sharing agreement. Similarly, in Colombia, social media campaigns played a big role in undermining the legitimacy of the peace talks between the government and the Revolutionary Armed Forces of Colombia (FARC); the national referendum on the deal ultimately failed.

The 2017 rift between Saudi Arabia and the United Arab Emirates (UAE), on one hand, and Qatar (backed by Türkiye), on the other, was widened in no small part online. The quarrel, however, had very real consequences, including damage to the Qatari economy and proxy competition through rival militaries and political forces in Libya, Somalia and elsewhere. Evidence emerged of Saudi and Emirati accounts engaging in online disinformation activities; Qatar did not respond with similar campaigns but Türkiye did. Further information manipulation campaigns backed by Saudi Arabia and the UAE followed across the Middle East, North Africa and the Horn of Africa, as these regional powers came to see disinformation as a cheap and effective tool to serve their interests and boost the profile of their proxy allies.

Social media accounts operating from Saudi Arabia and the UAE have recently run misinformation campaigns to support the Rapid Support Forces, the paramilitary group fighting the Sudanese army for control of the country. Without robust and accessible APIs, emergent threats with life-and-death consequences could be overlooked. There is widespread agreement that Facebook’s algorithm prioritised certain posts that contributed to the genocide of the Rohingya in Myanmar in 2017. It is difficult to gauge the role the social media company played, however: as the report of the UN International Fact-Finding Mission on Myanmar, which investigated the murder of at least 24,000 people and the displacement of almost a million Rohingya refugees, put it, “country-specific data about the spread of hate speech” on Facebook’s platform is “imperative to assess the adequacy of its response”. That is the sort of information that could have been obtained had Facebook allowed researchers to use an open-access API, as Twitter did at the time.

Today’s conflict in Sudan exemplifies, in real time, what has been lost. As violence between rival generals and factions of the security establishment rages, cases of inauthentic online behaviour and hijacked accounts have come to light. In an echo of the 2019 Tripoli offensive in Libya, inaccurate information appears to have intensified the fighting and confused and misled civilians trying to flee the violence. Unfortunately, only a handful of individuals are now able to examine even partial data, instead of the hundreds of researchers and journalists who could have built on initial findings and worked toward solutions, such as comprehensively removing false posts, promoting accurate information for refugees from credible organisations and running fact-checking campaigns. In 2024, national elections are scheduled in Chad, Mali, Rwanda, Somaliland, South Sudan, Pakistan, Sri Lanka, Tunisia and Venezuela. Understanding and exposing disinformation campaigns around contested elections is crucial for stability.

Disinformation analysis has advanced considerably since Twitter’s API was released in 2006. Analysts and researchers have progressed from reactive research to real-time applications. For example, using a combination of social media data and other open-source intelligence data, they were able to predict the date of the start of the war in Ukraine, monitor Russian troop movements and raise the alarm about possible war crimes. But such “emergent action research” cannot happen without data.

The Wrong Solution to a Hard Problem

Elon Musk, who acquired Twitter in 2022, describes his decision to restrict API access as part of an effort to fight disinformation on the platform. Disinformation has been a preoccupation for him: before he bought the company and took it private, he expressed concern over its large number of bots (computer-run Twitter handles) and fake accounts. The API policy, he said, “was abused badly by bot scammers and opinion manipulators”. By charging for Twitter API access, Musk tried to address two of his priorities at once: reducing the number of automated accounts and enhancing the company’s revenues. He further reduced costs by firing the Twitter teams working on human rights, policy, ethics, and safety and moderation. In their place, he plans to rely more heavily on artificial intelligence systems to identify and remove harmful content and on community crowdsourcing for moderation. He claims this approach will be more effective.

Musk also plans to target disinformation by accelerating the rollout of Community Notes, a project Twitter began experimenting with in 2021. Community Notes allow users to anonymously add context to a tweet, challenge the author’s claims, or supply sources and additional information. A note that enough Twitter users rate as helpful features prominently beneath the original tweet. Other platforms such as Reddit and Wikipedia have applied this moderation-by-consensus model with mixed results: it worked well in communities with a large, active base of users and moderators, while in smaller or more polarised communities it was prone to manipulation and the reinforcement of political biases. Twitter could well see the same result. With governments and well-endowed political parties organising disinformation campaigns, Community Notes themselves could be susceptible to manipulation.
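To make the consensus mechanism concrete, here is a deliberately simplified toy model of the idea: a note surfaces only when raters from more than one group find it helpful, so a raw majority from one camp is not enough. The groups, thresholds and function below are invented for illustration; Twitter’s production system reportedly uses a more sophisticated matrix-factorisation approach to control for rater viewpoint.

```python
# Toy illustration of moderation-by-consensus, loosely inspired by
# Community Notes. All parameters here are invented for illustration.
from collections import defaultdict

def note_is_helpful(ratings, min_per_group=2, threshold=0.6):
    """ratings: list of (rater_group, is_helpful) pairs.

    A note surfaces only if at least two rater groups weighed in and
    every such group found it helpful on balance -- a raw majority
    from one side alone is not enough.
    """
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)
    eligible = {g: v for g, v in by_group.items() if len(v) >= min_per_group}
    if len(eligible) < 2:  # require cross-group agreement
        return False
    return all(sum(v) / len(v) >= threshold for v in eligible.values())

# A note rated helpful only by one cluster of users does not surface:
print(note_is_helpful([("A", True)] * 10))                     # False
# A note rated helpful across clusters does:
print(note_is_helpful([("A", True)] * 3 + [("B", True)] * 3))  # True
```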


To Musk’s credit, Twitter has published the source code of its recommendation algorithm – but the benefits of this openness are blunted by the restrictions on API access. The recommendation algorithm determines what users see when they open the platform, irrespective of what or whom they follow. Though it is unclear how much information one can glean from the algorithm alone, the world now knows, in theory, how Twitter determines what constitutes harmful content. Such transparency measures are necessary to fight disinformation, but they risk deteriorating into an ineffective “transparency-washing” exercise if external researchers are unable to evaluate the platform’s claims. The disclosure of the recommendation algorithm, for instance, revealed that content related to Ukraine might be demoted. If researchers had easy access to data through the API, they could have retrieved relevant tweets, analysed their engagement rates and verified whether the algorithm had in fact demoted that content – and, if so, why. In the absence of this analysis, it is impossible to determine whether Twitter’s algorithm is systematically removing harmful content – or whether it is itself manipulating information.
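As a sketch of what that verification might look like, the snippet below compares average engagement rates for tweets on a possibly demoted topic against a baseline sample, using the `public_metrics` fields the v2 API returns. The function names are illustrative, and the collection step is precisely what the new fees put out of reach.

```python
# Sketch of a demotion check over tweets collected as in the earlier
# snippet. Function names are illustrative, not an established method.
from statistics import mean

def engagement_rate(tweet: dict) -> float:
    """Interactions per impression for a v2 tweet payload."""
    m = tweet["public_metrics"]
    interactions = m["like_count"] + m["retweet_count"] + m["reply_count"]
    return interactions / max(m.get("impression_count", 0), 1)

def demotion_signal(topic_tweets, baseline_tweets) -> float:
    """Ratio well below 1 suggests the topic underperforms the baseline --
    a starting point for (not proof of) algorithmic demotion."""
    topic = mean(engagement_rate(t) for t in topic_tweets)
    baseline = mean(engagement_rate(t) for t in baseline_tweets)
    return topic / baseline
```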

It will be similarly difficult to measure the effectiveness of a new artificial intelligence system for content moderation. AI offers significant advantages in scalability, but it comes with considerable risk because the underlying machine learning systems must be trained well. An artificial intelligence program learns to recognise harmful or non-permissible content on the platform by associating words or phrases with specific contexts. The quality of the AI’s language comprehension therefore depends on the quality of its training data.
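A minimal sketch of that failure mode, with an invented corpus: a classifier trained on texts where a group’s name appears only in violent contexts will treat the word itself as a signal for removal, regardless of how it is used. This illustrates the dynamic behind the episode described below; it is not a model of any platform’s actual moderation system.

```python
# Toy demonstration: biased training data bakes bias into the model.
# Corpus and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "militants from the brigades claimed the attack",     # labelled harmful
    "the brigades threatened further violence",           # labelled harmful
    "families gathered for prayers at the mosque",        # labelled benign
    "worshippers shared photos of the mosque courtyard",  # labelled benign
]
train_labels = [1, 1, 0, 0]  # 1 = remove, 0 = allow

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Because "brigades" only ever appears in the harmful examples, the model
# assigns the word itself a positive (towards-removal) weight, putting
# benign posts that use it at risk of being flagged.
vec = model.named_steps["tfidfvectorizer"]
clf = model.named_steps["logisticregression"]
weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
print(f"weight('brigades') = {weights['brigades']:+.2f}")  # positive
```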

A May 2021 episode demonstrates the problem of relying on AI for content moderation – and what is lost when API access disappears. That month, Instagram removed posts containing the word “Al-Aqsa” – the mosque in Jerusalem considered the third holiest site in Islam – because its AI had learned the word from a training set that included counter-insurgency texts, associating it with the Al-Aqsa Martyrs’ Brigades, a Palestinian armed group designated as a terrorist organisation by the European Union and the United States. The removals came at a time when the site was at the centre of Palestinian protests, and the mistake did nothing to calm tensions. Instagram’s API was at least open enough to allow researchers to reverse-engineer some of the training datasets, identify their biases and shortcomings, and suggest corrections. Access to the now-curtailed Twitter API could similarly help identify coordinated operations by political groups trying to manipulate the new Community Notes feature.

In essence, Twitter’s new policies, designed to address some of its shortcomings, create new risks while restricting the tools necessary to mitigate them. In April, the company, for the first time, published an incomplete transparency report that offers no insight into its compliance with its own standards or government requests for removal of content.

Will Twitter Retreat?

Musk’s decision to restrict access to Twitter’s API jeopardises the work of hundreds of organisations and independent researchers who in the past sometimes covered for the platform when it did not or could not invest properly in safety and moderation. In no small part due to this relative transparency, Twitter never suffered the serious reputational harm that Facebook incurred in Myanmar, Sri Lanka and Ethiopia. It is not a stretch to say its openness saved lives.

There are technically advanced ways to gather information without the API, but scraping data from Twitter (or any other social media platform) is practical only for data scientists and software engineers. Policy analysts, social scientists, journalists and self-trained researchers – especially those from the regions, states and groups under-represented at the highest levels of technology – will not be able to conduct serious research without the open-source tools that depend on the API. Even for those with the requisite technical skills, the alternatives leave much to be desired: not only are they complex and time-consuming, but they may also be illegal, since they arguably violate Twitter’s terms of service. Publishing research and data retrieved via a technical workaround might provoke a ruinous lawsuit – a risk likely to deter the same people who cannot afford the API access fees. A viable fix may someday be found, but not before Twitter’s new policy inflicts a heavy cost on the parts of the world most prone to conflict – places that, moreover, often lack the strong independent media and civil society organisations needed to counter harmful disinformation.
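For a sense of what those workarounds look like, the sketch below uses snscrape, one of the open-source scraping libraries researchers have turned to; the query is illustrative, and attribute names follow recent versions of the library. As noted above, this route arguably breaches Twitter’s terms of service, and platform-side changes routinely break such tools.

```python
# Scraper-based workaround sketched with the open-source snscrape
# library (pip install snscrape). Caveats per the text above: this
# arguably violates Twitter's terms of service, and platform changes
# frequently break it, so it is no substitute for a supported API.
import snscrape.modules.twitter as sntwitter

query = "Sudan since:2023-04-15"  # illustrative search query
tweets = []
for i, tweet in enumerate(sntwitter.TwitterSearchScraper(query).get_items()):
    if i >= 500:  # cap the crawl at a small sample
        break
    tweets.append({
        "date": tweet.date,
        "user": tweet.user.username,
        "text": tweet.rawContent,  # attribute name in recent versions
    })
```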


It remains to be seen whether there will be further modifications to Twitter’s API policy. Musk has backtracked on some changes, though he seems to have responded more to economic imperatives than to public criticism. He did not react, for instance, to open letters from civil society organisations or from a group of U.S. lawmakers asking Twitter to restore free API access. He did, however, partially reverse the bots policy when public services providing weather, traffic and emergency updates signalled that they would move to WhatsApp rather than pay for API access. Twitter would likely have lost more in active users and advertising revenue than it gained from paid API access.

There is an army of online researchers prepared to put in the time to help combat misinformation. Even if Twitter’s financial situation required slashing the teams devoted to safety and content moderation, the company could have mitigated the loss by leaning more heavily on collaboration with external experts through the API. It can still make that adjustment. Doing so, however, would require exempting the API from Twitter’s monetisation push – a move the company could justly sell as putting global public safety and security above its bottom line.

Contributors

Senior Analyst, Social Media and Conflict (@ale_accorsi)
Laura Courchesne
Non-resident Fellow, Future of Conflict
