What Facebook Does (and Doesn’t) Have to Do with Ethiopia’s Ethnic Violence
Kenyan lawyer Mercy Mutemi (C) speaks to the media at Milimani Law Courts in Nairobi after filing a lawsuit against Meta for fanning violence and hate speech in Africa, including in Ethiopia during the country’s Tigray War. December 14, 2022. AFP / Yasuyoshi Chiba
Commentary / Africa


A victim’s relative is among those accusing Meta in a Kenyan court of failing to adequately police incendiary speech on Facebook during Ethiopia’s civil war. Much greater effort from the company is warranted. But Meta’s task is hardly straightforward.

In December 2022, the son of slain Ethiopian professor Meareg Amare Abrha was among those who filed a lawsuit against Meta, the owner of Facebook. The constitutional petition, which was filed in Kenya – home to a Meta moderation hub – alleged that the company had evinced a “woeful failure to address violence on the platform” and held it responsible for the November 2021 murder of Meareg, an ethnic Tigrayan living in Bahir Dar, the capital of Amhara region. His son, Abrham, said Facebook users had incited violence against his father, which led to his murder. Abrham and his fellow plaintiffs want Meta to apologise for failing to remove offending posts; contribute around $2.4 billion to funds for victims of hate on Facebook; alter its algorithm so the platform does not spread harmful content; and make its content moderation equitable across languages. Meta said in response that it has invested heavily in content moderation. (This article refers to the company as Facebook when describing situations in which it operated under that name or when referring to the platform; otherwise, it uses Meta, which became the company’s name in October 2021.)

Meareg is just one of hundreds of thousands of Ethiopians who over the past few years have lost their lives to the country’s sectarian violence. The bulk of this carnage can be traced to the civil war centred on the Tigray region in the country’s north, which began in November 2020, when a constitutional dispute and power struggle between Tigray’s leaders and the federal government spiralled into conflict. That conflict found Tigray also fighting neighbouring Amhara (with which it has been locked in a long-running territorial dispute) and formed the backdrop for the attack on Meareg. A November 2022 peace deal between the federal government and the Tigray region’s ruling party (the Tigray People’s Liberation Front, or TPLF) has brought a welcome respite from the violence in and around Tigray. But peace in the north remains fragile, and Ethiopia continues to face violence and instability elsewhere – particularly in Oromia, where the government is trying to crush an ethno-nationalist insurgency.


Whatever the merits of the lawsuit’s claims, Meareg’s story highlights concerns about the complicated role that Facebook and other social media platforms have played in this violence. While these platforms can be useful in conflict situations – for example, by helping share information that can protect civilians and document human rights abuses – they can also amplify and accelerate already existing tensions and serve as a space for targeted attacks on marginalised groups, including women and girls. Meta has a history of underinvesting in content moderation in multilingual Ethiopia, just as it did in Myanmar, where the platform was criticised for the role it played prior to the military’s expulsion of Rohingya Muslims in 2017. To be fair, particularly in countries buffeted by conflict, there are problems that no algorithm or investment can fully address – including the challenge of short-order fact-checking in a complex ethno-political landscape.

So, what should be done? Meta can and should do more to address incendiary content on the platform, including through improved language capacities, larger moderation teams, greater research transparency and algorithmic changes that do more to demote potentially inflammatory content. Ultimately, however, critics of Meta need to think more broadly about solutions. Ethiopia needs a healthy, independent media – something successive Ethiopian governments have impeded. Without one, Meta will struggle to separate fact from misinformation. (The absence of a healthy press also deprives Ethiopians of reliable information about their communities and government; reliable news sources would better enable them to hold their leaders to account.) For these reasons, building up a capacity for rigorous independent reporting should be part of any strategy for managing social media risks in Ethiopia – and Meta should contribute to that effort.

Weak Tools, Big Challenges

In order to understand how potentially inflammatory content appears on Facebook, it is helpful to trace the approach the company takes to identifying and taking down posts that violate its standards. The company says it aims to remove material that can be classified as hate speech, misinformation or incitement to violence. The first line of defence is an algorithm that detects and flags problematic content, though, despite improvements, the platform may still promote divisive posts that prove popular with other users. Flagged posts are then reviewed by human moderators. Meta also relies on partnerships with civil society organisations and media outlets to evaluate posts. The company can demote problematic content further down users’ newsfeeds, making it less visible, or delete it altogether. It can also ban users who consistently violate community standards from the site.

Given Ethiopia’s large and active online community and contentious politics, it seems clear that Meta has not adequately invested in its detection and moderation infrastructure there. Part of the problem may be the way that the company approaches new markets. While it operates in more than 110 languages, Meta offers content moderation in only 70 of them, usually adding new languages under crisis conditions. In her testimony before the U.S. Senate in October 2021, Facebook whistleblower Frances Haugen claimed that the company’s lack of investment in Ethiopia allowed for problematic content to be widely disseminated. According to Facebook documents that were leaked by Haugen, the company’s hate speech algorithm – responsible for detecting 97 per cent of the hate speech removed from the platform – could not adequately detect harmful posts in Amharic or Oromo, Ethiopia’s most widely used languages. Facebook has since made changes, though their extent and value are still unclear.

The company has also short-changed the second line of defence, human moderators. Based on Haugen’s leaks, researchers at The Continent, a South African media outlet, reported in November 2021 that Facebook had one moderator for roughly every 64,000 Ethiopians. The journalists also found that the company had failed to close Facebook accounts associated with an ethnic Amhara militia accused of committing atrocities, months after it discovered that those accounts were disseminating inflammatory content. In addition, a report published in February 2022 by the Bureau of Investigative Journalism and The Observer found that harmful content Meta claimed to have taken down was still visible on the platform months later.

Haugen’s leak showed the company makes investments in specific countries according to a tiering system that assesses the potential for violence and inflammatory content. When the alarm bell goes off that a country has crossed a certain risk threshold, Meta puts together a quick response team. These teams, called Integrity Product Operations Centres, can be operational for up to a few months – which, of course, is not a long-term solution. As a source familiar with Meta’s approach put it to Crisis Group, the company’s strategy runs the risk that the mobilisation of resources only happens when it is already too late. There is also a question about whether its internal resources are adequate to deal with a complex, multilingual and polarised conflict scenario at any stage of development.


Greater transparency with respect to the company’s tiering system and other internal decision-making would help outsiders assess its approach to these issues. Researchers only know that Ethiopia ranked high as of early 2020 thanks to Haugen’s leaks; they do not know where Meta forecasts the next crisis is looming. In 2020, the company named members to its Oversight Board, a self-governance body it billed as part of its efforts to be more transparent and accountable, and the board began hearing cases late that year. Even so, the absence of information makes it difficult if not impossible to assess whether the investments and measures taken by Meta – which are also not public – do enough either to forecast risk or to address the risks the company identifies. Consider what happened when the Oversight Board called in December 2021 for an “independent human rights due diligence assessment” to evaluate how effectively Meta’s content moderation performed in countries experiencing high levels of conflict. Meta countered that it had conducted internal due diligence that could not be published for safety and privacy reasons, declared that review sufficient and rebuffed the board’s call for an independent assessment.

The limits of Meta’s tiering approach can be seen in how it has operated in Ethiopia. Meta only started working on improving the algorithm’s ability to detect problematic content in Amharic and Oromo at the end of 2020 – around fifteen years after the platform first became operational in the country. (It now supports Amharic, Oromo, Tigrinya and Somali.) According to sources close to Meta who spoke to Crisis Group, human content moderators familiar with at least two Ethiopian languages were only hired at some point in 2022 – two years after the outbreak of a brutal civil war that claimed tens of thousands of lives.

Tough Landscape

This is not to say that Meta faces an easy task in Ethiopia. The challenges of responsibly operating a social media platform amid Ethiopia’s conflict are enormous. The hurdles include the difficulty of real-time fact-checking and the complexity of Ethiopia’s ethno-political landscape. With fast-moving conflict dynamics and few sources that can provide verified information, it is particularly hard to distinguish unproven information from false information that incites violence and should therefore be removed from the platform.

Meta has also understandably struggled with decisions about whom to de-platform in Ethiopia and under what circumstances. The company identifies three categories of “dangerous organisations” whose accounts it may delist. The first comprises organisations and their supporters “organising or advocating for violence against civilians, repeatedly dehumanising or advocating for harm against people based on protected characteristics”. The second is for “violent non-state actors” who generally do not target civilians. The third is for groups that routinely use hate speech and are deemed likely to use violence. This is tricky terrain in Ethiopia. The country is home to many identity-based political constituencies that frequently claim to be victims of violence at the hands of various groups. The prevalence of such claims complicates efforts by an outside party like Meta to adjudicate which groups are in fact “dangerous”. Decision-making invariably involves a degree of subjectivity.

For example, while they are markedly different types of organisations, Tigray’s ruling TPLF party, Amhara militias known as Fano and the rebel Oromo Liberation Army (OLA) all claim to have taken up arms on behalf of their ethnic constituencies. Human rights groups have credibly accused all of them of committing atrocities against civilians, including widespread sexual violence against women and girls, which could be grounds for de-platforming them. Yet Facebook has singled out the OLA as a malign actor: according to an October 2021 leak, Meta includes the OLA on its “dangerous organisations” list, in effect barring the group from maintaining a presence on the platform. It has not kicked off the other militant groups or, for that matter, the federal, Eritrean, Tigray or Amhara governments – even though all have been implicated in human rights abuses. While banning only the OLA may have been the right decision, it is by no means clear how Meta made it.

A Broader Problem

Further complicating Meta’s situation in Ethiopia is the weakness of the country’s media landscape. Decades of authoritarian rule made it difficult for a free and rigorous press to emerge. After a diverse coalition of insurgents overthrew the military dictatorship in 1991, a TPLF-led government took charge and established a system of ethnic federalism. A plethora of communally divided media outlets emerged in the aftermath. These outlets operate in different languages, furthering the divergence in perspectives. More recently, the proliferation of partisan satellite television, Telegram and YouTube channels has only exacerbated this dynamic.


This points to a broader problem plaguing Meta and other social media companies in countries without a healthy media. There are, of course, partisan outlets around the globe. In more established democracies, however, there are generally also professional news organisations with high verification standards, and these help define the bounds of what is accepted as accurate. Without a basis for knowing the facts, Meta will struggle – especially when under time pressure – to identify whether unverified rumours provide important factual information or are fabrications aimed at heightening tensions. A few fact-checking initiatives, such as the aptly named Ethiopia Check, have sprung up. What is most needed, however, is a strong, autonomous domestic press that strives for accuracy and objectivity. Of course, Ethiopia’s longstanding authoritarian political culture, which often punishes dissent and discourages debate, is an impediment to the creation of independent media. Meta therefore faces a particular challenge in places like Ethiopia: the lack of an independent press leaves people relying heavily on social media for information.

A Way Forward

Preserving social media’s crucial role in providing information about conflict in Ethiopia while improving the quality of that information will require changes both online and offline. In the online sphere, Meta should continue to strengthen content moderation and reduce the reach of potentially inflammatory content. It should expand its language capacities, both by increasing the number of moderators and by improving its hate speech algorithm so that inflammatory posts are more readily demoted. Meta should also expand consultations with (and where needed increase support to) organisations that can help define standards for hate speech, including content that targets women and girls. But while the company says it will maintain its content moderation operations, for now there are reasons for concern about how that pledge will hold up: a third-party contractor shut the Nairobi hub in January 2023, and Meta has shed jobs globally amid reduced revenues.

Transparency is also important. Two years ago, leaks from inside the company revealed that Ethiopia ranked high in Meta’s tiering system. To this day, nobody outside Meta knows what steps it took to reduce the online circulation of problematic content as a result of that determination. The Oversight Board was told that the algorithm had been improved, human moderators hired and language classifiers added, but the board cannot confirm the underlying figures or even the number of takedowns. As a result, the extent of these investments, their timeline and their impact cannot be assessed independently. At the very least, the company should give researchers and the Oversight Board greater access to data and information that would allow them to evaluate whether the interventions Meta has taken are effective, and to offer recommendations for how they could be more so. Such moves would also help increase trust in Meta.

Still, neither content moderation nor algorithmic adjustments nor other changes to community standards will by themselves address the many ways in which online content can contribute to violence. Responsible management of those risks requires offline changes as well. Strengthening traditional local media, including through developing less partisan news sources, is crucial for the flow of reliable on-the-ground information – including on Facebook.


While the development of such media depends mostly on local factors, Meta has a supporting role to play in this effort. In the U.S., Meta’s promotion of “trusted” and local news sources – or “high-quality reporting” – has been central to its goals of reducing polarising content. This is, of course, highly dependent on independent sources being identifiable and available, and the Ethiopian government’s repressive policies over the course of decades have discouraged their proliferation and growth. That said, Meta can promote and, where appropriate, support local reporters, fact-checkers and citizen journalists who report in a relatively unbiased way. Boosting their accounts and providing technical and material aid can improve the quality of news online. Identifying these individuals will require consultations with civil society groups across the country. In Ethiopia, government support for a free press, rather than continued criminalisation of dissent, will be an essential element of improving coverage. Some may see Ethiopia’s sweeping 2020 hate speech law as a means of addressing online incitement, but it has failed so far to have a noticeable impact. In fact, it may well also discourage legitimate dissent.

Sadly, there is no silver bullet for Ethiopia’s deep-seated political violence, and that also applies to social media’s role in it. But efforts to reduce the harm caused by online content combined with broader efforts to bolster the traditional media may create an improved online environment that does more to curb inflammatory discourse.

Crisis Group previously was a partner of Meta, and in that capacity was in contact with the company regarding misinformation on Facebook that could provoke deadly violence. Crisis Group reached out to the company to comment on this piece. It responded that it has strict rules outlining what is and is not allowed on Facebook and Instagram. It went on to say that hate speech and incitement to violence are against these rules and that it invests heavily in teams and technology to help find and remove this content. In Ethiopia, specifically, it said it receives feedback from local civil society organisations and international institutions and employs staff with local expertise who focus on the most widely spoken languages in the country, including Amharic, Oromo, Somali and Tigrinya.

Contributors

Senior Analyst, Ethiopia (@wdavison10)
Former Senior Analyst, Social Media & Conflict (@JaneEsberg)
Senior Analyst, Social Media and Conflict (@ale_accorsi)
