From Asmara to Abuja, Sudan to South Africa, digital disinformation is becoming an increasingly common feature of Africa’s domestic political landscape.
These efforts adopt sophisticated tactics first deployed in Africa by foreign actors such as Russia and Saudi Arabia. The distorting and distracting impact of digital disinformation is making it increasingly difficult for the African public to distinguish fact from “fake news” while following political, social, and security developments across the continent.
The resulting deterioration of trust and truth online has softened the ground for further conspiracy theories and fabricated content to take root in a murky information environment. Disinformation inhibits informed decisions on issues affecting Africans’ daily lives such as whether to receive a vaccination or whether to participate in the political process. Ultimately, this is the intent of disinformation in its most malicious form—to sow fear and confusion to advance the political purposes of those plying these falsehoods.
The Africa Center for Strategic Studies spoke to Tessa Knight, a South Africa-based researcher with the Atlantic Council’s Digital Forensic Research Lab (DFRLab), about these trends and measures to mitigate their effects.
* * *
You have researched a range of disinformation campaigns in Uganda, the Democratic Republic of the Congo, Ethiopia, Eritrea, South Africa, and Sudan—who was behind these campaigns?
There was a variety of actors behind the examples of African disinformation I investigated. No two cases were the same, but most of these examples were ultimately connected to domestic governments or political parties.
Prior to Uganda’s January 2021 election, a network of inauthentic social media accounts operating on Facebook, Instagram, and Twitter spread coordinated disinformation in support of the ruling party. Some of these accounts were directly operated by the Ugandan government through the Government Citizens Interaction Center (GCIC) at the Ministry of Information and Communications Technology and National Guidance. The DFRLab identified at least five user profiles associated with GCIC that were removed during Facebook’s January 8, 2021, takedown of the network.
Several of the inauthentic accounts were also traced to a spokesperson for President Museveni’s son, Muhoozi Kainerugaba, a lieutenant general in the Ugandan military (and Commander of the Land Forces of the Uganda People’s Defence Force). Other accounts operating within the Ugandan disinformation network were tied to groups that claimed to be public relations firms or news organizations.
In the Democratic Republic of the Congo (DRC), we eventually traced an inauthentic social media network we had uncovered while investigating COVID disinformation back to a group of young people from the University of Kinshasa. This network’s content had gained an online following and was subsequently—and deceptively—rebranded to promote a national politician named Honoré Mvula and his political organization, Force des Patriotes. Honoré Mvula had ties to the young men behind the accounts, which we were able to document through photographs showing them together at events in Kinshasa.
“A lot of people are not aware of the scale of disinformation that is happening in Africa and how much it is distorting information networks.”
Our investigation into coordinated social media influence in Ethiopia showed that members of the international diaspora were behind widespread Twitter campaigns advancing competing narratives related to the conflict in Tigray despite an absence of independently verifiable information. Accounts tweeting in support of the government were amplified by members of Prime Minister Abiy’s government.
Relatedly, propaganda about Eritrea’s involvement in Tigray was created and spread by the New Africa Institute (NAI), an organization claiming to be an NGO, and was amplified by the Eritrean Minister of Information and several Ethiopian embassies. NAI was run by Simon Tesfamariam, who lives in the United States and is an outspoken supporter of Eritrea’s ruling People’s Front for Democracy and Justice party. He has spoken at conferences on the Eritrean government’s behalf and is close to Eritrean government officials.
The disinformation network in Sudan that we investigated was connected to the Russian oligarch Yevgeny Prigozhin and Russia’s Internet Research Agency. However, only 2 of the 30 pages in the network that Facebook removed were operated from Russia. Most were run from Sudan and may have been domestically operated by local Sudanese citizens contracted by Prigozhin. Facebook found that this campaign had ties to the Prigozhin-linked firms it had removed in 2019.
How did these respective campaigns operate?
In Uganda, the network behind the coordinated inauthentic content was divided into three clusters:
- Posts in support of President Museveni and the ruling National Resistance Movement (NRM) party
- False information spread about presidential candidate Robert Kyagulanyi (Bobi Wine) and his supporters
- Promotion of Museveni’s son as a future presidential candidate
Each cluster created fake and duplicate Facebook, Instagram, and Twitter accounts that spread disparaging disinformation about opponents through misleading images, claims, and hashtags—such as exploiting homophobic sentiment in Uganda by asserting Bobi Wine is gay. The clusters amplified one another’s inauthentic content by copying and pasting posts and hashtags across dozens of pages and groups created by the fake user accounts. Facebook pages for a fake news outlet (“Kampala Times”) and a recently established public relations firm (“Robusto Communications”) were part of these efforts, which helped give the false content a veneer of legitimacy. Ultimately, on January 8, 2021, Facebook removed 32 pages, 230 user accounts, 59 groups, and 159 Instagram profiles for violating its policy against government interference. Twitter removed accounts connected to this network several days later.
The students at the University of Kinshasa in the DRC created a number of fake social media profiles. Facebook ultimately removed a network of 66 user accounts, 63 pages, 5 groups, and 25 Instagram accounts linked to Honoré Mvula and his political organization. We discovered that 26 of these pages and 5 groups were directly connected to 6 university students. They told investigators that they had initially created these pages “pour le buzz” [“for the buzz”], competing with one another over who could gain the most followers and likes by posting sensational and false information.
The posts on these pages followed a similar format, often beginning by declaring the post “urgent” or “flash news!!” The posts ranged from spreading the hoax that all schools in the DRC were closed for the year to conspiracy theories about Bill Gates and unsafe COVID-19 vaccination programs in Africa. Eventually, as their fake content gained a large following, they changed the names of the pages they ran—a majority of the 51 pages that we analyzed were renamed as part of efforts to rebrand them as news outlets or as the official pages of politicians or political parties. Once they were rebranded, they continued spreading sensational falsehoods but switched to promoting domestic political disinformation.
The Twitter campaigns around the conflict in Tigray fell into a gray area, blurring the line between aggressive online activism and coordinated disinformation. Both supporters and opponents of the government in the diaspora set up websites that provided detailed instructions for creating Twitter accounts and then supplied pre-written text and hashtags that users were directed to copy and tweet. This is called a “click-to-tweet” campaign. Additionally, the Twitter handles of NGOs, journalists, and international politicians were provided, and members of the diaspora were instructed to tag them. Most of the pre-written messages were in English.
In a case of producing and disseminating a conspiracy theory online, NAI’s reports used the language and tropes of “fact-checking” to categorically claim that any evidence of Eritrean abuses in Tigray was the result of a plot by a network of Western officials and reporters working with the Tigray People’s Liberation Front (TPLF) to smear Eritrea. Unlike the other campaigns we researched, NAI posted a 71-page report titled “Disinformation in Tigray: Manufacturing Consent for a Secessionist War” on the popular self-publishing website Medium. This report claimed to be correcting the record on Tigray but misleadingly mixed unsubstantiated claims with verifiable facts to refute reports of Eritrean military abuses in Tigray. It used footnotes that referenced general statements about Eritrean history and culture—like the blanket assertion that gang rape is unheard of in Eritrea—to provide the appearance of academic rigor. This report has been amplified by the social media accounts of Eritrean and Ethiopian officials, among others.
In Sudan, the network connected to Prigozhin set up 83 fake Facebook accounts, 30 pages, 6 groups, and 49 Instagram profiles. They used the fake accounts to post, comment, and “like” their own content as well as to manage inauthentic pages—many of which claimed to be those of Sudanese politicians or media organizations. The accounts often used sophisticated techniques to generate fake profile pictures. They hired locals in Sudan to run many of these accounts to make their inauthenticity less detectable and harder to trace back to Russia. This was similar to the “franchising” method that Stanford researchers detected in Sudan in 2019. They posted in Arabic (sometimes sloppily copying and pasting poorly translated content). Many of their posts were positive stories about Russia and Prigozhin’s presence in Sudan, including considerable attention to aid packages labeled “from Russia with love” that Prigozhin distributed in Sudan. Other stories focused on the benefits of Sudan hosting a Russian military base in Port Sudan.
What were the objectives of these campaigns?
In the lead-up to the January 2021 presidential election in Uganda, press freedom was severely curtailed. So, opposition politicians, especially the youth movement led by Bobi Wine’s National Unity Platform (NUP), had turned to social media to share information and to provide organizing updates on the election. The ruling party disinformation network we uncovered tried to undermine the opposition’s social media efforts and the growing popularity of the NUP by denigrating Bobi Wine and posting in support of Museveni.
In the DRC, once the fake accounts and pages were rebranded from being “clickbait” to seemingly legitimate news organizations and political pages, they spread flattering stories in support of Honoré Mvula. This included false claims that Martin Fayulu, the main opposition candidate in the 2018 presidential election, was calling for the impeachment of his one-time rival, President Félix Tshisekedi, once the latter took office.
The context for the Ethiopian “click-to-tweet” campaigns was the fact that there was an information blackout in Tigray after the government cut off the internet there and restricted the media’s access. This created an environment conducive to false information. The diaspora groups on both sides were attempting to dominate the narrative regarding what was occurring on the ground in Tigray. By using Twitter and by tagging influential users, they were seeking to shape the opinion of international actors despite neither side having an accurate or informed sense of what was actually unfolding in Tigray.
Eritrean and Ethiopian officials used NAI’s report as a form of traditional propaganda that they could spread through state-affiliated social media accounts. The intention of these officials seems to have been to utilize the report as an “independent” and supposedly “fact-checked” document from which they could create talking points and refer to when making claims about Eritrean involvement in Tigray. It allowed officials to flip international condemnations of Eritrean soldiers fighting in Tigray and committing atrocities and sexual violence to assertions that Western officials were the ones spreading disinformation.
We also uncovered audio allegedly encouraging officials in the Eritrean Embassy in Washington to cite Simon Tesfamariam’s work and the NAI Tigray report:
“Our comrade Simon Tesfamariam, you have seen him giving an explanation in different incidents in our defense. He prepared this strong research paper, that disproves all the issues that are raised, makes them questionable, and defends them. Again, I mentioned his name only for encouragement. The article was released by the New Africa Institute. The reason that it was released by the New Africa Institute is that for our tactic to fully hit its target, one of the entries should be like this. For our work, both institutionally and individually, to be successful, we should establish our research centers, or our citizens must create the content.” [Translated from Tigrinya]
The aim of the Prigozhin disinformation network in Sudan was to promote Russia and Prigozhin as friends of Sudan. It also sought to support Sudan’s Rapid Support Forces (led by General Mohamed Hamdan Dagalo, Deputy Chairman of the Sovereignty Council) and to portray Sudan’s other transitional leaders as pawns of the United States, going so far as to suggest that things were better for the Sudanese people under the dictatorship of Omar al-Bashir. The disinformation messaging was always critical of Sudanese Prime Minister Abdalla Hamdok and became critical of Lieutenant General Abdel Fattah Abdelrahman al-Burhan, Chairman of the Sovereignty Council, after plans were put on hold to permit Russia to build a naval base in Port Sudan.
What is your assessment of the outcomes and impact of these campaigns?
In Uganda, the government-connected disinformation campaign had a relatively large impact. While it is hard to say how this disinformation may have affected the disputed election, the reach of the inauthentic content was widespread on social media in the lead-up to the vote. Public relations firms that were part of the network had a combined following of over 10,000 accounts. Facebook’s removal of the network’s disinformation campaign, in turn, provoked the Museveni regime to shut down the internet and suspend Facebook, WhatsApp, Instagram, and Twitter days before the election. Since many Ugandans receive their news via social media, this had a major impact on their access to information at a critical time. Don Wanyama, Senior Press Secretary to President Museveni, subsequently demanded that the Uganda Communications Commission investigate Facebook, which remained inaccessible in Uganda for 6 months after the election. The government also blocked virtual private networks (VPNs) that could be used to bypass the ban.
“In Uganda, the government-connected disinformation campaign had a relatively large impact. … the reach of the inauthentic content was widespread on social media in the leadup to the vote.”
The fake pages that Facebook took down in the DRC had amassed 1.5 million “likes,” so they clearly were reaching a lot of people. They illustrate how domestic politicians can capitalize on disinformation’s reach by deceptively taking over and rebranding pages that have gained a wide following and using them for political ends.
The Ethiopian diaspora’s “click-to-tweet” campaigns exposed journalists, politicians, and policymakers to a large volume of inaccurate and unverified information. Another result was that Twitter use surpassed Facebook use within Ethiopia during these campaigns as domestic users located outside Tigray (where there was an internet blackout) began using Twitter to consume and spread pro-government narratives that had originated in the diaspora. This example illustrates the fine line between coordinated activism (as was seen last year in Nigeria’s #EndSARS protests) and coordinated disinformation. It is a question of when hyperbole and unverified claims cross over into intentional and misleading falsehoods.
It is difficult to quantify the impact of the NAI report by Simon Tesfamariam, although the report reached a mass audience through comedian and actress Tiffany Haddish (who was granted Eritrean citizenship in 2019). Her amplification resulted in NAI being represented on talk shows and in the media. The fact that Eritrean and Ethiopian officials have seized on the NAI efforts and encouraged more like them illustrates how fact-checking tropes can be leveraged to misleadingly “fact-check the fact-checkers,” creating ammunition for opaque governments to counter accusations that are backed by actual evidence.
The disinformation network we observed in Sudan gained a following of over 440,000 accounts. So, it seemingly reached a wide audience, even if its posts were often sloppily translated. Nonetheless, it is difficult to assess the impact of this campaign.
What was the relationship, if any, between these domestic campaigns and internationally sponsored disinformation in Africa?
The attempt to disguise an internationally sponsored (in this case, Russian) disinformation campaign through hired domestic operators was clear in Sudan. That example and the diaspora’s involvement in trying to shape the narrative around the conflict in Tigray were the only instances among the cases we studied in which we detected an international connection.
What similarities and differences in terms of scale, techniques, and impact did you detect between these campaigns and those conducted by international actors in Africa?
I have a lot of people who message me wanting to know what Russia and China are doing in Africa. Well, what about what African countries are doing in Africa? I think oftentimes people assume that African countries are not capable of sophisticated disinformation on social media. And that is vastly underestimating the ability of political parties and online actors on the continent and how far people are willing to go to push their agenda. There are hundreds of millions of Facebook users and Twitter users in Africa. Coordinated disinformation is certainly not something that is limited to Western or Asian countries.
I would say that the cases we investigated were pretty similar to international campaigns that have been uncovered in Africa in the past. The primary difference was that campaigns that originated domestically or through diaspora members had the advantage of knowing the language and culture of the countries they were targeting. In cases that the DFRLab has detected, Russia has really struggled to make its disinformation look authentic in African countries even though they have paid local actors, such as in Sudan, to translate and post content. In the cases of the DRC and Uganda, domestic actors had an advantage because they know the context, they know the language, and they know a lot of the political nuance. They are able to create authentic content more seamlessly.
Are the campaigns you researched reflective of the actors typically responsible for domestic disinformation in Africa?
All the cases I investigated involved political parties or regimes producing or benefitting from digital disinformation. But this is likely influenced by the fact that I’ve been tracking African elections and African political and social unrest. Most of the disinformation that I have dug into has had an underlying political agenda.
Are the campaigns you researched relatively unique within Africa’s digital spaces or are they emblematic of an expanding number of coordinated schemes to promote inauthentic content within the continent? Are there certain regions and countries more impacted than others?
I suspect that the coordinated disinformation we have uncovered is just the tip of the iceberg and that it is expanding as governments and political figures learn to manipulate social media algorithms through fake, duplicated, and coordinated content production.
“Every time I have set out to search for coordinated disinformation in advance of an election or around conflicts, I have found it.”
Every time I have set out to search for coordinated disinformation in advance of an election or around conflicts, I have found it. I have not investigated an online space in Africa and not found disinformation.
I think a lot of people are not aware of the scale of disinformation that is happening in Africa and how much it is distorting information networks. My colleagues have investigated other campaigns run from Tunisia (targeting Tunisia, Togo, and Côte d’Ivoire’s elections), Guinea, Kenya, and South Africa.
What questions do you still have about these campaigns and disinformation in Africa more generally?
I think learning from social media companies the identities of government officials linked to disinformation campaigns in places like Uganda would help answer outstanding questions we have about domestic disinformation in Africa, how it is being produced, and who is directing it.
What successful efforts have you seen to monitor, prevent, and remove inauthentic and inaccurate content in African digital spaces?
There are a number of fact-checking organizations now operating across Africa—such as Africa Check, PesaCheck, Media Monitoring Africa, and iLab—that work together and with Facebook to monitor and debunk some of the most egregious and widespread disinformation online. Likewise, Facebook now automatically takes down some duplicate and clearly inauthentic accounts and pages. But, like I said, this is still likely the tip of the iceberg.
“There are more and more local organizations doing these kinds of investigations, with fantastic journalism and disinformation reporting coming out of the continent.”
There are very few of us who are digging into disinformation campaigns and seeking to actively uncover their origins, links, and reach. This kind of online sleuthing and whistleblowing can be highly sensitive in some places, so it is not happening everywhere. It also takes training and usually a relationship with Facebook or Twitter to peel back the layers of these campaigns with additional information that only the platforms have access to and can share at their own discretion.
I think the online sleuthing I and a few others are doing could be replicated and expanded. The Digital Forensic Research Lab is doing a lot of training along these lines through our 360/Digital Sherlocks program to help journalists, civil society members, and researchers on the continent learn the skills to expose and analyze digital disinformation networks.
Unfortunately, there is not a lot of interest or funding to investigate disinformation in African countries. But there are more and more local organizations doing these kinds of investigations, with fantastic journalism and disinformation reporting coming out of the continent. I am always in awe of the researchers and journalists who do this work in repressive environments, as it can be incredibly dangerous.
Do you have policy suggestions for how these initiatives can be assisted, scaled up, or replicated?
Social media companies need to invest more resources into Africa. They do not have a sufficient understanding of the African countries they are operating in and their varied information and political landscapes. This leads to poor decision-making around disinformation and reactive rather than proactive practices. Likewise, researchers, civil society members, and reporters need more information from Facebook to understand these disinformation networks and who is ultimately behind them.
We also need more people who speak local African languages working for social media companies. You cannot have effective social media moderation in only a handful of languages. I understand this is quite difficult because in South Africa, for example, we have 11 official languages. But I do think more resources need to be invested into Africa when millions of Africans are online and social media companies clearly see the value of these markets.
It would be interesting if some of Africa’s larger democracies demanded that social media companies invest in monitoring disinformation and transparently share those findings as part of receiving permission to operate in their markets.
We really need the social media companies to step up their efforts because putting the onus on government regulation in Africa could backfire. We have seen that when countries like Nigeria, Uganda, or even South Africa have tried to criminalize disinformation, those measures can be used to clamp down on dissent. Such laws can thus become a repressive tactic against opposition parties and members of civil society who are increasingly organizing and sharing information online as other outlets for freedom of press, assembly, and speech are shrinking.