As the Israel-Palestine ceasefire agreement holds, social media platforms, including Facebook and its subsidiary Instagram, continue to come under fire over the censorship of pro-Palestinian content.
In times of conflict, any kind of censorship by major platforms can erase evidence of state-sanctioned violence, human rights abuses and potential war crimes against innocent civilians.
This is particularly concerning because social media evidence of brutality and violence is often the only testimony available to counter false narratives and mass denial of human rights abuses.
Holding platforms to account
According to Al Jazeera, the Palestinian death toll in the recent conflict with Israel stands at 248, including 66 children. At least 12 people have died in Israel, including two children.
On social media, Palestinians and supporters have documented violence through images and videos, with hashtags in both English and Arabic. But activists, digital rights defenders and users have called out the platforms over mounting evidence of the unjustified removal of pro-Palestinian content.
Facebook last week acknowledged it had inaccurately labelled certain words commonly used by Palestinians online (including “martyr” and “resistance”) as incitement to violence.
Senior Facebook executives apologised to Palestinian Prime Minister Mohammad Shtayyeh in a virtual meeting on May 20, after which a Facebook spokesperson told TIME Facebook was “actively working to respond to concerns about our content enforcement.”
Earlier this month, Instagram and Facebook labelled Jerusalem’s Al-Aqsa Mosque as being associated with “violence or a terrorist organisation”, according to a BuzzFeed report. This resulted in Instagram removing and blocking posts tagged with #AlAqsa or its Arabic counterparts #الاقصى or #الأقصى. A Facebook spokesperson claimed the posts “were restricted in error”.
In a letter to the Palestinian Mission to the United Kingdom, Facebook said it would work to resolve content moderation issues and investigate alleged campaigns on the platform to incite violence against Palestinians in Israel.
Meanwhile, The Intercept, a US news outlet, claimed on May 15 it had obtained internal Facebook policies which showed the company’s moderating of the term “zionist” enabled it to suppress criticism against the Israeli state on both Facebook and Instagram.
Due to a lack of transparency surrounding content moderation practices, it’s not clear how much platform censorship has occurred in the current conflict. What little independent insight we have mostly comes from a handful of digital rights organisations.
The global digital rights organisation Access Now has reported receiving hundreds of accounts of platforms suppressing pro-Palestinian content across Facebook, Twitter, TikTok and Instagram.
And the Palestinian digital rights group Sada Social recorded more than 200 violations against Palestinian social media content related to the Sheikh Jarrah demonstrations in occupied East Jerusalem. It described “violations” as:
[…] arbitrary measures against Palestinian content, especially with the tendency to stigmatise criticism of Zionism with anti-Semitism.
Similarly, the Palestinian and Arab digital rights organisation 7amleh — The Arab Center for the Development of Social Media — documented some 500 cases of Palestinian digital rights violations between May 6 and 19, along with platforms’ responses. The violations occurred on Instagram (50%), Facebook (35%), Twitter (11%) and TikTok (1%).
7amleh alleged that in most cases, users were not given an explanation for content deletion or account suspension.
Meetings with Facebook executives
Sada Social released a statement last week accusing Israel of trying to “impose its hegemony on social media platforms”.
The statement came in response to a meeting between Israeli Defence Minister Benny Gantz and executives from Facebook and TikTok. In it, Gantz requested the corporations remove Palestinian content Israel believed would incite violence or spread misinformation.
Soon after, former Facebook executive Ashraf Zeitoon spoke to Al Jazeera Plus about historical and ongoing pressure by the Israeli government to censor Palestinian content. He said Facebook consistently complied with pro-Israeli allegations by systematically silencing Palestinian voices.
A Facebook spokesperson told Al Jazeera Plus:
This person hasn’t worked at Facebook in more than four years and has no direct knowledge of our decision-making processes during these horrible global events nor the authority to speak about our policies or how we enforce them.
Because content moderation processes remain opaque, we also don’t know the extent to which platform censorship is carried out in direct response to user complaints, at the request of foreign governments, or as a result of algorithmic decisions.
In response to growing criticism, Facebook last week established a “special operations centre” staffed by experts including native Arabic and Hebrew speakers.
The company, which has offices in Israel, has faced growing criticism by digital rights defenders and activists over its business interests, platform policies and content moderation process.
Facebook has a public policy director for Israel and the Jewish diaspora, Jordana Cutler, a former adviser to Israeli Prime Minister Benjamin Netanyahu.
It does not have a dedicated public policy director for Palestinians. Palestinian matters fall under the remit of its Middle East and North Africa policy chief.
Last year, Facebook established an independent oversight board to address growing criticisms about its role in suppressing online speech. But the legitimacy of the board was questioned after the controversial appointment of Emi Palmor, a former general director of the Israeli Ministry of Justice’s Cyber Unit.
The enactment of social and political power
Social media platforms can effectively dictate what is permitted in online discourse, using moderation processes that are opaque and fraught with inconsistencies.
In the context of the Israel-Palestine conflict, the removal of content deemed “offensive”, “graphic” or “inciteful” by private corporations and oppressive political actors essentially controls how we understand the conflict.
This, in turn, can influence the protection of rights and potential to prosecute human rights abuses. This is a form of oppression I refer to as “platform necropolitics”.
It provides a prism through which to understand how corporate and political actors are increasingly working together to control which voices and what content are (or aren’t) given space online.