COVID Misinformation Blunder: Instagram Censors Widely Respected Cochrane Collaboration


A desire to stop the spread of COVID-19 misinformation has fueled increasingly intense efforts to restrict speech on social media platforms. Major news outlets, communications researchers, and even some public health experts have jumped on this bandwagon, calling for a variety of policies designed to stem the proliferation of anti-vaccine messaging and other scientifically dubious content.

Perhaps the most alarming aspect of this campaign is its propensity to silence trustworthy sources of information right alongside nonsense pushers like Joseph Mercola and Robert F. Kennedy, Jr. The British Medical Journal has documented multiple instances of this friendly fire throughout the pandemic. I also highlighted a particularly ridiculous example in this story: Social Media Censorship: Scientist Corrects Anti-GMO Silliness, Facebook Threatens To Ban Him.

But the most egregious case of social media giants silencing credible voices occurred last week when Instagram prohibited users from tagging the Cochrane Collaboration because the organization “repeatedly posted content that goes against our community guidelines on false content about COVID-19 or vaccines,” the platform wrote in a notification to users.

If you don’t appreciate how absurd Instagram’s decision was, consider that the Cochrane Collaboration is an international network of some 30,000 scientific experts that publishes high-quality reviews of the evidence on a variety of medical topics. To date, it has published more than 7,500 systematic reviews. The Canadian Medical Association has called Cochrane the “best single resource” of its kind. The World Health Organization maintains a partnership with the nonprofit because it contributes “significantly to the advancement of public health.” In sum, this is the equivalent of censoring the March of Dimes for spreading misinformation about birth defects—it’s preposterous.

Why censor Cochrane?

It’s not clear why Instagram would punish such a reputable organization. The group speculated on Twitter that its July 2021 review of ivermectin as a COVID-19 treatment could have triggered the controversy. If so, Instagram has some ‘splaining to do. Here’s the review’s conclusion:

Based on the current very low‐ to low‐certainty evidence, we are uncertain about the efficacy and safety of ivermectin used to treat or prevent COVID‐19. The completed studies are small and few are considered high quality. Several studies are underway that may produce clearer answers in review updates. Overall, the reliable evidence available does not support the use of ivermectin for treatment or prevention of COVID‐19 outside of well‐designed randomized trials. [my emphasis]

Cochrane also published a review in November 2020 documenting the limited efficacy of masking against respiratory viruses. Other reputable publications have reached the same conclusion, so there’s nothing especially egregious about that November review. Nonetheless, it’s possible that Instagram takes the simple-minded stance that universal masking is the “right” position and thus targeted Cochrane’s article as misinformation.

In either case, we’re talking about a scientific dispute based on limited evidence. Cochrane summarized the available data for experts and the general public; it neither endorsed the use of ivermectin for COVID-19 nor rejected the use of masks. Neither of these examples provides a logical basis for any censorship.

The usual objection

Perhaps this was just an oversight, someone will inevitably respond. The decision was probably automated; nobody at Instagram was dim enough to make this mistake in real time. Maybe that’s the case, but how is that a satisfying answer? It means that some programmer at Instagram wrote an algorithm that can’t distinguish between Cochrane on the one hand and, say, Children’s Health Defense, a virulent anti-vaccine group, on the other. More importantly, the intention of a policy (censor disreputable voices) is no excuse for its indefensible results (oops, we flagged a premier medical organization with a 30-year history).

I think the better explanation is this: social media platforms are poor judges of what constitutes good science, especially as it relates to a novel virus that we’re learning more about almost constantly. Their community standards related to “misinformation” are influenced by current political trends and contain loopholes big enough—“We recognize that words mean different things or affect people differently”—to justify any policy the companies want to pursue.

With those considerations in mind, it’s clear that no amount of algorithmic fine-tuning will solve the problem. Social media platforms need to quit moderating content they know nothing about. 




