Very, Very Little Of ‘Content Moderation’ Has Anything To Do With Politics


from the that’s-not-how-it-works dept

It’s truly amazing how much of the discussion around content moderation fixates on the claims that “content moderation is censorship” and that it’s primarily about “suppressing” political speech. That’s not how it works at all. Honestly, most content moderation efforts originated around two things: (1) spam prevention and (2) copyright infringement. Over time that has expanded, but the major categories of content moderation have little to nothing to do with “viewpoint” discrimination, no matter what Texas seems to think.

Casey Newton has a good post exploring Facebook’s latest transparency report where he highlights how little moderation has anything to do with people’s viewpoints.

An important thing to focus on, whether you’re an average user worried about censorship or recently bought a social network promising to allow almost all legal speech, is what kind of speech Facebook removes. Very little of it is “political,” at least in the sense of “commentary about current events.” Instead, it’s posts related to drugs, guns, self-harm, sex and nudity, spam and fake accounts, and bullying and harassment.

To be sure, some of these categories are deeply enmeshed in politics — terrorism and “dangerous organizations,” for example, or what qualifies as hate speech. But for the most part, this report chronicles stuff that Facebook removes because it’s good for business. Over and over again, social products find that their usage shrinks when even a small percentage of the material they host includes spam, nudity, gore, or people harassing each other.

Usually social companies talk about their rules in terms of what they’re doing “to keep the community safe.” But the more existential purpose is to keep the community returning to the site at all.

I dug into some of the numbers, and if we just look at “content actioned” over the last couple of years, it appears that spam is still the major focus. Facebook removed 1.8 billion pieces of content it judged to be spam in just the fourth quarter of 2021. It also removed 1.6 billion “fake accounts” (Facebook requires accounts to be associated with real humans). The numbers for other categories are much smaller: 31 million pieces of content removed for “sexual activity,” 16.5 million dealing with sexual exploitation, and another 2.1 million around “nudity and physical abuse” involving children. 16 million pieces of content were actioned for terrorism (which was way up), 26 million for “violent and graphic content,” 6.8 million for “suicide and self-injury,” 15 million for “hate speech,” and another 9.5 million for “bullying and harassment.”

Even if you assume that some of the categories listed above are political, those numbers are still dwarfed by spam and fake accounts, which make up the vast majority of the content Facebook’s moderators have to deal with. Putting this all in graphic form, you realize that content moderation is almost entirely about spam and (for Facebook) dealing with fake accounts. It is not, generally, about being “censors.” (Copyright appears to be covered in a separate transparency report.)
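To make the scale gap concrete, here’s a rough back-of-the-envelope sketch in Python using only the Q4 2021 figures quoted above (the category labels are my shorthand, not Facebook’s official report names):

```python
# Rough back-of-the-envelope totals from the Q4 2021 transparency report
# figures quoted above (all counts are pieces of content actioned).
# Labels are shorthand, not Facebook's exact category names.
actioned = {
    "spam": 1_800_000_000,
    "fake accounts": 1_600_000_000,
    "adult sexual activity": 31_000_000,
    "child sexual exploitation": 16_500_000,
    "child nudity / physical abuse": 2_100_000,
    "terrorism": 16_000_000,
    "violent and graphic content": 26_000_000,
    "suicide and self-injury": 6_800_000,
    "hate speech": 15_000_000,
    "bullying and harassment": 9_500_000,
}

spam_and_fakes = actioned["spam"] + actioned["fake accounts"]
everything_else = sum(v for k, v in actioned.items()
                      if k not in ("spam", "fake accounts"))
total = spam_and_fakes + everything_else

print(f"Spam + fake accounts: {spam_and_fakes:,} "
      f"({spam_and_fakes / total:.1%} of actioned content)")
print(f"All other categories: {everything_else:,} "
      f"({everything_else / total:.1%})")
```

Run it and spam plus fake accounts come out to roughly 96–97% of everything actioned; all of the other categories combined are around 3–4%.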

So, for everyone who insists that there should be no content moderation and that everything should flow, just recognize that most of what you’d be enabling is… spam. Lots and lots and lots of spam. Unfathomable amounts of spam.

To make this more explicit, I put all of the other categories together and made this chart:

So, yeah. You want content moderation. You need content moderation.

Content moderation is not about censoring political views.

Filed Under: content moderation, spam, transparency report

Companies: facebook


