TikTok to Roll Out ‘Content Levels’ Rating System | #socialmedia | #children | #sextrafficking | #childsafety

TikTok has faced lots of criticism for its alleged impacts on young users.

TikTok is the wild west of social media feeds (and they’re all kinds of a wild west). A scroll could start on a dance trend, jump to a clip of raw chicken ‘marinating’ in NyQuil, and end on a video of someone filing their own teeth. It’s weird out there, and occasionally dangerous. Now TikTok is taking its next steps to rein things in.

In a Wednesday blog post, the company announced a rating system called “Content Levels” and said it plans to institute an early version “in the coming weeks.” TikTok had indicated back in February that it was moving toward age-based feed restrictions, and Content Levels offers the first details of what that might look like. App users will also now have more control over their own video streams, with the ability to selectively mute hashtags.

Though the social media giant wrote that its new moderation scheme is based on the ones used by the film, TV, and gaming industries, the company won’t immediately display ratings alongside video clips. Instead, the sorting and filtering will happen on the back end.

“When we detect that a video contains mature or complex themes, for example, fictional scenes that may be too frightening or intense for younger audiences, a maturity score will be allocated to the video to help prevent those under 18 from viewing it across the TikTok experience,” wrote the company. “We have focused on further safeguarding the teen experience first and in the coming months we plan to add new functionality to provide detailed content filtering options for our entire community so they can enjoy more of what they love.”

Each “maturity score” will, in theory, be assigned by a TikTok moderator, though in the past the company has mentioned the possibility of platform creators rating their own content before posting. The company didn’t explain the details of its maturity rating criteria, and did not immediately respond to Gizmodo’s request for comment.

TikTok emphasized in its announcement that the incoming content moderation system is in its early days. “We also acknowledge that what we’re striving to achieve is complex and we may make some mistakes,” the company wrote. But in the meantime—while we’re waiting for comprehensive, top-down, age-based content filtering—app users can now create their own restrictions. Hashtags or words can be muted in the “For You” and “Following” feeds, so scrolls can be slightly more curated than before. The platform said that this feature, along with further efforts to diversify recommended videos, will be coming in the next few weeks.

TikTok has had a meteoric rise, especially among teenagers and even younger children. In the first three months of 2022, it was the most downloaded app worldwide. Throughout its rocket journey to the top, though, TikTok has faced lots of flak—both for its controversial and allegedly flawed privacy policies and for its impact on users.

The platform already has content guidelines, and it bans specific categories of videos based on user reporting and on employees tasked with sifting through posts. In March, two former TikTok moderators sued the company over trauma they say they incurred while working to filter violent or otherwise inappropriate videos from the platform. The lawsuit claims that TikTok doesn’t provide adequate mental health services or protection to moderators—which doesn’t necessarily bode well for a planned expansion of moderation across the app.

The company is also facing lawsuits from parents who claim their children were hurt or even killed because of content they saw on TikTok. In May, the mother of a 10-year-old girl sued the company after she said her daughter died of asphyxiation attempting a “Blackout Challenge,” popularized on the app. More parents filed similar lawsuits this month. New legislation in California could further allow parents to sue over claims of social media addiction.

It remains to be seen if the platform’s new content moderation efforts can make a dent in the issue of potentially dangerous, viral video trends.
