The torrent of racist abuse targeted at black England footballers in the wake of the team’s loss in the Euro 2020 final on Sunday has reignited the national debate over the policing of social media platforms, with Boris Johnson urging tech bosses on Tuesday to “up their game”.
On Tuesday evening, Johnson met managers from platforms including Facebook and Twitter in Downing Street for talks that focused on the issue of racist abuse in the fallout from the England football team’s defeat to Italy.
Johnson’s spokesperson said the prime minister told the cabinet earlier in the day that the abuse of the black players who failed to score penalties during the final — Marcus Rashford, Jadon Sancho and Bukayo Saka — was “utterly disgraceful and had emerged from the dark spaces of the internet”.
However, Johnson continued to face criticism that he and fellow ministers gave cover to racist abuse of the England team after they initially failed to condemn fans who booed players “taking the knee”, an anti-racist gesture.
The Metropolitan Police has said it is investigating the social media abuse. Facebook said it was working with the authorities.
The controversy comes amid growing calls from MPs for platforms to address online anonymity ahead of the introduction of the UK’s Online Safety Bill in parliament this autumn.
The bill, which aims to crack down on hate speech and bullying, “doesn’t recognise anonymity as a problem”, said Siobhan Baillie, Conservative MP for Stroud. Baillie has authored a new bill that would require social media platforms to provide a voluntary identity verification service.
“If anonymous abuse isn’t tackled in this bill, it will quite quickly look out of date to the public in future,” she added.
The proposals were given momentum by more than 640,000 signatories to an online petition on the UK government’s site, entitled: “Make verified ID a requirement for opening a social media account.” The petition had already passed the 100,000-signature threshold required to secure a debate by MPs in parliament this spring, but hundreds of thousands more people signed it as public anger built over the latest racist attacks.
The ID verification bill would be introduced as an amendment to the draft Online Safety Bill later this year. Supported by a dozen MPs, it would require platforms such as Twitter, Facebook and Instagram to allow users to voluntarily verify their identity, meaning that feeds could be filtered to show only those accounts marked as legitimate.
Currently, the Online Safety Bill includes a new “duty of care”, with social media sites required to take robust measures to tackle illegal abuse, including swift and effective action against hate crime, harassment and threats directed at individuals.
It gives Ofcom, the regulator, the power to fine companies failing in this duty of care up to £18m, or 10 per cent of global turnover, whichever is higher. It could also block access to sites.
“We will not impose a blanket ban on anonymity online because for some groups such as people exploring their sexuality or suffering from domestic abuse it is important,” said a spokesperson from the Department for Digital, Culture, Media and Sport.
“However, all social media companies will have to meet their duty of care, which will mean stopping repeat offenders from opening new accounts and working with the authorities to make it easier to find people who set up accounts anonymously to abuse others.”
Sanjay Bhandari, chair of Kick It Out, football’s anti-discrimination group, said there was a “balancing exercise” between protecting free speech and preventing harassment.
“At the moment my fear is [the Online Safety Bill] is being watered down on the basis of freedom of speech and actually many of the kinds of emojis and abuse we saw at the weekend would not be criminalised under the . . . bill,” he added.
The racism row has also revived concerns over the effectiveness of content moderation on the major platforms. Facebook, Instagram and Twitter all use a combination of human moderators and algorithmic filtering to stem abuse on their apps.
Twitter said it had taken down more than 1,000 messages and permanently suspended a number of accounts for violating its rules, and that the majority had been taken down proactively, before they had been reported.
Facebook said that a dedicated moderation team was tasked with monitoring posts related to the Euro 2020 final. However, the team does not screen comments before they appear online, even in high-risk and high-profile situations such as Sunday’s match, in which only a small number of accounts were targeted.
Facebook’s automated systems notably failed to detect certain kinds of racial abuse, including emoji symbols, which tend to be used more obliquely than words and so are often difficult for algorithms to contextualise.
This month, Instagram rolled out new tools for users in the UK allowing them to block any direct messages or comments on their posts that contain certain words. However, it is unclear whether the England players targeted in recent days were using the new system.
Tracy Chou, founder of Block Party, a start-up building tools to tackle online abuse, said she was “not surprised” that Facebook and Twitter were still struggling with the problem.
“This is an indictment of years of under-investment in these kinds of issues to solve it more from a product and systems level,” she added. “One of the flaws in the design of these online safety systems is they put the full burden of the abuse on the person being targeted by it, which doesn’t work when you have such an inundation.”
Facebook said: “We believe the steps we’ve recently announced and the work we do with the police help to hold people accountable for their actions . . . while avoiding these risks.”
Additional reporting by Samuel Agini