Social Media Companies Aren’t So Hip to Help Stop Hate Online

Self-regulation on removing online hate speech from social media has not yet borne the fruits IT giants promised, according to new research.
Populism often walks hand in hand with xenophobia and hate speech, which made the pledge made earlier this year by the leading IT companies in the field of social media both timely and valuable.

IT companies team up against hate

Under a code of conduct put in place by the European Commission, they promised to swiftly remove reported illegal online hate speech in order to curb the promotion of incitement to violence and hateful conduct.

Companies like Facebook, Twitter, Google's YouTube, Microsoft and others agreed that, upon receiving a valid removal notification, they would assess the request against their rules and community guidelines and, where necessary, against national laws transposing the Framework Decision on combating racism and xenophobia. The companies promised to act on valid removal requests in less than 24 hours, removing or disabling access to such content if necessary.

Disappointing results

Following the companies' pledge in June 2016, the Amsterdam-based International Network Against Cyber Hate (INACH), in a coordinated monitoring exercise with a handful of European NGOs working in the field - most of them members of INACH - checked how well the signatories were acting under the Code of Conduct. After a six-week monitoring period based on a common methodology, they concluded that neither the speed nor the proportion of content removals by social media services showed good results.


Initial results show that 28 percent of all notifications of alleged illegal online hate speech led to the removal of the flagged content. However, only 40 percent of all notifications are currently reviewed in under 24 hours, while the aim of the code of conduct is to review the majority within 24 hours.

EU Commissioner for Justice Věra Jourová said: "The last weeks and months have shown that social media companies need to live up to their important role and take up their share of responsibility when it comes to phenomena like online radicalization, illegal hate speech or fake news. While IT companies are moving in the right direction, the first results show that the IT companies will need to do more to make it a success."

Companies 'should have done a lot better'


The 12 NGOs, based in nine EU countries, analyzed the responses to a total of 600 notifications. The takedown/removal rate was as low as 28.2 percent.

Tamás Berecz, an analyst for INACH, told Liberties that "based on the research, the social media companies could have done a lot better and should have done a lot better. Their response came down to the usual promise to do better – while they tried defending their inaction with arguments for the freedom of speech and citing how labor intensive monitoring is."

Social media companies' unwillingness to provide the EC and the participating NGOs with their in-house data on online hate further complicates the process. INACH experts see only two possible reasons for the companies' failure to fulfill their obligations: either the issue is a matter of capacity, in which case the companies need to spend more on these tasks; or it's a matter of knowledge, in which case better training for the review experts is needed.

Either way, there will be a second monitoring period in 2017 to assess progress and decide on next steps.

Liberties will be there to keep you posted!