Plan for UN outlines how social media firms can help prevent ethnic violence

Social media platforms will be urged to protect minorities and help prevent ethnic violence by hiring non-English language moderators and conducting safety tests on their algorithms, under proposals for a UN global code of conduct.

A British trio whose work has influenced the regulatory framework behind the online safety bill in the UK has sent a detailed plan for tackling toxic content on social media and video platforms to a UN official drawing up anti-online hate guidelines.

Last month, a Facebook whistleblower claimed her former employer was “literally fanning ethnic violence” in countries including Ethiopia because the company was not policing its service adequately outside the US. In testimony to US lawmakers, Frances Haugen said that although only 9% of Facebook users speak English, 87% of the platform’s misinformation spending is devoted to English speakers.

“Frances Haugen has raised the issue of weak systems and processes at social media companies causing harm,” said William Perrin, a trustee of the Carnegie UK Trust charity and co-author of the proposals.

“This guidance provides a way of strengthening this process and reducing harm around the world. We have particularly picked up Haugen’s point about linguistic capacity: there should be adequate numbers of people at these companies who understand what is actually happening on these platforms in order to prevent hate speech harming minorities.”

The proposals are being considered by the UN’s special rapporteur on minority issues, Fernand de Varennes, who is drafting guidelines on combating hate speech on social media that targets minorities. The guidelines will be submitted to the UN human rights council.

The Carnegie UK Trust submission states: “Social media service providers should have in place sufficient numbers of moderators, proportionate to the service provider size and growth and to the risk of harm, who are able to review harmful and illegal hate speech and who are themselves appropriately supported and safeguarded.”

It adds that moderators should be “trained in their specialist subjects and on related language and cultural context considerations”.

Facebook says it has strict rules against hate speech and 15,000 people reviewing content in more than 70 languages across the world. It also says it has taken action to improve its content moderation in Myanmar, where the company admitted the platform had hosted hate speech directed at the Rohingya, the country’s Muslim minority.

Earlier this month, Facebook removed a post by Ethiopia’s prime minister, Abiy Ahmed, for “inciting and supporting violence”.

The other Carnegie proposals include: asking chief executives to make public statements committing their organisations to combating hate speech against minorities; conducting safety tests on algorithms to check how they affect different markets, cultures and languages; maintaining a point of contact with local law enforcement agencies for reporting illegal content; and, drawing on one of the key features of the online safety bill, asking tech firms to produce risk assessments showing how their platforms could contribute to distributing hate speech.

Discussing the necessity of risk assessments, Lorna Woods, another co-author and a professor at the school of law and human rights centre at the University of Essex, said: “Look for the problems – don’t just assume that it’s alright.”