Facebook has removed more than two hundred organizations linked to white supremacy, the ideology that holds white people to be superior to others. Blocking the groups is part of an updated Facebook policy against terrorism and hate.
Previously, Facebook's counterterrorism efforts focused on large organizations such as IS and Al Qaeda, but the company now also targets individuals and smaller hateful groups and organizations. As a result, more than two hundred white supremacist groups have now been removed.
Facebook uses a combination of artificial intelligence and human reviewers who assess content. In addition to the groups themselves, Facebook also removes posts expressing admiration or support for them.
Earlier this year, Facebook also announced that it would delete messages referring to white nationalism and white separatism.
Facebook tightened its policy against terrorist content after the attack in Christchurch, where the perpetrator livestreamed the terrorist act on Facebook.
In a blog post, Facebook says that “the act showed the misuse of technology to spread radical ideas and showed that the recognition of and action against violent extremist content must improve”.
Facebook also says it has adjusted its definition of terrorist organizations. Until now, Facebook used a definition that focused on violence with a political or ideological motive. The definition now also covers violence aimed at intimidating civilians.
After the attack in Christchurch, Facebook was criticized for the spread of footage of the attack on the social network. Facebook then imposed restrictions on who is allowed to stream live. The company is now also collaborating with the US and UK governments to recognize such footage more quickly.
Before the attack, the algorithms meant to recognize violent content had not been trained on first-person footage, such as video shot from the shooter's perspective. Facebook now wants to use first-person footage from military training exercises to train the software, so that in the future it can recognize first-person footage of real events without blocking similar footage from movies and games.
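The training-data problem described above can be illustrated with a toy sketch. This is purely illustrative and not Facebook's actual system: the "frame embeddings" are synthetic 2-D points, and the nearest-centroid rule stands in for a real video classifier. The point is that the model needs labeled examples of both real first-person footage (to flag) and visually similar game or movie footage (to allow), so that it learns the boundary between the two rather than flagging everything that looks first-person.

```python
# Illustrative sketch (hypothetical, not Facebook's system): a classifier
# trained on both real first-person footage (positive class) and similar
# game/movie footage (negative class), so real events are flagged without
# blocking fictional content. Features are synthetic stand-ins for frame
# embeddings.
import random

random.seed(0)

def synthetic_frames(center, n=50):
    """Generate toy 2-D 'embeddings' clustered around a center point."""
    return [(center[0] + random.gauss(0, 0.5),
             center[1] + random.gauss(0, 0.5)) for _ in range(n)]

# Hypothetical clusters: real bodycam-style footage vs. game footage.
real_footage = synthetic_frames((2.0, 2.0))
game_footage = synthetic_frames((-2.0, -2.0))

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

real_c = centroid(real_footage)
game_c = centroid(game_footage)

def classify(frame):
    """Nearest-centroid rule: flag only frames closer to the 'real' cluster."""
    d_real = (frame[0] - real_c[0]) ** 2 + (frame[1] - real_c[1]) ** 2
    d_game = (frame[0] - game_c[0]) ** 2 + (frame[1] - game_c[1]) ** 2
    return "flag" if d_real < d_game else "allow"

print(classify((2.1, 1.9)))    # a frame resembling real footage
print(classify((-1.8, -2.2)))  # a frame resembling game footage
```

Without the negative (game/movie) examples, any first-person-looking frame would sit closest to the only cluster the model knows, which is exactly the over-blocking problem the article describes.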
The update to the guidelines comes just before Facebook, along with Google and Twitter, is due to appear before the US Senate on Wednesday, September 18, in connection with concerns about the role of social media in mass shootings.