
Community Standards Enforcement Report, November 2020

By Guy Rosen, VP of Integrity

Today we’re publishing our Community Standards Enforcement Report for the third quarter of 2020. The report details how we enforced our policies from July through September, covering metrics across 12 policies on Facebook and 10 policies on Instagram.

What’s New: Hate Speech Prevalence

For the first time, we’re including the prevalence of hate speech on Facebook globally. In Q3 2020, hate speech prevalence was 0.10%–0.11%, or 10 to 11 views of hate speech for every 10,000 views of content. Due to our investments in AI, we have been able to remove more hate speech and find more of it proactively before users report it to us. Our enforcement metrics this quarter, including how much hate speech content we found proactively and how much content we took action on, indicate that we’re making progress catching harmful content. Prevalence, on the other hand, estimates the percentage of times people see violating content on our platform. Read more about our work on hate speech.
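
To make the prevalence arithmetic concrete, here is a minimal sketch in Python of how a sampled prevalence estimate works. The data, sample size and labeling step are hypothetical illustrations, not Facebook’s actual measurement pipeline; in practice, sampled views would be labeled by reviewers rather than simulated.

```python
import random

def estimate_prevalence(sampled_views):
    """Estimate prevalence: the fraction of sampled content views
    that were views of violating content."""
    violating = sum(1 for view in sampled_views if view["violating"])
    return violating / len(sampled_views)

# Hypothetical data: simulate 100,000 content views where roughly
# 0.1% are views of violating content, then draw a random sample.
views = [{"violating": random.random() < 0.001} for _ in range(100_000)]
sample = random.sample(views, 10_000)

p = estimate_prevalence(sample)
print(f"Estimated prevalence: {p:.2%}")                  # e.g. 0.10%
print(f"~{p * 10_000:.0f} violating views per 10,000")   # e.g. ~10
```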

Enforcement Action Highlights

While the COVID-19 pandemic continues to disrupt our content review workforce, we are seeing some enforcement metrics return to pre-pandemic levels. Our proactive detection rates for violating content are up from Q2 across most policies, due to improvements in AI and the expansion of our detection technologies to more languages. Even with reduced review capacity, we still prioritize the most sensitive content for people to review, such as suicide and self-injury and child nudity.

On Facebook in Q3, we took action on:

  • 22.1 million pieces of hate speech content, about 95% of which was proactively identified
  • 19.2 million pieces of violent and graphic content (up from 15 million in Q2)
  • 12.4 million pieces of child nudity and sexual exploitation content (up from 9.5 million in Q2)
  • 3.5 million pieces of bullying and harassment content (up from 2.4 million in Q2)

On Instagram in Q3, we took action on:

  • 6.5 million pieces of hate speech content (up from 3.2 million in Q2), about 95% of which was proactively identified (up from about 85% in Q2)
  • 4.1 million pieces of violent and graphic content (up from 3.1 million in Q2)
  • 1.0 million pieces of child nudity and sexual exploitation content (up from 481,000 in Q2)
  • 2.6 million pieces of bullying and harassment content (up from 2.3 million in Q2)
  • 1.3 million pieces of suicide and self-injury content (up from 277,400 in Q2)

The increase in our proactive detection rate for hate speech on Instagram was driven in part by improvements to our proactive detection technology for English, Arabic and Spanish, and by expanded automation technology. We expect fluctuations in these numbers as we continue to adapt to COVID-related workforce challenges.
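
For readers unfamiliar with the metric, the proactive detection rate cited throughout this report is the share of actioned content that our systems found before any user reported it. A minimal sketch of that arithmetic follows; the proactive count below is hypothetical, back-derived from the reported rate of about 95% rather than taken from the report.

```python
def proactive_rate(content_actioned, found_proactively):
    """Share of actioned content that was detected by systems
    before any user reported it."""
    return found_proactively / content_actioned

# Illustrative counts, loosely echoing the Instagram hate speech
# figures above; the proactive count is a hypothetical example.
actioned = 6_500_000
proactive = 6_175_000

print(f"Proactive detection rate: {proactive_rate(actioned, proactive):.0%}")
# -> Proactive detection rate: 95%
```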

Sharing Additional Policies Publicly

Today we are updating our Community Standards website to include additional policies that require more context and can’t always be applied at scale. These policies often require specialized teams to gather more information on a given issue in order to make decisions. For example, we require additional information, typically from a family member, before removing a deceased person’s account. We may also use additional information from trusted partners with local expertise or public news sources to help us make enforcement decisions for these nuanced policies. 

Several of these policies have been announced before. They include our policy that prohibits posting misinformation and unverifiable rumors that contribute to the risk of imminent violence or physical harm, and our policy of adding a warning label to sensitive content, such as imagery posted by a news agency that depicts child nudity in the context of famine, genocide, war crimes or crimes against humanity. While these policies are not new, we are sharing more details today to be even more transparent about our enforcement practices. Moving forward, just as we do with our scaled policies, we will continue to publicly update the Community Standards monthly as new policies that require additional context are developed.

We’ll continue improving our technology and enforcement efforts to remove harmful content from our platform and keep people safe while using our apps.

The Community Standards Enforcement Report is published in conjunction with our biannual Transparency Report that shares numbers on government requests for user data, content restrictions based on local law, intellectual property takedowns and internet disruptions.
