
Community Standards Enforcement Report, Fourth Quarter 2020

Today we’re publishing our Community Standards Enforcement Report for the fourth quarter of 2020. This report provides metrics on how we enforced our policies from October through December, including metrics across 12 policies on Facebook and 10 on Instagram. 

Enforcement Action Highlights

Last quarter, we shared the prevalence of hate speech on Facebook for the first time to show the percentage of times people see this type of content on our platform. This quarter, hate speech prevalence dropped from 0.10-0.11% to 0.07-0.08%, or 7 to 8 views of hate speech for every 10,000 views of content. The prevalence of violent and graphic content also dropped from 0.07% to 0.05%, and the prevalence of adult nudity content dropped from 0.05-0.06% to 0.03-0.04%.
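To make the metric concrete, here is a minimal sketch of the arithmetic behind a prevalence figure. The function and sample numbers are illustrative assumptions only; the actual methodology relies on sampling content views, which this sketch does not attempt to reproduce.

```python
# Hypothetical illustration of the prevalence metric described above:
# prevalence = (views of violating content) / (total content views).

def prevalence(violating_views: int, total_views: int) -> float:
    """Fraction of content views that contained violating content."""
    return violating_views / total_views

# 7 views of hate speech per 10,000 content views sits at the low end
# of the reported 0.07-0.08% range.
print(f"{prevalence(violating_views=7, total_views=10_000):.2%}")  # 0.07%
```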

Our improvements in prevalence are mainly due to changes we made to reduce problematic content in News Feed. Each post is ranked by processes that take into account a combination of integrity signals, such as how likely a piece of content is to violate our policies, along with signals we receive from people, such as survey responses or actions like hiding or reporting posts. Improving how we use these signals helps tailor News Feed to each individual’s preferences, and it also reduces the number of times we display posts that may later be determined to violate our policies.
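As a rough, hypothetical illustration of combining integrity signals with personalization signals in ranking (the signal names, weights, and scoring formula below are assumptions made for the sketch, not the actual News Feed model):

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    relevance: float        # baseline personalization score (assumed)
    violation_prob: float   # model-estimated likelihood of a policy violation
    user_reports: int       # hides/reports from people who saw the post

def rank_score(s: PostSignals,
               violation_weight: float = 5.0,
               report_weight: float = 0.5) -> float:
    """Hypothetical scoring: start from the personalization score and
    subtract penalties for integrity signals, so likely-violating
    content surfaces less often even before a final policy decision."""
    return (s.relevance
            - violation_weight * s.violation_prob
            - report_weight * s.user_reports)

posts = [
    PostSignals(relevance=0.9, violation_prob=0.6, user_reports=3),
    PostSignals(relevance=0.7, violation_prob=0.05, user_reports=0),
]
# The likely-violating post ranks below the benign one despite higher relevance.
ranked = sorted(posts, key=rank_score, reverse=True)
```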

Our proactive rate, the percentage of content we took action on that we found before a user reported it to us, improved in certain problem areas, most notably bullying and harassment. Our proactive rate for bullying and harassment went from 26% in Q3 to 49% in Q4 on Facebook, and from 55% to 80% on Instagram. Improvements to our AI in areas where nuance and context are essential, such as hate speech or bullying and harassment, helped us better scale our efforts to keep people safe.
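The proactive rate defined above reduces to a simple ratio; a minimal sketch, with the counts assumed purely for illustration:

```python
def proactive_rate(found_proactively: int, actioned_total: int) -> float:
    """Share of actioned content found before any user report.

    Matches the definition above: of all content acted on, the fraction
    detected proactively (e.g., by automated systems) rather than via
    user reports.
    """
    return found_proactively / actioned_total

# Example consistent with the Facebook bullying-and-harassment figure:
# if 49 of every 100 actioned pieces were found proactively, the rate is 49%.
print(f"{proactive_rate(49, 100):.0%}")  # 49%
```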

We’re continuing to slowly rebuild our global content review workforce, though we anticipate that COVID-19 will limit our ability to review content until a vaccine is widely available. With limited capacity, we prioritize the most harmful content for our teams to review, such as suicide and self-injury content.

On Facebook in Q4 we took action on:

On Instagram in Q4 we took action on:

2021 Roadmap

This year, we plan to share additional metrics on Instagram and add new policy categories on Facebook. We’re also working to make our enforcement data easier for people to understand by making these reports more interactive. Our goal is to lead the technology industry in transparency, and we’ll continue to share more enforcement metrics as part of this effort. We also believe that no company should grade its own homework. Last year, we committed to undertaking an independent, third-party audit of our content moderation systems to validate the numbers we publish, and we’ll begin that process this year.

We will continue building on this progress and improving our technology and enforcement efforts to keep harmful content off our apps.