Today we’re publishing the sixth edition of our Community Standards Enforcement Report, our first quarterly update, providing metrics on how we enforced our policies from April 2020 through June 2020. This report includes metrics across 12 policies on Facebook and 10 policies on Instagram.
Due to the COVID-19 pandemic, we sent our content reviewers home in March to protect their health and safety and relied more heavily on our technology to help us review content. We’ve since brought many reviewers back online from home and, where it is safe, a smaller number into the office. We’ll continue using technology to prioritize the review of content that has the potential to cause the most harm. Today’s report shows the impact of COVID-19 on our content moderation and demonstrates that, while our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology.
For example, we rely heavily on people to review suicide and self-injury content and child exploitative content, and to help improve the technology that proactively finds and removes identical or near-identical content that violates these policies. With fewer content reviewers, we took action on fewer pieces of suicide and self-injury content on both Facebook and Instagram, and on fewer pieces of child nudity and sexual exploitation content on Instagram. Despite these decreases, we prioritized and took action on the most harmful content within these categories. Our focus remains on finding and removing this content while increasing reviewer capacity as quickly and as safely as possible.
The number of appeals is also much lower in this report because we couldn’t always offer them. We let people know about this and, if they felt we made a mistake, still gave them the option to tell us they disagreed with our decision. We reviewed many of these instances and restored content when appropriate. Lastly, because we’ve prioritized removing harmful content over measuring certain efforts during this time, we were unable to calculate the prevalence of violent and graphic content, and adult nudity and sexual activity. We anticipate we’ll be able to share these metrics for Q3 in our next report.
Despite the impact of COVID-19, improvements to our technology enabled us to take action on more content in some areas, and increase our proactive detection rate in others. Our proactive detection rate for hate speech on Facebook increased 6 points from 89% to 95%. In turn, the amount of content we took action on increased from 9.6 million in Q1 to 22.5 million in Q2. This is because we expanded some of our automation technology in Spanish, Arabic and Indonesian and made improvements to our English detection technology in Q1. In Q2, improvements to our automation capabilities helped us take action on more content in English, Spanish and Burmese. On Instagram, our proactive detection rate for hate speech increased 39 points from 45% to 84%, and the amount of content we took action on increased from 808,900 in Q1 to 3.3 million in Q2. These increases were driven by expanding our proactive detection technologies in English and Spanish.
Another area where we saw improvements due to our technology was terrorism content. On Facebook, the amount of content we took action on increased from 6.3 million in Q1 to 8.7 million in Q2. And thanks to both improvements in our technology and the return of some content reviewers, we took action on more content connected to organized hate on Instagram, and to bullying and harassment on both Facebook and Instagram.
We’ve made progress combating hate on our apps, but we know we have more to do to ensure everyone feels comfortable using our services. That’s why we’ve established new inclusive teams and task forces, including the Instagram Equity Team and the Facebook Inclusive Product Council, to help us build products that are deliberately fair and inclusive. We’re also launching a Diversity Advisory Council that will provide input based on lived experience on a variety of topics and issues, and we’re updating our policies to more specifically account for certain kinds of implicit hate speech, such as content depicting blackface or stereotypes about Jewish people controlling the world. We also continued to prioritize the removal of content that violates our policy against hate groups. Since October 2019, we’ve conducted 14 strategic network disruptions to remove 23 different banned organizations, over half of which supported white supremacy.
We want people to be confident that the numbers we report around harmful content are accurate, so we will undergo an independent, third-party audit, starting in 2021, to validate the numbers we publish in our Community Standards Enforcement Report.
As the COVID-19 pandemic evolves, we’ll continue adapting our content review process, improving our technology and bringing more reviewers back online.