How We Review Content

By Jeff King, Director of Product Management, Integrity, and Kate Gotimer, Director of Global Operations

When we began reviewing content on Facebook over a decade ago, our system relied on people to report things they saw and thought were inappropriate. Our teams then reviewed and removed individual pieces of content if they broke our rules. A lot has changed since then. We’ve developed more transparent rules, and we’re establishing an Oversight Board that will soon begin to review some of our most difficult content decisions. But the biggest change has been the role of technology in content moderation. As our Community Standards Enforcement Report shows, our technology to detect violating content is improving and playing a larger role in content review. Our technology helps us in three main areas: 

  • Proactive Detection: Artificial intelligence (AI) has improved to the point that it can detect violations across a wide variety of areas without relying on users to report content to Facebook, often with greater accuracy than reports from users. This helps us detect harmful content and prevent it from being seen by hundreds or thousands of people.
  • Automation: AI has also helped scale the work of our content reviewers. Our AI systems automate decisions for certain areas where content is highly likely to be violating. This helps scale content decisions without sacrificing accuracy so that our reviewers can focus on decisions where more expertise is needed to understand the context and nuances of a particular situation. Automation also makes it easier to take action on identical reports, so our teams don’t have to spend time reviewing the same things multiple times. These systems have become even more important during the COVID-19 pandemic with a largely remote content review workforce. 
  • Prioritization: Instead of simply looking at reported content in chronological order, our AI prioritizes the most critical content to be reviewed, whether it was reported to us or detected by our proactive systems. This ranking system puts first the content that is most harmful to users, based on factors such as virality, severity of harm and likelihood of violation. When our systems are near-certain that content breaks our rules, they may remove it automatically; where there is less certainty, they prioritize the content for our teams to review (a simplified sketch of this triage logic follows this list).
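
To make the combination of automation and prioritization concrete, here is a minimal sketch of this kind of triage logic. It is purely illustrative: the signal names, the multiplicative scoring and the 0.99 cutoff are assumptions invented for this example, not our actual models, weights or thresholds.

```python
from dataclasses import dataclass

# Illustrative sketch only: the signal names, weights and thresholds are
# hypothetical assumptions, not the actual models or values in production.

@dataclass
class ContentSignals:
    violation_likelihood: float  # classifier score in [0, 1]
    severity: float              # estimated harm if violating, in [0, 1]
    virality: float              # predicted reach/share velocity, in [0, 1]

AUTO_REMOVE_THRESHOLD = 0.99  # assumed "near-certain" cutoff

def triage(s: ContentSignals):
    """Return an (action, review_priority) pair for one piece of content."""
    if s.violation_likelihood >= AUTO_REMOVE_THRESHOLD:
        # Near-certain violations are actioned automatically,
        # freeing reviewers for cases that need human judgment.
        return "auto_remove", 0.0
    # Everything else is ranked for human review: content that is more
    # likely violating, more severe and more viral moves up the queue.
    priority = s.violation_likelihood * s.severity * s.virality
    return "queue_for_review", priority

# Example: a likely but not certain violation with high predicted reach
# lands near the front of the review queue.
action, score = triage(ContentSignals(violation_likelihood=0.9,
                                      severity=0.8, virality=0.95))
```

Multiplying the factors means content ranks highest only when it is risky on every dimension; a production system would more likely learn such a ranking from reviewer decisions than hand-code it.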

Together, these three aspects of technology have transformed our content review process and greatly improved our ability to moderate content at scale. However, there are still areas where human review is critical. For example, discerning whether someone is the target of bullying can be extremely nuanced and contextual. In addition, AI relies on a large amount of training data from reviews done by our teams in order to identify meaningful patterns of behavior and find potentially violating content.

That’s why our content review system needs both people and technology to be successful. Our teams focus on cases where it’s essential to have people review and we leverage technology to help us scale our efforts in areas where it can be most effective. 

Using Technology More

For violations like spam, we've successfully used an automation-first approach for years. Moving forward, we're going to use our automated systems first to review more content across all violation types. This means our systems will proactively detect and remove more content when there's an extremely high likelihood of violation, and we'll be able to better prioritize the most impactful work for our review teams. With this change, our teams will be less likely to review lower-severity reports that aren't being widely seen or shared on our platforms. Critically, they will also spend more time reviewing user appeals and training and measuring the quality of our automation systems. This will help us identify new trends and respond to people attempting to post violating content in adversarial ways.
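
One way to picture the quality-measurement step is to audit a random sample of automated decisions against human reviewer judgments, as in the sketch below. The function names, sampling rate and record layout are illustrative assumptions, not a description of our actual tooling.

```python
import random

# Hypothetical sketch: audit a random sample of automated decisions against
# human reviewer judgments. Names, rate and record layout are assumptions.

def sample_for_audit(automated_decisions, rate=0.01, seed=0):
    """Select a random fraction of automated decisions for human re-review."""
    rng = random.Random(seed)
    return [d for d in automated_decisions if rng.random() < rate]

def automation_precision(audited):
    """Fraction of audited automated actions that reviewers upheld.

    Each item is a dict like {"content_id": ..., "reviewer_upheld": bool}.
    """
    if not audited:
        return None
    upheld = sum(1 for d in audited if d["reviewer_upheld"])
    return upheld / len(audited)
```

Audits that reviewers overturn, like successful user appeals, are a natural source of corrected labels that can feed back into model training.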

We will continue working to make our platform as safe as we can by combining the strengths of people and technology to find and remove violating content faster. 


