Meta

How We Review Content

When we began reviewing content on Facebook over a decade ago, our system relied on people to report things they saw and thought were inappropriate. Our teams then reviewed and removed individual pieces of content if they broke our rules. A lot has changed since then. We’ve developed more transparent rules, and we’re establishing an Oversight Board that will soon begin to review some of our most difficult content decisions. But the biggest change has been the role of technology in content moderation. As our Community Standards Enforcement Report shows, our technology to detect violating content is improving and playing a larger role in content review. Our technology helps us in three main areas:

- Proactive detection: finding potentially violating content before anyone reports it.
- Automation: taking action, such as removing content, when there’s an extremely high likelihood of violation.
- Prioritization: ranking content so our review teams spend their time on the most impactful cases.

Together, these three aspects of technology have transformed our content review process and greatly improved our ability to moderate content at scale. However, there are still areas where it’s critical for people to review content. For example, discerning whether someone is the target of bullying can be extremely nuanced and contextual. In addition, AI relies on a large amount of training data from reviews done by our teams in order to identify meaningful patterns of behavior and find potentially violating content.

That’s why our content review system needs both people and technology to be successful. Our teams focus on cases where it’s essential to have people review, and we leverage technology to help us scale our efforts in areas where it can be most effective.

Using Technology More

For reviewing violations like spam, we’ve used an automation-first approach successfully for years. Moving forward, we’re going to use our automated systems first to review more content across all types of violations. This means our systems will proactively detect and remove more content when there’s an extremely high likelihood of violation, and we’ll be able to better prioritize the most impactful work for our review teams. With this change, our teams will be less likely to review lower-severity reports that aren’t being widely seen or shared on our platforms. Critically, they will also spend more time reviewing user appeals, as well as training and measuring the quality of our automation systems. This will help us identify new trends and respond to people attempting to post violating content in an adversarial manner.
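To illustrate the idea, here is a minimal sketch of what an automation-first triage step could look like. The threshold, weights, and function names below are hypothetical assumptions for illustration, not a description of Facebook’s actual systems: content a classifier flags with extremely high confidence is actioned automatically, and everything else is ranked for human review by severity and how widely it is being seen.

```python
from dataclasses import dataclass, field
import heapq

# Hypothetical values; a real system would tune these per violation type.
AUTO_REMOVE_THRESHOLD = 0.98   # "extremely high likelihood of violation"
SEVERITY_WEIGHT = 10.0
REACH_WEIGHT = 1.0

@dataclass(order=True)
class ReviewItem:
    priority: float                      # lower value = reviewed sooner (min-heap)
    post_id: str = field(compare=False)  # not used when ordering the queue

def triage(post_id: str, violation_score: float, severity: float,
           predicted_reach: float, review_queue: list[ReviewItem]) -> str:
    """Route a post: auto-remove clear violations, otherwise queue it for human
    review so the most severe, most widely seen content is reviewed first."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_removed"
    # Negate the weighted score because heapq pops the smallest value first.
    priority = -(SEVERITY_WEIGHT * severity + REACH_WEIGHT * predicted_reach) * violation_score
    heapq.heappush(review_queue, ReviewItem(priority, post_id))
    return "queued_for_human_review"

queue: list[ReviewItem] = []
print(triage("post_1", 0.99, severity=0.9, predicted_reach=0.2, review_queue=queue))  # auto_removed
print(triage("post_2", 0.60, severity=0.8, predicted_reach=0.9, review_queue=queue))  # queued
print(triage("post_3", 0.40, severity=0.1, predicted_reach=0.1, review_queue=queue))  # queued
print(heapq.heappop(queue).post_id)  # post_2 comes up for human review first
```

In this sketch, raising the automation threshold trades review workload for caution: fewer posts are removed automatically, and more land in the human review queue, which mirrors the balance between technology and people described above.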

We will continue working to make our platform as safe as we can by combining the strengths of people and technology to find and remove violating content faster.