
Hard Questions: How Do We Measure Our Efforts to Keep Bad Content off Facebook?

Hard Questions is a series from Facebook that addresses the impact of our products on society.

By Alex Schultz, Vice President of Analytics

Measurement done right helps organizations make smart decisions about the choices they face – rather than simply relying on anecdote or intuition. To paraphrase the well-known management consultant Peter Drucker: what you measure, you can improve. It’s an axiom we try to live by when it comes to safety and security at Facebook.

We built Facebook to be a place where people can openly discuss different ideas, even ideas that some people may find controversial or offensive. But we also want to make sure our service is safe for everyone. Sometimes that is a hard balance to strike.

Last month, for the first time, we published the guidelines our review team uses to decide what stays up and what comes down on Facebook. And today, again for the first time, we’re sharing the data we use internally to measure our effectiveness in enforcing these standards. It’s a work in progress, and we will likely change our methodology over time as we learn more about what’s important and what works.

Today’s report gives you a detailed description of our internal processes and data methodology. It’s an attempt to open up about how Facebook is doing at removing bad content from our site, so you can be the judge. And it’s designed to make it easy for scholars, policymakers and community groups to give us feedback so that we can do better over time.

We can’t change the fact that people will always try to post bad things on Facebook – whether it is hate speech, terrorist propaganda or images that exploit children. But we can try to control how many times content that violates our Community Standards is seen. As the head of data analytics, I lead the team that is responsible for measuring our work in this area, so the company can better understand how effective we are at enforcing our policies.

Total Impact

The most important measure we use is impact: the harm any piece of bad content has when it’s posted on Facebook. Our conceptual formula for looking at that is pretty straightforward. We measure both how often the content is seen (“views”) and how severe the impact of each view is on the people who see it as well as the broader community:

Total impact = views of violating content x impact of violating content per view
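To make the conceptual formula concrete, here is a minimal sketch in Python. The per-view impact weights are hypothetical placeholders invented purely for illustration; as explained below, impact can’t really be reduced to a single number per category.

```python
# Minimal sketch of the conceptual formula:
#   total impact = views of violating content x impact of violating content per view
# The per-view weights below are hypothetical placeholders, not real values.
HYPOTHETICAL_IMPACT_PER_VIEW = {
    "nudity": 1.0,
    "graphic_violence": 2.0,
    "hate_speech": 3.0,
    "terrorist_propaganda": 5.0,
}

def total_impact(views: int, violation_type: str) -> float:
    return views * HYPOTHETICAL_IMPACT_PER_VIEW[violation_type]

# A hate speech post seen 10,000 times, under the made-up weight of 3.0 per view:
print(total_impact(10_000, "hate_speech"))  # 30000.0
```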

Views are relatively easy to count. We simply measure the number of times people see something. But it’s much harder to gauge impact accurately. For example, how negatively impacted is someone who sees hate speech versus someone who sees graphic violence? And what’s the percentage difference? How do you compare the severity of terrorist propaganda to images of children being sexually abused?

The answer is we can’t numerically compare their impact. But we can categorize and prioritize these different types of harm. And yes, you’re probably thinking that’s going to be subjective. And you’re right: there is an element of subjectivity involved in how we do this. But prioritization helps us devote resources most quickly to the content we find most imminently dangerous.

Here’s an example. Suppose someone posts a naked image on Facebook. That’s against our policies and we would work to remove it. In fact, our nudity filters are very effective now. But what if that image was posted by a man seeking revenge on the woman who broke up with him? We would consider that to have a greater negative impact and we would escalate it in our removals queue – somewhat like triage in an emergency room. Every patient matters. But the most critical cases go first.
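As a rough illustration of that triage idea, here is a small sketch of a removals queue ordered first by a hypothetical severity tier and then by views. The tiers and numbers are invented; the point is only how escalation reorders the queue.

```python
import heapq

# Hypothetical severity tiers: lower number = more urgent, like ER triage.
SEVERITY_TIER = {"child_exploitation": 0, "revenge_nudity": 1, "nudity": 3}

def enqueue(queue, violation_type, views):
    # More severe tiers come out first; within a tier, widely seen content first.
    heapq.heappush(queue, (SEVERITY_TIER[violation_type], -views, violation_type))

removals_queue = []
enqueue(removals_queue, "nudity", views=50_000)
enqueue(removals_queue, "revenge_nudity", views=200)

# The revenge-posted image is escalated ahead of the more widely seen
# but less harmful post, despite having far fewer views.
print(heapq.heappop(removals_queue)[2])  # revenge_nudity
```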

How Often is Bad Content Seen on Facebook?

Within each specific category of abuse – whether it is fake accounts, hate speech, nudity, etc. – we care most about how often content that violates our standards is actually seen relative to the total number of times any content is seen on Facebook. This is what we call prevalence. It’s a measure of how often something is seen, not how long something stays up on Facebook. If a piece of hate speech is seen a million times in 10 minutes, that’s far worse than a piece seen 10 times in 30 minutes.

We calculate this metric by selecting a sample of content from Facebook and then labeling how much of it shouldn’t be there. You can read about this in more depth in our guide to this data. Because we’re measuring prevalence, we focus on how often content is seen, not on the sheer amount of content that violates our rules. In other words, we don’t treat all content equally: the sample is weighted by views, so a post seen one million times is one million times more likely to be sampled than a post seen once, and that’s a good thing.
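Here is a rough sketch of what a views-weighted prevalence estimate could look like, under the approach described above: sample views rather than posts, label whether each sampled view landed on violating content, and take the violating share. The data and labels are invented; in practice the sampled content is labeled by reviewers.

```python
import random

# Invented example data: (post_id, views, violates_standards).
posts = [
    ("post_a", 1_000_000, False),
    ("post_b", 10, True),
    ("post_c", 250_000, True),
]

def estimate_prevalence(posts, sample_size=10_000, seed=0):
    """Estimate the share of *views* that land on violating content.

    Sampling is weighted by views, so a post seen one million times is
    one million times more likely to be drawn than a post seen once.
    """
    rng = random.Random(seed)
    _, views, labels = zip(*posts)
    sampled = rng.choices(range(len(posts)), weights=views, k=sample_size)
    violating_views = sum(labels[i] for i in sampled)
    return violating_views / sample_size

print(f"Estimated prevalence: {estimate_prevalence(posts):.1%}")
```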

We understand that some people believe we should publish metrics about how long it takes Facebook to remove content like posts or images which violate our standards. But we don’t believe time is the best metric for effectively policing our service, because what matters most is how many people see violating content, not how long it stays up. It’s also important to remember that we do make mistakes. Sometimes it’s because we removed content that didn’t actually violate our Community Standards (we call these false positives). Other times it’s because we didn’t remove content that we should have. Any mistake can hurt real people, and that motivates us to do better.

Facebook is investing heavily in more people to review content that is flagged. But as Guy Rosen explained two weeks ago, new technology like machine learning, computer vision and artificial intelligence helps us find more bad content, more quickly – far more quickly, and at a far greater scale, than people ever can. In fact, artificial intelligence means we can get bad content off Facebook before it’s even reported. That’s why we now measure how often Facebook flags content before it’s reported. Improving this rate over time is critical because it directly reduces the negative impact bad content has on people who use Facebook.
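As a hedged sketch with invented numbers, one plausible way to compute such a rate is the share of actioned violating content that was flagged by our systems before any user reported it:

```python
def proactive_rate(flagged_before_report: int, total_actioned: int) -> float:
    # Share of actioned violating content found before any user report.
    return flagged_before_report / total_actioned

# Invented numbers for illustration: 1.9M pieces of content actioned,
# 1.86M of them flagged before anyone reported them.
print(f"{proactive_rate(1_860_000, 1_900_000):.1%}")  # 97.9%
```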

Finding the Good

As a gay man, I have experienced some of the best and worst of what our service and the people using it can offer. Facebook helped me find a community when I felt alone and isolated. And our privacy settings helped me share with friends and still control how broadly I came out. At the same time, though, as I have become more public given my role at Facebook, I have been exposed to torrents of abuse that have sometimes left me feeling depressed and scared.

These kinds of personal experiences make us all want to ensure that people’s experiences on Facebook are positive. I hope this report, blog and guide show you how we’re trying to do that and open up a conversation on how we can do it better. Because what you measure, you can improve.


