
Hard Questions: Who Reviews Objectionable Content on Facebook — And Is the Company Doing Enough to Support Them?

Hard Questions is a series from Facebook that addresses the impact of our products on society.

By Ellen Silver, VP of Operations

Facebook has firm rules against posting hate speech, sharing nude pictures, or uploading videos that harass another person. Our content policy team writes these rules, drawing on their expertise in everything from counterterrorism to child sexual exploitation — and in close consultation with experts around the world. When people violate the standards, we have teams in place to respond. Our product team builds essential tools like artificial intelligence and machine learning that help us remove much of this content — in some cases before it is even seen. But technology can’t catch everything — including content where context is key, like hate speech and bullying — so we also rely on another critical means of enforcement: the thousands of content reviewers we have all over the world.

This work is not easy. It sometimes means looking at disturbing or violent content — and making decisions about what action to take, mindful of both the cultural context and the Community Standards that establish our policies.

We’ve talked a lot recently about these standards and our use of AI for enforcement. But we haven’t shared much about our reviewers. This is partly for safety reasons. As we saw with the horrific shooting at YouTube’s headquarters earlier this year, content reviewers are subject to real danger that makes us wary of sharing too many details about where these teams are located or the identities of the people who review. The day-to-day challenges our reviewers face and the obscurity of their work often lead to confusion about what our reviewers actually do and whether they’re safe doing it. And in recent weeks, more people have been asking about where we draw the line for what’s allowed on Facebook and whether our content reviewers are capable of applying these standards in a fair, consistent manner around the world. This post is an attempt to dispel the mystery — to talk about how the process works, how we train reviewers and how we enforce our standards consistently.

Content review at this scale has never been done before. After all, there has never been a platform where so many people communicate in so many different languages across so many different countries and cultures. We recognize the enormity of this challenge and the responsibility we have to get it right.

Who They Are

The teams working on safety and security at Facebook are doubling in size this year to 20,000. This includes our growing team of 7,500 content reviewers — a mix of full-time employees, contractors and companies we partner with. (Update on December 4, 2018 at 10:00AM PT: The teams working on safety and security at Facebook are now over 30,000. About half of this team are content reviewers — a mix of full-time employees, contractors and companies we partner with.) This lets us scale globally, covering every time zone and over 50 languages. (Update on March 6, 2019 at 12:00PM PT: We now have just over 20 content review sites around the world in such countries as Germany, Ireland, Latvia, Spain, Portugal, the Philippines and the United States.) Our reviewers come from many backgrounds, reflect the diversity of our community, and bring a wide array of professional experiences, from veterans to former public sector workers.

Language proficiency is key, and it lets us review content around the clock. If someone reports a Tagalog-language post in the middle of the night in the Philippines, for instance, there will always be a Tagalog-speaking reviewer — either locally or based in another time zone — to whom the report can be routed for quick review. But a good number of reports — like nudity — don’t actually need language support and are assigned to reviewers around the world regardless of language. And if something is reported in a language that we don’t support 24/7, we can work with translation companies and other experts who can help us understand local context and language to assist in reviewing it.
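To make that routing concrete, here is a minimal sketch of how language- and time-zone-aware routing might work. It is illustrative only: the site names, shift windows and fallback queues are invented for this example, not details of Facebook’s actual systems.

```python
from datetime import datetime, timezone
from typing import Optional

# Hypothetical coverage map: language -> [(site, active UTC hours)].
# Site names and shift windows are invented for this sketch.
COVERAGE = {
    "tl": [("manila", range(0, 12)), ("austin", range(12, 24))],  # Tagalog
    "de": [("essen", range(6, 18)), ("dublin", range(18, 24))],
}

def route_report(language: Optional[str], hour_utc: int) -> str:
    """Find a site with an active shift for the report's language.

    Reports that need no language support (e.g. nudity) pass None and
    can go to any site; unsupported languages fall back to translators.
    """
    if language is None:
        return "any-available-site"
    for site, shift in COVERAGE.get(language, []):
        if hour_utc in shift:
            return site
    return "translation-partner-queue"

print(route_report("tl", datetime.now(timezone.utc).hour))
```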

In addition to language proficiency, we also look for people who know and understand the culture. For example, we want to hire Spanish speakers from Mexico — not Spain — to review reports from Mexico, as it often takes a local to understand the specific meaning of a word or the political climate in which a post is shared.

This job is not for everyone — so to set people up for success, it’s important that we hire people who will be able to handle the inevitable challenges that the role presents. Just as we look for language proficiency and cultural competency, we also screen for resiliency. This means that we look, for example, at a candidate’s ability to deal with violent imagery. And of course, we meet all applicable local employment laws and requirements. (Updated on July 27, 2018 at 8:00AM PT to clarify hiring practices.)

First, Weeks of Intensive Training

In order to do their job well — and be well when doing it — people need good training. This means making sure our content reviewers have a strong grasp on our policies, the rationale behind them and how to apply them. Training for all reviewers, including full-time employees, contractors and partner companies, encompasses three areas:

  • Pre-training, which includes what to expect on the job. Each hire also learns how to access resiliency and wellness resources, and gets information on how to connect with a psychologist when they need additional support.
  • Hands-on Learning, including a minimum of 80 hours with a live instructor, followed by hands-on practice in a replica of the review system so new hires can work in a “real” environment. Following this training, reviewers get a report highlighting areas where they’re applying policies consistently and accurately and where they need more practice.
  • Ongoing Coaching: Once hired, all reviewers get regular coaching, refresher sessions and policy updates.

The Process on the Ground

Anyone can report a piece of content they think violates our standards. Once something is reported, it’s automatically routed to a content review team based on language or the type of violation. This way the team that has specific training in the relevant policy area reviews the report — and, if needed, can escalate it to subject matter experts on the Community Operations Escalations team or the content policy team.
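As a rough illustration of this first routing step, the sketch below sends each report to a queue based on its violation type, with an escalation path for harder cases. The queue names, Report fields and escalation rule are assumptions made for the example, not Facebook’s real pipeline.

```python
from dataclasses import dataclass

@dataclass
class Report:
    content_id: str
    language: str
    violation_type: str        # e.g. "hate_speech", "nudity", "bullying"
    needs_escalation: bool = False

# Hypothetical mapping from policy area to the team trained on it.
QUEUES = {
    "hate_speech": "hate-speech-review-team",
    "nudity": "nudity-review-team",
    "bullying": "bullying-review-team",
}

def route(report: Report) -> str:
    """Send a report to the specifically trained team, or escalate it."""
    if report.needs_escalation:
        return "community-operations-escalations"
    return QUEUES.get(report.violation_type, "general-review-queue")

print(route(Report("post-123", "en", "hate_speech")))  # hate-speech-review-team
```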

Each content reviewer is then assigned a queue of reported posts to evaluate one by one. Sometimes this means looking just at the post itself to determine whether it should be allowed — such as an image containing nudity. But other times the context is key, and so additional information, like comments on the reported post, is provided as well. For instance, a word that’s historically been used as a racial slur might be shared as hate speech by one person but can be a form of self-empowerment if used by another. Context helps reviewers apply our standards and decide whether something should be left up or taken down.

We want to keep personal perspectives and biases out of the equation entirely — so, in theory, two people reviewing the same posts would always make the same decision. Of course, judgments can vary if policies aren’t sufficiently prescriptive. That’s why we always strive to clarify and improve our policies — and audit a sample of reviewer decisions each week to make sure we uncover instances where the wrong call was made. Our auditors are even audited on a regular basis. In addition, we have leadership at each office to provide guidance, as well as weekly check-ins with policy experts to answer any questions.
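The weekly audit described above is, at bottom, a statistical quality check: re-review a random sample of decisions and measure agreement. Here is a minimal sketch under that reading; the function names and sample data are hypothetical stand-ins, not Facebook’s actual tooling.

```python
import random

def weekly_audit(decisions, auditor, sample_size=100, seed=None):
    """Re-review a random sample of (post, reviewer_call) pairs and
    return the fraction where the auditor reaches the same decision."""
    rng = random.Random(seed)
    sample = rng.sample(decisions, min(sample_size, len(decisions)))
    agreed = sum(1 for post, call in sample if auditor(post) == call)
    return agreed / len(sample)

# Hypothetical data: even-numbered posts should be removed.
decisions = [("post-%d" % i, "remove" if i % 2 == 0 else "allow")
             for i in range(1000)]
auditor = lambda post: "remove" if int(post.split("-")[1]) % 2 == 0 else "allow"
print(weekly_audit(decisions, auditor, seed=7))  # 1.0, i.e. full agreement
```

Auditing the auditors is the same computation one level up, with a more senior reviewer’s calls standing in for ground truth.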

A common misconception about content reviewers is that they’re driven by quotas and pressured to make hasty decisions. Let me be clear: content reviewers aren’t required to evaluate any set number of posts — after all, nudity is typically very easy to establish and can be reviewed within seconds, whereas something like impersonation could take much longer to confirm. We provide general guidelines for how long we think it might take to review different types of content so that we can plan the staffing we need, but we encourage reviewers to take the time they need.
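Those time guidelines translate into staffing through simple arithmetic: expected report volume times expected handle time, divided by productive reviewer time. Every number below is invented purely to show the shape of the calculation, not a real Facebook figure.

```python
# All figures are illustrative assumptions, not real Facebook data.
reports_per_day = {"nudity": 400_000, "impersonation": 50_000}
avg_seconds     = {"nudity": 15,      "impersonation": 300}

total_seconds = sum(reports_per_day[k] * avg_seconds[k] for k in reports_per_day)
productive_seconds_per_shift = 6 * 3600   # ~6 focused hours per reviewer

print(total_seconds / productive_seconds_per_shift)
# (400,000*15 + 50,000*300) / 21,600 ≈ 972 reviewer-shifts per day
```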

Taking Care of the Reviewers

At Facebook we have a team of four clinical psychologists across three regions who are tasked with designing, delivering and evaluating resiliency programs for everyone who works with graphic and objectionable content. This group also works with our vendor partners and their dedicated resiliency teams to help build industry standards.

All content reviewers — whether full-time employees, contractors, or those employed by partner companies — have access to mental health resources, including trained professionals onsite for both individual and group counseling. And all reviewers have full health care benefits.

We also pay attention to the environment where our reviewers work. There’s a misconception that content reviewers work in dark basements, lit only by the glow of their computer screens. At least for Facebook, that couldn’t be further from the truth. Content review offices look a lot like other Facebook offices. And because these teams deal with such serious issues, the environment they work in and support around them is important to their well-being.

Content reviewers in Essen, Germany

Room to Grow

Content review at this scale is uncharted territory. And to a certain extent, we have to figure it out as we go — and grow. We care deeply about the people who do this work. They are the unrecognized heroes who keep Facebook safe for all the rest of us. We owe it to our reviewers to keep them safe too.

 
