
Remove, Reduce, Inform: New Steps to Manage Problematic Content

By Guy Rosen, VP of Integrity, and Tessa Lyons, Head of News Feed Integrity

Since 2016, we have used a strategy called “remove, reduce, and inform” to manage problematic content across the Facebook family of apps. This involves removing content that violates our policies, reducing the spread of problematic content that does not violate our policies and informing people with additional information so they can choose what to click, read or share. This strategy applies not only during critical times like elections, but year-round.

Today in Menlo Park, we met with a small group of journalists to discuss our latest remove, reduce and inform updates to keep people safe and maintain the integrity of information that flows through the Facebook family of apps:

REMOVE

REDUCE

INFORM

REMOVE: Facebook

We have Community Standards that outline what is and isn’t allowed on Facebook. They cover things like bullying, harassment and hate speech, and we remove content that goes against our standards as soon as we become aware of it. Last year, we made it easier for people to understand what we take down by publishing our internal enforcement guidelines and giving people the right to appeal our decisions on individual posts.

The Community Standards apply to all parts of Facebook, but different areas pose different challenges when it comes to enforcement. For the past two years, for example, we’ve been working on the Safe Communities Initiative, with the mission of protecting people from harmful groups and from harm in groups. Using a combination of the latest technology, human review and user reports, we identify and remove harmful groups, whether they are public, closed or secret. We can now proactively detect many types of violating content posted in groups before anyone reports them, and sometimes before more than a few people, if any, have seen them.
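As a rough illustration of how those signals might feed a single decision, here is a minimal sketch in Python. The field names, thresholds and scoring model are illustrative assumptions, not a description of Facebook’s actual systems.

```python
# Hypothetical sketch: combining an automated classifier score, user reports,
# and human review into one enforcement decision for a piece of group content.
# Thresholds and field names are illustrative assumptions only.

from dataclasses import dataclass, field
from typing import List


@dataclass
class GroupPost:
    post_id: str
    classifier_score: float                 # assumed model output, 0.0-1.0 likelihood of violating policy
    user_reports: int = 0                   # number of people who reported the post
    reviewer_verdicts: List[bool] = field(default_factory=list)  # True = reviewer found a violation


def should_remove(post: GroupPost,
                  auto_remove_threshold: float = 0.95,
                  review_threshold: float = 0.60,
                  report_threshold: int = 3) -> str:
    """Return one of: 'remove', 'send_to_review', 'keep'."""
    # A human verdict, once available, takes precedence over the automated score.
    if post.reviewer_verdicts:
        return "remove" if any(post.reviewer_verdicts) else "keep"

    # Very high-confidence automated detections can be actioned proactively,
    # potentially before anyone has reported or even seen the post.
    if post.classifier_score >= auto_remove_threshold:
        return "remove"

    # Lower-confidence detections or several user reports go to human review.
    if post.classifier_score >= review_threshold or post.user_reports >= report_threshold:
        return "send_to_review"

    return "keep"


if __name__ == "__main__":
    print(should_remove(GroupPost("p1", classifier_score=0.97)))                  # remove
    print(should_remove(GroupPost("p2", classifier_score=0.70)))                  # send_to_review
    print(should_remove(GroupPost("p3", classifier_score=0.10, user_reports=5)))  # send_to_review
```

In this sketch, human review always overrides the automated score, mirroring the combination of technology, human review and user reports described above.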

Similarly, Stories presents its own set of enforcement challenges when it comes to both removing and reducing the spread of problematic content. The format’s ephemerality means we need to work even faster to remove violating content. The creative tools that let people add text, stickers and drawings to photos and videos can be abused to mask violating content. And because people enjoy stringing together multiple Story cards, we have to evaluate Stories holistically: if we review individual cards in a vacuum, we can miss violations of our standards.
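To show why card-by-card review can fall short, here is a small hypothetical sketch that scores a Story both per card and as a combined sequence. The scoring function and threshold are stand-ins for whatever classifiers would actually be used, not Facebook’s real enforcement code.

```python
# Hypothetical sketch: evaluating a Story as a whole rather than card by card.
# The scoring function here is a placeholder for real text/image classifiers.

from typing import Callable, List


def evaluate_story(cards: List[str],
                   score_fn: Callable[[str], float],
                   threshold: float = 0.8) -> bool:
    """Return True if the Story should be flagged for review.

    A violation split across several cards (or masked by stickers and
    overlaid text on a single card) may only cross the threshold when
    the cards are considered together.
    """
    # First, check each card in isolation.
    if any(score_fn(card) >= threshold for card in cards):
        return True

    # Then check the concatenated sequence, so context that spans cards
    # (e.g. a violating phrase spread over several cards) is not missed.
    return score_fn(" ".join(cards)) >= threshold


def toy_score(text: str) -> float:
    # Keyword stand-in for a real classifier, used only to show the call pattern.
    return 1.0 if "buy illegal goods here" in text.lower() else 0.0


# Neither card matches on its own; only the combined sequence does.
print(evaluate_story(["Buy illegal", "goods here"], toy_score))  # True
```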

In addition to describing this context and history, today we discussed how we will be:

For more information on Facebook’s “remove” work, see these videos on the people and process behind our Community Standards development.

REDUCE: Facebook

Some types of content are problematic but don’t warrant removal under our Community Standards, such as misinformation and clickbait. People often tell us they don’t like seeing this kind of content, and while we allow it to be posted on Facebook, we want to make sure it isn’t broadly distributed.

Over the last two years, we’ve focused heavily on reducing misinformation on Facebook. We’re getting better at enforcing against fake accounts and coordinated inauthentic behavior; we’re using both technology and people to fight the rise in photo and video-based misinformation; we’ve deployed new measures to help people spot false news and get more context about the stories they see in News Feed; and we’ve grown our third-party fact-checking program to include 45 certified fact-checking partners who review content in 24 languages.

Today, members of the Facebook News Feed team discussed how we will be:

For more information about how we set goals for our “reduce” initiatives on Facebook, read this blog post.

REDUCE: Instagram

Today we discussed how Instagram is working to ensure that the content we recommend to people is both safe and appropriate for the community. We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines, so that those types of posts are no longer recommended on our Explore and hashtag pages. For example, a sexually suggestive post will still appear in Feed if you follow the account that posts it, but this type of content may not appear for the broader community in Explore or on hashtag pages.
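A minimal sketch of this kind of two-tier distribution decision might look like the following. The labels and surfaces are simplified assumptions for illustration, not Instagram’s actual ranking or recommendation code.

```python
# Hypothetical sketch: a post that violates the guidelines is removed; a
# non-violating but borderline ("inappropriate") post stays in followers'
# feeds but is not eligible for Explore or hashtag recommendations; and
# everything else is eligible everywhere. Labels are illustrative only.

from enum import Enum, auto


class ContentLabel(Enum):
    VIOLATING = auto()   # breaks the Community Guidelines -> removed
    BORDERLINE = auto()  # allowed (e.g. sexually suggestive) but not recommended
    OK = auto()


def distribution(label: ContentLabel) -> dict:
    if label is ContentLabel.VIOLATING:
        return {"feed": False, "explore": False, "hashtag_pages": False}
    if label is ContentLabel.BORDERLINE:
        # Followers still see it in Feed; it just isn't recommended more broadly.
        return {"feed": True, "explore": False, "hashtag_pages": False}
    return {"feed": True, "explore": True, "hashtag_pages": True}


print(distribution(ContentLabel.BORDERLINE))
# {'feed': True, 'explore': False, 'hashtag_pages': False}
```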

INFORM: Facebook

We’re investing in features and products that give people more information to help them decide what to read, trust and share. In the past year, we began offering more context on articles in News Feed with the Context Button, which shows the publisher’s Wikipedia entry, the age of the website, and where and how often the content has been shared. We helped Page owners improve their content with the Page Quality tab, which shows them which of their posts were removed for violating our Community Standards or were rated “False,” “Mixture” or “False Headline” by third-party fact-checkers. We also discussed how we will be:
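To make the kind of context these features surface more concrete, here is a hypothetical sketch of the data a Context Button-style summary might pull together for an article. The field names, types and rendering are illustrative assumptions, not Facebook’s actual data model or API.

```python
# Hypothetical sketch of the signals a Context Button-style surface could
# assemble for a shared article. Field names are illustrative only.

from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class ArticleContext:
    url: str
    publisher: str
    publisher_wikipedia_url: Optional[str]  # None if the publisher has no Wikipedia entry
    domain_first_seen: date                 # a rough proxy for the website's age
    total_shares: int
    top_share_locations: List[str]          # where the article has mostly been shared


def render_context(ctx: ArticleContext) -> str:
    """Build the short summary a reader might see before deciding to share."""
    about = ctx.publisher_wikipedia_url or "No Wikipedia entry found for this publisher"
    return (
        f"{ctx.publisher} ({ctx.url})\n"
        f"About the publisher: {about}\n"
        f"Website first seen: {ctx.domain_first_seen:%B %Y}\n"
        f"Shared {ctx.total_shares:,} times, mostly in: {', '.join(ctx.top_share_locations)}"
    )


# Example with clearly fictional data.
print(render_context(ArticleContext(
    url="https://example-news.test/story",
    publisher="Example News",
    publisher_wikipedia_url=None,
    domain_first_seen=date(2019, 1, 15),
    total_shares=12345,
    top_share_locations=["US", "UK"],
)))
```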

INFORM: Messenger

At today’s event, Messenger highlighted new and updated privacy and safety features that give people greater control over their experience and help them stay informed.