By Tessa Lyons, Product Manager
Misleading or harmful content on Facebook comes in many forms, from annoyances like clickbait to more damaging material like hate speech and violent content. When we detect this kind of content in News Feed, we take one of three actions: remove it, reduce its spread, or inform people with additional context.
Our Community Standards and Ads Policies outline what is not permitted on the platform, such as hate speech, fake accounts, and terrorist content. When we find content that violates these standards, we remove it. Other types of problematic content, like clickbait and sensationalism, don't violate our policies but are still misleading or harmful, and our community has told us they don't want to see them on Facebook. When we find this kind of content, we reduce its spread in News Feed through ranking, and, increasingly, we inform people with additional context so they can decide for themselves whether to read, trust, or share it.
Learn more about our three-pronged approach in the video above.
See also:
News Feed Ranking in Three Minutes Flat (video)
Machine Learning, Fact-Checkers and the Fight Against False News (video)
Designing New Ways to Give Context to News Stories