Meta

Responding to The Guardian: A Fact-Check on Fact-Checking

By Meredith Carden, Head of News Integrity Partnerships

Today The Guardian published a story about our third-party fact-checking program. We'd like to respond, as the piece contains several inaccuracies and is based primarily on the account of a single fact-checker who hasn't been involved with the Facebook fact-checking program for six months. We provided information to The Guardian, but they chose not to include all of it.

We have been committed to fighting misinformation for years now and have strong relationships with our third-party fact-checking partners — we now have 35 partners in 24 countries around the world. We value our ongoing partnerships and the work that these journalists do, and we’re planning to expand the program to even more countries in 2019.

Ensuring Process Integrity and Avoiding Conflicts of Interest

Contrary to a claim in the story, we absolutely do not ask fact-checkers to prioritize debunking content about our advertisers.

In reality, here’s how fact-checking works: the primary way we surface potentially false news to third-party fact-checkers is via machine learning, which relies on a number of signals like feedback from people who use Facebook and the number of comments expressing disbelief (e.g., “No way this is real!”). Fact-checkers then go through a list of this potentially false content and choose for themselves what to fact-check — they are under no obligation to fact-check anything from the list, and if they’d like, they can rate stories that Facebook hasn’t added to the list (which they often do). As soon as something is rated “false,” it is automatically de-prioritized in News Feed, and where it does appear, we’ll show Related Articles including the fact-checker’s article below it. These processes are automated.
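As a rough illustration of the triage flow described above, the surfacing-and-demotion logic can be sketched as follows. This is a minimal sketch, not Facebook's actual implementation: the signal names, weights, thresholds, and keyword heuristic are all hypothetical placeholders.

```python
# Hypothetical sketch of the triage flow described above: score posts on
# user-feedback signals, queue high scorers for fact-checkers to choose from,
# and demote anything rated "false". All names and weights are illustrative.

DISBELIEF_PHRASES = ("no way this is real", "fake", "hoax")

def disbelief_comments(comments):
    """Count comments expressing disbelief (a simple keyword heuristic)."""
    return sum(any(p in c.lower() for p in DISBELIEF_PHRASES) for c in comments)

def candidate_score(post):
    """Combine signals into a single 'potentially false' score."""
    return 2.0 * post["user_reports"] + 1.0 * disbelief_comments(post["comments"])

def build_review_queue(posts, threshold=3.0):
    """Posts fact-checkers may choose from; they are not obliged to pick any,
    and they can also rate stories that never appeared in this queue."""
    flagged = [p for p in posts if candidate_score(p) >= threshold]
    return sorted(flagged, key=candidate_score, reverse=True)

def apply_rating(post, rating):
    """Once rated 'false', demotion and Related Articles are automatic."""
    if rating == "false":
        post["feed_priority"] = "demoted"
        post["related_articles"] = ["<fact-checker's debunk article>"]
    return post
```

The key property the sketch captures is that the machine-learning signals only build the candidate list; the "false" rating itself comes from a human fact-checker, after which the demotion step runs automatically.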

Efficacy of Fact-Checking

Fact-checking is highly effective in fighting misinformation: when a fact-checker rates something “false,” we’re able to reduce future impressions of that content by an average of 80%. We also use these ratings to take action against Pages and websites that repeatedly share misinformation: we de-prioritize all content from actors who repeatedly earn “false” ratings on content they share, and we remove their advertising and monetization rights.
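The enforcement described in this paragraph could be modeled roughly as follows. The 80% average impression reduction comes from the text above; the repeat-offender strike threshold and the function names are assumptions made for illustration only.

```python
# Rough model of the enforcement described above. The 80% average reduction
# in future impressions comes from the post itself; the repeat-offender
# threshold is a hypothetical placeholder, not Facebook's actual rule.

FALSE_RATING_DEMOTION = 0.80   # average reduction in future impressions
REPEAT_OFFENDER_STRIKES = 3    # illustrative threshold only

def expected_impressions(baseline, rated_false):
    """Future impressions of a piece of content after a 'false' rating."""
    return baseline * (1 - FALSE_RATING_DEMOTION) if rated_false else baseline

def page_penalties(false_rating_count):
    """Actions applied to a Page or website that repeatedly shares misinformation."""
    repeat_offender = false_rating_count >= REPEAT_OFFENDER_STRIKES
    return {
        "all_content_deprioritized": repeat_offender,
        "advertising_rights": not repeat_offender,
        "monetization_rights": not repeat_offender,
    }
```

Note that the penalties operate at two levels: the individual piece of rated content is demoted, while repeat offenders see all of their content demoted and lose advertising and monetization rights.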

Three separate new studies have all found that the overall volume of false news on Facebook has been decreasing since we put our third-party fact-checking program and other anti-misinformation measures in place. We’re also providing independent researchers at Social Science One with a privacy-protected data set that will help them study the effects of misinformation on social media and elections. This research may help us better measure volumes of false news — and our progress against it — over time.

We have heard feedback from our partners that they’d like more data on the impact of their efforts, so we’re starting to send fact-checkers quarterly reports with customized statistics reflecting each fact-checker’s work and impact.

Safety of Journalists and Fact-Checking Partners

We take the safety of journalists seriously. Through the Facebook Journalism Project, we provide online safety resources, including information on how to turn on two-factor authentication, manage privacy settings, block harassment, control location sharing, report abusive content and impersonation, and more. Our Community Standards on credible violence aim to protect journalists and other vulnerable people or groups. We remove content, disable accounts, and work with local authorities when we become aware of content that we believe poses a genuine risk of physical harm or direct threats to safety.

Fact-Checking Ratings

We share specific rating guidelines with our third-party partners, and we make these publicly available so publishers and others have insight into our program. We’ve also started to provide safety training as part of onboarding our new partners, and we’re working to expand this to our existing partners as well.

Misinformation is an ever-evolving problem that we’re committed to fighting globally, and the work that third-party fact-checkers do to help review content on Facebook is a valued and important piece of this effort.