
New Research Shows Facebook Making Strides Against False News

Update on February 7, 2019 at 5:30 p.m. PT:

Like the three studies we summarized in October 2018, a new study conducted by researchers at the University of Michigan, Princeton University, the University of Exeter and Washington University in St. Louis offers encouraging findings about the scale and spread of misinformation since the 2016 US elections.

This research complements previous findings, summarized in the post below, that the overall consumption of false news on Facebook has declined since the 2016 US elections. While we're encouraged by these studies, we know that misinformation is a highly adversarial space, and we're committed to doing our part in the long-term effort that fighting false news will require.

Originally published on October 19, 2018:

By Tessa Lyons, Product Manager

How much false news exists on Facebook?

It’s challenging to measure because there’s a lack of consensus on how to actually define “false news.” There’s a spectrum of content — from satire to opinion to hoaxes intentionally crafted to deceive — that different people might put in that category. Misinformation is also an ever-evolving problem. Its purveyors are continually trying new tactics to seed it, so we’re constantly updating our approach to catching it.

Because it’s evolving, we’ll never be able to catch every instance of false news — though we can learn from the things we do miss. As a company, one of our biggest priorities is understanding the total volume of misinformation on Facebook and seeing that number trend downward.

Today, we’re encouraged by three new, separate pieces of research from Hunt Alcott, Matthew Gentzkow, and Chuan Yu; University of Michigan; and the French newspaper Le Monde. Using different methodologies and definitions of false news, all three reports find that the overall volume of false news on Facebook is trending downward, and as Alcott, Gentzkow and Yu note, “efforts by Facebook following the 2016 election to limit the diffusion of misinformation may have had a meaningful impact.” Facebook did not fund or provide data for this research, which was conducted on publicly available data.

First, Allcott, Gentzkow and Yu published a study on misinformation on Facebook and Twitter (PDF). The researchers began by compiling a list of 570 sites that had been identified as false news sources in previous studies and online lists. They then measured the volume of Facebook engagements (shares, comments and reactions) and Twitter shares for all stories from these 570 sites published between January 2015 and July 2018. They found that on Facebook, interactions with these false news sites declined by more than half after the 2016 election, which they read as evidence that the company's post-election efforts to limit misinformation "may have had a meaningful impact."
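To make that kind of measurement concrete, here is a minimal sketch of the aggregation step in Python. The file name, column names and site list are hypothetical; this illustrates the general approach, not the paper's actual pipeline.

```python
import pandas as pd

# Hypothetical input: one row per story, with the site's domain, the
# publication date, and total Facebook engagements (shares + comments
# + reactions). The file and column names are illustrative.
false_news_sites = {"example-hoax-site.com", "another-iffy-site.net"}

stories = pd.read_csv("stories.csv", parse_dates=["published"])

# Keep only stories from flagged sites inside the study window.
window = stories[
    stories["site"].isin(false_news_sites)
    & stories["published"].between("2015-01-01", "2018-07-31")
]

# Aggregate engagements by calendar month to get the trend line.
monthly = window.set_index("published")["fb_engagements"].resample("MS").sum()

# Compare average monthly engagement before and after the 2016 election.
election = pd.Timestamp("2016-11-08")
before = monthly[monthly.index < election].mean()
after = monthly[monthly.index >= election].mean()
print(f"Post-election engagement is {after / before:.0%} of the pre-election average")
```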

Last week, a University of Michigan study on misinformation (PDF) reported similar findings about the effectiveness of our work. The Michigan team compiled a list of sites that commonly share misinformation by looking at judgments made by two external organizations, Media Bias/Fact Check and OpenSources. Because this categorization is based on somewhat "imprecise criteria and fallible human judgments," the researchers lightheartedly refer to these sites as "Iffy" sites and have coined a metric called the "Iffy Quotient" to measure how much content from those sites has been distributed on Facebook and Twitter.

The Iffy Quotient for Facebook spiked in 2016, leading up to the US election, but began improving in mid-2017. When an "engagement-weighted" version of the Iffy Quotient is considered, one that factors in social media interactions like likes, comments and shares, the study finds that Facebook now carries about half as much "Iffy" content as Twitter and has returned to its early-2016 level. The researchers cite some of our recent efforts, noting that "Facebook may have been more successful at detecting and countering fake accounts and manipulation campaigns, more aggressive in discounting ranking signals that are associated with Iffy sites, or more aggressive in demoting particular articles and sources."
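As a rough illustration, here is a sketch of how an Iffy-Quotient-style metric could be computed for a single day. It assumes you already have the day's most popular URLs and their engagement counts; the function name, inputs and simple domain matching are ours, and the Michigan team's exact definition may differ.

```python
from urllib.parse import urlparse

def iffy_quotient(urls, iffy_domains, engagements=None):
    """Share of a day's popular URLs that come from 'Iffy' sites.

    urls:         the day's most popular story URLs
    iffy_domains: set of domains flagged by external raters
    engagements:  optional per-URL interaction counts; if given,
                  compute the engagement-weighted variant instead.
    """
    domains = [urlparse(u).netloc.removeprefix("www.") for u in urls]
    flags = [d in iffy_domains for d in domains]

    if engagements is None:
        # Plain version: every popular URL counts equally.
        return sum(flags) / len(flags)

    # Engagement-weighted version: weight each URL by its likes,
    # comments and shares, so viral stories count for more.
    total = sum(engagements)
    iffy = sum(e for e, f in zip(engagements, flags) if f)
    return iffy / total

# Illustrative usage with made-up data:
urls = [
    "https://www.example-news.com/story1",
    "https://iffy-example.net/story2",
    "https://www.example-news.com/story3",
]
print(iffy_quotient(urls, {"iffy-example.net"}))                   # 0.33...
print(iffy_quotient(urls, {"iffy-example.net"}, [100, 900, 200]))  # 0.75
```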

Slate, The Verge and The Hill summarize the studies well and offer useful analysis for anyone who wants to dig deeper into the findings.

Les Décodeurs, the fact-checking arm of the French newspaper Le Monde, also released research this week analyzing social media engagement from Facebook, Twitter, Pinterest and Reddit on 630 French websites, which it sorted into three tiers: "unreliable or dubious sites" (the worst of the worst), "less reliable sites" and "recommendable sites." Like Allcott, Gentzkow and Yu, Les Décodeurs found that Facebook engagement with "unreliable or dubious sites" has halved in France since 2015. Engagement with the "less reliable" sites followed a similar curve from 2015 through early 2017 and now sits significantly below its 2015 level, while the "recommendable" sites have maintained their audience.
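A comparison like that reduces to grouping engagement by site category over time. Here is a minimal sketch under assumed inputs; the file, column names and category labels are hypothetical, not Les Décodeurs' actual schema.

```python
import pandas as pd

# Hypothetical input: one row per (site, year) with the site's
# reliability category and its total Facebook engagements that year.
engagement = pd.read_csv("french_sites_engagement.csv")

# One engagement curve per reliability tier: total engagement per
# category per year.
curves = (
    engagement.groupby(["category", "year"])["fb_engagements"]
    .sum()
    .unstack("year")
)

# Ratio of the latest year to the 2015 baseline for each category;
# a value near 0.5 for the "unreliable" tier would match the halving
# Les Décodeurs reports.
print(curves[2018] / curves[2015])
```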

As these studies attest, we've invested heavily in our strategy to fight misinformation since the 2016 US elections, and we continue to roll out updates that address some of the outstanding gaps these studies mention. Because all of us have a responsibility to curb the spread of false news, we're also collaborating with third parties to help solve this challenging issue: learning from academics, scaling our partnerships with third-party fact-checkers, and talking to civil society organizations and journalists about how we can work together.