How We’re Tackling Misinformation Across Our Apps

By Guy Rosen, VP, Integrity

Originally published in Morning Consult.

This week, the House Energy and Commerce Committee will examine how technology platforms like Facebook are tackling misinformation online. It is tempting to think of misinformation as a single challenge that can be solved with a single solution. Unfortunately, that's not the case, and thinking of it that way misses the opportunity to address it comprehensively. Tackling misinformation actually requires addressing several distinct challenges, including fake accounts, deceptive behavior, and misleading and harmful content. As the person responsible for the integrity of our products, I wanted to provide an update on how we approach each of them.

Let’s start with fake accounts. We take a hard line against this activity and block millions of fake accounts each day, most of them at the time of creation. Between October and December of 2020, we disabled more than 1.3 billion of them. We also investigate and take down covert foreign and domestic influence operations that rely on fake accounts. Over the past three years, we’ve removed over 100 networks of coordinated inauthentic behavior (CIB) from our platform, and we keep the public informed about our efforts through our monthly CIB reports.

We also crack down on deceptive behavior. We’ve found that one of the best ways to fight this behavior is to disrupt the economic incentive structure behind it, so we’ve built teams and systems to detect and enforce against the inauthentic tactics that drive much of the clickbait on our platform. We also use artificial intelligence to help us detect fraud and enforce our policies against inauthentic spam accounts.
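
To make the idea concrete, here is a minimal sketch of how many weak behavioral signals can be combined into a single risk score for an account. The signal names, weights, and thresholds below are invented for illustration and are not Facebook's actual detection features.

from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int
    posts_per_hour: float
    duplicate_post_ratio: float    # fraction of posts repeating the same link or text
    friend_requests_per_day: int

def spam_score(signals: AccountSignals) -> float:
    """Combine weak behavioral signals into a single risk score in [0, 1]."""
    score = 0.0
    if signals.account_age_days < 2:
        score += 0.3                              # brand-new accounts are higher risk
    if signals.posts_per_hour > 20:
        score += 0.3                              # inhumanly fast posting rate
    score += 0.3 * signals.duplicate_post_ratio   # repetitive content is spam-like
    if signals.friend_requests_per_day > 100:
        score += 0.1                              # aggressive mass friending
    return min(score, 1.0)

# A day-old account posting the same link 40 times an hour scores at the maximum.
suspect = AccountSignals(account_age_days=1, posts_per_hour=40,
                         duplicate_post_ratio=1.0, friend_requests_per_day=300)
print(f"risk score: {spam_score(suspect):.2f}")   # -> risk score: 1.00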

Misinformation can also be posted by real people, sometimes in good faith. To address this challenge, we’ve built a global network of more than 80 independent fact-checkers, who review content in more than 60 languages. When they rate something as false, we reduce its distribution so fewer people see it and add a warning label with more information for anyone who encounters it. We know that when a warning screen is placed on a post, people don’t click through to view it 95% of the time. We also notify the person who posted it, and we reduce the distribution of Pages, Groups, and domains that repeatedly share misinformation. For the most serious kinds of misinformation, such as false claims about COVID-19 and vaccines or content intended to suppress voting, we remove the content entirely.
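
The enforcement flow described above can be pictured as a simple pipeline: a "false" rating triggers demotion, a warning label, and a notification, and sources that accumulate ratings are demoted as a whole. The toy sketch below uses hypothetical names and a made-up strike threshold; it is not Facebook's implementation.

from collections import defaultdict

STRIKE_THRESHOLD = 3                 # assumed value; the real policy is not public
strikes = defaultdict(int)           # strikes per Page, Group, or domain

def notify_author(author: str) -> None:
    print(f"notify {author}: an independent fact-checker rated your post false")

def demote_source(source: str) -> None:
    print(f"reduce distribution for repeat offender: {source}")

def apply_fact_check(post: dict, rating: str) -> None:
    """Apply the enforcement steps described above to one rated post."""
    if rating != "false":
        return
    post["distribution_multiplier"] = 0.2          # demote so fewer people see it
    post["warning_label"] = "False information. Reviewed by independent fact-checkers."
    notify_author(post["author"])
    strikes[post["source"]] += 1                   # track repeat sharing by source
    if strikes[post["source"]] >= STRIKE_THRESHOLD:
        demote_source(post["source"])

# A domain that keeps sharing rated-false posts is eventually demoted itself.
for _ in range(3):
    post = {"author": "user123", "source": "example-news.com",
            "distribution_multiplier": 1.0}
    apply_fact_check(post, rating="false")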

Over the past several years, we have invested heavily in protecting our community, and we now have over 35,000 people working on these challenges. We’re making progress thanks to these significant investments in both people and technology such as artificial intelligence. Since the pandemic began, we’ve used our AI systems to take down COVID-19-related material that global health experts have flagged as misinformation, and to detect copies when someone tries to share them. As a result, we’ve removed more than 12 million pieces of content about COVID-19 and vaccines.
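
Detecting copies of flagged content is, at its core, a near-duplicate matching problem. As a rough illustration, the sketch below compares new posts against a bank of flagged text using word-shingle Jaccard similarity; production systems rely on learned embeddings and media hashing, and the flagged example and threshold here are assumptions.

import re

def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Break text into overlapping n-word chunks, ignoring case and punctuation."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity between the two texts' shingle sets."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Bank of content already rated false by fact-checkers (hypothetical example).
FLAGGED = ["drinking bleach cures covid 19 doctors don't want you to know"]

def is_near_duplicate(post: str, threshold: float = 0.5) -> bool:
    return any(similarity(post, flagged) >= threshold for flagged in FLAGGED)

# A lightly edited copy still matches the flagged original.
print(is_near_duplicate(
    "BREAKING: drinking bleach cures covid 19, doctors don't want you to know!"))
# -> True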

But it’s not enough to just limit the misinformation people might see. We also connect people to reliable information from trusted experts. We do this through centralized hubs like our COVID-19 Information Center, Climate Science Information Center, and US 2020 Voting Information Center; through labels that we attach to certain posts with reliable information from experts; and through notifications that we run in people’s feeds on both Facebook and Instagram.

Despite all of these efforts, there are some who believe that we have a financial interest in turning a blind eye to misinformation. The opposite is true. We have every motivation to keep misinformation off of our apps and we’ve taken many steps to do so at the expense of user growth and engagement.

For example, in 2018 we changed our News Feed ranking system to connect people to meaningful posts from their friends and family. We made this change knowing that it would reduce some of the most engaging forms of content, like short-form video, and lead to people spending less time on Facebook, which is exactly what happened. The amount of time people spent on Facebook decreased by roughly 5% in the quarter in which we made the change.
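
One way to picture that kind of ranking shift: up-weight signals of meaningful interaction, such as posts from friends and family and expected comments, and down-weight passive consumption, such as expected watch time. The feature names and weights in this sketch are invented for illustration and do not reflect Facebook's actual ranking model.

def rank_score(post: dict) -> float:
    """Score a feed candidate; weights are illustrative assumptions."""
    score = 0.0
    if post["from_friend_or_family"]:
        score += 3.0                                # up-weighted after the change
    score += 2.0 * post["expected_comments"]        # active, meaningful interaction
    score += 0.1 * post["expected_watch_time_min"]  # passive viewing, down-weighted
    return score

candidates = [
    {"id": "viral_video",  "from_friend_or_family": False,
     "expected_comments": 0.1, "expected_watch_time_min": 30.0},
    {"id": "friend_photo", "from_friend_or_family": True,
     "expected_comments": 0.8, "expected_watch_time_min": 2.0},
]

# Under these weights, a friend's photo outranks the more "engaging" viral video.
for post in sorted(candidates, key=rank_score, reverse=True):
    print(post["id"], round(rank_score(post), 2))
# -> friend_photo 4.8
#    viral_video 3.2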

As with every integrity challenge, our enforcement will never be perfect, but we are improving it all the time. Nobody can eliminate misinformation from the internet entirely, and we will keep using research, teams, and technology to tackle it in the most comprehensive and effective way possible.


