By Katie Harbath, Global Politics and Government Outreach Director and Samidh Chakrabarti, Director of Product Management, Civic Engagement
Over the past two years, we have made massive investments to help protect the integrity of elections — not only addressing threats we’ve seen on our platform in the past, but also anticipating new challenges and responding to new risks.
Our approach to this problem — like the problem itself — is multifaceted. Our tactics include blocking and removing fake accounts; finding and removing bad actors; limiting the spread of false news and misinformation; and bringing unprecedented transparency to political advertising. To support this work, we now have more than 30,000 people working on safety and security across the company, three times as many as we had in 2017. We have also improved our machine learning capabilities, which allow us to find and remove violating behavior more efficiently and effectively. These improvements have helped in many ways, including our work to fight coordinated inauthentic behavior: advances in technology allow us to identify and block bad activity at scale, while our expert investigators manually search for and take down more sophisticated networks.
We do all of this while working closely with law enforcement, regulators, election commissions, other technology companies, researchers, academics and civil society groups. While these efforts are global, we also customize our work to individual countries based on research and threat assessments that begin many months before ballots are cast.
Here are some of the additional ways we are working to strengthen our platform ahead of elections in 2019.
Expanding Our Political Advertising Transparency Policies
Earlier this month in Nigeria, we began temporarily disallowing electoral ads purchased from outside the country ahead of the election, and we will implement the same policy in Ukraine ahead of its election. In advance of the European Parliament election, in late March we will launch additional tools in the EU to help prevent foreign interference and make political and issue advertising on Facebook more transparent. Advertisers will need to be authorized to purchase political ads; we’ll give people more information about ads related to politics and issues; and we’ll create a publicly searchable library that stores these ads for up to seven years. The library will include each ad’s budget range, the number of people it reached and the demographics of who saw it, including age, gender and location. These transparency tools for electoral ads will also launch in India in February and in Ukraine and Israel before their elections, with a global expansion before the end of June.
More Resources for Rapid Response for Elections in Europe and Asia
To expand on the work we did to fight misinformation ahead of the Brazil presidential election and the US midterms, we are planning to set up two new regional operations centers focused on election integrity, located in our Dublin and Singapore offices. This will allow our global teams to work better across regions in the run-up to elections, and will further strengthen coordination and speed up response times between staff in Menlo Park and teams in-country. These teams will add a layer of defense against fake news, hate speech and voter suppression, and will work cross-functionally with our threat intelligence, data science, engineering, research, community operations, legal and other teams.
Growing Our Capacity to Address Misinformation and False News
Our work to fight fake news continues to improve. Across News Feed, we follow a three-part framework to improve the quality and authenticity of stories. First, we remove content that violates our Community Standards, which help ensure the safety and security of the platform. Then, for content that does not directly violate our Community Standards but still undermines the authenticity of the platform — like clickbait or sensational material — we reduce its distribution in News Feed so fewer people see it. Finally, we inform people by giving them more context on the information they see in News Feed. For example, when someone comes across a story, they can tap “About this article” to see more details on the article and the publisher.
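To make the remove / reduce / inform framework more concrete, here is a minimal illustrative sketch of how such a triage flow could be expressed in code. It is not Facebook’s actual system; every name and threshold in it (Post, violates_community_standards, is_borderline, the 0.2 demotion factor) is a hypothetical stand-in.

```python
# Illustrative sketch only: a simplified triage flow mirroring the
# remove / reduce / inform framework described above. All names and
# numbers are hypothetical stand-ins, not Facebook's actual systems.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    distribution_weight: float = 1.0          # 1.0 = normal ranking
    context_labels: list = field(default_factory=list)
    removed: bool = False

def violates_community_standards(post: Post) -> bool:
    # Placeholder for policy classifiers and human review.
    return False

def is_borderline(post: Post) -> bool:
    # Placeholder for clickbait / sensationalism detection.
    return False

def triage(post: Post) -> Post:
    # 1) Remove: content that violates Community Standards comes down.
    if violates_community_standards(post):
        post.removed = True
        return post
    # 2) Reduce: borderline content stays up but is shown to fewer people.
    if is_borderline(post):
        post.distribution_weight *= 0.2       # demotion factor is illustrative
    # 3) Inform: attach context such as "About this article" details.
    post.context_labels.append("about_this_article")
    return post
```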
We also continue to expand our third-party fact-checking program. Today, the program covers content in 16 languages, and we’ve rolled out the ability for fact-checkers to review photos and videos in addition to article links, because we know multimedia-based misinformation makes up a growing share of false news. When a fact-checker rates a post as false, we show it lower in News Feed to significantly reduce the number of people who see it and the likelihood that it will spread further. Pages and domains that repeatedly share false news will see their distribution reduced and will lose the ability to monetize or advertise on Facebook. This helps curb the spread of financially motivated false news.
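As a rough illustration of the fact-checking mechanics described above, the sketch below shows how a “false” rating could translate into a ranking demotion and a repeat-offender penalty for the sharing Page. The multiplier, strike count and class names are assumptions for illustration only, not Facebook’s actual values or code.

```python
# Illustrative sketch only: translating a fact-check rating into a
# ranking demotion plus a repeat-offender penalty. All constants and
# names are hypothetical, not Facebook's actual policy values.

FALSE_RATING_DEMOTION = 0.2    # show rated-false posts to far fewer people
REPEAT_OFFENDER_STRIKES = 3    # strikes before a Page loses monetization/ads

class PageRecord:
    def __init__(self, name: str):
        self.name = name
        self.false_ratings = 0
        self.can_monetize = True
        self.can_advertise = True

def apply_fact_check(post_weight: float, rating: str, page: PageRecord) -> float:
    """Return the post's new distribution weight after a fact-check rating."""
    if rating != "false":
        return post_weight
    page.false_ratings += 1
    if page.false_ratings >= REPEAT_OFFENDER_STRIKES:
        # Repeat offenders lose the ability to monetize or advertise.
        page.can_monetize = False
        page.can_advertise = False
    # Demote the post so fewer people see it and it spreads less.
    return post_weight * FALSE_RATING_DEMOTION
```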
While these efforts represent an improvement over the past few years, we know the work of improving security is never finished. There have always been people trying to undermine democracy. We are up against determined adversaries who attack on many fronts, and we recognize our role and responsibility. We will never stop all the bad actors, but we’re making real progress, and we are committed to continuing to improve.
For more on our work ahead of the European Parliament elections in May, see here.