
Helping to Protect the 2020 US Elections

By Guy Rosen, VP of Integrity; Katie Harbath, Public Policy Director, Global Elections; Nathaniel Gleicher, Head of Cybersecurity Policy; and Rob Leathern, Director of Product Management

Learn more about how we’re protecting the US 2020 elections.

Update on January 27, 2020 at 1:30PM PT: To continue running ads about social issues, elections or politics in the US, advertisers must show a Confirmed Page Owner. To help ensure all advertisers have time to complete this process, we are extending the compliance deadline to February 8, 2020.

Original post on October 21, 2019:

We have a responsibility to stop abuse and election interference on our platform. That’s why we’ve made significant investments since 2016 to better identify new threats, close vulnerabilities and reduce the spread of viral misinformation and fake accounts. 

Today, almost a year out from the 2020 elections in the US, we’re announcing several new measures to help protect the democratic process and providing an update on initiatives already underway:

Fighting foreign interference

  • Combating inauthentic behavior, including an updated policy
  • Protecting the accounts of candidates, elected officials, their teams and others through Facebook Protect 

Increasing transparency

  • Making Pages more transparent, including showing the confirmed owner of a Page
  • Labeling state-controlled media on their Page and in our Ad Library
  • Making it easier to understand political ads, including a new US presidential candidate spend tracker

Reducing misinformation

  • Preventing the spread of misinformation, including clearer fact-checking labels 
  • Fighting voter suppression and interference, including banning paid ads that suggest voting is useless or advise people not to vote
  • Helping people better understand the information they see online, including an initial investment of $2 million to support media literacy projects

Fighting Foreign Interference

Combating Inauthentic Behavior

Over the last three years, we’ve worked to identify new and emerging threats and remove coordinated inauthentic behavior across our apps. In the past year alone, we’ve taken down over 50 networks worldwide, many ahead of major democratic elections. As part of our effort to counter foreign influence campaigns, this morning we removed four separate networks of accounts, Pages and Groups on Facebook and Instagram for engaging in coordinated inauthentic behavior. Three of them originated in Iran and one in Russia. They targeted the US, North Africa and Latin America. We have identified these manipulation campaigns as part of our internal investigations into suspected Iran-linked inauthentic behavior, as well as ongoing proactive work ahead of the US elections.

We took down these networks based on their behavior, not the content they posted. In each case, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action. We have shared our findings with law enforcement and industry partners. More details can be found here.

As we’ve improved our ability to disrupt these operations, we’ve also built a deeper understanding of different threats and how best to counter them. We investigate and enforce against any type of inauthentic behavior. However, the most appropriate way to respond to someone boosting the popularity of their posts in their own country may not be the best way to counter foreign interference. That’s why we’re updating our inauthentic behavior policy to clarify how we deal with the range of deceptive practices we see on our platforms, whether foreign or domestic, state or non-state.

Protecting the Accounts of Candidates, Elected Officials and Their Teams

Today, we’re launching Facebook Protect to further secure the accounts of elected officials, candidates, their staff and others who may be particularly vulnerable to targeting by hackers and foreign adversaries. As we’ve seen in past elections, they can be targets of malicious activity. However, because campaigns are generally run for a short period of time, we don’t always know who these campaign-affiliated people are, making it harder to help protect them.

Beginning today, Page admins can enroll their organization’s Facebook and Instagram accounts in Facebook Protect and invite members of their organization to participate in the program as well. Participants will be required to turn on two-factor authentication, and their accounts will be monitored for hacking, such as login attempts from unusual locations or unverified devices. And, if we discover an attack against one account, we can review and protect other accounts affiliated with that same organization that are enrolled in our program. Read more about Facebook Protect and enroll here.
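In practical terms, the monitoring described here is a form of login-anomaly detection: each sign-in is compared against an account's history of locations and devices. The sketch below is a minimal, hypothetical illustration of that idea in Python; it is not Facebook's actual detection system, and every name in it is an invented assumption.

```python
# Hypothetical sketch of login-anomaly flagging, loosely modeled on the
# signals described above (unusual locations, unverified devices).
# This is NOT Facebook's implementation; all names here are invented.
from dataclasses import dataclass, field

@dataclass
class AccountHistory:
    known_countries: set = field(default_factory=set)
    known_devices: set = field(default_factory=set)

def is_suspicious_login(history: AccountHistory, country: str, device_id: str) -> bool:
    """Flag a login when both the location and the device are unseen."""
    new_country = country not in history.known_countries
    new_device = device_id not in history.known_devices
    return new_country and new_device

def record_login(history: AccountHistory, country: str, device_id: str) -> None:
    """After a login is verified (e.g. via two-factor authentication), trust it."""
    history.known_countries.add(country)
    history.known_devices.add(device_id)

# Usage: a login from a new country on a new device gets flagged for
# review; once verified, later logins from there are considered normal.
history = AccountHistory()
record_login(history, "US", "laptop-1")
print(is_suspicious_login(history, "US", "laptop-1"))   # False
print(is_suspicious_login(history, "RU", "phone-99"))   # True
```

Because accounts are enrolled at the organization level, a flag raised on one account can also prompt review of its enrolled peers, as described above.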

Increasing Transparency

Making Pages More Transparent
We want to make sure people are using Facebook authentically, and that they understand who is speaking to them. Over the past year, we’ve taken steps to ensure Pages are authentic and more transparent by showing people the Page’s primary country location and whether the Page has merged with other Pages. This gives people more context on the Page and makes it easier to understand who’s behind it. 

Increasingly, we’ve seen people fail to disclose the organization behind their Page as a way to make the Page appear independently run. To address this, we’re adding more information about who is behind a Page, including a new “Organizations That Manage This Page” tab that will display the Page’s “Confirmed Page Owner”: the organization’s legal name and verified city, phone number or website.

Initially, this information will only appear on Pages with large US audiences that have gone through Facebook’s business verification. In addition, Pages that have gone through the new authorization process to run ads about social issues, elections or politics in the US will also have this tab. And starting in January, these advertisers will be required to show their Confirmed Page Owner. 

If we find that a Page is concealing its ownership in order to mislead people, we will require it to successfully complete the verification process and show more information in order to stay up.

 

Labeling State-Controlled Media

We want to help people better understand the sources of news content they see on Facebook so they can make informed decisions about what they’re reading. Next month, we’ll begin labeling media outlets that are wholly or partially under the editorial control of their government as state-controlled media. This label will appear both on their Page and in our Ad Library.

We will hold these Pages to a higher standard of transparency because they combine the opinion-making influence of a media organization with the strategic backing of a state. 

We developed our own definition and standards for state-controlled media organizations with input from more than 40 experts around the world specializing in media, governance, human rights and development. Those consulted represent leading academic institutions, nonprofits and international organizations in this field, including Reporters Without Borders, Center for International Media Assistance, European Journalism Center, Oxford Internet Institute, Center for Media, Data and Society (CMDS) at the Central European University, the Council of Europe, UNESCO and others. 

It’s important to note that our policy draws an intentional distinction between state-controlled media and public media, which we define as any entity that is publicly financed, retains a public service mission and can demonstrate its independent editorial control. At this time, we’re focusing our labeling efforts only on state-controlled media. 

We will update the list of state-controlled media on a rolling basis beginning in November. And, in early 2020, we plan to expand our labeling to specific posts and apply these labels on Instagram as well. For any organization that believes we have applied the label in error, there will be an appeals process. 

Making it Easier to Understand Political Ads

In addition to making Pages more transparent, we’re updating the Ad Library, Ad Library Report and Ad Library API to help journalists, lawmakers, researchers and others learn more about the ads they see. This includes:

  • A new US presidential candidate spend tracker, so that people can see how much candidates have spent on ads
  • Adding additional spend details at the state or regional level to help people analyze advertiser and candidate efforts to reach voters geographically
  • Making it clear if an ad ran on Facebook, Instagram, Messenger or Audience Network
  • Adding useful API filters, providing programmatic access to download ad creatives and publishing a repository of frequently used API scripts

In addition to updates to the Ad Library API, in November, we will begin testing a new database with researchers that will enable them to quickly download the entire Ad Library, pull daily snapshots and track day-to-day changes.
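To make these updates concrete, a minimal query against the Ad Library API’s ads_archive endpoint might look like the sketch below, using Python’s requests library. The endpoint, parameters and field names (including delivery_by_region and publisher_platforms, which correspond to the regional-spend and platform details above) follow the public documentation at the time of writing, but treat the exact names and the version string as assumptions to verify against the current docs.

```python
# Minimal sketch: querying the Facebook Ad Library API for US political ads.
# Endpoint and parameter names follow the public documentation at the time
# of writing; verify them (and the API version) against the current docs.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # requires identity-confirmed developer access

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "['US']",
    "search_terms": "election",
    # Fields covering the updates described above: spend, regional delivery,
    # and the platforms (Facebook, Instagram, Messenger, Audience Network)
    # an ad ran on.
    "fields": "page_name,ad_creative_body,spend,delivery_by_region,publisher_platforms",
    "limit": 25,
}

resp = requests.get("https://graph.facebook.com/v5.0/ads_archive", params=params)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("spend"), ad.get("publisher_platforms"))
```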

Visit our Help Center to learn more about the changes to Pages and the Ad Library.

Reducing Misinformation

Preventing the Spread of Viral Misinformation
On Facebook and Instagram, we work to keep confirmed misinformation from spreading by reducing its distribution so fewer people see it: on Instagram, we remove it from Explore and hashtag pages, and on Facebook, we reduce its distribution in News Feed. On Instagram, we also make accounts that repeatedly post misinformation harder to find by filtering their content out of Explore and hashtag pages. And on Facebook, if Pages, domains or Groups repeatedly share misinformation, we’ll continue to reduce their overall distribution and place restrictions on their ability to advertise and monetize.

Over the next month, content across Facebook and Instagram that has been rated false or partly false by a third-party fact-checker will be labeled more prominently, so that people can better decide for themselves what to read, trust and share. These labels will be shown on top of false and partly false photos and videos, including on top of Stories content on Instagram, and will link out to the fact-checker’s assessment.

Much like we do on Facebook when people try to share known misinformation, we’re also introducing a new pop-up that will appear when people attempt to share posts on Instagram that include content that has been debunked by third-party fact-checkers.

In addition to clearer labels, we’re also working to take faster action to prevent misinformation from going viral, especially given that quality reporting and fact-checking take time. In many countries, including the US, if we have signals that a piece of content is false, we temporarily reduce its distribution pending review by a third-party fact-checker.
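Put schematically, the flow described here is: a signal suggests a post may be false, its distribution is temporarily reduced, and the final treatment depends on the fact-checker’s rating. The following is a simplified, hypothetical sketch of that decision flow, not Facebook’s actual ranking code; the names, thresholds and demotion factors are all assumptions.

```python
# Simplified, hypothetical sketch of the "reduce first, verify later" flow
# described above. The states, names, thresholds and demotion factors are
# illustrative assumptions, not Facebook's actual ranking logic.
from enum import Enum

class Verdict(Enum):
    PENDING = "pending_review"
    FALSE = "false"
    PARTLY_FALSE = "partly_false"
    NOT_RATED = "not_rated"

def distribution_multiplier(false_signal_score: float, verdict: Verdict) -> float:
    """Return a factor applied to a post's normal distribution."""
    if verdict in (Verdict.FALSE, Verdict.PARTLY_FALSE):
        return 0.2   # rated by a fact-checker: strong, labeled demotion
    if verdict is Verdict.PENDING and false_signal_score > 0.8:
        return 0.5   # strong falseness signal, review pending: temporary demotion
    return 1.0       # no signal, or cleared by review: normal distribution

# A post with a strong falseness signal is demoted while awaiting review,
# then demoted further if rated false, or restored if cleared.
print(distribution_multiplier(0.9, Verdict.PENDING))    # 0.5
print(distribution_multiplier(0.9, Verdict.FALSE))      # 0.2
print(distribution_multiplier(0.9, Verdict.NOT_RATED))  # 1.0
```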

Fighting Voter Suppression and Intimidation
Attempts to interfere with or suppress voting undermine our core values as a company, and we work proactively to remove this type of harmful content. Ahead of the 2018 midterm elections, we extended our voter suppression and intimidation policies to prohibit:

  • Misrepresentation of the dates, locations, times and methods for voting or voter registration (e.g. “Vote by text!”);
  • Misrepresentation of who can vote, qualifications for voting, whether a vote will be counted and what information and/or materials must be provided in order to vote (e.g. “If you voted in the primary, your vote in the general election won’t count.”); and 
  • Threats of violence relating to voting, voter registration or the outcome of an election.

We remove this type of content regardless of who it’s coming from. Ahead of the midterm elections, our Elections Operations Center removed more than 45,000 pieces of content that violated these policies, more than 90% of which our systems detected before anyone reported it to us.

We also recognize that certain types of content, such as hate speech, can likewise suppress voting. That’s why our hate speech policies ban efforts to exclude people from political participation on the basis of characteristics like race, ethnicity or religion (e.g., telling people not to vote for a candidate because of the candidate’s race, or indicating that people of a certain religion should not be allowed to hold office).

In advance of the US 2020 elections, we’re implementing additional policies and expanding our technical capabilities on Facebook and Instagram to protect the integrity of the election. Following up on a commitment we made in the civil rights audit report released in June, we have now implemented our policy banning paid advertising that suggests voting is useless or meaningless, or advises people not to vote. 

In addition, our systems are now more effective at proactively detecting and removing this harmful content. We use machine learning to help us quickly identify potentially incorrect voting information and remove it. 
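As a toy illustration of what such a detector might look like, the sketch below trains a tiny text classifier on phrases modeled on the policy examples above (e.g. “Vote by text!”). Production systems are vastly larger and more sophisticated; this is only a hedged, self-contained demonstration of the general technique, and the training examples and model choice are invented.

```python
# Toy illustration of machine-learned detection of voting misinformation.
# The handful of training phrases (modeled on the policy examples above)
# and the model choice are illustrative assumptions, nothing like the
# scale or design of a production system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Vote by text!",                                                # violating
    "If you voted in the primary, your general vote won't count",   # violating
    "Polls close at midnight, no need to hurry",                    # violating
    "Find your official polling place on your state's website",     # benign
    "Remember to register before the deadline",                     # benign
    "Early voting starts next week in many states",                 # benign
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = potential voting misinformation

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new content; high-scoring posts would be queued for removal or review.
for post in ["Text your vote to 12345", "Check your registration status"]:
    score = model.predict_proba([post])[0][1]
    print(f"{score:.2f}  {post}")
```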

We are also continuing to expand and develop our partnerships to provide expertise on trends in voter suppression and intimidation, as well as early detection of violating content. This includes working directly with secretaries of state and election directors to address localized voter suppression that may only be occurring in a single state or district. This work will be supported by our Elections Operations Center during both the primary and general elections. 

Helping People Better Understand What They See Online

Part of our work to stop the spread of misinformation is helping people spot it for themselves. That’s why we partner with organizations and experts in media literacy. 

Today, we’re announcing an initial investment of $2 million to support projects that empower people to determine what to read and share — both on Facebook and elsewhere. 

These projects range from training programs to help ensure the largest Instagram accounts have the resources they need to reduce the spread of misinformation, to expanding a pilot program that brings together senior citizens and high school students to learn about online safety and media literacy, to public events in local venues like bookstores, community centers and libraries in cities across the country. We’re also supporting a series of training events focused on critical thinking among first-time voters. 

In addition, we’re including a new series of media literacy lessons in our Digital Literacy Library. These lessons are drawn from the Youth and Media team at the Berkman Klein Center for Internet & Society at Harvard University, which has made them available for free worldwide under a Creative Commons license. The lessons, created for middle and high school educators, are designed to be interactive and cover topics ranging from assessing the quality of information online to more technical skills like reverse image search.

We’ll continue to develop our media literacy efforts in the US and we’ll have more to share soon. 


