Meta’s Adversarial Threat Report

The global threats we tackle have evolved significantly since we first started sharing our findings about Coordinated Inauthentic Behavior in 2017. In addition, adversarial networks don't strive to neatly fit our policies or violate only one at a time. To account for this constantly shifting threat environment, we build our defenses with the expectation that these adversaries will not stop, but will adapt and try new tactics. We add new layers of defense to address potential gaps from multiple angles. Our goal over time is to make these behaviors more costly and difficult to hide, and less effective.

Today, we’re sharing our first report that brings together multiple network disruptions for distinct violations of our security policies: Coordinated Inauthentic Behavior and two new protocols — Brigading and Mass Reporting. We shared our findings with industry peers, independent researchers, law enforcement and policymakers so we can collectively improve our defenses. We welcome feedback from external experts as we refine our approaches and reporting.

Removing a Network in Italy and France for Brigading

Brigading: We will remove any adversarial networks we find where people work together to mass comment, mass post or engage in other types of repetitive mass behaviors to harass others or silence them.

We removed a network of accounts that originated in Italy and France and targeted medical professionals, journalists, and elected officials with mass harassment. Our investigation linked this activity to an anti-vaccination conspiracy movement called V_V, publicly reported to engage in violent online and offline behaviors. The people behind this operation relied on a combination of authentic, duplicate and fake accounts to mass comment on posts from Pages, including news entities, and individuals to intimidate them and suppress their views. While we aren’t banning all V_V content, we’re continuing to monitor the situation and will take action if we find additional violations to prevent abuse on our apps. See the full Threat Report for more information about this network. 

Removing a Network in Vietnam for Mass Reporting

Mass Reporting: We will remove any adversarial networks we find where people work together to mass-report an account or content to get it incorrectly taken down from our platform.

We removed a network of accounts in Vietnam for violating our Inauthentic Behavior policy against mass reporting. The network coordinated to falsely report activists and other people who publicly criticized the Vietnamese government for various violations in an attempt to have these users removed from Facebook. The people behind this activity relied primarily on authentic and duplicate accounts to submit hundreds — in some cases, thousands — of complaints against their targets through our abuse reporting tools. See the full Threat Report for more information about this network. 

November 2021 Coordinated Inauthentic Behavior Report

We removed four networks from Palestine, Poland, Belarus, and China for violating our policy on Coordinated Inauthentic Behavior (CIB). Each of these networks targeted people in multiple countries at once. This month, we’re also sharing a more detailed analysis of the network from China, including, for the first time, specific threat indicators to facilitate further research into this deceptive COVID-19-focused activity across the internet. See the full November CIB Report for more information. 

Updated on December 20, 2021 at 9:30AM PT:

Networks removed in November 2021:

  1. Palestine: We removed 110 Facebook accounts, 78 Pages, 13 Groups and 17 Instagram accounts from the Gaza Strip in Palestine that primarily targeted people in Palestine, and to a much lesser extent in Egypt and Israel. We found this activity as part of our internal investigation into suspected coordinated inauthentic behavior in the region and linked it to Hamas.
  2. Poland: We removed 31 Facebook accounts, four Groups, two Facebook Events and four Instagram accounts that we believe originated in Poland and targeted Belarus and Iraq. We found this activity as a result of our internal investigation into suspected coordinated inauthentic behavior in the region, as we monitored the unfolding crisis at the border between Belarus and the EU.
  3. Belarus: We removed 38 Facebook accounts, five Groups, and four Instagram accounts in Belarus that primarily targeted audiences in the Middle East and Europe. We found this activity as a result of our internal investigation into suspected coordinated inauthentic behavior in the region as we monitored the ongoing crisis at the border between Belarus and the EU, and we linked it to the Belarusian KGB.
  4. China: We removed 595 Facebook accounts, 21 Pages, four Groups and 86 Instagram accounts. This network originated primarily in China and targeted global English-speaking audiences in the United States and United Kingdom, as well as Chinese-speaking audiences in Taiwan, Hong Kong, and Tibet. We began looking into this activity after reviewing public reporting about the single fake account at the center of this operation. Our investigation found links to individuals in mainland China, including employees of Sichuan Silence Information Technology Co., Ltd., an information security firm, and individuals associated with Chinese state infrastructure companies located around the world.

Sharing Analysis and Data with Researchers

Since 2018, we’ve shared information about more than 150 networks we removed for CIB with independent researchers because we know that no single organization can tackle these challenges alone. With this collaboration, we’ve improved our own understanding of internet-wide security risks and have seen more investigations and reports published for other security experts to review and build on.

Over the past year and a half, we've been working with the CrowdTangle team at Meta to build a platform for researchers to access data about these malicious networks and compare tactics across threat actors globally and over time. In late 2020, we launched a pilot CIB archive, where we've since shared roughly 100 recent takedowns with a small group of researchers who study and investigate influence operations. We've continued to improve this platform in response to feedback from teams at the Digital Forensic Research Lab at the Atlantic Council, the Stanford Internet Observatory, the Australian Strategic Policy Institute, Graphika and Cardiff University.

In the coming months, we'll expand this archive to more researchers around the world. Through this CrowdTangle-enabled interface, researchers, including our own, will be able to visualize and analyze these operations both quantitatively and qualitatively, without the need to manually go through large spreadsheets or search for archived posts. We hope that being able to study these disrupted operations in a way that closely resembles how they appeared live will help educate people on how to spot these deceptive campaigns, including signs of coordination and inauthenticity.