
January 2022 Coordinated Inauthentic Behavior Report

  • We’re sharing our monthly update on enforcements against coordinated inauthentic behavior.
  • We removed one network from Russia that targeted multiple countries in Africa. We disrupted it early in its operation.

We’re constantly working to find and stop coordinated campaigns that seek to manipulate public debate across our apps.

Purpose of This Report

Since 2017, we’ve published our findings about coordinated inauthentic behavior we detect and remove from our platforms. As part of our regular CIB reports, we’re sharing information about the networks we take down to make it easier for people to see the progress we’re making in one place.

Summary of January 2022 Findings

We removed a network that originated in Saint Petersburg, Russia, and targeted multiple countries in Africa. We have shared information about our findings with law enforcement, policymakers and industry peers.

As it becomes harder for malicious actors to run effective operations that rely on large networks of fake accounts, we continue to see them attempting to trick real people into amplifying their content, piggybacking on someone else’s existing audience rather than building their own. As we shared in our Threat Report on influence operations, such campaigns often target high-profile people, including influencers and journalists. Notably, we’ve seen this tactic in a number of campaigns associated with the Russian IRA, which have shifted to using cutouts, smaller networks and their own websites, likely in response to detection and repeated removals. We continue to find these networks, including the one in this report, early in their operations, before they reach larger audiences.

With these operations continuing to target authentic communities, it’s critical for us all, including journalists, influencers and public figures, to remain vigilant and carefully vet information before amplifying it, to avoid playing into the hands of threat actors.

We know that influence operations will keep evolving in response to our enforcement, and new deceptive tactics will emerge. To help inform the public and the research community about the changing threat environment, we will continue to expand our security reporting into new areas. Starting next quarter (Q2 2022), we will pilot quarterly threat reports that provide a more comprehensive view into the risks we see across a wider range of policy violations, including coordinated adversarial networks that target people with brigading, mass reporting and other harmful activities.

Finally, as we mentioned before, we’ve seen an evolution in the global threats and other risks that companies like ours face, and a significant increase in safety concerns for our employees around the world. When we believe these risks warrant it, we will prioritize enforcement over publishing our findings. While this change won’t impact the actions we take against deceptive operations we detect, it means that — in what we hope to be rare cases — we won’t share all CIB network disruptions publicly.

Networks removed:

  1. Russia: We removed a network of three Facebook accounts operated from Saint Petersburg, Russia, that primarily targeted Nigeria, Cameroon, Gambia, Zimbabwe and Congo. We detected and began investigating this early-stage operation shortly after it became active. The people behind this activity focused on attempting to contact journalists in Africa to trick them into publishing articles on their behalf. Although they attempted to conceal their identities and coordination, our investigation found links to individuals associated with past activity by the Russian Internet Research Agency (IRA).

See the full report for more information.

Learn More About Coordinated Inauthentic Behavior

We view CIB as coordinated efforts to manipulate public debate for a strategic goal where fake accounts are central to the operation. In each case, the people behind this activity coordinate with one another and use fake accounts to mislead people about who they are and what they are doing, and that is the basis for our action. When we investigate and remove these operations, we focus on behavior rather than content — no matter who’s behind them, what they post or whether they’re foreign or domestic.

Continuous Enforcement

We monitor for efforts to re-establish a presence on Facebook by networks we previously removed. Using both automated and manual detection, we continuously remove accounts and Pages connected to networks we took down in the past.


