Meta

Removing New Types of Harmful Networks

By Nathaniel Gleicher, Head of Security Policy

My team coordinates our cross-company effort to find and stop adversarial actors from targeting people on our platform — from covert influence operations and targeted cyber espionage activity to scammers and spammers. As we’ve improved our ability to disrupt these networks globally, we have continued to build a deeper understanding of the different types of threats out there — including those posed by authentic users — and how best to counter them. While our work began with tackling inauthentic operations, where people hide who’s behind them, we have also seen authentic actors engage in adversarial and harmful behaviors on our platform and across the internet. In our recent Threat Report, we called out the trend in which threat actors deliberately blur the lines between authentic and inauthentic activities, making enforcement more challenging across our industry.

Over the past several months, we have been working with teams across Facebook to expand our network disruption efforts so we can address threats that come from groups of authentic accounts coordinating on our platform to cause social harm. Today, we’re sharing our enforcement against a network of accounts, Pages and Groups operated by individuals associated with the Querdenken movement in Germany. In this post, I’ll share more about our thinking and the enforcement protocols we put in place to tackle coordinated social harm.

What Is Coordinated Social Harm?

Coordinated social harm campaigns typically involve networks of primarily authentic users who organize to systematically violate our policies to cause harm on or off our platform. We already remove violating content and accounts under our Community Standards, including those prohibiting incitement to violence, bullying and harassment, and harmful health misinformation.

However, we recognize that, in some cases, these content violations are perpetrated by a tightly organized group, working together to amplify their members’ harmful behavior and repeatedly violate our content policies. In these cases, the potential for harm caused by the totality of the network’s activity far exceeds the impact of each individual post or account. To address these organized efforts more effectively, we’ve built enforcement protocols that enable us to take action against the core network of accounts, Pages and Groups engaged in this behavior. As part of this framework, we may take a range of actions, including reducing content reach and disabling accounts, Pages and Groups.

Taking Action Against a Querdenken-Linked Network in Germany

We removed a network of Facebook and Instagram accounts, Pages and Groups for engaging in coordinated efforts to repeatedly violate our Community Standards, including posting harmful health misinformation, hate speech and incitement to violence. We also blocked their domains from being shared on our platform. This network was operated by individuals associated with the Querdenken movement in Germany, which is linked to off-platform violence and other social harms.

The people behind this activity used authentic and duplicate accounts to post and amplify violating content, primarily focused on promoting the conspiracy theory that the German government’s COVID-19 restrictions are part of a larger plan to strip citizens of their freedoms and basic rights. This activity appeared to run across multiple internet services and the broader internet, and typically portrayed violence as the way to overturn the pandemic-related government measures limiting personal freedoms. Based on public reporting, this group engaged in physical violence against journalists, police and medical practitioners in Germany.

This network consistently violated our Community Standards against harmful health misinformation, incitement to violence, bullying and harassment, and hate speech, and we repeatedly took action against their violating posts. While we aren’t banning all Querdenken content, we’re continuing to monitor the situation and will take action if we find additional violations, to prevent abuse on our platform and protect people using our services.

We have shared information about our findings with industry peers, researchers, law enforcement and policymakers. In doing so, we hope to advance public understanding of this evolving space, including the potential harms to society it represents. In addition, we welcome feedback from the research and expert communities and will continue to refine and strengthen this work against abuse on our platform.
