Recommended Principles for Regulation or Legislation to Combat Influence Operations

By Nathaniel Gleicher, Head of Security Policy

The internet has transformed the way people connect and organize online, bringing in new voices and empowering grassroots advocates. At the same time, it has made it possible for malicious actors to target public spaces online to manipulate or corrupt public debate. While this is not a new problem, technology companies, security researchers, journalists and government agencies have made significant strides since 2016 in understanding these threats and strengthening our collective defenses.

Our teams at Facebook have: 1) developed specific policies against Coordinated Inauthentic Behavior and other deceptive behaviors, 2) built automated detection tools to find and remove fake accounts and other violations, and 3) strengthened collaboration with tech companies, civil society and governments to help tackle this problem from multiple angles at once. We went from taking down one network engaged in influence operations in 2017 to removing more than 100 networks worldwide since then, including ahead of major democratic elections around the world. We have also made significant changes to how our platforms work to make it harder for these networks to operate undetected, while increasing transparency for the public around political ads, Pages and state-controlled media.

Our teams will continue to find, remove and expose these influence operations, but we know these threats extend beyond any one platform, and no single organization can tackle them alone. That’s why a whole-of-society response is critical: we need a broader discussion about what constitutes acceptable online behavior, and we need to take steps to deter people from crossing that line.

If malicious actors coordinate off our platforms, we may not identify that collaboration. If they run campaigns that rely on independent websites and target journalists and traditional media, we and other technology companies will be limited to taking action on the components of such campaigns that we see on our own platforms. We know malicious actors are in fact doing both of these things, likely in response to increased enforcement against them by the major internet services. There is a clear need for a strong collective response that imposes a higher cost on the people behind influence operations, in addition to making these deceptive campaigns less effective.

We’ve already seen precedents in the US in which regulators were able to impose sanctions on entities engaged in inauthentic behavior. Although influence operations are unlikely to disappear anytime soon, regulations can be powerful tools in our collective response to these deceptive campaigns.

Drawing on the past three years of studying and taking down influence operations, our team has outlined a set of principles, which we are sharing today, to guide the development of regulation and legislation against these deceptive campaigns.

We recommend that regulation and legislation against cross-medium influence operations (IO) promote the following principles:

  • Transparency in Ads: Continue to increase transparency around contributions or expenditures for political advertising.
  • Reporting on Inauthentic Behavior: Work with industry and civil society experts to develop minimum disclosure frameworks, transparency best practices and mechanisms for sharing lessons learned.
  • Broad Application: Cover IO broadly, rather than focusing only on specific tactics. Because IO manifests differently across platforms and in its targeting of traditional media, narrow definitions will likely leave loopholes that attackers can exploit.
  • Increased Information Sharing: Enable greater information sharing of IO threat signals among tech companies and between platforms, civil society and government, while protecting the privacy of innocent users who may be swept up in these campaigns.
  • Deterring Violators: Impose economic, diplomatic and/or criminal penalties on the people behind IO campaigns, understanding that different penalties and mitigations apply in foreign and domestic contexts.
  • Supporting Technical Research: Support private and public innovation and collaboration on technical detection of adversarial threats such as manipulated media, including deepfakes.
  • Supporting Media and Digital Literacy: Support media and digital literacy efforts to educate people and strengthen societal resilience.

