Taking Down Coordinated Inauthentic Behavior from Iran

Today we removed multiple Pages, Groups and accounts that originated in Iran for engaging in coordinated inauthentic behavior on Facebook and Instagram. This is when people or organizations create networks of accounts to mislead others about who they are, or what they’re doing. We prohibit coordinated inauthentic behavior on Facebook because we want people who use our services to be able to trust the connections they make.

As soon as we detected this activity, the teams in our elections war room worked quickly to investigate and remove these bad actors. Finding and removing abuse is a constant challenge. Our adversaries are smart and well-funded, and as we improve, their tactics change.

To ensure that we stay ahead, we've invested heavily in better technology and more people. There are now over 20,000 people working on safety and security at Facebook, and thanks to improvements in artificial intelligence, we detect many fake accounts, the root cause of so many issues, before they are even created. We're also working more closely with governments, law enforcement, security experts and other companies, because no one organization can do this on its own.

We will update this post with more details when we have them, or if the facts change.

October 26, 2018

What We’ve Found So Far

By Nathaniel Gleicher, Head of Cybersecurity Policy

This morning we removed 82 Pages, Groups and accounts for coordinated inauthentic behavior that originated in Iran and targeted people in the US and UK. The Page administrators and account owners typically represented themselves as US citizens, or in a few cases UK citizens, and they posted about politically charged topics such as race relations, opposition to the President, and immigration. Despite attempts to hide their true identities, a manual review of these accounts linked their activity to Iran. We also identified some overlap between this activity and the Iranian accounts and Pages we removed in August.

Our threat intelligence team first detected this activity one week ago. Given the upcoming elections, we took action as soon as we'd completed our initial investigation and shared the information with US and UK government officials, US law enforcement, Congress, other technology companies and the Atlantic Council's Digital Forensic Research Lab. However, it's still early days, and while we have found no ties to the Iranian government, we can't say for sure who is responsible.

Our elections war room has teams from across the company, including from threat intelligence, data science, software engineering, research, community operations and legal. These groups helped quickly identify, investigate and evaluate the problem, and then take action to stop it. They will continue their investigations, incorporating any additional information we get from law enforcement, other technology companies, or other experts. Free and fair elections are at the heart of every democracy, and we're committed to doing everything we can to prevent misuse of Facebook at this critical time for our country. A sample of posts is included below.