Removing Additional Inauthentic Activity from Facebook

By Nathaniel Gleicher, Head of Cybersecurity Policy, and Oscar Rodriguez, Product Manager

People need to be able to trust the connections they make on Facebook. It’s why we have a policy banning coordinated inauthentic behavior — networks of accounts or Pages working to mislead others about who they are, and what they are doing. This year, we’ve enforced this policy against many Pages, Groups and accounts created to stir up political debate, including in the US, the Middle East, Russia and the UK. But the bulk of the inauthentic activity we see on Facebook is spam that’s typically motivated by money, not politics. And the people behind it are adapting their behavior as our enforcement improves.

One common type of spam has been posts that hawk fraudulent products like fake sunglasses or weight loss “remedies.” But a lot of the spam we see today is different. The people behind it create networks of Pages using fake accounts or multiple accounts with the same names. They post clickbait on these Pages to drive people to websites that are entirely separate from Facebook and seem legitimate, but are actually ad farms. They also post the same clickbait in dozens of Facebook Groups, often hundreds of times in a short period, to drum up traffic for their websites. And they often use their fake accounts to generate fake likes and shares. This artificially inflates engagement for their inauthentic Pages and the posts they share, misleading people about their popularity and improving their ranking in News Feed. This activity goes against what people expect on Facebook, and it violates our policies against spam.

Topics like natural disasters or celebrity gossip have been popular ways to generate clickbait. But today, these networks increasingly use sensational political content – regardless of its political slant – to build an audience and drive traffic to their websites, earning money for every visitor to the site. And like the politically motivated activity we’ve seen, the “news” stories or opinions these accounts and Pages share are often indistinguishable from legitimate political debate. This is why it’s so important we look at these actors’ behavior – such as whether they’re using fake accounts or repeatedly posting spam – rather than their content when deciding which of these accounts, Pages or Groups to remove.

Today, we’re removing 559 Pages and 251 accounts that have consistently broken our rules against spam and coordinated inauthentic behavior. Given the activity we’ve seen — and its timing ahead of the US midterm elections — we wanted to give some details about the types of behavior that led to this action. Many used fake accounts or multiple accounts with the same names and posted massive amounts of content across a network of Groups and Pages to drive traffic to their websites. Many used the same techniques to make their content appear more popular on Facebook than it really was. Others were ad farms that used Facebook to mislead people into thinking they were forums for legitimate political debate.

Of course, there are legitimate reasons that accounts and Pages coordinate with each other — it’s the bedrock of fundraising campaigns and grassroots organizations. But the difference is that these groups are upfront about who they are, and what they’re up to. As we get better at uncovering this kind of abuse, the people behind it — whether economically or politically motivated — will change their tactics to evade detection. It’s why we continue to invest heavily, including in better technology, to prevent this kind of misuse. Because people will only share on Facebook if they feel safe and trust the connections they make here.