
More Information About Last Week’s Takedowns

Over the last year, we have been much more proactive in finding and removing inauthentic behavior, including from foreign actors. To stay ahead, we need to work closely with government, the security community and other tech companies. Last week’s takedowns were a good example of that work in action.

On November 4, the FBI tipped us off about online activity that they believed was linked to foreign entities. Based on this tip, we quickly identified a set of accounts that appeared to be engaged in coordinated inauthentic behavior, which is banned on Facebook because we want people to be able to trust the connections they make on our services. So we immediately blocked these accounts, and given the timing just before the US midterm elections, publicly announced what we found and the action we were taking. We also shared that information with the government and other companies to help them with their own investigations. (Updated on November 13, 2018 at 10:15AM PT to correct the date we were contacted by the FBI.) 

Today, we are providing an update on what we’ve learned.

What We’ve Found So Far
How We Work With Our Partners to Combat Information Operations
Sample Content
Analysis of French Language Material (Updated on November 27, 2018 at 9:00PM PT to include more information about the French language material)

November 27, 2018

Analysis of French Language Material

By Nathaniel Gleicher, Head of Cybersecurity Policy

On November 13, we published more information about the 36 Facebook accounts, 6 Pages and 99 Instagram accounts that we discovered after receiving an initial tip from the FBI. The majority of the material posted by these accounts was in English.

As we have continued to investigate, we did an additional analysis of 12 of the Instagram accounts that posted primarily in French. These accounts had roughly 76,000 followers, with around 12,400 located in France. As we saw with the English language material, these accounts posted about a variety of social and political topics, including French patriotism, football, ethnic and religious groups, anti-war sentiment, feminism, immigration, and policies of the French government, with posts mentioning President Macron from 5 of the 12 accounts.

Similar to some of our past takedowns, we have been in touch with the Atlantic Council’s Digital Forensics Lab. They have prepared a more detailed report about the French language content.

 

November 13, 2018

What We’ve Found So Far

By Nathaniel Gleicher, Head of Cybersecurity Policy

As we’ve continued to investigate, we detected and removed some additional Facebook and Instagram accounts. Combined with our takedown last Monday, in total we have removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior. These accounts were mostly created after mid-2017, apart from a few outliers. We found a total of about 1.25 million people who followed at least one of these Instagram accounts, with just over 600,000 located in the US. By comparison, the recent set of accounts that we removed which originated from Iran had around 1 million followers. We didn’t find any Facebook events. (Updated on November 13, 2018 at 2:20PM PT to clarify total number of followers across all Instagram accounts.) 

We found a total of about 65,000 followers of at least one of the Facebook Pages, which contained posts almost exclusively in French. About 60 followers were located in the US. These Pages spent about $4,500 on ads, none of which ran in the US. We didn't find any ad spend on Instagram, where the content was mostly in English. Below we have included some examples of the content that was being shared: there were a lot of posts about celebrities, as well as the kinds of social issues we've seen before, for example women's rights and LGBT pride.

We’ve previously discussed how challenging it can be to say for certain who is behind this type of activity and what their motives may be. Last Tuesday, a website claiming to be associated with the Internet Research Agency, a Russia-based troll farm, published a list of Instagram accounts they said they’d created. We had already blocked most of them, and based on our internal investigation, we blocked the rest.

Ultimately, this effort may have been connected to the IRA, but we aren’t best placed to say definitively whether that is the case. As multiple independent experts have pointed out, trolls have an incentive to claim that their activities are more widespread and influential than may be the case. That appears to be true here as well.

What’s clear is that as we improve, our opponents will change their tactics and improve, too. They are smart, well-funded and have every incentive to continue their efforts, even if some of their actions have very little impact. To stay ahead of this misuse, we need to continue to invest heavily in security, as well as our work with governments and other technology companies. It will take the combined efforts of the public and private sectors to prevent foreign interference in elections.

 

November 13, 2018

How We Work With Our Partners to Combat Information Operations

By Nathaniel Gleicher, Head of Cybersecurity Policy

Preventing misuse on Facebook is a priority for our company. In the lead-up to last week’s US midterms, our teams were closely monitoring for any abnormal activity that might have been a sign of people, Pages or Groups misrepresenting themselves in order to mislead others.

But finding and investigating potential threats isn’t something we do alone. We also rely on external partners, like the government or security experts. When it comes to coordinated inauthentic behavior — people or organizations working together to create networks of accounts and Pages to mislead others about who they are, or what they’re doing — the more we know, the better we can be at understanding and disrupting the network. This, in turn, makes it harder for these actors to start operating again.

To get this information, we work with governments and law enforcement agencies, cybersecurity researchers, and other technology companies. When appropriate, we also share what we know with these groups to aid their investigations and crack down on bad actors. After all, these threats are not limited to a specific type of technology or service and have far-reaching repercussions. The better we work together, the better we’ll do by our community.

These partnerships were especially critical in the lead-up to last week’s midterm elections. As our teams monitored for and rooted out new threats, the government proved especially valuable because of their broader intelligence work. As bad actors seemingly tried to create a false impression of massive scale and reach, experts from government, industry, civil society, and the media worked together to counter that narrative. As we continue to build our capability to identify and stop information operations, these partnerships will only grow more valuable. This is why today, I want to share more about how we work with each of these groups — and some of the inevitable challenges that come along with this collaboration.

Government & Law Enforcement

With backgrounds in cybersecurity, digital forensics, national security, foreign policy and law enforcement, the experts on our security team investigate suspicious behavior on our services. While we can learn a lot from analyzing our own platforms, law enforcement agencies can draw connections off our platform to a degree that we simply can’t. For instance, our teams can find links between accounts that might be coordinating an information operation based on how they interact on Facebook or other technical signals that link the accounts together — while a law enforcement agency could identify additional links based on information beyond our scope.
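The account-linking idea described above can be illustrated with a toy sketch. This is purely illustrative and in no way Facebook's actual detection system; the signal names and the `link_accounts` helper are hypothetical. It simply groups accounts into clusters whenever they share a technical signal, using a small union-find:

```python
from collections import defaultdict

def link_accounts(signals):
    """Cluster accounts that share any technical signal.

    `signals` maps account_id -> set of signal strings (hypothetical
    examples: a shared device fingerprint or registration IP).
    Returns a list of clusters, each a set of account ids.
    """
    # Invert the mapping: signal -> accounts exhibiting it.
    by_signal = defaultdict(set)
    for account, sigs in signals.items():
        for s in sigs:
            by_signal[s].add(account)

    # Union-find with path compression over the accounts.
    parent = {a: a for a in signals}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Any two accounts sharing a signal end up in the same cluster.
    for accounts in by_signal.values():
        accounts = list(accounts)
        for other in accounts[1:]:
            union(accounts[0], other)

    clusters = defaultdict(set)
    for a in signals:
        clusters[find(a)].add(a)
    return list(clusters.values())
```

A real system would of course weigh many more signals probabilistically; the point here is only that shared on-platform signals let investigators connect accounts that present themselves as unrelated.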

Tips from government and law enforcement partners can therefore help our security teams attribute suspicious behavior to certain groups, make connections between actors, or proactively monitor for activity targeting people on Facebook. And while we can remove accounts and Pages and prohibit bad actors from using Facebook, governments have additional tools to deter or punish abuse. That’s why we’re actively engaged with the Department of Homeland Security, the FBI (including its Foreign Influence Task Force), and Secretaries of State across the US, as well as other government and law enforcement agencies around the world, on our efforts to detect and stop information operations, including those that target elections.

There are also inherent challenges to working with governments around the world. When information is coming to us from a law enforcement agency, we need to vet the source and make sure we’re responding the right way based on the credibility of the threat and information we learn from our investigation. And sharing information with law enforcement agencies introduces additional complexities and challenges. We’re particularly cautious when it comes to protecting people’s privacy and safety — we have a rigorous vetting process to evaluate whether and to what extent to comply with government requests and we deny requests that we think are too broad or need additional information. This is true for any government request, including those around coordinated inauthentic behavior.

Cybersecurity Researchers

Our partnerships with third-party security experts are also key as we combat threats. We have established contracts with various cybersecurity research firms and academic institutions that help us discover vulnerabilities in our systems and make our defenses stronger. We’ll often turn to these groups when we suspect a threat from a certain actor or region. They then combine machine learning and human expertise to detect patterns from the outside in, and alert us to signs or behaviors that suggest a real likelihood of a security risk or threat. At that point, we can launch an internal investigation to learn more. At other times, these groups identify suspicious activity on their own, without guidance from us.

This past July, for instance, FireEye, one of our cybersecurity vendors, alerted us to a network of Pages and accounts originating from Iran that were engaged in coordinated inauthentic behavior. Based on that tip, we investigated, identified, and removed additional accounts and Pages from Facebook.

We also partner closely with the Atlantic Council’s Digital Forensic Research Lab, which provides us with real-time updates on emerging threats and disinformation campaigns around the world. They assisted in our takedown of 32 Pages and accounts from Facebook and Instagram for coordinated inauthentic behavior in July of this year, as well as our recent takedown of a financially motivated “like” farm in Brazil. In these cases, they’ve helped us increase the number of “eyes and ears” we have working to spot potential abuse so we can identify threats and get ahead of future ones.

It can be challenging to coordinate the operations and timing of these investigations, though. As Chad Greene noted in his earlier post on when to take action against a threat, timing is key to our success, and the more entities involved, the harder it inevitably is to get everyone synced seamlessly. That’s why it’s so important to have open lines of communication with all of these partners so we can ensure we’re all aligned and that we take action on a timeline that best disrupts the adversary.

Tech Industry

Threats are rarely confined to a single platform or tech company. If another company identifies a threat, we want to know about it so we can investigate whether the actor or actors behind it are abusing our services as well. Likewise, if we find indications of coordinated inauthentic behavior that might extend beyond our platform, we want to give others a heads up.

That’s why we’ve worked closely with our fellow tech companies, both bilaterally and as a collective, to deal with the threats we have all seen during and beyond elections. This includes sharing information about the kinds of behavior we’re seeing on our respective platforms and discussing best practices when it comes to preventing our services from being abused.

Collaboration in Action

These partnerships all proved critical in our work ahead of the US midterms. In September, we launched our first elections war room at our Menlo Park headquarters — a place where the right subject-matter experts from across the company gathered to address potential problems in real time and respond quickly. A big part of centralizing key information in the war room was receiving valuable information from our government, cybersecurity, and tech industry partners and taking the appropriate action.

We have an important role in protecting people and public debate on our platform, and we are focused on that mission. Security, though, is bigger than just Facebook. We are — and will continue to be — most effective when we draw on available resources and information. Our partnerships are a key part of the effort and will play a vital role as we prepare for other elections around the world.



