
How Facebook Has Prepared for the 2019 UK General Election

Today, leaders from our offices in London and Menlo Park, California, spoke with members of the press about Facebook’s efforts to prepare for the upcoming General Election in the UK on December 12, 2019. The following is a transcript of their remarks.

Rebecca Stimson, Head of UK Public Policy, Facebook

We wanted to bring you all together, now that the UK General Election is underway, to set out the range of actions we are taking to help ensure this election is transparent and secure – to answer your questions and to point you to the various resources we have available.  

There has already been a lot of focus on the role of social media within the campaign and there is a lot of information for us to set out. 

We have therefore gathered colleagues from both the UK and our headquarters in Menlo Park, California, covering our politics, product, policy and safety teams to take you through the details of those efforts. 

I will just make a few opening remarks before we dive into the details.

Helping protect elections is one of our top priorities, and over the last two years we’ve made some significant changes. These broadly fall into three camps: transparency, stronger defenses and increased investment.

So taking these in turn. 

Transparency

On the issue of transparency: we’ve tightened our rules to make political ads much more transparent, so people can see who is trying to influence their vote and what they are saying.

We’ll discuss this in more detail shortly, but to summarize: anyone who wants to run political ads in the UK must first verify their identity and location, every political ad must carry a disclaimer showing who paid for it, and all of these ads are stored in a public Ad Library for seven years.

Taken together these changes mean that political advertising on Facebook and Instagram is now more transparent than other forms of election campaigning, whether that’s billboards, newspaper ads, direct mail, leaflets or targeted emails. 

This is the first UK general election since we introduced these changes and we’re already seeing many journalists using these transparency tools to scrutinize the adverts which are running during this election – this is something we welcome and it’s exactly why we introduced these changes. 

Defense 

Turning to the stronger defenses we have put in place.

Nathaniel will shortly set out in more detail our work to prevent foreign interference and coordinated inauthentic behavior. But before he does I want to be clear right up front how seriously we take these issues and our commitment to doing everything we can to prevent election interference on our platforms. 

So just to highlight one of the things he will be talking about – we have, as part of this work, cracked down significantly on fake accounts. 

We now identify and shut down millions of fake accounts every day, many just seconds after they are created.

Investment

And lastly turning to investment in these issues.

We now have more than 35,000 people working on safety and security. We have been building and rolling out many of the new tools you will be hearing about today. And as Ella will set out later, we have introduced a number of safety measures including a dedicated reporting channel so that all candidates in the election can flag any abusive and threatening content directly to our teams.  

I’m also pleased to say that – now the election is underway – we have brought together an Elections Taskforce of people from our teams across the UK, EMEA and the US who are already working together every day to ensure election integrity on our platforms. 

The Elections Taskforce will be working across areas including threat intelligence, data science, engineering, operations and legal. It also includes representatives from WhatsApp and Instagram.

As we get closer to the election, these people will be brought together in physical spaces in their offices – what we call our Operations Centre. 

It’s important to remember that the Elections Taskforce is an additional layer of security on top of our ongoing, 24/7 monitoring for threats on the platform.

And while there will always be further improvements to make, and we can never say there won’t be challenges to respond to, we are confident that we’re better prepared than ever before.

Political Ads

Before I wrap up this intro section of today’s call I also want to address two of the issues that have been hotly debated in the last few weeks – firstly whether political ads should be allowed on social media at all and secondly whether social media companies should decide what politicians can and can’t say as part of their campaigns. 

As Mark Zuckerberg has said, we have considered whether we should ban political ads altogether. They account for just 0.5% of our revenue and they’re always destined to be controversial. 

But we believe it’s important that candidates and politicians can communicate with their constituents and would-be constituents.

Online political ads are also important for both new challengers and campaigning groups to get their message out. 

Our approach is therefore to make political messages on our platforms as transparent as possible, not to remove them altogether. 

And there’s also a really difficult question – if you were to consider banning political ads, where do you draw the line – for example, would anyone advocate for blocking ads for important issues like climate change or women’s empowerment? 

Turning to the second issue – there is also a question about whether we should decide what politicians and political parties can and can’t say.  

We don’t believe a private company like Facebook should censor politicians. This is why we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

This doesn’t mean that politicians can say whatever they want on Facebook. They can’t spread misinformation about where, when or how to vote. They can’t incite violence. We won’t allow them to share content that has previously been debunked as part of our third-party fact-checking program. And we of course take down content that violates local laws. 

But in general we believe political speech should be heard and we don’t feel it is right for private companies like us to fact-check or judge the veracity of what politicians and political parties say. 

Facebook’s approach to this issue is in line with the way political speech and campaigns have been treated in the UK for decades. 

Here in the UK – an open democracy with a vibrant free press – political speech has always been heavily scrutinized but it is not regulated. 

The UK has decided that there shouldn’t be rules about what political parties and candidates can and can’t say in their leaflets, direct mails, emails, billboards, newspaper ads or on the side of campaign buses.  

And as we’ve seen when politicians and campaigns have made hotly contested claims in previous elections and referenda, it’s not been the role of the Advertising Standards Authority, the Electoral Commission or any other regulator to police political speech. 

In our country it’s always been up to the media and the voters to scrutinize what politicians say and make their own minds up. 

Nevertheless, we have long called for new rules for the era of digital campaigning. 

Questions around what constitutes a political ad, who can run them and when, what steps those who purchase political ads must take, how much they can spend on them and whether there should be any rules on what they can and can’t say – these are all matters that can only be properly decided by Parliament and regulators.  

Legislation should be updated to set standards for the whole industry – for example, should all online political advertising be recorded in a public archive similar to our Ad Library and should that extend to traditional platforms like billboards, leaflets and direct mail?

We believe UK electoral law needs to be brought into the 21st century to give clarity to everyone – political parties, candidates and the platforms they use to promote their campaigns.

In the meantime our focus has been to increase transparency so anyone, anywhere, can scrutinize every ad that’s run and by whom. 

I will now pass you to the team to talk you through our efforts in more detail.

Nathaniel Gleicher, Head of Cybersecurity Policy, Facebook 

My team leads all our efforts across our apps to find and stop what we call influence operations, coordinated efforts to manipulate or corrupt public debate for a strategic goal. 

We also conduct regular red team exercises, both internally and with external partners, to put ourselves in the shoes of threat actors and use that approach to identify and prepare for new and emerging threats. We’ll talk about some of the products of these efforts today.

Before I dive into some of the details: as you’re listening to Rob, Antonia, and me, we’re going to be talking about a number of different initiatives that Facebook is focused on, both to protect the UK general election and, more broadly, to respond to integrity threats. I wanted to give you a brief framework for how to think about these.

The key distinction that you’ll hear again and again is between content and behavior. At Facebook, we have policies that enable us to take action when we see content that violates our Community Standards.

In addition, we have the tools that we use to respond when we see an actor engaged in deceptive or violating behavior, and we keep these two efforts distinct. And so, as you listen to us, we’ll be talking about different initiatives we have in both dimensions. 

Under content for example, you’ll hear Antonia talk about misinformation, about voter suppression, about hate speech, and about other types of content that we can take action against if someone tries to share that content on our platform. 

Under the behavioral side, you’ll hear me and you’ll hear Rob also mention some of our work around influence operations, around spam, and around hacking. 

I’m going to focus in particular on the first of these, influence operations; but the key distinction that I want to make is when we take action to remove someone because of their deceptive behavior, we’re not looking at, we’re not reviewing, and we’re not considering the content that they’re sharing. 

What we’re focused on is the fact that they are deceiving or misleading users through their actions. For example, using networks of fake accounts to conceal who they are and conceal who’s behind the operation. So we’ll refer back to these, but I think it’s helpful to distinguish between the content side of our enforcement and the behavior side of our enforcement. 

And that’s particularly important because we’ve seen some threat actors who work to understand where the boundaries are for content and make sure for example that the type of content they share doesn’t quite cross the line. 

And when we see someone doing that, because we have behavioral enforcement tools as well, we’re still able to make sure we’re protecting authenticity and public debate on the platform. 

In each of these dimensions, there are four pillars to our work. You’ll hear us refer to each of these during the call as well, but let me just say that these four fit together: no one of them by itself would be enough, but all four of them together give us a layered approach to defending public debate and ensuring authenticity on the platform.

We have expert investigative teams that conduct proactive investigations to find, expose, and disrupt sophisticated threat actors. As we do that, we learn from those investigations and we build automated systems that can disrupt any kind of violating behavior across the platform at scale. 

We also, as Rebecca mentioned, build transparency tools so that users, external researchers and the press can see who is using the platform and ensure that they’re engaging authentically. It also forces threat actors who are trying to conceal their identity to work harder to conceal and mislead. 

And then lastly, one of the things that’s extremely clear to us, particularly in the election space, is that this is a whole-of-society effort. And so, we work closely with partners in government, in civil society, and across industry to tackle these threats.

And we’ve found that we’re most effective where we bring our tools to the table and then work with government and other partners to respond and get ahead of these challenges as they emerge.

One of the ways that we do this is through proactive investigations into the deceptive efforts engaged in by bad actors. Over the last year, our investigative teams, working together with our partners in civil society, law enforcement, and industry, have found and stopped more than 50 campaigns engaged in coordinated inauthentic behavior across the world. 

This includes an operation we removed in May that originated from Iran and targeted a number of countries, including the UK. As we announced at the time, we removed 51 Facebook accounts, 36 pages, seven groups, and three Instagram accounts involved in coordinated inauthentic behavior. 

The page admins and account owners typically posted content in English or Arabic, and most of the operation had no focus on a particular country, although there were some pages focused on the UK and the United States. 

Similarly, in March we announced that we removed a domestic UK network of about 137 Facebook and Instagram accounts, pages, and groups that were engaged in coordinated inauthentic behavior. 

The individuals behind these accounts presented themselves as far right and anti-far right activists, frequently changed page and group names, and operated fake accounts to engage in hate speech and spread divisive comments on both sides of the political debate in the UK. 

These are the types of investigations that we focus our core investigative team on. Whenever we see a sophisticated actor that’s trying to evade our automated systems, those teams, which are made up of experts from law enforcement, the intelligence community, and investigative journalism, can find and reveal that behavior. 

When we expose it, we announce it publicly and we remove it from the platform. Those expert investigators proactively hunt for evidence of these types of coordinated inauthentic behavior (CIB) operations around the world. 

This team has not seen evidence of widespread foreign operations aimed at the UK. But we are continuing to search for this and we will remove and publicly share details of networks of CIB that we identify on our platforms. 

As always with these takedowns, we remove these operations for the deceptive behavior they engaged in, not for the content they shared. This is that content/behavior distinction that I mentioned earlier. As we’ve improved our ability to disrupt these operations, we’ve also deepened our understanding of the types of threats out there and how best to counter them. 

Based on these learnings, we’ve recently updated our inauthentic behavior policy, which is posted publicly as part of our Community Standards, to clarify how we enforce against the spectrum of deceptive practices we see on our platforms, whether foreign or domestic, state or non-state. For each investigation, we isolate any new behaviors we see and then we work to automate detection of them at scale. This connects to that second pillar of our integrity work.

And this slows down the bad guy and lets our investigators focus on improving our defenses against emerging threats. A good example of this work is our efforts to find and block fake accounts, which Rebecca mentioned. 

We know bad actors use fake accounts as a way to mask their identity and inflict harm on our platforms. That’s why we’ve built an automated system to find and remove these fake accounts. And each time we conduct one of these takedowns, or any other of our enforcement actions, we learn more about what fake accounts look like and how we can have automated systems that detect and block them. 
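To make that concrete, here is a purely illustrative sketch of what registration-time scoring of a suspicious account could look like. Every signal, weight, and threshold below is invented for illustration – Facebook’s actual systems are machine-learned classifiers over far richer data – but it shows the general shape of combining weak signals into a block-or-allow decision.

```python
# Purely illustrative sketch of registration-time scoring for suspected fake
# accounts. The signals, weights, and threshold are invented for illustration;
# production systems are learned classifiers over far richer data.
from dataclasses import dataclass

@dataclass
class SignupSignals:
    accounts_from_same_ip_last_hour: int
    email_domain_is_disposable: bool
    friend_requests_first_minute: int
    profile_photo_seen_elsewhere: bool

def fake_account_score(s: SignupSignals) -> float:
    """Combine weak signals into a single risk score in [0, 1]."""
    score = 0.0
    score += min(s.accounts_from_same_ip_last_hour, 20) * 0.03  # bulk registration
    score += 0.25 if s.email_domain_is_disposable else 0.0
    score += min(s.friend_requests_first_minute, 10) * 0.03     # immediate spam behavior
    score += 0.20 if s.profile_photo_seen_elsewhere else 0.0
    return min(score, 1.0)

BLOCK_THRESHOLD = 0.8  # invented cutoff: block or challenge above this

signals = SignupSignals(15, True, 8, True)
if fake_account_score(signals) >= BLOCK_THRESHOLD:
    print("block or challenge this registration")
```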

This is why we have these systems in place today that block millions of fake accounts every day, often within minutes of their creation.

And because information operations often target multiple platforms as well as traditional media, we collaborate with industry, civil society and government, as I mentioned.

In addition to that, we are building increased transparency on our platform, so that the public along with open source researchers and journalists can find and expose more bad behavior themselves. 

This effort on transparency is incredibly important. Rob will talk about this in detail, but I do want to add one point here, specifically around pages. Increasingly, we’re seeing people operate pages that don’t clearly disclose the organization behind them, as a way to make others think they are independent.

We want to make sure Facebook is used to engage authentically, and that users understand who is speaking to them and what perspective they are representing. We noted last month that we would be announcing new approaches to address this, and today we’re introducing a policy to require more accountability for pages that are concealing their ownership in order to mislead people.

If we find a page is misleading people about its purpose by concealing its ownership, we will require it to go through our recently announced business verification process and show more information on the page itself about who is behind it – including the organization’s legal name and verified city, phone number, or website – in order for it to stay up.

This type of increased transparency helps ensure that the platform continues to be authentic and the people who use the platform know who they’re talking to and understand what they’re seeing. 

Rob Leathern, Director of Product, Business Integrity, Facebook 

In addition to making pages more transparent as Nathaniel has indicated, we’ve also put a lot of effort into making political advertising on Facebook more transparent than it is anywhere else. 

Every political and issue ad that runs on Facebook now goes into our Ad Library, a public archive that everyone can access, regardless of whether or not they have a Facebook account.

We launched this in the UK in October 2018 and, since then, more than 116,000 ads related to politics, elections, and social issues have been placed in the UK Ad Library. You can find all the ads that a candidate or organization is running, including how much they spent and who saw the ad. And we’re storing these ads in the Ad Library for seven years.

Other media such as billboards, newspaper ads, direct mail, leaflets or targeted emails don’t today provide this level of transparency into the ads and who is seeing them. As a result, we’ve seen a significant number of press stories about the election driven by the information in Facebook’s Ad Library.

We’re proud of this resource and the insight it provides into ads running on Facebook and Instagram, and that it is proving useful for media and researchers. And just last month, we made even more changes to both the Ad Library and Ad Library Reports. These include adding details on who the top advertising spenders are in each country of the UK, as well as providing an additional view by different date ranges, which people have been asking for.

We’re now also making it clear which platform an ad ran on – for example, whether an ad ran on Facebook, Instagram, or both.

For those of you unfamiliar with the Ad Library, which you can see at Facebook.com/adlibrary, I thought I’d run through it quickly. 

So this is the Ad Library. Here you see all the ads that have been classified as relating to politics or issues. We keep them in the library for seven years. As I mentioned, you can find the Ad Library at Facebook.com/adlibrary.

You can also access the Ad Library through a specific Page. For example, for this Page, you can see not only the advertising information but also transparency information about the Page itself, along with the spend data.

Here is an example of the ads that this Page is running, both active and inactive. In addition, if an ad has been disapproved for violating any of our ad policies, you’re able to see those ads as well.

Here’s what it looks like if you click to see more detail about a specific ad. You’ll be able to see individual ad spend, impressions, and demographic information. 

And you’ll also be able to compare the individual ad spend to the overall macro spend by the Page, which is tracked in the section below. If you scroll back up, you’ll also be able to see the other information about the disclaimer that has been provided by the advertiser. 

We know we can’t protect elections alone and that everyone plays a part in keeping the platform safe and respectful. We ask people to share responsibly and to let us know when they see something that may violate our Advertising Policies and Community Standards. 

We also have the Ad Library API – an application programming interface – so journalists and academics can analyze ads about social issues, elections, or politics. The API allows people to perform customized keyword searches of ads stored in the Ad Library. You can search data for all active and inactive issue, electoral or political ads.
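For example, here is a minimal Python sketch of a keyword search against the Ad Library API’s ads_archive Graph API endpoint. The access token is a placeholder (obtaining one requires a verified Facebook account), and the API version and exact field list should be checked against the current documentation.

```python
# Minimal sketch: querying the Facebook Ad Library API for UK political ads.
# Token is a placeholder; verify version, parameters, and fields against the docs.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder: requires identity verification
URL = "https://graph.facebook.com/v5.0/ads_archive"

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "['GB']",   # ads delivered in the UK
    "search_terms": "election",         # customized keyword search
    "ad_active_status": "ALL",          # both active and inactive ads
    "fields": "page_name,ad_creative_body,ad_delivery_start_time,spend,impressions",
    "limit": 25,
}

response = requests.get(URL, params=params)
response.raise_for_status()

for ad in response.json().get("data", []):
    # 'spend' and 'impressions' come back as banded ranges, not exact figures
    print(ad.get("page_name"), ad.get("spend"), ad.get("ad_delivery_start_time"))
```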

You can also access the Ad Library and the data therein through a specific page or through the Ad Library Report. Here is the Ad Library Report, which allows you to see the spend by specific advertisers, and you can download a full report of the data.

Here we also allow you to see spending by location, and if you click in you can see the top spenders by region. So you can see, for example, who the top spenders in the various regions are.
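Because the full report can be downloaded, journalists and researchers can also compute top spenders themselves. Here is a minimal sketch with pandas; the file name and column headers (“Page Name”, “Amount Spent (GBP)”) are assumptions to check against the actual report you download.

```python
# Minimal sketch: top-spending advertisers from a downloaded Ad Library Report.
# File name and column headers are assumptions; verify against the real download.
import pandas as pd

report = pd.read_csv("FacebookAdLibraryReport_GB.csv")

# Spend below the reporting threshold may appear as a string such as "<100";
# coerce to numbers and treat non-numeric entries as zero for this rough view.
report["Amount Spent (GBP)"] = pd.to_numeric(
    report["Amount Spent (GBP)"], errors="coerce"
).fillna(0)

top_spenders = (
    report.groupby("Page Name")["Amount Spent (GBP)"]
    .sum()
    .sort_values(ascending=False)
    .head(10)
)
print(top_spenders)
```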

Our goal is to provide an open API to news organizations, researchers, groups and people who can hold advertisers and us more accountable. 

We’ve definitely seen a lot of journalists and researchers examining the data in the Ad Library and using it to generate insights, and we think that’s exactly the kind of scrutiny that will help hold both us and advertisers more accountable.

We hope these measures will build on existing transparency we have in place and help reporters, researchers and most importantly people on Facebook learn more about the Pages and information they’re engaging with. 

Antonia Woodford, Product Manager, Misinformation, Facebook

We are committed to fighting the spread of misinformation and viral hoaxes on Facebook. It is a responsibility we take seriously.

To accomplish this, we follow a three-pronged approach, which we call remove, reduce, and inform. First and foremost, when something violates the law or our policies, we’ll remove it from the platform altogether.

As Nathaniel touched on, removing fake accounts is a priority; the vast majority are detected and removed within minutes of registration, before a person can report them. This is a key element in eliminating the potential spread of misinformation.

The reduce and inform parts of the equation cover how we reduce the spread of problematic content that doesn’t violate the law or our Community Standards while still ensuring freedom of expression on the platform. This is where the majority of our misinformation work is focused.

To reduce the spread of misinformation, we work with third party fact-checkers. 

Through a combination of reporting from people on our platform and machine learning, potentially false posts are sent to third party fact-checkers to review. These fact-checkers review this content, check the facts, and then rate its accuracy. They’re able to review links in news articles as well as photos, videos, or text posts on Facebook.

After content has been rated false, our algorithm heavily downranks this content in News Feed so it’s seen by fewer people and is far less likely to go viral. Fact-checkers can fact-check any posts they choose from the queue we send them.
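As a purely illustrative sketch of what “heavily downranking” can mean mechanically, consider applying a fixed demotion factor to a post’s ranking score once it has been rated false. The factor and the toy ranking model below are invented for illustration and are not Facebook’s actual News Feed algorithm.

```python
# Illustrative sketch: demoting fact-checked content in a toy feed ranker.
# The demotion factor and scoring model are invented for illustration only.
from dataclasses import dataclass
from typing import Optional

FALSE_RATING_DEMOTION = 0.2  # assumed: a rated-false post keeps ~20% of its score

@dataclass
class Post:
    post_id: str
    base_score: float                 # relevance score from the usual ranking signals
    fact_check_rating: Optional[str]  # e.g. "false", "partly_false", or None

def ranking_score(post: Post) -> float:
    """Apply a heavy demotion when fact-checkers have rated the post false."""
    if post.fact_check_rating in ("false", "partly_false"):
        return post.base_score * FALSE_RATING_DEMOTION
    return post.base_score

feed = [
    Post("a", 0.90, None),
    Post("b", 0.95, "false"),  # highly engaging, but rated false
    Post("c", 0.40, None),
]
# Rated-false post "b" drops from first place to last.
for post in sorted(feed, key=ranking_score, reverse=True):
    print(post.post_id, round(ranking_score(post), 2))
```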

And lastly, as part of our work to inform people about the content they see on Facebook, we just launched a new design to better warn people when they see content that has been rated false or partly false by our fact-checking partners.

People will now see a more prominent label on photos and videos that have been fact-checked as false or partly false. This is a grey screen that sits over a post and says ‘false information’ and points people to fact-checkers’ articles debunking the claims. 

These clearer labels are what people have told us they want, what they have told us they expect Facebook to do, and what experts tell us is the right tactic for combating misinformation.

We’re rolling this change out in the UK this week for any photos and videos that have been rated through our fact-checking partnership. Though just one part of our overall strategy, fact-checking is fundamental to combating misinformation, and I want to share a little bit more about the program.

Our fact-checking partners are all accredited by the International Fact-Checking Network, which requires them to abide by a code of principles such as nonpartisanship and transparency of sources.

We currently have over 50 partners in over 40 languages around the world. As Rebecca outlined earlier, we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

Here in the UK we work with Full Fact and FactCheckNI as part of our program. To recap, we identify content that may be false using signals such as feedback from our users. This content is all submitted into a queue for our fact-checking partners to access. These fact-checkers then choose which content to review, check the facts, and rate the accuracy of the content.

These fact-checkers are independent organizations, so it is at their discretion what they choose to investigate. They can also fact-check whatever content they want outside of the posts we send their way.

If a fact-checker rates a story as false, it will appear lower in News Feed with the false information screen I mentioned earlier. This significantly reduces the number of people who see it.

Other posts that Full Fact and FactCheckNI choose to fact-check outside of our system will not be impacted on Facebook. 

And finally, on Tuesday we announced a partnership with the International Fact-Checking Network to create the Fact-Checking Innovation Initiative. This will fund innovation projects, new formats, and technologies to help benefit the broader fact-checking ecosystem. 

We are investing $500,000 into this new initiative, where organizations can submit applications for projects to improve fact-checkers’ scale and efficiency, increase the reach of fact-checks to empower more people with reliable information, build new tools to help combat misinformation, and encourage newsrooms to collaborate in fact-checking efforts.

Anyone from the UK can be a part of this new initiative. 

Ella Fallows, Politics and Government Outreach Manager UK, Facebook 

Our team’s role involves two main tasks: working with parties, MPs and candidates to ensure they have a good experience and get the most from our platforms; and looking at how we can best use our platforms to promote participation in elections.

I’d like to start with how MPs and candidates use our platforms. 

There is, rightly, a focus in the UK on the current tone of political debate. Let me be clear: hate speech and threats of violence have no place on our platforms and we’re investing heavily to tackle them.

Additionally, for this campaign we have this week written to political parties and candidates setting out the range of safety measures we have in place and reminding them of the terms and conditions and the Community Standards that govern our platforms.

As you may be aware, every piece of content on Facebook and Instagram has a report button, and when content is reported to us which violates our Community Standards – what is and isn’t allowed on Facebook – it is removed. 

Since March of this year, MPs have also had access to a dedicated reporting channel to flag any abusive and threatening content directly to our teams. Now that the General Election is underway we’re extending that support to all prospective candidates, making our team available to anyone standing to allow them to quickly report any concerns across our platforms and have them investigated. 

This is particularly pertinent to Tuesday’s news from the Government calling for a one-stop shop for candidates. We have already set up our own one-stop shop, so that there is a single point of contact for candidates for issues across Facebook and Instagram.

But our team is not working alone; it’s backed up by our 35,000-strong global safety and security team that oversees content and behavior across the platform every day. 

And our technology is also helping us to automatically detect more of this harmful content. For example, the proportion of hate speech we have removed before it’s reported to us has increased significantly over the last two years, and we will be releasing new figures on this later this month.

We also have a Government, Politics & Advocacy Portal which is a home for everything a candidate will need during the campaign, including ‘how to’ guides on subjects such as political advertising, campaigning on Facebook and troubleshooting guides for technical issues.

We’re working with the political parties to ensure candidates are aware of both the reporting channel to reach my team and the Government, Politics & Advocacy Portal.

We’re holding a series of sessions for candidates on safety and outlining the help available to address harassment on our platforms. We’ve already held dedicated sessions for female candidates in partnership with women’s networks within the parties to provide extra guidance. We want to ensure we’re doing everything possible to help them connect with their constituents, free from harassment.

Finally, we’re working with the Government to distribute the safety guides we have put together to every candidate via returning officers in the General Election, to ensure we reach everyone, not just those attending our outreach sessions. Our safety guides include information on a range of tools we have developed.

We hope these steps help every candidate to reach their constituents, and get the most from our platforms. But our work doesn’t stop there.

The second area our team focuses on is promoting civic engagement. In addition to supporting and advising candidates, we also, of course, want to help promote voter participation in the election. 

For the past five years, we’ve used badges and reminders at the top of people’s News Feeds to encourage people to vote in elections around the world. The same will be true for this campaign. 

We’ll run reminders to register to vote, with a link to the gov.uk voter registration page, in the week running up to the voter registration deadline. 

On election day itself, we’ll also run a reminder to vote with a link to the Electoral Commission website so voters can find their polling station and any information they need. This will include a button to share that you voted. 

We hope that this combination of steps will help to ensure both candidates and voters engaging with the General Election on our platforms have the best possible experience.