
Facebook, Elections and Political Speech

By Nick Clegg, VP of Global Affairs and Communications

Speaking at the Atlantic Festival in Washington DC today, I set out the measures that Facebook is taking to prevent outside interference in elections and Facebook’s attitude towards political speech on the platform. This is grounded in Facebook’s fundamental belief in free expression and respect for the democratic process, as well as the fact that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is.  

You can read the full text of my speech below, but as I know there are often lots of questions about our policies and the way we enforce them, I thought I’d share the key details.

Fact-Checking Political Speech

We rely on third-party fact-checkers to help reduce the spread of false news and other types of viral misinformation, like memes or manipulated photos and videos. We don’t believe, however, that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny. That’s why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines. This means that we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements. You can find more about the third-party fact-checking program and content eligibility here.

Newsworthiness Exemption

Facebook has had a newsworthiness exemption since 2016. This means that if someone makes a statement or shares a post which breaks our community standards, we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm. Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard. However, in keeping with the principle that we apply different standards to content for which we receive payment, this will not apply to ads – if someone chooses to post an ad on Facebook, it must still fall within our Community Standards and our advertising policies.

When we make a determination as to newsworthiness, we evaluate the public interest value of the piece of speech against the risk of harm. When balancing these interests, we take a number of factors into consideration, including country-specific circumstances, like whether there is an election underway or the country is at war; the nature of the speech, including whether it relates to governance or politics; and the political structure of the country, including whether the country has a free press. In evaluating the risk of harm, we will consider the severity of the harm. Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value. Each of these evaluations will be holistic and comprehensive in nature, and will account for international human rights standards. 

Read the full speech below.

Facebook

For those of you who don’t know me, which I suspect is most of you, I used to be a politician – I spent two decades in European politics, including as Deputy Prime Minister in the UK for five years.

And perhaps because I acquired a taste for controversy in my time in politics, a year ago I came to work for Facebook.

I don’t have long with you, so I just want to touch on three things: I want to say a little about Facebook; about how we are getting ourselves ready for the 2020 election; and about our basic attitude towards political speech.

So…Facebook. 

As a European, I’m struck by the tone of the debate in the US around Facebook. Here you have this global success story, invented in America, based on American values, that is used by a third of the world’s population.

A company that has created 40,000 US jobs in the last two years, is set to create 40,000 more in the coming years, and contributes tens of billions of dollars to the economy. And with plans to spend more than $250 billion in the US in the next four years.

And while Facebook is subject to a lot of criticism in Europe, in India, where I was earlier this month, and in many other places, the only place where it is being proposed that Facebook and other big Silicon Valley companies should be dismembered is here.

And whilst it might surprise you to hear me say this, I understand the underlying motive which leads people to call for that remedy – even if I don’t agree with the remedy itself.

Because what people want is that there should be proper competition, diversity, and accountability in how big tech companies operate – with success comes responsibility, and with power comes accountability.

But chopping up successful American businesses is not the best way to instill responsibility and accountability. For a start, Facebook and other US tech companies not only face fierce competition from each other for every service they provide – for photo and video sharing and messaging there are rival apps with millions or billions of users – but they also face increasingly fierce competition from their Chinese rivals. Giants like Alibaba, TikTok and WeChat.

More importantly, pulling apart globally successful American businesses won’t actually do anything to solve the big issues we are all grappling with – privacy, the use of data, harmful content and the integrity of our elections. 

Those things can and will only be addressed by creating new rules for the internet, new regulations to make sure companies like Facebook are accountable for the role they play and the decisions they take.

That is why we argue in favor of better regulation of big tech, not the break-up of successful American companies. 

Elections

Now, elections. It is no secret that Facebook made mistakes in 2016, and that Russia tried to use Facebook to interfere with the election by spreading division and misinformation. But we’ve learned the lessons of 2016. Facebook has spent the three years since building its defenses to stop that happening again.

  • Cracking down on fake accounts – the main source of fake news and malicious content – preventing millions from being created every day;
  • Bringing in independent fact-checkers to verify content;
  • Recruiting an army of people – now 30,000 – and investing hugely in artificial intelligence systems to take down harmful content.

And we are seeing results. Last year, a Stanford report found that interactions with fake news on Facebook were down by two-thirds since 2016.

I know there’s also a lot of concern about so-called deepfake videos. We’ve recently launched an initiative called the Deepfake Detection Challenge, working with the Partnership on AI, companies like Microsoft and universities like MIT, Berkeley and Oxford, to find ways to detect this new form of manipulated content so that we can identify these videos and take action.

But even when the videos aren’t as sophisticated – such as the now infamous Speaker Pelosi video – we know that we need to do more.

As Mark Zuckerberg has acknowledged publicly, we didn’t get to that video quickly enough and too many people saw it before we took action. We must and we will get better at identifying lightly manipulated content before it goes viral and provide users with much more forceful information when they do see it.

We will be making further announcements in this area in the near future.

Crucially, we have also tightened our rules on political ads. Political advertising on Facebook is now far more transparent than anywhere else – including TV, radio and print advertising.

People who want to run these ads now need to submit ID and information about their organization. We label the ads and let you know who’s paid for them. And we put these ads in a library for seven years so that anyone can see them.

Political speech

Of course, stopping election interference is only part of the story when it comes to Facebook’s role in elections. Which brings me to political speech.

Freedom of expression is an absolute founding principle for Facebook. Since day one, giving people a voice to express themselves has been at the heart of everything we do. We are champions of free speech and defend it in the face of attempts to restrict it. Censoring or stifling political discourse would be at odds with what we are about.

In a mature democracy with a free press, political speech is a crucial part of how democracy functions. And it is arguably the most scrutinized form of speech that exists.

In newspapers, on network and cable TV, and on social media, journalists, pundits, satirists, talk show hosts and cartoonists – not to mention rival campaigns – analyze, ridicule, rebut and amplify the statements made by politicians.

At Facebook, our role is to make sure there is a level playing field, not to be a political participant ourselves.

To use tennis as an analogy, our job is to make sure the court is ready – the surface is flat, the lines painted, the net at the correct height. But we don’t pick up a racket and start playing. How the players play the game is up to them, not us.

We have a responsibility to protect the platform from outside interference, and to make sure that when people pay us for political ads we make it as transparent as possible. But it is not our role to intervene when politicians speak.

That’s why I want to be really clear today – we do not submit speech by politicians to our independent fact-checkers, and we generally allow it on the platform even when it would otherwise breach our normal content rules.

Of course, there are exceptions. Broadly speaking, they are two-fold: where speech endangers people; and where we take money, which is why we have more stringent rules on advertising than we do for ordinary speech and rhetoric.

I was an elected politician for many years. I’ve had both words and objects thrown at me, and I’ve been on the receiving end of all manner of accusations and insults.

It’s not new that politicians say nasty things about each other – that wasn’t invented by Facebook. What is new is that now they can reach people with far greater speed and at a far greater scale. That’s why we draw the line at any speech which can lead to real-world violence and harm.

I know some people will say we should go further. That we are wrong to allow politicians to use our platform to say nasty things or make false claims. But imagine the reverse.

Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be. In open democracies, voters rightly believe that, as a general rule, they should be able to judge for themselves what politicians say.

Conclusion

So, in conclusion, I understand the debate about big tech companies and how to tackle the real concerns that exist about data, privacy, content and election integrity. But I firmly believe that simply breaking them up will not make the problems go away. The real solutions will only come through new, smart regulation instead.

And I hope I have given you some reassurance about our approach to preventing election interference, and some clarity over how we will treat political speech in the run-up to 2020 and beyond.

Thank you.


