Facebook Does Not Benefit from Hate

By Nick Clegg, VP of Global Affairs and Communications

This piece originally ran in AdAge.

When society is divided and tensions run high, those divisions play out on social media. Platforms like Facebook hold up a mirror to society — with more than 3 billion people using Facebook’s apps every month, everything that is good, bad and ugly in our societies will find expression on our platform. That puts a big responsibility on Facebook and other social media companies to decide where to draw the line over what content is acceptable.

Facebook has come in for much criticism in recent weeks following its decision to allow controversial posts by President Trump to stay up, and misgivings on the part of many people, including companies that advertise on our platform, about our approach to tackling hate speech. I want to be unambiguous: Facebook does not profit from hate. Billions of people use Facebook and Instagram because they have good experiences — they don’t want to see hateful content, our advertisers don’t want to see it, and we don’t want to see it. There is no incentive for us to do anything but remove it.

More than 100 billion messages are sent on our services every day. That’s all of us, talking to each other, sharing our lives, our opinions, our hopes and our experiences. Of those billions of interactions, only a tiny fraction are hateful. When we find hateful posts on Facebook and Instagram, we take a zero tolerance approach and remove them. When content falls short of being classified as hate speech — or of violating our other policies aimed at preventing harm or voter suppression — we err on the side of free expression because, ultimately, the best way to counter hurtful, divisive, offensive speech is more speech. Exposing it to sunlight is better than hiding it in the shadows.

Unfortunately, zero tolerance doesn’t mean zero incidents. With so much content posted every day, rooting out the hate is like looking for a needle in a haystack. We invest billions of dollars each year in people and technology to keep our platform safe. We have tripled — to more than 35,000 — the number of people working on safety and security. We’re a pioneer in artificial intelligence technology to remove hateful content at scale.

And we’re making real progress. A recent European Commission report found that Facebook assessed 95.7% of hate speech reports in less than 24 hours, faster than YouTube and Twitter. Last month, we reported that we find nearly 90% of the hate speech we remove before someone reports it — up from 24% a little over two years ago. We took action against 9.6 million pieces of content in the first quarter of 2020 — up from 5.7 million in the previous quarter. And 99% of the ISIS and Al Qaeda content we remove is taken down before anyone reports it to us.

We are getting better — but we’re not complacent. That’s why we recently announced new policies and products to make sure everyone can stay safe, stay informed, and ultimately use their voice where it matters most — voting. We understand that many of our critics are angry about the inflammatory rhetoric President Trump has posted on our platform and others, and want us to be more aggressive in removing his speech. As a former politician myself, I know that the only way to hold the powerful to account is ultimately through the ballot box. That is why we want to use our platform to empower voters to make the ultimate decision themselves, on election day. This Friday every Facebook user of voting age in the US will be given information, prominently displayed on the top of their News Feed, on how to register to vote. This will be one step in the largest voter information campaign in US history, with a goal of registering 4 million voters. We have also been updating our policies to crack down on voter suppression. Many of these changes are a direct result of feedback from the civil rights community — we’ll keep working with them and other experts as we adjust our policies to address new risks as they emerge.

Of course, focusing on hate speech and other types of harmful content on social media is necessary and understandable, but it is worth remembering that the vast majority of those billions of conversations are positive.

Look at what happened when the coronavirus pandemic took hold. Billions of people used Facebook to stay connected when they were physically apart. Grandparents and grandchildren, brothers and sisters, friends and neighbors. And more than that, people came together to help each other. Thousands upon thousands of local groups formed, and millions of people joined them, to organize help for the most vulnerable in their communities. Others formed to celebrate and support healthcare workers. And when businesses had to close their doors to the public, for many Facebook was their lifeline. More than 160 million businesses use Facebook’s free tools to reach customers, and many used these tools to keep their businesses afloat when their doors were closed to the public — saving people’s jobs and livelihoods.

Importantly, Facebook helped people to get accurate, authoritative health information. We directed more than 2 billion people on Facebook and Instagram to information from the World Health Organization and other public health authorities, with more than 350 million people clicking through.  

And it is worth remembering that when the darkest things are happening in our society, social media gives people a means to shine a light. To show the world what is happening, to organize against hate and come together, and for millions of people around the world to show their solidarity. We’ve seen that all over the world on countless occasions — and we are seeing it right now with the Black Lives Matter movement.

We may never be able to prevent hate from appearing on Facebook entirely, but we are getting better at stopping it all the time. 