
Helping People Have Safe Experiences Is Best for Our Business

By Monika Bickert, Vice President of Content Policy at Meta

During an event at the European Parliament this week, MEPs will hear about leaked internal documents and criticism that Meta puts profit over the safety of our users. At the heart of this claim is a false premise. Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or well-being misunderstands where our own commercial interests lie.

I spent more than a decade as a criminal prosecutor in the US before joining Meta, and for the past nine years I’ve helped our company develop and enforce our content standards. These policies seek to protect people from harm while also protecting freedom of expression. Our team includes former prosecutors, law enforcement officers, human rights lawyers, counter-terrorism specialists, teachers and child safety advocates, and we work with hundreds of independent experts around the world to help us get the balance right. While people often disagree about exactly where to draw the line, regulation like the EU’s Digital Services Act can establish standards all companies should meet. 

Companies should also be judged on how their rules are enforced. Three years ago we began publishing figures on our removal of violating content, including how much of it people actually see and how much we take down. We publish these reports every quarter, and we're subjecting them to independent audit.

Contrary to recent claims about our company, we've always had a commercial incentive to remove harmful content from our platforms. People don't want to see it when they use our apps, and advertisers don't want their ads next to it. That's why we have clear rules about what isn't allowed on our platforms, are on track to spend more than $5 billion this year alone on safety and security (more than any other tech company, even adjusted for scale), and have over 40,000 people working on one job: keeping people safe on our apps. As a result, we've almost halved the amount of hate speech people see on Facebook over the last three quarters. Hate speech now represents just 0.05% of content views, or around 5 views in every 10,000. We've also gotten better at detecting it: of the hate speech we removed, we found 97% before anyone reported it to us, up from just 23% a few years ago. While we have further to go, the enforcement reports show that we are making progress.
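For readers who want to check the arithmetic behind the prevalence figure, here is a minimal sketch, illustrative only and not Meta's actual measurement pipeline; the function name prevalence_per_10k is purely hypothetical.

```python
# Minimal illustration of the prevalence arithmetic cited above (not Meta's
# actual measurement pipeline): prevalence is the share of total content
# views that contain violating content, here hate speech.

def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Return views of violating content per 10,000 total content views."""
    return violating_views / total_views * 10_000

# A prevalence of 0.05% works out to roughly 5 violating views per 10,000.
assert prevalence_per_10k(violating_views=5, total_views=10_000) == 5.0
print(f"0.05% of views = {0.0005 * 10_000:.0f} per 10,000")
```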

Central to many of the charges by our critics is the idea that our algorithmic systems actively encourage the sharing of sensational content and are designed to keep people scrolling endlessly. Of course, on a platform built around people sharing things they are interested in or moved by, content that provokes strong emotions is invariably going to be shared. But the argument that we deliberately push content that makes people angry for profit is deeply illogical. Helping people have positive experiences on our apps by seeing content that is more relevant and valuable to them is in the best interests of our business over the long term. 

Our systems are not designed to reward provocative content. In fact, key parts of those systems are designed to do just the opposite. We reduce the distribution of many types of content, meaning it appears lower in your News Feed, because it is sensational, misleading, gratuitously solicits engagement, or has been rated false by our independent fact-checking partners. We agree that a better understanding of the relationship between people and algorithms is in everyone's interest, and that anyone who uses our platforms should have more control over the content they see. That's why, years ago, we rolled out a chronological News Feed, which turns off algorithmic ranking for anyone who prefers it, as well as tools such as Favorites and Why Am I Seeing This.
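To make the distinction concrete, the sketch below is a purely illustrative example, not Meta's ranking code: it shows the general pattern of demoting flagged content so it appears lower in a ranked feed, alongside a chronological ordering that bypasses ranking entirely. The Post fields and the demotion_factor parameter are assumptions made for illustration.

```python
# Illustrative sketch only -- not Meta's actual ranking system. It shows the
# general idea described above: ranked feeds can demote content flagged as
# sensational or false, and a chronological option skips ranking entirely.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    relevance_score: float   # hypothetical relevance/quality estimate
    timestamp: float          # seconds since epoch
    demoted: bool = False     # e.g. flagged by fact-checkers or as clickbait

def ranked_feed(posts: list[Post], demotion_factor: float = 0.5) -> list[Post]:
    """Order posts by score, halving the score of demoted content so it
    appears lower in the feed rather than being removed outright."""
    def effective_score(p: Post) -> float:
        return p.relevance_score * (demotion_factor if p.demoted else 1.0)
    return sorted(posts, key=effective_score, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Newest-first ordering with no algorithmic ranking at all."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)
```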

The rise of polarization, another issue for which social media companies are being blamed, has been the subject of serious academic research in recent years, but without much consensus. What evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization. Recent election results in Germany and the Netherlands point in the same direction: Facebook is widely used in both countries, yet democratic center parties gained in popularity while support for more divisive parties stagnated or declined.

We know there is more to do and we’ll keep making improvements, but regulation, especially the Digital Services Act, has to be part of the solution. European policymakers are leading the way in helping to embed European values like free expression, privacy, transparency and the rights of individuals into the day-to-day workings of the internet. The DSA can be the foundation for making the internet safer while keeping the vast social and economic benefits it brings.
