This post is part of our Hard Questions series, which addresses the impact of our products on society.
The most frequently asked questions we receive are about Facebook’s efforts to stop the spread of misinformation, prevent election interference, and protect people’s privacy.
Although we didn’t do enough to anticipate some of these risks, we’ve now made fundamental changes. This past year we’ve invested record amounts in keeping people safe and strengthening our defenses against abuse. We’ve also provided people with far more control over their information and more transparency into our policies, operations and ads.
As we begin a new year, we want to share the steps we’ve already taken in five key areas. We still face legitimate scrutiny, but we’re not the same company we were even a year ago, and we’re determined to do more to keep people safe across our services.
Battling Election Interference and Misinformation
In 2016, foreign actors interfered with elections by spreading false information and sowing division between people with different views. Now that we better understand this threat, we’ve built defenses against it — and we’re working to stay a step ahead.
Our tactics include blocking and removing fake accounts; finding and removing bad actors; limiting the spread of false news and misinformation; and bringing unprecedented transparency to political advertising. We’ve also improved our machine learning capabilities, which allow us to be more effective in finding and removing violating behavior. These technology advances help us better identify and block bad activity, while our expert investigators manually detect more sophisticated networks.
As a result, we’re making real progress. We remove millions of fake accounts a day, stopping them from ever engaging in the type of coordinated information campaigns often used to interfere with elections. And we’ve improved rapid response efforts across our teams. Last year, we removed thousands of Pages, groups and accounts involved in coordinated inauthentic behavior. With each election, we get better.
Meanwhile, we’re working far more closely with governments, outside experts and other companies. Just 48 hours before the US midterm elections, for example, we received a tip from the FBI that allowed us to quickly take down a coordinated effort by foreign entities. And we recently announced a new partnership with the German Federal Office for Information Security, which we worked closely with during Germany’s 2017 elections. Together we’re creating a new Integrity and Security Initiative to help guide policymaking in Germany and across the EU on election interference.
While misinformation is a common tool of election interference, bad actors also spread false news for other reasons, especially to make money by tricking people into clicking on something. Whatever the motivation, we’re cracking down. When something is rated “false” by a fact-checker, we’re able to reduce future impressions of that content by an average of 80%. Independent studies also show significant improvement: Le Monde found that engagement with “unreliable or dubious sites” has halved in France since 2015, and two US-based academic institutions independently found that engagement with false news content has declined by half since 2016.
There’s more to do. Security is an arms race. But we’re committed to the fight.
Strengthening Privacy and Protecting People’s Information
When people add information to their profile, upload a photo, or use Facebook to log into another app, they’re entrusting us with their information. We have a deep responsibility to protect it, and we know we didn’t do a good enough job securing our platform in the past. In 2014 we significantly reduced the amount of information that apps on Facebook can access, and we have since investigated thousands of apps that had access to large amounts of information before that policy change. As a result, we suspended more than 400 apps that violated our policies. To better safeguard people’s information, we grew our security team and established a dedicated Privacy and Data Use team focused on building stronger protections and giving people more control over their data. We also published our privacy principles so people know how we approach privacy and can hold us accountable.
Strengthening privacy also means giving people greater control over what they share. Last year we made our privacy settings easier to find and rolled out GDPR-style controls around the world. As part of this, we asked people to review what personal information they share and what data they want us to use for ad targeting. We also built better tools for people to access and download their data. This year we’re continuing to build more controls, including Clear History, which will let people see the information we receive from the apps and websites they use, and then decide whether to clear it from their account. We’ll keep working to explain how data is used on Facebook and to give people more control over, and transparency into, the information they share.
Prioritizing Safety and Well-Being
It’s on us to make sure people feel safe by keeping harmful, hateful and exploitative material off our services. Over the past two years, we’ve invested heavily in technology and people to more effectively remove this bad content. We now have over 30,000 people working on safety and security — about half of whom are content reviewers working out of 20 offices around the world. Thanks to their work, along with our artificial intelligence and machine learning tools, we’ve made big strides in finding and removing content that goes against our Community Standards.
We’re now detecting 99% of terrorist-related content before it’s reported, 97% of violent and graphic content, and 96% of nudity. We regularly share our latest enforcement numbers and more in our Enforcement Report.
We’re also making it easier for people to report content that they think breaks our rules. For instance, we’ve expanded our reporting tools to allow people to report when someone else is being bullied or harassed. This is an important change because victims of bullying aren’t always privy to what’s being said or shared about them. We’ve also introduced a way for people to appeal decisions on bullying and harassment cases, so you can now ask for a second review if you think we made a mistake by not taking down reported content.
Beyond enforcing our standards, we’ve updated our policies to better protect the people and communities who use our services. Last year, for instance, we updated our policy against credible violence to include misinformation that has the potential to contribute to imminent violence or physical harm. This is a particularly important change in countries like Myanmar, where people have used our platform to share fake news meant to foment division and incite violence. Myanmar, along with other countries where some people seek to weaponize our platform, remains a huge area of concern for us. But we’re now taking a more proactive role in addressing our responsibility. We’ve hired more language-specific content reviewers, banned individuals and organizations that have broken our rules, and built new technology that makes it easier for people to report violating content and for us to catch it proactively.
Making sure we’re having a positive impact also means supporting people’s well-being. We want people to have the power to decide when and how they engage with our services. To do this, we worked with leading mental health experts, organizations and academics to launch dashboards on both Facebook and Instagram that track the amount of time spent in each app. The aim is to give people more insight into how they spend their time. We’ve also built tools that help people control what they see on our services, so they can see more of the posts, photos and videos they want and avoid those they don’t.
We’ve improved News Feed quality to show people the most relevant posts with features like See First, Hide, Unfollow, and Keyword Snooze. And on Instagram, we launched powerful tools to proactively care for the community — like the “You’re All Caught Up” message in Feed, keyword filtering, sensitivity screens, and offensive comment and bullying filters.
We still have a lot of work to do when it comes to enforcing our Community Standards and keeping our community safe, but we’ve taken several steps in the right direction.
Giving People More Information About the Ads They See
In the past year we’ve committed to bringing greater transparency to the ads people see on Facebook, particularly ads related to politics. All political ads on Facebook and Instagram in the US must now be labeled, including a “paid for by” disclosure from the advertiser. We also launched a searchable archive that houses these political ads for up to seven years. We’ve since expanded this feature to Brazil and the UK, and will soon bring it to India.
Beyond political and issue ads, people can now see every ad a Page is running, even if they weren’t targeted by those ads. People can also filter ads by country and report any ad to us.
The vast majority of ads are run by legitimate businesses and organizations. But bad actors can misuse ads too. By shining a light on all ads and the Pages that run them, we’ll make it easier to root out abuse.
Finally, we introduced new policies requiring advertisers to specify where their audience’s information came from when they bring a customer list to us.
Seeking Effective Regulation
Several members of Congress argue that we need more regulation of the internet. We agree. As Mark Zuckerberg testified last year, we welcome smart legislation and will work with lawmakers to achieve it.
We don’t want an internet that is out of control. That’s bad for us and bad for everyone. Effective legislation, we think, must strike a balance between competing interests. How do you reduce hate speech while protecting free expression? How do you protect privacy while promoting innovation? How do legislators maintain control without diminishing the global competitiveness of the tech sector?
As lawmakers around the world debate the right path, we’re moving forward on several fronts. As we mentioned above, we’ve introduced consistent privacy choices around the world. We’re working with governments to improve the safety of our platform, including a recent initiative with French regulators to reduce hate speech. And we’re establishing an independent body to which people can appeal Facebook’s decisions about potentially offensive content. We will adhere to its decisions unless they would somehow violate the law.
—
Taken together, these measures are just one portion of what we’re doing. You can expect more this year.