We want to take this opportunity to respond to some of the questions prompted by the PBS Frontline documentary that aired Monday and Tuesday night. While it raised a lot of valid concerns, it also presented a very narrow view of Facebook and failed to consider the positive impact of our services around the world.
Looking back, we agree we were too slow to respond to people abusing Facebook, and we made some serious mistakes. We’re fortunate to play an important role in people’s lives; that’s a privilege and a responsibility we take very seriously. That’s why we’re taking strong action to keep people safe, fight fake news, protect the integrity of elections, and protect people’s information and privacy.
We know we have a long way to go to earn people’s trust back. That includes being more transparent and sharing our story directly with you.
How does advertising support your business? And do you sell people’s information to advertisers?
We strongly believe that having relevant advertising to fund a free product that anyone can use is a good thing. The film completely mischaracterizes the purpose, practice, and development of our ads business.
- We strive to make ads relevant for people, and we’ve built industry-leading tools to give people transparency and control over this process.
- The real shift we made in 2012 was about adapting to what consumers were doing, not what advertisers wanted. It was about people switching from desktop computers to mobile phones.
- Of course, we’ve improved our advertising tools over time to make ads better for people and marketers, but we’ve always done this in a way that protects people’s privacy.
Targeted ads mean better ads. One of the top things people tell us about our ads is that they want them to be relevant — just like they want to see the most relevant posts in their feed. Our goal is for ads to be as good as the best posts from your friends. This requires ads to be well targeted — and we find that when we give people transparency about why they’re seeing those ads and ways to control which ads they see, we’re able to give them a much better experience. Targeted advertising also lets businesses, especially smaller ones that can’t afford other forms of advertising like TV, find new customers and grow.
We don’t sell people’s information. If people don’t think Facebook is a worthwhile use of their time, or that we don’t protect their privacy, they won’t use our service — and neither will advertisers. Our business model is, and always has been, about connecting people and businesses with relevant marketing messages while protecting people’s privacy. For instance, advertisers tell us the types of audiences they want to reach, either based on customer information they already have or general categories like “women 18-25 who are interested in soccer.” We then try to deliver ads to those people, based on people’s actions — for instance, because they’ve liked Pages about soccer — without needing to tell advertisers who those individual people are. Advertisers learn things like how many people saw their ads or what percentage of those people were women. This is how the vast majority of online ad platforms work.
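To make that flow concrete, here is a simplified, hypothetical sketch of delivering ads against advertiser-specified criteria while reporting only aggregates back. Every type, name, and value below is illustrative; this is not Facebook code or a Facebook API.

```typescript
// Hypothetical sketch of the targeting flow described above: the advertiser
// supplies audience criteria, matching happens inside the platform against
// data the platform already holds, and the advertiser only ever receives
// aggregate statistics. All names here are illustrative.

interface User {
  id: string;
  ageRange: "13-17" | "18-25" | "26-40" | "41+";
  gender: "female" | "male" | "unspecified";
  likedPageTopics: Set<string>; // e.g. topics of Pages the user has liked
}

interface AudienceSpec {
  ageRange?: string;
  gender?: string;
  interest?: string; // e.g. "soccer"
}

interface CampaignReport {
  impressions: number;   // how many people saw the ad
  percentFemale: number; // aggregate breakdown only; no identities
}

// The platform matches users internally; user IDs never leave this function.
function deliverAds(users: User[], spec: AudienceSpec): CampaignReport {
  const matched = users.filter(
    (u) =>
      (!spec.ageRange || u.ageRange === spec.ageRange) &&
      (!spec.gender || u.gender === spec.gender) &&
      (!spec.interest || u.likedPageTopics.has(spec.interest))
  );
  const female = matched.filter((u) => u.gender === "female").length;
  return {
    impressions: matched.length,
    percentFemale: matched.length ? (100 * female) / matched.length : 0,
  };
}

const exampleUsers: User[] = [
  { id: "u1", ageRange: "18-25", gender: "female", likedPageTopics: new Set(["soccer"]) },
  { id: "u2", ageRange: "26-40", gender: "male", likedPageTopics: new Set(["cooking"]) },
];

// "Women 18-25 who are interested in soccer": the advertiser sees only
// counts and percentages, never the people themselves.
console.log(deliverAds(exampleUsers, { ageRange: "18-25", gender: "female", interest: "soccer" }));
```

The key property is that matching happens entirely inside the platform: advertisers supply criteria and receive counts and percentages, never the identities of the people matched.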
Is Facebook hoarding and misusing my data?
Beyond how we do and don’t use information for advertising, we’d like to clear up how we use information you share on and off our platform, and with apps you use. It’s outlined in this post, but we’ve also included some of that explanation below. Bottom line: we don’t sell people’s information.
Many websites and apps use Facebook services to make their content and ads more engaging and relevant. These services include:
- Social plugins, such as our Like and Share buttons, which make other sites more social and help you share content on Facebook;
- Facebook Login, which lets you use your Facebook account to log into another website or app;
- Facebook Analytics, which helps websites and apps better understand how people use their services; and
- Facebook ads and measurement tools, which enable websites and apps to show ads from Facebook advertisers, to run their own ads on Facebook or elsewhere, and to understand the effectiveness of their ads.
Apps and websites that use our services, such as the Like button or Facebook Analytics, send us information to make their content and ads better. Facebook uses the information we get from other websites and apps to provide our services to these sites or apps; improve safety and security on Facebook; and enhance our own products and services.
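As a rough, hypothetical illustration of that data flow, an embedded tool on a third-party site might report events back to a platform as sketched below; the endpoint, payload shape, and field names are assumptions, not Facebook’s actual SDK.

```typescript
// Illustrative sketch (not Facebook's SDK) of how a third-party site's
// embedded tool might report an event back to a platform, as described
// above. Endpoint URL, payload shape, and field names are assumptions.

interface PluginEvent {
  sourceSite: string; // the website embedding the tool
  eventType: "page_view" | "like_click" | "login";
  timestamp: number;
}

async function reportEvent(event: PluginEvent): Promise<void> {
  // The embedding site discloses this data sharing to its visitors,
  // as the policy described above requires.
  await fetch("https://platform.example.com/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

// e.g. a Like button embed might call:
// reportEvent({ sourceSite: "news.example.org", eventType: "like_click", timestamp: Date.now() });
```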
We require websites and apps that use our tools to tell you they’re collecting and sharing your information with us, and to comply with privacy laws. We also give you a number of industry-leading controls over the way this data is used to provide more relevant content and ads:
- News Feed preferences lets you choose which content you see first and hide content you don’t want to see in your feed.
- Ad preferences shows you the advertisers whose ads you might be seeing because you visited their sites or apps. You can remove any of these advertisers to stop seeing their ads.
- In addition, you can control whether we use information we have received from other websites and apps, so that you never see ads on Facebook based on that information.
- Finally, if you don’t want us to use your Facebook interests to show you ads on other websites and apps, there’s a control for that, too.
- This is all different from, for instance, direct mail, which uses people’s information to deliver catalogues without giving people any way to control the experience.
How has Facebook changed its platform?
We’ve also taken action to protect the information people share with third-party apps built on our platform by enforcing our platform policies, including suspending and banning apps that do not adhere to them. We announced a number of changes to the platform in 2014 to restrict developer access to friends’ information, and we’ve taken additional steps since to dramatically reduce the information apps can access. Everyone on Facebook can visit their app settings to see which apps they have shared information with, and easily remove an app’s access if needed. This is an ongoing effort to proactively protect people, and we are making major investments in people and technology to help ensure that people can safely share their information with apps and websites.
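For illustration only, a hypothetical sketch of the kind of access restriction described above might look like the following; the types and checks are ours, not Facebook’s platform code.

```typescript
// Hypothetical sketch of a post-2014-style platform restriction: an app can
// request only fields the user explicitly granted, and requests for friends'
// data are refused outright. Types and names are illustrative, not a real API.

type Field = "name" | "email" | "likes";

interface AppGrant {
  appId: string;
  userId: string;
  grantedFields: Set<Field>; // what this user agreed to share with this app
}

interface DataRequest {
  appId: string;
  userId: string;           // the user the app is asking about
  requestingUserId: string; // the user who authorized the app
  fields: Field[];
}

function authorize(request: DataRequest, grant: AppGrant): Field[] {
  // Friends' information is off limits: apps may only read data about
  // the user who installed them.
  if (request.userId !== request.requestingUserId) {
    return [];
  }
  // Even for the user's own data, only explicitly granted fields pass.
  return request.fields.filter((f) => grant.grantedFields.has(f));
}

// Removing an app's access (as in app settings) simply deletes its grant,
// after which authorize() returns nothing for that app.
```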
Were you blind to reports of misuse on your platform in Ukraine? Why didn’t you do more to address concerns?
Facebook is a platform that has given voice to people around the world — activists and advocates, marginalized communities, political figures, students and teachers. That is at the core of what we do and why we exist.
But we know we were too idealistic about the nature of these connections and didn’t focus enough on preventing abuse or thinking through all the ways people could use the tools on the platform to do harm. We don’t want Facebook to be used to spread misinformation.
We should have acted faster and in a more comprehensive way. We recognize the responsibility we have, and we commit to taking aggressive and proactive steps to get this right moving forward.
We have, for example, grown our safety and security teams, including content review teams, to 20,000 people as part of our larger investment. We are taking more aggressive action on networks of accounts that are set up to mislead others about who they are or what they’re doing. And improvements in artificial intelligence have allowed us to detect and remove fake accounts, often at the point of creation.
The challenges we face around the world will continue to evolve. We, in turn, will remain steadfast in our commitment to combat them and take all steps necessary to protect our community and mitigate abuse of Facebook.
On the specific claims in the film that officials in Ukraine warned us of Russian interference in 2015:
- We regularly engage with the Ukrainian government on a range of topics — among them, content moderation, terrorism, online safety, fake news and election interference.
- Facebook officials traveled to Kiev twice to meet with representatives of the Ukrainian government and understand their concerns. The conversations they had in 2014 and 2015 were about our handling of reports filed by Russian users seeking to have Ukrainian content taken down from our platform, not about fake news or attempts to spread misinformation.
- After looking into this thoroughly, we explained — both privately and publicly — that we had applied our Community Standards equally to both Ukrainian and Russian posts that violated our hate speech policies.
- The reporting behavior we observed at the time bore no resemblance to the information operations we observed in connection with the 2016 US election. The Ukrainian government did later raise the issue of fake news with us, but only in 2017, after the US election.
Update on October 30, 2018 at 10:30PM PT:
Did you prioritize growth over safety and security by expanding into countries like Myanmar and the Philippines?
The safety of the people who use Facebook has always been our priority. The content policies we write, for example, are rooted first and foremost in the principle of safety.
As Monika Bickert, our Head of Global Policy Management, states in the film, we met with civil society organizations well before 2015. This is confirmed by Maria Ressa, CEO of Rappler, and entrepreneur David Madden, both of whom met with Facebook employees and executives prior to 2015. When people like Ressa and Madden flagged issues to us in past years, our policy and operations teams reviewed the specific pieces of content and removed anything that violated our Community Standards.
What we hadn’t done until recently was proactively investigate coordinated abuse and networks of bad actors and bad content on our platform. We’ve invested significantly, including by hiring experts like Nathaniel Gleicher, who leads our cybersecurity policy team and previously worked at the National Security Council. We also established a dedicated team across product, engineering and policy to work on issues specific to Myanmar and the Philippines.
As a result, in the last year, we have rolled out better reporting tools, adopted a new policy to tackle misinformation that has the potential to contribute to offline harm, reduced our response times on reported content, and improved proactive detection of hate speech. We know there is more we need to do, and we will continue to invest in places like Myanmar and the Philippines.
Is Facebook prepared for the midterm elections in the US?
Free and fair elections are the heart of every democracy. During the 2016 election, we were actively looking for traditional cyberattacks, and we found them. What we didn’t find until later were foreign actors running coordinated campaigns to interfere with America’s democratic process. Since then, we’ve focused on improving our defenses and making it much harder for anyone to interfere in elections.
We have done extensive work over the past two years, including working around the clock to protect other global elections on Facebook by monitoring for threats and proactively addressing issues as they arise. Our overall goal has been to take a comprehensive approach to our elections integrity efforts — to make it as difficult as possible to interfere with elections on our platform. Here are some of the measures we’re taking:
- Staffing: We’ve more than doubled the number of people who work on safety and security, from 10,000 to over 20,000.
- Fake accounts: We’re cracking down on fake accounts. Every day we block or disable more than 1 million fake accounts at the point of creation so they can’t be used to spread spam, false news or inauthentic ads.
- Misinformation: We’re working to reduce the spread of false news by disrupting economic incentives and cutting bad actors off at the source, reducing the distribution of Pages and domains that repeatedly share misinformation. We pass potentially false posts to independent fact-checkers such as the Associated Press to review, and we demote posts they rate as false, which means those posts lose 80% of their future traffic. We’ve also recently expanded fact-checking to photo and video content in addition to articles, and we’re increasing the impact of fact-checking with new content-matching techniques that help us identify duplicates of debunked stories. (A minimal sketch of this demotion mechanism appears after this list.)
- Removing bad actors: We’re disrupting bad actors and have removed thousands of Pages, Groups and accounts involved in coordinated inauthentic behavior, including recent takedowns of a network of Pages, Groups and accounts from Iran targeting people in the US and UK. We’ve also recently removed networks of accounts and Pages that violated our spam policies by posting the same clickbait in dozens of Facebook Groups, often hundreds of times in a short period, or by using fake accounts to generate fake likes and shares to artificially inflate engagement.
- Ads transparency: We’re setting a new standard for ads transparency by requiring anyone who wants to run political or issue ads on Facebook in the United States to verify their identity and location, in the same way as TV or newspaper advertisers must. We’ve also established a seven-year public archive, which anyone can search to see how much was spent on each individual ad and the audience it reached.
- Comprehensive war room effort: We’ve set up an elections war room staffed with team members from across the company who work on these issues, including threat intelligence, data science, software engineering, research, operations, legal, policy, communications and others. It also includes representatives from WhatsApp and Instagram. This initiative builds on two years of work and investment since 2016. As part of the war room effort, we’ve done detailed scenario-planning in the lead-up to these elections, with the expectation that the war room team will be able to address the vast majority of issues that could arise without escalation or delay. War room teams continually monitor both external and internal information sources to track activity on Facebook and watch for any anomalies.
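As noted in the misinformation item above, here is a minimal, hypothetical sketch of how a fact-check-driven demotion with content matching could work. The data structures are illustrative; only the 80% reduction figure comes from the description above.

```typescript
// Minimal hypothetical sketch of the demotion described in the
// "Misinformation" item above: once independent fact-checkers rate a post
// false, its predicted distribution is cut sharply (an 80% reduction in
// future traffic), and content matching extends the demotion to duplicates.

interface Post {
  id: string;
  contentHash: string;      // simplified stand-in for content matching
  baseDistribution: number; // expected future views from ranking
}

const DEMOTION_FACTOR = 0.2; // keep 20% of traffic, i.e. an 80% reduction

const ratedFalseHashes = new Set<string>();

// Called when a fact-checking partner rates a post false.
function recordFalseRating(post: Post): void {
  ratedFalseHashes.add(post.contentHash);
}

// Ranking applies the demotion to the rated post and to any duplicate
// that matches previously debunked content.
function effectiveDistribution(post: Post): number {
  return ratedFalseHashes.has(post.contentHash)
    ? post.baseDistribution * DEMOTION_FACTOR
    : post.baseDistribution;
}

// Example: once a post is rated false, it keeps only 20% of projected views.
const post: Post = { id: "p1", contentHash: "abc", baseDistribution: 1000 };
recordFalseRating(post);
console.log(effectiveDistribution(post)); // 200
```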
We’ve made a lot of progress, as our work during the French, German, Mexican and Italian elections has shown. The investments we continue to make in people and technology will help us improve even further. But companies such as Facebook face sophisticated, well-funded adversaries who are getting smarter over time, too. It’s an arms race, and it will take the combined forces of the US private and public sectors to protect America’s democracy from outside interference.
If you have further questions about issues raised in the film that we did not address above, please contact us.