Meta

Our Approach to Addressing Bullying and Harassment

As a social technology company, we see helping people feel safe to connect and engage with others as central to what we do. But one form of abuse that has unfortunately always existed wherever people interact is bullying and harassment. While this challenge isn’t unique to social media, we use new technologies to reduce it on our platforms, give people tools to protect themselves and measure how we are doing. Here is how we approach it.

How We Define Bullying and Harassment

When it comes to bullying and harassment, context and intent matter. Bullying and harassment are often deeply personal and show up in different ways for different people, from threatening to make personally identifiable information public to making repeated and unwanted contact. Our Bullying and Harassment policy prohibits this type of content and behavior. It also distinguishes between public figures and private individuals, since we want to allow free and open discussion about public figures. We permit more critical commentary of public figures than we do of private individuals, but any content must still remain within the limits outlined by our Community Standards.

What Our Bullying and Harassment Metric Means

We have developed AI systems that can identify many types of bullying and harassment across our platforms. However, bullying and harassment is a unique issue area because determining harm often requires context, including reports from those who experience the behavior. It can be difficult for our systems to distinguish between a bullying comment and a light-hearted joke without knowing the people involved or the nuance of the situation. For example, a female friend posting “hi slut” on another female friend’s profile may not be perceived as bullying between those close friends. On the other hand, the same comment posted by someone who is not a close friend, or by several people at once, can be harmful. Derogatory terms related to sexual activity, like “slut,” violate our Community Standards for bullying and harassment because we want to ensure a baseline of safety for all members of our community, regardless of intent.

As a result, detecting this kind of bullying can be more challenging than detecting other types of violations. While we are always working to improve our technology, our metrics, particularly proactive rate and prevalence, reflect the reality of having to rely on reports from our community.

In the third quarter of this year, the prevalence of bullying and harassment was 0.14-0.15% on Facebook and 0.05-0.06% on Instagram. This means bullying and harassment content was seen between 14 and 15 times for every 10,000 views of content on Facebook and between 5 and 6 times for every 10,000 views of content on Instagram. This metric captures only bullying and harassment where we do not need additional information, such as a report from the person experiencing it, to determine that it violates our policy. Additionally, we removed 9.2 million pieces of this content on Facebook, 59.4% of which we found proactively. On Instagram, we removed 7.8 million pieces of content, 83.2% of which we found proactively.
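To make the arithmetic behind these figures concrete, here is a minimal sketch in Python of how a prevalence estimate and a proactive rate could be computed from sampled view counts and removal logs. This is an illustration only, not Meta’s actual measurement pipeline; the function names and sample numbers are assumptions made for the example.

def prevalence(violating_views, total_views):
    # Share of sampled content views that were views of violating content.
    return violating_views / total_views

def proactive_rate(found_proactively, total_removed):
    # Share of removed pieces of content found by automated systems before anyone reported them.
    return found_proactively / total_removed

# 14 violating views in a sample of 10,000 views is a prevalence of 0.14%.
print(f"Prevalence: {prevalence(14, 10_000):.2%}")

# Of roughly 9.2 million removals, about 5.47 million found proactively is a 59.4% proactive rate.
print(f"Proactive rate: {proactive_rate(5_465_000, 9_200_000):.1%}")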

How We Plan to Reduce Bullying and Harassment

Prevalence metrics allow us to track, both internally and externally, how much violating content people are seeing on our apps. Prevalence, in turn, helps us determine the right approaches to driving that metric down, whether that’s through updating our policies, products or tools for our community. When we first released the prevalence of hate speech, for example, it was between 0.10% and 0.11%; it is now 0.03%. This year, we’ve updated our AI technology to train across three different but related violations: bullying and harassment, hate speech, and violence and incitement. We also deploy several approaches to reduce the prevalence of violating content, such as: removing accounts, Pages, Groups and events that violate our Community Standards or Guidelines; filtering problematic Groups, Pages and content from recommendations across our services; and reducing the distribution of content that likely violates, or is borderline to, our Community Standards.

To reduce the prevalence of bullying and harassment on our platforms, we use tactics specific to the unique nature of this violation type, as well as some that apply lessons learned across numerous policy issues. As we further study prevalence, we expect to make continued refinements to our bullying and harassment policies and tools to better identify and enforce against this content. For example, we recently updated our policies to increase enforcement against harmful content and behavior directed at both private individuals and public figures. These updates came after years of consultation with free speech advocates, human rights experts, women’s safety groups, cartoonists and satirists, female politicians and journalists, representatives of the LGBTQ+ community, content creators and other types of public figures.

We are working hard to reduce this type of content on our platforms, but we also want to equip our community with tools to protect themselves from potentially offensive content in ways that work best for them. 

One recent tool we’ve deployed on both Facebook and Instagram is warning screens that educate people and discourage them from posting or commenting in ways that could constitute bullying and harassment. On Instagram, people edited or deleted their comment about 50% of the time after seeing these warnings. We’ve also invested in our bullying and harassment tools on Instagram, working to reduce the number of times this type of content is seen, particularly by our younger users.

We have a number of resources where you can learn more about the bullying and harassment prevention work we do on our platforms, including our Bullying Prevention Hub in the Safety Center, a resource for teens, parents and educators seeking support and help for issues related to bullying and other conflicts.