Our Approach to Addressing Bullying and Harassment

By Antigone Davis, Global Head of Safety, and Amit Bhattacharyya, Product Management Director
  • We’re adding the prevalence of bullying and harassment to our Community Standards Enforcement Report for the first time. 
  • Bullying and harassment is a unique challenge and one of the most complex issues to address because context is critical. We work hard to enforce against this content while also equipping our community with tools to protect themselves in ways that work best for them.
  • Prevalence over time shows us how we are doing at reducing this problem across our platforms and gives us a benchmark to work against. 

As a social technology company, helping people feel safe to connect and engage with others is central to what we do. But bullying and harassment is a form of abuse that has unfortunately always existed when people interact with one another. While this challenge isn’t unique to social media, we use technology to reduce it on our platforms, give people tools to protect themselves and measure how we are doing. Here is how we approach it.

How We Define Bullying and Harassment

When it comes to bullying and harassment, context and intent matter. Bullying and harassment is often deeply personal: it shows up in different ways for different people, from threats to make personally identifiable information public to repeated and unwanted contact. Our Bullying and Harassment policy prohibits this type of content and behavior. Our policy also distinguishes between public figures and private individuals, since we want to allow free and open discussion about public figures. We permit more critical commentary of public figures than we do of private individuals, but any content must still remain within the limits outlined by our Community Standards.

What Our Bullying and Harassment Metric Means

We have developed AI systems that can identify many types of bullying and harassment across our platforms. However, bullying and harassment is a unique issue area because determining harm often requires context, including reports from those who may experience this behavior. It can sometimes be difficult for our systems to distinguish between a bullying comment and a light-hearted joke without knowing the people involved or the nuance of the situation. For example, a female friend posting “hi slut” on another female friend’s profile may not be perceived as bullying between those close friends. On the other hand, someone posting that on another person’s page when the two are not close friends, or several people posting “hi slut” on that person’s page, can be harmful. Derogatory terms related to sexual activity, like “slut,” violate our Community Standards for bullying and harassment because we want to ensure a baseline of safety for all members of our community, regardless of intent.

As a result, detecting such bullying can be more challenging than detecting other types of violations. While we are always working to improve our technology, our metrics, particularly those for proactive rate and prevalence, reflect the reality of having to rely on reports from our community.

In the third quarter of this year, the prevalence of bullying and harassment was 0.14-0.15% on Facebook and 0.05-0.06% on Instagram. This means bullying and harassment content was seen between 14 and 15 times for every 10,000 views of content on Facebook, and between 5 and 6 times for every 10,000 views of content on Instagram. This metric captures only bullying and harassment where we do not need additional information, such as a report from the person experiencing it, to determine whether it violates our policy. Additionally, we removed 9.2 million pieces of this content on Facebook, of which we found 59.4% proactively. On Instagram, we removed 7.8 million pieces of content, of which we found 83.2% proactively.
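The arithmetic behind these two figures is straightforward: prevalence is the share of content views that included violating content, and proactive rate is the share of removed content that our systems found before anyone reported it. The short Python sketch below simply restates that arithmetic with the rounded numbers from this report plugged in; it is an illustration, not our measurement pipeline.

```python
# Illustrative sketch of the two metrics discussed above.
# The numbers plugged in are the rounded figures from this report, not raw data.

def prevalence(violating_views: int, total_views: int) -> float:
    """Share of content views that contained violating content."""
    return violating_views / total_views

def proactive_rate(found_proactively: int, total_removed: int) -> float:
    """Share of removed content found by automated systems before a user report."""
    return found_proactively / total_removed

# Facebook, Q3: roughly 14-15 violating views per 10,000 content views.
print(f"{prevalence(14, 10_000):.2%} - {prevalence(15, 10_000):.2%}")  # 0.14% - 0.15%

# Facebook, Q3: 59.4% of the 9.2 million removals were found proactively.
print(f"{proactive_rate(int(9_200_000 * 0.594), 9_200_000):.1%}")  # 59.4%
```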

How We Plan to Reduce Bullying and Harassment

Prevalence metrics allow us to track, both internally and externally, how much violating content people are seeing on our apps. Prevalence, in turn, helps us determine the right approaches to driving that metric down, whether through updating our policies, products or tools for our community. When we first released the prevalence of hate speech, for example, it was between 0.10% and 0.11%; today it is 0.03%. This year, we’ve updated our AI technology to train across three different but related violations: bullying and harassment, hate speech, and violence and incitement. We also deploy several approaches to reduce the prevalence of violating content, such as: removing accounts, Pages, Groups and events for violating our Community Standards or Guidelines; filtering problematic Groups, Pages and content from recommendations across our services; or reducing the distribution of content that is likely to violate or is borderline to our Community Standards.
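Conceptually, training one system across related violation types can be pictured as a shared text encoder with a separate scoring head per policy area. The PyTorch sketch below is a generic, hypothetical illustration of that pattern, not a description of our actual models or architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: one shared encoder, one scoring head per related
# violation type. This is a generic multi-task pattern, not Meta's system.
class MultiViolationClassifier(nn.Module):
    def __init__(self, feature_dim: int = 768, embed_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feature_dim, embed_dim), nn.ReLU())
        self.heads = nn.ModuleDict({
            "bullying_harassment": nn.Linear(embed_dim, 1),
            "hate_speech": nn.Linear(embed_dim, 1),
            "violence_incitement": nn.Linear(embed_dim, 1),
        })

    def forward(self, text_features: torch.Tensor) -> dict:
        # Shared representation feeds every per-violation score.
        shared = self.encoder(text_features)
        return {name: torch.sigmoid(head(shared)) for name, head in self.heads.items()}

model = MultiViolationClassifier()
scores = model(torch.randn(4, 768))  # 4 example posts, 768-dim text features
```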

To reduce the prevalence of bullying and harassment on our platforms, we use tactics that are specific to the unique nature of this violation type, as well as some that apply lessons learned across numerous policy issues. In further studying prevalence, we expect to make continued refinements to our bullying and harassment policies and tools to better identify and enforce on this content. For example, we recently updated our policies to increase enforcement against harmful content and behavior for both private individuals and public figures. These updates came after years of consultation with free speech advocates, human rights experts, women’s safety groups, cartoonists and satirists, female politicians and journalists, representatives of the LGBTQ+ community, content creators and other types of public figures.

  • To protect people from mass harassment and intimidation without penalizing people engaging in legitimate forms of communication, we’ve updated our policy to remove certain content and accounts linked to this behavior. To enforce these changes, we will require additional information or context. For example, a state-sponsored group in an authoritarian regime using closed, private groups to coordinate mass posting on dissident profiles would now be enforced against under this brigading policy.
  • We’ve expanded our protections for public figures to include the removal of degrading, sexualizing attacks. For example, pornographic cartoons based on women elected officials would be removed for violating this new policy. 
  • Recognizing that not everyone in the public eye chooses to become a public figure and yet can still be on the receiving end of significant bullying and harassment, we’ve increased protections for involuntary public figures like human rights defenders and journalists. For example, content that mocks the menstrual cycle of a woman journalist would violate our policies and be enforced against.

We are working hard to reduce this type of content on our platforms, but we also want to equip our community with tools to protect themselves from potentially offensive content in ways that work best for them. 

  • Tools like unfriending, unfollowing and blocking accounts can help immediately disconnect people from those who may be bullying them. 
  • Features like ignore and comment controls can limit the amount of unwanted content people see, while Restrict on Instagram lets people prevent bullies from seeing and interacting with their content, without the bully being aware.
  • People can also control who tags or mentions them in content. 
  • Limits on Instagram also lets people automatically hide comments and Direct Message requests from people who don’t follow them or who only recently followed them.
  • People can also easily report content across posts, comments and messages so that our teams can review and take action as appropriate.

One recent tool we’ve deployed on both Facebook and Instagram is a warning screen designed to educate people and discourage them from posting or commenting in ways that could amount to bullying and harassment. On Instagram, people edited or deleted their comment about 50% of the time after seeing these warnings. We’ve also invested in our bullying and harassment tools on Instagram, working to reduce the number of times this type of content is seen, particularly by our younger users.

We have a number of resources where you can learn more about the bullying and harassment prevention work we do on our platform, including our Bullying Prevention Hub on the Safety Center, a resource for teens, parents and educators seeking support and help for issues related to bullying and other conflicts.


