
Our Work To Fight Online Predators

Takeaways

  • Child exploitation is a horrific crime, and predators are determined criminals who test app, website and platform defenses.
  • In addition to developing technology to tackle this abuse, we hire specialists dedicated to online child safety and we share information with our industry peers and law enforcement.
  • Predators don’t limit their attempts to harm children to online spaces, so it’s vital that we work together to stop predators and prevent child exploitation.

Update on January 30, 2024 at 8:10AM PT:

We’re providing an update on the efficacy of our work enforcing our child safety policies.

  • We automatically disable accounts if they exhibit a certain number of the 60+ signals we monitor for potentially suspicious behavior. We identified and removed more than 90,000 accounts from August 1, 2023 to December 31, 2023 as a result of this method. 
  • From July 1, 2023 to December 31, 2023, we reviewed and removed over 21,000 Facebook Groups that violated our child safety policies.
  • Between 2020 and 2023, our teams disrupted 37 abusive networks and removed nearly 200,000 accounts associated with those networks.
  • From August 1, 2023 to December 31, 2023, we disabled more than 2.6 million accounts for violating our child sexual exploitation policies. 
  • Between August 29 and December 31, 2023, we took action on over 2.2 million accounts on Facebook and over 1.4 million accounts on Instagram because they were linked to accounts that violated our child safety policies.

Originally published on December 1, 2023:

Preventing child exploitation is one of the most important challenges facing our industry today. Online predators are determined criminals who use multiple apps and websites to target young people. They also test each platform’s defenses, and they adapt quickly. That’s why, now as much as ever, we’re working hard to stay ahead. In addition to developing technology that roots out predators, we hire specialists dedicated to online child safety and we share information with our industry peers and law enforcement.

We take recent allegations about the effectiveness of our work very seriously, and we created a task force to review existing policies; examine technology and enforcement systems we have in place; and make changes that strengthen our protections for young people, ban predators, and remove the networks they use to connect with one another. The task force took immediate steps to strengthen our protections, and our child safety teams continue to work on additional measures. Today, we’re sharing an overview of the task force’s efforts to date.

An Overview of Meta’s Child Safety Task Force

Meta’s Child Safety Task Force focused on three areas: Recommendations and Discovery, Restricting Potential Predators and Removing Their Networks, and Strengthening Our Enforcement.

Recommendations and Discovery

We make recommendations in places like Reels and Instagram Explore to help people discover new things on our apps, and people use features like Search and Hashtags to find things they might be interested in. Because we’re making suggestions to people in these places, we have protections in place to help ensure we don’t suggest something that may be upsetting or that may break our rules. We have sophisticated systems that proactively find and remove content, Groups and Pages that break our rules, and that refrain from recommending things that may be inappropriate to suggest to people. Our Child Safety Task Force improved these systems by combining them and expanding their capabilities. This work is ongoing, and we expect it to come fully into effect in the coming weeks.

Here’s what we did:

  • We expanded the existing list of child safety-related terms, phrases and emojis for our systems to find. We have many sources for these terms, including non-profits and experts in online safety, our specialist child safety teams who investigate predatory networks to understand the language they use, and our own technology, which finds misspellings or spelling variations of these terms.
  • We also began using novel techniques to find new terms. For example, we’re using machine learning to find relationships between terms we already know are harmful or break our rules and other terms used alongside them. These could be terms searched for in the same session as a violating term, or other hashtags used in a caption that contains a violating hashtag (a minimal sketch of this idea follows after this list). 
  • We combined our systems so that as new terms are added to our central list, they will be actioned across Facebook and Instagram simultaneously. For example, we may send Instagram accounts, Facebook Groups, Pages and Profiles to content reviewers, restrict these terms from producing results in Facebook and Instagram Search, and block hashtags that include these terms on Facebook and Instagram.
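
Purely as an illustration of the co-occurrence idea described above, here is a minimal sketch in Python. The inputs, term names and the co-occurrence threshold are all hypothetical assumptions for illustration; the post does not describe the actual model or pipeline, and any candidate terms found this way would still go to specialist reviewers.

```python
# Minimal sketch of co-occurrence-based term expansion (illustrative only).
# "known_violating" and "sessions" are hypothetical inputs: a seed set of
# flagged terms and collections of terms that appeared together (for example,
# hashtags in one caption, or searches in one session).
from collections import Counter
from typing import Iterable, List, Set

def candidate_terms(
    known_violating: Set[str],
    sessions: Iterable[List[str]],
    min_cooccurrence: int = 5,
) -> List[str]:
    """Return terms that frequently co-occur with known violating terms."""
    counts = Counter()
    for terms in sessions:
        terms = [t.lower() for t in terms]
        if any(t in known_violating for t in terms):
            # Count every other term seen alongside a known violating term.
            counts.update(t for t in terms if t not in known_violating)
    return [t for t, n in counts.most_common() if n >= min_cooccurrence]

# Hypothetical usage: candidates would be sent to human specialists for
# review before being added to any central list.
seeds = {"#violating_term_example"}
sessions = [
    ["#violating_term_example", "#coded_term_a"],
    ["#coded_term_a", "#harmless_term"],
    ["#violating_term_example", "#coded_term_a"],
]
print(candidate_terms(seeds, sessions, min_cooccurrence=2))  # ['#coded_term_a']
```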

Restricting Potential Predators and Removing Their Networks

We’ve developed technology that identifies potentially suspicious adults. We review more than 60 different signals to find these adults, such as whether a teen blocks or reports an adult, or whether someone repeatedly searches for terms that may suggest suspicious behavior. We already use this technology to limit potentially suspicious adults from finding, following or interacting with teens, and we’ve expanded it to prevent these adults from finding, following or interacting with one another, as illustrated in the sketch below.
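
To make the signal-based approach concrete, here is a minimal sketch of a threshold check over behavioral signals. The signal names and the threshold value are assumptions for illustration; the post only says that more than 60 signals are reviewed and that accounts exhibiting enough of them are restricted or disabled.

```python
# Minimal sketch of threshold-based restriction using behavioral signals
# (illustrative only; signal names and the threshold are hypothetical).
from dataclasses import dataclass, field
from typing import Set

@dataclass
class AccountSignals:
    account_id: str
    signals: Set[str] = field(default_factory=set)  # e.g. "blocked_by_teen"

def should_restrict(acct: AccountSignals, threshold: int = 5) -> bool:
    """Restrict the account once it accumulates enough distinct signals."""
    return len(acct.signals) >= threshold

acct = AccountSignals(
    "user123",
    {"blocked_by_teen", "reported_by_teen", "repeated_suspicious_searches",
     "follows_many_teen_accounts", "new_account_rapid_follows"},
)
if should_restrict(acct):
    # In practice this would limit the account from finding, following or
    # interacting with teens, and with other flagged accounts.
    print(f"restrict {acct.account_id}")
```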

Here’s what we did:

  • On Instagram, potentially suspicious adults will be prevented from following one another, will not be recommended to each other in places like Explore and Reels, and will not be shown comments from one another on public posts, among other things.
  • On Facebook, we’re using this technology to better find and address certain Groups, Pages and Profiles. For example, Facebook Groups with a certain percentage of members who exhibit potentially suspicious behavior will not be suggested to others in places like Groups You Should Join. Additionally, Groups whose membership overlaps with other Groups that were removed for violating our child safety policies will not be shown in Search (see the sketch after this list). As a result of this work, since July 1, 2023, we removed more than 190,000 Groups from Search.
  • We also hire specialists with backgrounds in law enforcement and online child safety to find predatory networks and remove them. These specialists monitor evolving behaviors exhibited by these networks – such as new coded language – to not only remove them, but to inform the technology we use to proactively find them. Between 2020 and 2023, our teams disrupted 32 abusive networks and removed more than 160,000 accounts associated with those networks.
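
Here is a minimal sketch of the two Group-level checks described in this list: the share of a Group's members that carry suspicious signals, and membership overlap with Groups already removed for violations. The function names, thresholds and data shapes are hypothetical; the actual thresholds and pipelines are not public.

```python
# Minimal sketch of the Group-level checks described above (illustrative only).
from typing import Dict, Set

def suspicious_member_fraction(members: Set[str], flagged_accounts: Set[str]) -> float:
    """Share of a Group's members that carry potentially suspicious signals."""
    if not members:
        return 0.0
    return len(members & flagged_accounts) / len(members)

def overlaps_removed_group(
    members: Set[str],
    removed_groups: Dict[str, Set[str]],
    min_overlap: float = 0.2,  # hypothetical threshold
) -> bool:
    """True if enough of this Group's members also belonged to a removed Group."""
    if not members:
        return False
    return any(
        len(members & removed) / len(members) >= min_overlap
        for removed in removed_groups.values()
    )

# A Group failing either check could be hidden from recommendations
# (e.g. Groups You Should Join) or excluded from Search results.
```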

Strengthening Our Enforcement

The task force also made a series of updates to strengthen our reporting and enforcement systems, and found new ways to root out and ban potentially predatory accounts. In August 2023 alone, we disabled more than 500,000 accounts for violating our child sexual exploitation policies.

  • We announced our participation in Lantern, a new program from the Tech Coalition that enables technology companies to share a variety of signals about accounts and behaviors that violate their child safety policies. Lantern participants can use this information to conduct investigations on their own platforms and take action.
  • We audited our systems and fixed technical issues we found, including an issue that unexpectedly closed user reports.
  • We improved the systems we use to prioritize reports for content reviewers. For example, we’re using technology designed to find child exploitative imagery to prioritize reports that may contain it (a simplified sketch of this kind of prioritization appears after this list).
  • We introduced additional ways to proactively find and remove accounts that may violate our child safety policies. For example, we’re sending Instagram accounts that exhibit potentially suspicious behavior to our content reviewers and we’ll automatically disable these accounts if they exhibit enough of the 60+ signals we monitor. More than 20,000 accounts were identified and automatically removed in August 2023 as a result of this method.
  • We provided new guidance and tools to help content reviewers understand the latest behaviors and terms used by predators across many different languages. For example, content reviewers will now see information about coded terms used in the posts they’re reviewing, explaining the subtext of those terms and how predators use them. This will help content reviewers better recognize this behavior and take action.
  • We made improvements to better find and remove Instagram accounts and Facebook Profiles that may be linked to those that violate our child safety policies, and to prevent them from creating new accounts from their devices. Since the beginning of August, we have automatically blocked more than 250,000 devices on Instagram for violations of our child safety policies, and improvements in device blocking have led to more than 10,000 additional enforcements on Instagram and Facebook per day.
  • We improved our proactive detection of potentially suspicious Facebook Groups and updated our protocols and review tooling so our reviewers can remove more violating Groups. Since July 1, 2023, we reviewed and removed 16,000 Groups for violations of our child safety policies.
  • After launching a new automated enforcement effort in September, we saw five times as many automated deletions of Instagram Lives that contained adult nudity and sexual activity.
  • We actioned over 4 million Reels per month, across Facebook and Instagram globally, for violating our policies.
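
As a rough illustration of score-based report prioritization mentioned in this list, here is a minimal sketch of a review queue ordered by a classifier score. The score, field names and queueing structure are assumptions for illustration; the post does not describe the actual ranking model or review tooling.

```python
# Minimal sketch of score-based report prioritization (illustrative only).
# "classifier_score" stands in for the output of a content classifier.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedReport:
    priority: float                       # lower value = reviewed sooner
    report_id: str = field(compare=False)

def enqueue(queue: list, report_id: str, classifier_score: float) -> None:
    """Push a report; higher classifier scores get reviewed first."""
    heapq.heappush(queue, QueuedReport(priority=-classifier_score, report_id=report_id))

queue: list = []
enqueue(queue, "report_a", classifier_score=0.10)
enqueue(queue, "report_b", classifier_score=0.97)  # likely violating imagery
print(heapq.heappop(queue).report_id)  # -> "report_b"
```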

