
Protecting Teens and Their Privacy on Facebook and Instagram

Takeaways

  • We’re introducing updates on Facebook and Instagram that further protect teens from online harm.
  • Starting now, everyone who is under the age of 16 (or under 18 in certain countries) will be defaulted into more private settings when they join Facebook.
  • We’re developing new tools and education to stop the spread of self-generated intimate images online.

Today, we’re sharing an update on how we protect young people from harm and seek to create safe, age-appropriate experiences for teens on Facebook and Instagram. 

Updates to Limiting Unwanted Interactions

Last year, we shared some of the measures we take to protect teens from interacting with potentially suspicious adults. For example, we restrict people over 19 years old from sending private messages to teens who don’t follow them. (Updated on June 27, 2023 at 7:45AM PT to clarify that we permit people aged 18 and 19 to send private messages to their peers within a two-year age gap to allow for connections between classmates and friends — for example, a 19 year old may message teens aged 17 and older.)

In addition to our existing measures, we’re now testing ways to protect teens from messaging suspicious adults they aren’t connected to, and we won’t show these accounts in teens’ People You May Know recommendations. A “suspicious” account is, for example, one belonging to an adult who may have recently been blocked or reported by a young person. As an extra layer of protection, we’re also testing removing the message button from teens’ Instagram accounts altogether when they’re viewed by suspicious adults.

Encouraging Teens to Use Our Safety Tools

We’ve developed a number of tools so teens can let us know if something makes them feel uncomfortable while using our apps, and we’re introducing new notifications that encourage them to use these tools.

For example, we’re prompting teens to report accounts to us after they block someone, and sending them safety notices with information on how to navigate inappropriate messages from adults. In just one month in 2021, more than 100 million people saw safety notices on Messenger. We’ve also made it easier for people to find our reporting tools, and as a result, reports sent to us by minors on Messenger and Instagram DMs increased by more than 70% in Q1 2022 compared with the previous quarter.

Product mock of default safety notifications on Instagram

Product mock of default safety notifications on Messenger

New Privacy Defaults for Teens on Facebook

Starting today, everyone who is under the age of 16 (or under 18 in certain countries) will be defaulted into more private settings when they join Facebook, and we’ll encourage teens already on the app to choose these more private settings for: 

  • Who can see their friends list
  • Who can see the people, Pages and lists they follow 
  • Who can see posts they’re tagged in on their profile
  • Reviewing posts they’re tagged in before the post appears on their profile
  • Who is allowed to comment on their public posts

Product mock of privacy default notifications and settings on Facebook


This move comes on the heels of rolling out similar privacy defaults for teens on Instagram and aligns with our safety-by-design and “Best Interests of the Child” frameworks.

New Tools to Stop the Spread of Teens’ Intimate Images

We’re also sharing an update on the work we’re doing to stop the spread of teens’ intimate images online, particularly when these images are used to exploit them — commonly known as “sextortion.” The non-consensual sharing of intimate images can be extremely traumatic and we want to do all we can to discourage teens from sharing these images on our apps in the first place.

We’re working with the National Center for Missing and Exploited Children (NCMEC) to build a global platform for teens who are worried intimate images they created might be shared on public online platforms without their consent. This platform will be similar to work we have done to prevent the non-consensual sharing of intimate images for adults. It will allow us to help prevent a teen’s intimate images from being posted online and can be used by other companies across the tech industry. We’ve been working closely with NCMEC, experts, academics, parents and victim advocates globally to help develop the platform and ensure it responds to the needs of teens so they can regain control of their content in these horrific situations. We’ll have more to share on this new resource in the coming weeks.

We’re also working with Thorn and their NoFiltr brand to create educational materials that reduce the shame and stigma surrounding intimate images, and empower teens to seek help and take back control if they’ve shared them or are experiencing sextortion.

Product mock of information on how to protect children


We found that more than 75% of the people we reported to NCMEC for sharing child exploitative content shared it out of outrage, poor humor or disgust, with no apparent intention of harm. Sharing this content violates our policies, regardless of intent. We’re planning to launch a new PSA campaign that encourages people to stop and think before resharing these images online and to report them to us instead.

Anyone seeking support and information related to sextortion can visit our education and awareness resources, including the Stop Sextortion hub on the Facebook Safety Center, developed with Thorn. We also have our guide for parents on how to talk to their teens about intimate images on the Education hub of our Family Center.
