Making Facebook a Safer, More Welcoming Place for Women

At Facebook, we believe that women should have equal access to all of the economic opportunity, education and social connection the internet provides.

“It’s a civil liberties and civil rights issue to be able to access loved ones any time, access information, access job searches,” says Cindy Southworth. “The world is out there, and you need access to technology to access that world.”

Southworth is the Executive Vice President of the US-based National Network to End Domestic Violence. As a member of the Facebook Safety Advisory Board, her organization works with us to make our platform a safer, more welcoming place for women. Southworth has spent almost 30 years working with domestic violence victims and is keenly aware of how social media can be both an abuser’s tool and a victim’s lifeline — which is why at Facebook, we work to reduce the abuse and harassment that can keep women offline while building tools and resources to empower them online. 

We take a comprehensive approach to making our platform a safer place for women, including writing clear policies, engaging with outside experts and developing cutting-edge technology to help prevent abuse from happening in the first place.

The Facebook Community Standards and the Instagram Community Guidelines outline the rules for what is and isn’t allowed on Facebook and Instagram. They’re developed by our policy teams and include rules against behaviors that disproportionately impact women, such as the sharing of non-consensual intimate imagery, which is illegal in many places around the world. They also include rules against harassment, like sending multiple unwanted messages to a person who’s made it clear that they don’t want to receive them. 

As we update and refine our policies to keep pace with changes happening online and offline around the world, we regularly engage with outside experts. In July, for example, our policy team expanded our hate speech policy on targeted cursing (profanity used to attack a private individual) to include female-gendered terms. In crafting that change, we spoke to experts around the world, including anthropological and cognitive linguists, women’s rights organizations and safety organizations.

These rules apply to everyone who uses Facebook and Instagram. And because cultural norms around things like sexuality, friendships and women’s roles in society can differ so widely, these conversations with global experts help us understand how abuse and harassment manifest differently in different places.

“The general themes when it comes to women’s safety tend to be the same around the world,” says Monika Bickert, Facebook’s Vice President of Global Policy Management. “But we find that when we look at specific countries or regions, the actual types of behavior are very localized.”

For example, harassers will commonly try to humiliate a woman by sharing images of her that would be considered shameful in her community. In the US, a harasser might select a nude photo or a video of the woman engaging in a sexual activity. We’d remove the image according to our standards on the sexual exploitation of adults. In some countries, a woman could be shamed or put at risk if someone shared a photo of her ankle, or a photo of her walking with a man who isn’t a family member. If someone shares this in a way that makes clear they’re trying to humiliate her, that would fall under our standards on bullying and harassment.  

To account for this wide spectrum of harassment types, our rules need to be thoughtful and similarly comprehensive.

In addition to policies, we develop technology to fight behavior that threatens to keep women offline. We offer a number of tools to help people control their experience on Facebook, such as ignoring unwanted messages and blocking other people without them being notified. Victims and their allies can also report violating behavior, and we will remove anything that doesn’t follow our policies.

Global research helps us better serve people around the world in this area as well. In India, both people who use our services and safety advocates told us that some women in the country choose not to share profile pictures that include their faces because they’re concerned that someone might try to take those photos and impersonate them in ways that would shame or dishonor them or their families. So we developed an optional profile picture guard that gives women more control over who can download or share their pictures. This is available in India, Pakistan, Egypt and other countries where women have similar concerns. 

Blocking, reporting and other user-facing tools are only part of the solution: they work only if people know to seek them out, understand how to use them and feel comfortable enough to use them. A victim who’s already feeling anxious or threatened may not want to provoke a harasser for fear of retribution. Sometimes the behavior isn’t even visible to the woman it affects: an ex might share non-consensual intimate images in a private group, for example, or a bully might set up a fake account in a woman’s name and operate it without her knowledge, adding members of her community as friends. That’s why Facebook is investing not only in digital literacy programs and improved safety resources, but also in technology that can find violating content proactively and, in some cases, prevent it from being shared in the first place.

Facebook’s work to prevent the non-consensual sharing of intimate images, or NCII, serves as a clear example of this comprehensive approach to deploying technology. In 2017, we launched a pilot program to help potential victims prevent their intimate images from appearing on Facebook and Instagram without their consent. As part of this program, people who fear their images are in danger of being shared can reach out to a victim advocate organization, where someone will help them securely submit those photos to Facebook. On our end, we create digital fingerprints of the images before destroying them. The images are never seen by anyone at Facebook before they are deleted. Then, using photo-matching technology, we block the images from being posted to Facebook and Instagram. 
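Facebook hasn’t published the details of its fingerprinting system, but the general technique the program describes, perceptual hashing with tolerant matching, can be sketched in a few lines. The sketch below is a minimal illustration using the open-source imagehash library, assuming a perceptual hash and a Hamming-distance threshold; none of the names or values here reflect Facebook’s actual implementation.

```python
# Illustrative sketch only: perceptual hashing plus tolerant matching.
# Facebook's real system is not public; names and thresholds here are
# hypothetical.
from PIL import Image
import imagehash

# Fingerprints of images victims have submitted. Only these hashes are
# retained; the source images themselves are deleted after hashing.
blocked_fingerprints: list[imagehash.ImageHash] = []

def register_image(path: str) -> None:
    """Fingerprint a submitted image, then discard the image itself."""
    blocked_fingerprints.append(imagehash.phash(Image.open(path)))
    # In a real system, the source file would be securely deleted here.

def is_blocked(path: str, max_distance: int = 8) -> bool:
    """Check an upload against the known fingerprints.

    Subtracting two ImageHash objects gives their Hamming distance.
    Visually similar images differ by only a few bits, so a small
    distance still counts as a match after resizing or re-encoding.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance
               for known in blocked_fingerprints)
```

The useful property of a perceptual hash is that it lets a platform recognize a known image, even one that has been resized or re-compressed, without ever storing the image itself.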

Of course, the pilot program only works if the potential victim is aware that their images are at risk of being shared. In cases where an intimate image is shared on Facebook or Instagram and then reported to us after the fact, we can minimize the damage by using the same combination of digital fingerprinting and photo-matching technology to keep the image from being shared again.

But we wanted to do even more. That’s why we developed machine-learning and artificial-intelligence techniques to proactively detect nude or near-nude images and videos shared without permission — without anyone having to report them. 
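The models behind this proactive detection aren’t public either, but the overall flow, scoring each new upload with a trained image classifier and routing high-scoring posts to review without waiting for a report, might look roughly like the following PyTorch sketch. The model, preprocessing steps and threshold are all assumptions made for illustration.

```python
# Hypothetical sketch of proactive detection: score each new upload with
# a trained classifier and flag likely violations before anyone reports
# them. The model, preprocessing and threshold are illustrative only.
import torch
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def flag_for_review(model: torch.nn.Module, path: str,
                    threshold: float = 0.9) -> bool:
    """Return True if the image should be queued for human review.

    `model` is assumed to output a single logit for "likely violating";
    sigmoid turns that into a probability, and anything above the
    threshold is flagged rather than removed automatically.
    """
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)   # add a batch dimension
    with torch.no_grad():                    # inference only
        score = torch.sigmoid(model(batch)).item()
    return score >= threshold
```

Flagging for review rather than removing automatically is one plausible design choice: a classifier will produce false positives, and routing borderline scores to people keeps those mistakes from silencing legitimate posts.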

Comprehensive approaches to complicated problems, like the one we’ve developed for NCII, require a lot of input and a lot of expertise, and we know we can’t do this alone. That’s why we host roundtable discussions around the world with women’s safety experts, women who have experienced some of these issues and women’s advocates to ensure we’re including their feedback, perspectives and expertise in our work.

One example of this kind of collaborative work is the annual Global Safety and Well-Being Summit, which brought together over 100 organizations from 40 countries this year to discuss women’s safety as well as other topics such as suicide prevention and raising children in the digital era.

Nighat Dad, Executive Director of the Digital Rights Foundation in Pakistan, attended the 2019 summit in New York and has worked with Facebook on issues related to harassment, including the sharing of non-consensual intimate images.

“Online gender-based violence — it’s not a technology problem, it’s a societal problem,” said Dad. “The people who are working on the ground, they need to work together on this and also keep telling social media platforms how they can improve their products, how they can improve their reporting mechanisms. It’s not just one person who can address the issue, or one organization or one institution, we all need to work together.”
