Our Approach to Maintaining a Safe Online Environment in Countries at Risk

By Miranda Sissons, Director of Human Rights Policy, and Nicole Isaac, Strategic Response Director, International
  • We take a comprehensive approach in countries that are experiencing, or are at risk of, conflict or violence — acting quickly to remove content that violates our policies and taking protective measures. 
  • Since 2018, we’ve built teams with expertise on issues such as human rights, hate speech, misinformation and polarization. Many have lived or worked in high-risk countries and speak relevant languages. 
  • We have an industry-leading process, conducted every six months, for reviewing and prioritizing the countries at the highest risk of offline harm and violence. When we respond to a crisis, we deploy country-specific support as needed.
  • The complexity of these issues means there will never be a one-size-fits-all solution. Our work will never be finished and requires ongoing vigilance and investments. 

Over the past two decades, Facebook has empowered people around the world with a wealth of social and economic benefits. It has made social connection and free expression possible on a massive scale. This can be especially important for people who are in places that are experiencing conflict and violence.    

Facebook supports people’s right to express themselves freely, regardless of where they are in the world. Freedom of expression is a foundational human right and enables many other rights. But we know that technologies for free expression, information and opinion can also be abused to spread hate and misinformation — a challenge made even worse in places where there is a heightened risk of conflict and violence. This requires both short-term solutions we can implement when crises arise and a long-term strategy to keep people safe. Here is our approach.

Since 2018, we’ve had dedicated teams spanning product, engineering, policy, research and operations to better understand and address the way social media is used in countries experiencing conflict. Many of these individuals have experience working on conflict, human rights and humanitarian issues, as well as addressing areas like misinformation, hate speech and polarization. Many have lived or worked in the countries we’ve identified as highest risk and speak relevant languages. They are part of the over 40,000 people we have working on safety and security, including global content review teams in over 20 sites around the world reviewing content in over 70 languages. 

In the last two years, we’ve hired more people with language, country and topic expertise. For example, we’ve increased the number of team members with work experience in Myanmar and Ethiopia to include former humanitarian aid workers, crisis responders and policy specialists. And we’ve hired more people who can review content in Amharic, Oromo, Tigrinya, Somali and Burmese. Adding more language expertise has been a key focus area for us. This year alone, we’ve hired content moderators in 12 new languages, including Haitian Creole, Kirundi, Tswana and Kinyarwanda.

Evaluating Harm in Countries at Risk

Our teams have developed an industry-leading process for reviewing, every six months, which countries are at the highest risk of offline harm and violence. We make these determinations in line with the UN Guiding Principles on Business and Human Rights and after reviewing the following factors:

  • Long-term conditions and historical context: We rely on regional experts, platform data and data from more than 60 sources like Varieties of Democracy (V-Dem), Uppsala Conflict Data Program, the United States Holocaust Memorial Museum’s Early Warning Project, the Armed Conflict Location & Event Data Project, and the World Bank to assess the long-term conditions on the ground. These can include civic participation and human rights, societal tensions and violence, and the quality of relevant information ecosystems.
  • How much the use of our products could potentially impact a country: We prioritize countries based on a number of factors, including: where our apps have become most central to society, such as in countries where a larger share of people use our products; where there’s been an increase in offline harms; and where social media adoption has grown.
  • Current events on the ground: We also give special consideration to discrete events that might magnify current societal problems, such as local risk or occurrence of atrocity crimes, elections, episodes of violence and COVID-19 vaccination and transmission rates.
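To make the prioritization process above concrete, here is a minimal, hypothetical sketch of how the three factor groups could be combined into a single country risk score for a semiannual review. The weights, factor names and scores are illustrative assumptions, not Meta's actual methodology.

```python
# Hypothetical sketch: combine the three factor groups described above
# into a weighted risk score, then rank countries by it.
# All weights and factor names are illustrative, not Meta's real model.

RISK_WEIGHTS = {
    "long_term_conditions": 0.40,  # e.g. V-Dem, conflict datasets, human rights
    "product_impact": 0.35,        # e.g. app centrality, adoption growth
    "current_events": 0.25,        # e.g. elections, episodes of violence
}

def country_risk_score(factors: dict) -> float:
    """Weighted sum of normalized (0-1) factor scores for one country."""
    return sum(RISK_WEIGHTS[k] * factors.get(k, 0.0) for k in RISK_WEIGHTS)

def prioritize(countries: dict) -> list:
    """Rank country names from highest to lowest combined risk score."""
    return sorted(countries,
                  key=lambda c: country_risk_score(countries[c]),
                  reverse=True)
```

In practice such a review would rely on expert judgment alongside any numeric scoring; the sketch only shows how multiple factor groups can feed one prioritized ranking.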

Strategies for Helping to Keep People Safe in Countries at Risk

Using this prioritization process, we develop longer-term strategies to prepare for, respond to and mitigate the impacts of harmful offline events in the countries we deem most at risk. This allows us to act quickly to remove content that violates our policies and take other protective measures while still protecting freedom of expression and other human rights principles. Recent examples include our preparations for elections in Myanmar, Ethiopia, India and Mexico. 

  • Understanding and engaging with local contexts and communities: We know that working with the people and organizations on the ground with firsthand information and expertise is essential. Over the past few years, we’ve expanded these relationships with local civil society organizations to support country-specific education programs and product solutions, and to ensure our enforcement accounts for local context. We’ve also expanded our global network of third-party fact-checkers. Additionally, we have invested significant resources in more than 30 countries with active conflict or societal unrest. Together with UN partners and dozens of local and global NGOs, we have developed programming, including through global digital literacy initiatives such as We Think Digital or programs to make online engagement safer, such as Search for Common Ground’s program in central Africa.
  • Developing and evaluating policies to prohibit harmful content and behavior: We are constantly evaluating and refining our policies to address the evolving nuances of hate speech, to identify groups at heightened risk of violence as well as perpetrators of atrocities and human rights abuses, and to assess the potential for rumors and misinformation to contribute to offline physical harm, particularly in countries where ethnic and religious tensions are present.
  • Improving our technology and enforcement to help keep our community safe: During moments when the risk of harm is greater, we may take more aggressive action. For example, ahead of elections and during periods of heightened unrest in India, Myanmar and Ethiopia, we significantly reduce the distribution of content that likely violates our policies on hate speech or incitement of violence while our teams investigate it. Once we confirm that the content violates these policies, we remove it. (Update on October 24, 2021 at 12:45PM PT: This is in addition to our proactive hate speech detection we run in Hindi, Bengali, Tamil, Urdu, Amharic, Oromo, and Burmese.) We also significantly reduce the distribution of content posted from accounts that have repeatedly posted violating content — in addition to our standard practice of removing accounts that frequently violate our Community Standards. To protect people in Afghanistan following the Taliban takeover, we launched a feature that allows them to lock their profile to provide an extra layer of privacy, security and protection for their information.
    Update on December 15, 2021 at 9:55 AM PT: In some circumstances, we may reduce the distribution of potentially violating content even when our systems predict that a specific post has a very low probability of violating our policies, while our teams investigate it. The extent of that reduction is based on the confidence of our systems’ prediction, as well as local conditions. For example, we may take less action to reduce a piece of content in News Feed that is determined to have a 25% chance of violating our policies on hate speech versus a piece of content that has a 50% chance of violating.
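The confidence-based demotion described above can be sketched in a few lines. This is an illustrative model only, assuming a hypothetical `demotion_factor` function and made-up thresholds; Meta's actual ranking systems are not public.

```python
# Illustrative sketch (not Meta's actual system): demote a post's ranking
# score in proportion to a classifier's predicted probability that the
# post violates policy. Threshold and floor values are hypothetical.

def demotion_factor(violation_prob: float, min_prob: float = 0.2) -> float:
    """Return a multiplier in (0, 1] applied to a post's ranking score.

    Below min_prob, the post is not demoted; above it, demotion grows
    with the classifier's predicted probability of a violation.
    """
    if violation_prob < min_prob:
        return 1.0
    # Higher predicted probability -> stronger demotion, with a floor
    # so the post is reduced rather than hidden while under review.
    return max(0.1, 1.0 - violation_prob)
```

Under this sketch, a post with a 25% predicted chance of violating keeps 75% of its ranking score, while one with a 50% chance keeps only half — matching the graded response the update describes.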

In a crisis, we will determine what kind of support and teams we need to dedicate to a particular country or language, and for how long we need to keep them in place. This might include deploying our Integrity Product Operations Centers model to monitor and respond to threats in real time. It can also include seeking to ensure our integrity systems and resources are robust and ready where there may be ongoing risk of political unrest, or building temporary product levers ahead of a protest or a culturally sensitive event — all while ensuring that we have teams ready to support unplanned events, such as responding to the coup in Myanmar.

We know this work is challenging: it is a complex and often adversarial space, and there is no one-size-fits-all solution. Many of these offline issues have existed for decades or longer, and media services have a long history of being abused by those seeking to assert or maintain power or to incite violence. We know our work to keep our global community safe will never be finished, and it requires ongoing vigilance and investment. That’s what we’ve done for many years, and we will continue doing it going forward.
