Fighting Election Interference in Real Time

By Samidh Chakrabarti, Director of Product Management, Civic Engagement

Over the past two years, we’ve made steady progress preventing election interference on Facebook. But as our teams have gotten smarter, so have the adversaries seeking to misuse our services. So in September, ahead of the Brazilian and US elections, we opened our first physical elections war room in Menlo Park, California. Our goal: to get the right subject-matter experts from across the company in one place so they can respond in real time to potential problems identified by our technology.

The war room has over two dozen experts from across the company – including from our threat intelligence, data science, software engineering, research, community operations and legal teams. These employees represent and are supported by the more than 20,000 people working on safety and security across Facebook. With everyone in the same place, the teams can make decisions more quickly, reacting immediately to any threats identified by our systems and reducing the spread of potentially harmful content.

Our dashboards offer real-time monitoring of key election issues, such as efforts to prevent people from voting, increases in spam, potential foreign interference, and reports of content that violates our policies. The team also monitors news coverage and election-related activity across other social networks and traditional media. These efforts give us a collective view across platforms and help us track what type of content may go viral. To prepare, our team has also done extensive scenario planning to game out potential threats – from harassment to voter suppression – and has developed systems and procedures in advance to respond effectively.

These preparations helped a lot during the first round of Brazil’s presidential election. For example, our technology detected a false post claiming that Brazil’s Election Day had been moved from October 7 to October 8 due to national protests. While untrue, the message began to go viral. We quickly confirmed that the post violated our policies and removed it in under an hour. And within two hours, we’d removed other versions of the same fake news post.

In another example, after the first-round election results were called, our systems detected a spike in hate speech. Upon investigation, we found hateful content that appeared designed to whip up violence against people from Northeast Brazil. Our community operations team removed these posts within two hours of our technology alerting the team in the war room.

The work we are doing in the war room builds on almost two years of sustained effort and significant investment, in both people and technology, to improve security on Facebook, including during elections. Our machine learning and artificial intelligence technology is now more effective at blocking or disabling fake accounts, the root cause of so many of these issues. We’ve increased transparency and accountability in our advertising. And we continue to make progress in fighting false news and misinformation. That said, security remains an arms race, and staying ahead of these adversaries will take continued improvement over time. We’re committed to the challenge.