Meta

F8 2018: Using Technology to Remove the Bad Stuff Before It’s Even Reported

By Guy Rosen, VP of Product Management

There are two ways to get bad content, like terrorist videos, hate speech, porn or violence, off Facebook: take it down when someone flags it, or proactively find it using technology. Both are important. But advances in technology, including in artificial intelligence, machine learning and computer vision, mean that we can now find more of this content proactively, often before anyone reports it.

It’s taken time to develop this software – and we’re constantly pushing to improve it. We do this by analyzing specific examples of bad content that have been reported and removed to identify patterns of behavior. These patterns can then be used to teach our software to proactively find other, similar problems.
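To make the idea concrete, here is a minimal sketch of how reviewed examples can teach software to flag similar content. This is not Facebook's actual system; it is a toy bag-of-words Naive Bayes classifier with invented example texts and labels, shown only to illustrate learning patterns from human-reviewed reports.

```python
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label) pairs from hypothetical human review.
    Returns per-label word counts and per-label total word counts."""
    counts, totals = {}, Counter()
    for text, label in examples:
        words = text.lower().split()
        counts.setdefault(label, Counter()).update(words)
        totals[label] += len(words)
    return counts, totals

def score(text, counts, totals):
    """Return the most likely label for `text`, with add-one smoothing."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_logp = None, float("-inf")
    for label in counts:
        # log prior for the label, then log likelihood of each word
        logp = math.log(totals[label] / sum(totals.values()))
        for w in text.lower().split():
            logp += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

# Invented examples standing in for posts that reviewers already labeled.
reviewed = [
    ("buy followers cheap click here", "violating"),
    ("click here free prizes now", "violating"),
    ("happy birthday to my best friend", "ok"),
    ("great dinner with the family tonight", "ok"),
]
counts, totals = train(reviewed)
print(score("free followers click now", counts, totals))  # → violating
```

A real system would use far richer signals than word counts, but the loop is the same: human decisions become training data, and the model generalizes to similar, unreported content.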

When I talk about technology like artificial intelligence, computer vision or machine learning, people often ask why we’re not making progress more quickly. And it’s a good question. Artificial intelligence, for example, is very promising, but we are still years away from it being effective for all kinds of bad content because context is so important. That’s why we still have people reviewing reports.

And more generally, the technology needs large amounts of training data to recognize meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not often reported. That’s why we can typically do more in English, which gives us the biggest data set we have on Facebook.

But we are investing in technology to increase our accuracy across new languages. For example, Facebook AI Research (FAIR) is working on an area called multi-lingual embeddings as a potential way to address the language challenge. It’s also why we sometimes ask people for feedback on posts that may contain certain types of content, to encourage them to flag it for review. And it’s why reports from people who use Facebook are so important – so please keep them coming. By working together we can help make Facebook safer for everyone.
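To illustrate the intuition behind multi-lingual embeddings, here is a toy sketch, with vectors and vocabulary invented for the example: words from different languages are mapped into one shared vector space, so words with the same meaning land near each other, and a classifier trained in one language can carry over to another. Real embeddings have hundreds of dimensions and are learned from large corpora, not written by hand.

```python
import math

# Hand-invented 3-d vectors standing in for a learned shared embedding space.
# Keys are (language, word); nearby vectors mean similar meaning.
shared_space = {
    ("en", "hello"):  [0.90, 0.10, 0.00],
    ("es", "hola"):   [0.88, 0.12, 0.02],
    ("en", "danger"): [0.10, 0.90, 0.30],
    ("fr", "danger"): [0.12, 0.88, 0.28],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest_cross_lingual(word_key):
    """Find the closest word from a *different* language in the shared space."""
    vec = shared_space[word_key]
    candidates = [(k, cosine(vec, v)) for k, v in shared_space.items()
                  if k[0] != word_key[0]]
    return max(candidates, key=lambda kv: kv[1])[0]

print(nearest_cross_lingual(("en", "hello")))  # → ('es', 'hola')
```

The practical payoff is that patterns learned from the large English data set can help flag similar content in languages where labeled examples are scarce.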