
Enforcing Against Manipulated Media

By Monika Bickert, Vice President, Global Policy Management

People share millions of photos and videos on Facebook every day, creating some of the most compelling and creative visuals on our platform. Some of that content is manipulated, often for benign reasons, like making a video sharper or audio clearer. But there are people who engage in media manipulation in order to mislead.

Manipulations can be made with simple editing software like Photoshop or with sophisticated tools that use artificial intelligence or “deep learning” techniques to create videos that distort reality – usually called “deepfakes.” While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases.

Today we want to describe how we are addressing deepfakes and all other types of manipulated media. Our approach has several components, from investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to expose the people behind these efforts.

Collaboration is key. Across the world, we’ve been driving conversations with more than 50 global experts with technical, policy, media, legal, civic and academic backgrounds to inform our policy development and improve the science of detecting manipulated media. 

As a result of these partnerships and discussions, we are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes. Going forward, we will remove misleading manipulated media if it meets the following criteria:

  • It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
  • It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words. 
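To make the test above concrete, here is a minimal sketch, in Python, of how the two criteria and their exceptions could be expressed as a single boolean rule. Every name and field here is a hypothetical illustration for exposition, not Facebook’s actual enforcement code or review signals.

```python
from dataclasses import dataclass

@dataclass
class MediaAssessment:
    """Hypothetical review signals for a piece of video content."""
    edited_beyond_clarity: bool      # edits go beyond clarity/quality adjustments
    edits_apparent_to_viewer: bool   # an average person could spot the manipulation
    misleads_on_spoken_words: bool   # implies the subject said words they did not say
    ai_synthesized: bool             # product of AI/ML that merges, replaces or
                                     # superimposes content to appear authentic
    is_parody_or_satire: bool
    only_omits_or_reorders_words: bool

def should_remove(m: MediaAssessment) -> bool:
    """Apply the stated removal criteria: both conditions must hold,
    and the stated exceptions take precedence."""
    if m.is_parody_or_satire or m.only_omits_or_reorders_words:
        return False
    misleading_edit = (
        m.edited_beyond_clarity
        and not m.edits_apparent_to_viewer
        and m.misleads_on_spoken_words
    )
    return misleading_edit and m.ai_synthesized
```

Note that both prongs must hold, the misleading edit and the AI/ML provenance, which mirrors the “And” joining the two bullets above.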

Consistent with our existing policies, audio, photos or videos, whether deepfakes or not, will be removed from Facebook if they violate any of our other Community Standards, including those governing nudity, graphic violence, voter suppression and hate speech.

Videos that don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages. If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.
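As a rough sketch of that flow, the mapping from a fact-checker’s rating to the actions described above might look like the following. The rating labels and action strings are assumptions for illustration, not Facebook’s production pipeline.

```python
from enum import Enum

class Rating(Enum):
    TRUE = "true"
    PARTLY_FALSE = "partly false"
    FALSE = "false"

def enforcement_actions(rating: Rating, running_as_ad: bool) -> list[str]:
    """Translate a fact-checker rating into the enforcement steps
    described above. Purely illustrative."""
    actions = []
    if rating in (Rating.FALSE, Rating.PARTLY_FALSE):
        actions.append("reduce distribution in News Feed")
        if running_as_ad:
            actions.append("reject the ad")
        actions.append("show warning to people who see, share or have shared it")
    return actions

# Example: a video rated false that is also running as an ad
print(enforcement_actions(Rating.FALSE, running_as_ad=True))
```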

This approach is critical to our strategy, and it is one that experts specifically recommended in our conversations with them. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labeling them as false, we’re providing people with important information and context.

Our enforcement strategy against misleading manipulated media also benefits from our work to root out the people behind these efforts. Just last month, we identified and removed a network using AI-generated photos to conceal its fake accounts. Our teams continue to proactively hunt for fake accounts and other coordinated inauthentic behavior.

We are also engaged in identifying manipulated content, of which deepfakes are the most challenging to detect. That’s why last September we launched the Deepfake Detection Challenge, which has spurred people from all over the world to produce more research and open source tools to detect deepfakes. This project, supported by $10 million in grants, includes a cross-sector coalition of organizations including the Partnership on AI, Cornell Tech, the University of California, Berkeley, MIT, WITNESS, Microsoft, the BBC and AWS, among several others in civil society and the technology, media and academic communities.
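The open source tools the challenge encourages typically treat detection as binary classification over face crops sampled from video frames. Purely as an illustration of that framing, and not any entrant’s actual model, a toy per-frame classifier in PyTorch might look like this:

```python
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Toy real-vs-fake classifier over single frames; illustrative only."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # logit: > 0 suggests "fake"

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) RGB face crops sampled from a video
        x = self.features(frames).flatten(1)
        return self.head(x).squeeze(1)

# A video-level score can average per-frame probabilities:
model = FrameClassifier()
frames = torch.randn(8, 3, 224, 224)  # 8 sampled face crops (dummy data)
video_score = torch.sigmoid(model(frames)).mean()
```

Real detectors use far deeper backbones, face alignment and temporal cues across frames; the point here is only the real-versus-fake scoring shape the challenge’s tools share.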

In a separate effort, we’ve partnered with Reuters, the world’s largest multimedia news provider, to help newsrooms worldwide identify deepfakes and manipulated media through a free online training course. News organizations increasingly rely on third parties for large volumes of images and video, and identifying manipulated visuals is a significant challenge. This program aims to support newsrooms trying to do this work.

As these partnerships and our own insights evolve, so too will our policies toward manipulated media. In the meantime, we’re committed to investing within Facebook and working with other stakeholders in this area to find solutions with real impact.


