
A Further Update on New Zealand Terrorist Attack

By Guy Rosen, VP, Product Management 

We continue to keep the people, families and communities affected by the tragedy in New Zealand in our hearts. Since the attack, we have been working directly with the New Zealand Police to support their response and investigation. In addition, people are looking to understand how online platforms such as Facebook were used to circulate horrific videos of the terrorist attack, and we wanted to provide additional information from our review into how our products were used and how we can improve going forward.

Timeline

As we posted earlier this week, we removed the attacker’s video within minutes of the New Zealand Police’s outreach to us, and in the aftermath, we have people working on the ground with authorities. We will continue to support them in every way we can. In light of the active investigation, police have asked us not to share certain details. At present we are able to provide the information below:

Safety on Facebook Live

We recognize that the immediacy of Facebook Live brings unique challenges, and in the past few years we’ve focused on enabling our review team to get to the most important videos faster. We use artificial intelligence to detect and prioritize videos that are likely to contain suicidal or harmful acts, we have improved the context we provide reviewers so that they can make the most informed decisions, and we have built systems to help us quickly contact first responders to get help on the ground. We continue to focus on the tools, technology and policies needed to keep people safe on Live.

Artificial Intelligence

Many people have asked why artificial intelligence (AI) didn’t detect the video from last week’s attack automatically. AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove. But it’s not perfect.

AI systems are based on “training data”, which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video. This approach has worked very well for areas such as nudity, terrorist propaganda and graphic violence, where there are large numbers of examples we can use to train our systems. However, this particular video did not trigger our automatic detection systems. To achieve that, we will need to provide our systems with large volumes of data for this specific kind of content, which is difficult because these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example, if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.
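To make the training-data point concrete, here is a purely illustrative sketch (not a description of our production systems) of a toy classifier trained on synthetic “frame feature” vectors. Every name and number in it is made up; the point is simply that a model trained on thousands of innocuous examples but only a handful of examples of a rare, harmful class will tend to miss that class, especially when its features overlap with benign content such as game footage.

```python
# Purely illustrative: a toy classifier trained on synthetic "frame feature"
# vectors. Nothing here reflects Facebook's systems; the data, feature size
# and model are made up to show why the number of labelled examples matters.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Thousands of innocuous examples (ordinary live videos, game streams, etc.)...
benign = rng.normal(loc=0.0, scale=1.0, size=(5000, 32))
# ...but only a handful of examples of the rare, harmful class, whose features
# partially overlap with the benign ones (think first-person game footage).
harmful = rng.normal(loc=0.3, scale=1.0, size=(20, 32))

X = np.vstack([benign, harmful])
y = np.array([0] * len(benign) + [1] * len(harmful))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), digits=3, zero_division=0))
# With so few positives and such heavy class imbalance, the model tends to
# miss most (often all) of the rare class: the same basic reason a rare
# real-world event is hard to detect automatically.
```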

AI is an incredibly important part of our fight against terrorist content on our platforms, and while its effectiveness continues to improve, it is never going to be perfect. People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us. That’s why last year we more than doubled the number of people working on safety and security to over 30,000 people, including about 15,000 content reviewers, and why we encourage people to report content that they find disturbing.

Reporting

During the entire live broadcast, we did not get a single user report. This matters because reports we get while a video is broadcasting live are prioritized for accelerated review. We do this because when a video is still live, if there is real-world harm we have a better chance to alert first responders and try to get help on the ground.

Last year, we expanded this acceleration logic to also cover videos that had very recently been live, in the past few hours. Given our focus on suicide prevention, to date we have applied this acceleration only when a recently live video is reported for suicide.

In Friday’s case, the first user report came in 29 minutes after the broadcast began, 12 minutes after the live broadcast ended. In this report, and a number of subsequent reports, the video was reported for reasons other than suicide, and as such it was handled according to different procedures. As a result, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that are routed to accelerated review.
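For illustration only, the sketch below shows one way such acceleration logic could be expressed. The category names, time window and function shape are assumptions made for the example; they do not describe the production reporting systems.

```python
# Hypothetical sketch of report-triage logic of the kind described above.
# The category names, time window, and function shape are assumptions made
# for illustration; they do not describe the production reporting systems.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

RECENTLY_LIVE_WINDOW = timedelta(hours=3)   # assumed "past few hours" window
ORIGINAL_CATEGORIES = {"suicide"}           # acceleration before the re-examination
EXPANDED_CATEGORIES = {"suicide", "violence", "terrorism"}  # hypothetical expansion


@dataclass
class Report:
    category: str                 # what the user reported the video for
    is_live: bool                 # the video is broadcasting right now
    ended_at: Optional[datetime]  # when the broadcast ended, if it has


def should_accelerate(report: Report, now: datetime, categories: set) -> bool:
    """Decide whether a report is routed to the accelerated review queue."""
    if report.is_live:
        # Reports on live broadcasts are always prioritized, because first
        # responders can still be alerted while harm may be ongoing.
        return True
    recently_live = (report.ended_at is not None
                     and now - report.ended_at <= RECENTLY_LIVE_WINDOW)
    # Recently live videos are accelerated only for the covered categories.
    return recently_live and report.category in categories


# A report filed 12 minutes after a broadcast ended, for a reason other than
# suicide, is not accelerated under the original category set...
now = datetime(2019, 3, 15, 14, 0)
late_report = Report(category="violence", is_live=False,
                     ended_at=now - timedelta(minutes=12))
print(should_accelerate(late_report, now, ORIGINAL_CATEGORIES))  # False
# ...but would be under an expanded set.
print(should_accelerate(late_report, now, EXPANDED_CATEGORIES))  # True
```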

Circulation of the Video

The video itself received fewer than 200 views while it was live, and was viewed about 4,000 times before being removed from Facebook. During this time, one or more users captured the video and began to circulate it. At least one of these was a user on 8chan, who posted a link to a copy of the video on a file-sharing site, and we believe that from there it started circulating more broadly. Forensic identifiers on many of the videos that later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan.

This isn’t the first time violent, graphic videos, whether live streamed or not, have gone viral on various online platforms. Similar to those previous instances, we believe the broad circulation was a result of a number of different factors:

  1. There has been coordination by bad actors to distribute copies of the video to as many people as possible through social networks, video sharing sites, file sharing sites and more.
  2. Multiple media channels, including TV news channels and websites, broadcast the video. We recognize there is a difficult balance to strike in covering a tragedy like this while not providing bad actors additional amplification for their message of hate.
  3. Individuals around the world then re-shared copies they got through many different apps and services, for example filming the broadcasts on TV, capturing videos from websites, filming computer screens with their phones, or just re-sharing a clip they received.

People shared this video for a variety of reasons. Some intended to promote the killer’s actions, others were curious, and others actually intended to highlight and denounce the violence. Distribution was further propelled by broad reporting of the existence of a video, which may have prompted people to seek it out and to then share it further with their friends.

Blocking the Video

Immediately after the attack, we designated this as a terror attack, meaning that any praise, support or representation of the attack violates our Community Standards and is not permitted on Facebook. Given the severe nature of the video, we prohibited its distribution even if it was shared to raise awareness, or if only a segment was shared as part of a news report.

In the first 24 hours, we removed more than 1.2 million videos of the attack at upload, which were therefore prevented from being seen on our services. Approximately 300,000 additional copies were removed after they were posted.

We’ve been asked why our image and video matching technology, which has been so effective at preventing the spread of propaganda from terrorist organizations, did not catch those additional copies. What challenged our approach was the proliferation of many different variants of the video, driven by the broad and diverse ways in which people shared it:

First, we saw a core community of bad actors working together to continually re-upload edited versions of this video in ways designed to defeat our detection.

Second, a broader set of people distributed the video and unintentionally made it harder to match copies. Some people may have seen the video on a computer or TV, filmed that with a phone and sent it to a friend. Still others may have watched the video on their computer, recorded their screen and passed that on. Websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded the video into various formats.

In total, we found and blocked over 800 visually distinct variants of the video that were circulating. This is different from official terrorist propaganda from organizations such as ISIS, which, while distributed to a hard-core set of followers, is not rebroadcast by mainstream media organizations and is not re-shared widely by individuals.

We are working to better understand which techniques work for cases like this, where many variants of an original video circulate. For example, as part of our efforts we employed audio-matching technology to detect videos that had visually changed beyond our systems’ ability to recognize them automatically but which had the same soundtrack.
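As a rough illustration of why edited and re-filmed variants are hard to catch, the sketch below implements a very simple perceptual hash (block-average and threshold) over synthetic frames and compares copies by Hamming distance. The hashing scheme, threshold and synthetic data are assumptions for the sake of the example, not the matching or audio technology described above: a lightly re-encoded copy stays close to the original hash, while a cropped, re-filmed variant drifts far away from it.

```python
# Illustrative only: a very simple perceptual hash (block-average a frame and
# threshold at its mean), used to compare copies by Hamming distance. The
# scheme, threshold and synthetic frames are assumptions for the example and
# are not the matching or audio technology referred to above.
import numpy as np

rng = np.random.default_rng(1)


def average_hash(frame: np.ndarray, size: int = 8) -> np.ndarray:
    """Downsample a grayscale frame to size x size blocks and threshold at the mean."""
    bh, bw = frame.shape[0] // size, frame.shape[1] // size
    cropped = frame[: bh * size, : bw * size]
    blocks = cropped.reshape(size, bh, size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()


def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))


MATCH_THRESHOLD = 10  # assumed: hashes within 10 bits count as the same video

# A known frame of the original video, and a lightly re-encoded copy
# (simulated by adding a little pixel noise).
original = rng.random((360, 640))
reencoded = np.clip(original + rng.normal(0.0, 0.02, original.shape), 0.0, 1.0)

# A heavily altered variant, e.g. filmed off a screen, cropped and brightened.
refilmed = np.clip(original[40:-40, 80:-80] * 1.4 + 0.1, 0.0, 1.0)

reference = average_hash(original)
print(hamming(reference, average_hash(reencoded)))  # small distance: still matched
print(hamming(reference, average_hash(refilmed)))   # large distance: slips past the match threshold
```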

Next Steps

Our greatest priorities right now are to support the New Zealand Police in every way we can, and to continue to understand how our systems and other online platforms were used as part of these events, so that we can identify the most effective policy and technical steps.

What happened in New Zealand was horrific. Our hearts are with the victims, families and communities affected by this horrible attack.

We’ll continue to provide updates as we learn more.