By Antonia Woodford, Product Manager
Every day, our team fights the spread of false news through a combination of technology and assessments from independent third-party fact-checkers. With every false story that surfaces, we learn a bit more about how misinformation takes shape online and, hopefully, how we can detect it earlier. In this new series, we’ll look at some pieces of false news that recently circulated on Facebook — both those we’ve caught and some we missed.
What we saw
This summer, a video featuring CCTV footage was re-shared by multiple accounts across several social networks. In the video, a man wearing a white robe and a shemagh, or head scarf, spits in the face of a blonde woman in what appears to be a hospital reception area. The accompanying caption reads, “Man from Saudi spits in the face of the poor receptionist at a Hospital in London then attacks other staff.”
Was it true?
The video is real, but as the AFP reports, the incident occurred at a veterinary hospital in Kuwait in 2017; the footage was recirculated this summer with a falsified caption.
What to know
One of the primary types of video- and photo-based misinformation involves old images or videos paired with captions or commentary that misrepresent their context. These posts are often used to fuel xenophobic sentiments and are often targeted at migrants and refugees, as the International Fact-Checking Network — the association that certifies the third-party fact-checkers we partner with — has explained. On Facebook, we’ve seen years-old images of violent acts, protests and war zones reposted and used to inflame current racial or ethnic tensions.
How we found it
There are two primary ways we find stories that are likely to be false: our machine learning models detect potentially false stories on Facebook, or our third-party fact-checkers identify them directly. In this case, the AFP found the out-of-context video.
Once a potentially false story has been found — regardless of how it was identified — fact-checkers review the claims in the story, rate their accuracy and provide an explanation as to how they arrived at their rating. The AFP investigated this video and its caption and submitted a “false” rating and explainer article, which led us to reduce its distribution in News Feed.
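To make the machine-learning side of this pipeline concrete, here is a rough sketch of one way such detection could work: a classifier trained on stories fact-checkers have already rated, which enqueues high-scoring new stories for human review. Facebook has not published its model; logistic regression over TF-IDF features stands in here purely for illustration, and the headlines and threshold are invented.

```python
# An illustrative sketch (not Facebook's published model) of flagging
# potentially false stories with a supervised text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: headlines with prior fact-checker verdicts.
headlines = [
    "Miracle fruit cures all known diseases, doctors stunned",
    "City council approves new budget for road repairs",
    "Celebrity secretly replaced by body double, insider claims",
    "Local library extends weekend opening hours",
]
labels = [1, 0, 1, 0]  # 1 = rated false, 0 = rated true

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# A high score only enqueues the story for human review; the rating
# itself always comes from a fact-checker.
score = model.predict_proba(["Shocking cure doctors don't want you to see"])[0][1]
if score > 0.7:  # illustrative threshold
    print(f"Enqueue for fact-checker review (score={score:.2f})")
```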
What we saw
After Representative Jair Bolsonaro, a candidate in Brazil’s presidential election, was stabbed in Juiz de Fora, Brazil, on September 6, a photo circulated on Facebook of a man standing next to Senator Gleisi Hoffmann. The caption claimed that the man in the photo was Bolsonaro’s attacker.
Was it true?
Brazilian fact-checker Aos Fatos reviewed the photo and found that not only was the man standing next to Hoffmann not Bolsonaro’s attacker, but the photo was also taken at an event in a completely different city, Curitiba.
What to know
Violent events like the attack on Bolsonaro can lead to a wave of misinformation about perpetrators, with fabricated posts making false claims about an assailant’s identity and ideological motivations. Similar internet memes have circulated in the US following events like the Las Vegas shooting in 2017 and the Stoneman Douglas High School shooting in 2018.
How we found it
Our machine learning model identified this photo as potentially false, and Brazilian fact-checker Aos Fatos reviewed it. They determined that the man pictured was not the attacker and that the event depicted was not in Juiz de Fora.
Based on Aos Fatos’ “false” rating, we demoted the image in News Feed. We were also able to use photo detection technology to identify and demote thousands of identical photos that had been natively uploaded to Facebook. Using machine learning to find duplicates of debunked stories is an important technique for both photo-based and article-based misinformation. Because so much content is posted to Facebook every day, automation helps make the duplication detection process much more efficient, allowing us to find more instances of misinformation, faster.
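As a rough illustration of how duplicate photo detection can work (Facebook has not disclosed its exact matching method), the sketch below uses a perceptual “average hash”: visually identical uploads produce hashes that differ in only a few bits, even after resizing or re-encoding. The file names and the distance threshold are hypothetical.

```python
# A minimal sketch of near-duplicate photo detection via average hashing.
# This is an illustrative stand-in, not Facebook's actual technique.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink the image to a small grayscale grid, then set a bit for
    each pixel that is brighter than the grid's mean brightness."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

debunked = average_hash("debunked_photo.jpg")  # hypothetical file
upload = average_hash("new_upload.jpg")        # hypothetical file
if hamming_distance(debunked, upload) <= 5:    # illustrative threshold
    print("Likely duplicate of a debunked photo; demote in News Feed.")
```

Because a hash like this is just a small integer, new uploads can be checked against an index of known debunked photos at scale, which is what makes automated duplicate detection efficient.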
What we saw
This story, published by the website World Facts FTW, claimed that NASA was looking to compensate volunteers up to $100,000 to participate in 60-day “bed rest studies.” The headline certainly seemed enticing — the post racked up millions of views on Facebook.
Was it true?
US-based fact-checker Politifact investigated this story, and while they found that NASA has paid people to stay in bed for long periods of time, the headline of this particular story was misleading. The photos in the World Facts FTW article came from a 2015 Vice article about a NASA medical research study in which the author stayed in bed for 70 days but was paid only $18,000, not $100,000. (Politifact used a reverse image search to find the Vice article.) Politifact couldn’t verify the $100,000 claim with either NASA or World Facts FTW, so they rated the article’s central claim as false.
What to know
We’re getting better at detecting and enforcing against false news, even as perpetrators’ tactics continue to evolve. And while we caught and reduced the distribution of many pieces of misinformation on Facebook this summer, there are still some we miss. This can happen when:
- We fail to identify the misinformation at all
- We identify a piece of content as misinformation, but after it’s already gone viral
- We identify a piece of misinformation early, but it goes viral in the time it takes for fact-checkers to research it and provide a veracity rating
In this particular case, an improved similarity detection process we’ve implemented allowed us to identify this older article, which had been circulating on Facebook for months. Even so, it took us too long to enforce against this piece, and we continue to develop new technology to catch these kinds of stories before they go viral.
How we found it
This article was originally posted in September 2017. In July 2018, US-based fact-checker Politifact investigated the story and, as described above, determined the central claim was false.
Our similarity detection process matched the “false” verdict from the investigation article on Politifact’s website to the instance of the World Facts FTW article that was circulating on Facebook. Based on this potential match, our system enqueued the circulating version of the story for our network of fact-checkers; The Weekly Standard reviewed it and also assigned it a “false” rating. Following that rating, we demoted all Facebook posts linking to the article.
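For illustration, here is one common way such text-similarity matching can be implemented: TF-IDF vectors compared by cosine similarity, with close matches enqueued for fact-checker review. This is a sketch of the general technique, not Facebook’s published implementation; the texts and threshold are invented.

```python
# A minimal sketch of matching a fact-checker's published debunk to
# stories circulating on a platform. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical headlines of stories circulating on the platform.
circulating = [
    "NASA will pay you $100,000 to stay in bed for 60 days",
    "Ten foods that doctors say you should never eat",
]

# Text drawn from the fact-checker's verdict article.
debunk = "No, NASA is not paying $100,000 for 60-day bed rest studies"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(circulating + [debunk])

# Compare the debunk (last row) against every circulating story.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for story, score in zip(circulating, scores):
    print(f"{score:.2f}  {story}")
    if score > 0.5:  # illustrative threshold
        print("  -> enqueue for fact-checker review")
```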