Facebook on Thursday announced the expansion of its fact-checking army, saying it will send “photos and videos to all of our 27 partners in 17 countries around the world,” using artificial intelligence that draws on “various engagement signals, including feedback from people on Facebook, to identify potentially false content.”
The company says that because people share millions of photos and videos on Facebook each day, the format creates an “easy opportunity for manipulation by bad actors.”
“The same hoax can travel across different content types, so it’s important to build defenses against misinformation across articles, as well as photos and videos,” said Facebook.
Facebook sorts photo and video “misinformation” into three categories: “(1) Manipulated or Fabricated, (2) Out of Context, and (3) Text or Audio Claim. These are the kinds of false photos and videos that we see on Facebook and hope to further reduce with the expansion of photo and video fact-checking.”
So, if Facebook users report a picture or video suspected of being Fake News, Facebook's AI will use machine learning to analyze those reports and then route the flagged content to humans, who make the final call on what's real and what's fake.
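The triage flow described above can be sketched roughly as follows. This is a toy illustration, not Facebook's actual system: the class, function names, and the scoring rule are all hypothetical stand-ins for the real machine learning model.

```python
# Hypothetical sketch of the report-triage flow: user reports raise a
# suspicion score, and items above a threshold are queued for human
# fact-checker review. Names and scoring logic are illustrative only.
from dataclasses import dataclass

@dataclass
class ReportedItem:
    item_id: str
    reports: int = 0       # user "false news" reports
    engagement: int = 0    # views/shares, as a crude engagement signal

def suspicion_score(item: ReportedItem) -> float:
    """Toy stand-in for the ML model: more reports relative to
    engagement yields a higher score (capped at 1.0)."""
    if item.engagement == 0:
        return 0.0
    return min(1.0, item.reports / item.engagement)

def triage(items, threshold=0.1):
    """Route items whose score clears the threshold to human review."""
    return [i.item_id for i in items if suspicion_score(i) >= threshold]
```

For example, a post with 50 reports against 200 views would be queued for review under this toy rule, while one with a single report against 5,000 views would not.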
How does this work?
Similar to our work for articles, we have built a machine learning model that uses various engagement signals, including feedback from people on Facebook, to identify potentially false content. We then send those photos and videos to fact-checkers for their review, or fact-checkers can surface content on their own. Many of our third-party fact-checking partners have expertise evaluating photos and videos and are trained in visual verification techniques, such as reverse image searching and analyzing image metadata, like when and where the photo or video was taken. Fact-checkers are able to assess the truth or falsity of a photo or video by combining these skills with other journalistic practices, like using research from experts, academics or government agencies. –Facebook
As we get more ratings from fact-checkers on photos and videos, we will be able to improve the accuracy of our machine learning model. We are also leveraging other technologies to better recognize false or misleading content. For example, we use optical character recognition (OCR) to extract text from photos and compare that text to headlines from fact-checkers’ articles. We are also working on new ways to detect if a photo or video has been manipulated. These technologies will help us identify more potentially deceptive photos and videos to send to fact-checkers for manual review. Learn more about how we approach this work in an interview with Tessa Lyons, Product Manager on News Feed. –Facebook
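The OCR-matching step Facebook describes, extracting text from a photo and comparing it to fact-checkers' headlines, can be sketched as below. This assumes the text has already been pulled out of the image by an OCR engine (such as Tesseract); the headline list and similarity cutoff are illustrative, using only Python's standard-library difflib.

```python
# Sketch of matching OCR'd image text against previously debunked
# headlines. Assumes OCR extraction has already happened; the headline
# list and 0.8 cutoff are hypothetical examples, not Facebook's values.
import difflib

DEBUNKED_HEADLINES = [
    "miracle cure discovered by local doctor",
    "politician secretly born in another country",
]

def matches_debunked(ocr_text: str, cutoff: float = 0.8):
    """Return debunked headlines whose string similarity to the
    OCR'd text meets or exceeds the cutoff (0..1)."""
    text = ocr_text.lower().strip()
    return [
        h for h in DEBUNKED_HEADLINES
        if difflib.SequenceMatcher(None, text, h).ratio() >= cutoff
    ]
```

A real system would use fuzzier, scale-tolerant matching (and OCR is itself noisy), but the basic comparison of extracted text to known-false claims works along these lines.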
“The same false claim can appear as an article headline, as text over a photo or as audio in the background of a video,” Facebook product manager Tessa Lyons said in the statement. “In order to fight misinformation, we have to be able to fact-check it across all of these different content types.”
4chan is going to have a lot of fun with this…