The web is full of algorithmically generated content designed to trick our eyes into believing it’s real. Numerous fake face generators exist, and they are sometimes used to populate fake social media accounts. But now there is a way to tell whether the person behind the black mirror is real.
Researchers at Sensity, an Amsterdam-based visual threat intelligence company, released an online tool designed to spot fake human faces in pictures and videos. With a high degree of confidence, the tool’s algorithms can spot whether content was manipulated using generative adversarial networks (GANs). These are often employed to craft the deepfakes that circulate freely on the web.
The forgery-detection technology combines deep learning with visual forensics techniques, Giorgio Patrini, CEO and co-founder of Sensity, explained to CyberNews. Engineers trained the deepfake detectors on hundreds of thousands of deepfake videos and GAN-generated images.
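Sensity has not published its model internals, but the general approach Patrini describes, training a deep classifier on large sets of real and GAN-generated faces, can be sketched roughly as follows. This is a minimal illustration in PyTorch, not Sensity’s actual implementation; the architecture, data layout, and hyperparameters are all assumptions.

```python
# Illustrative sketch only: a minimal binary real-vs-fake face classifier.
# Assumes a folder layout of data/real/*.jpg and data/fake/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A stock ResNet-18 backbone with a single logit head: fake vs. real.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for images, labels in loader:
    optimizer.zero_grad()
    logits = model(images).squeeze(1)       # shape (batch,)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
```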
Patrini’s team obtained some of the training material by scouting the internet; the engineers crafted the rest themselves, to present the algorithms that fight other algorithms with more formidable challenges.
Patrini said that for now, the detector can only tell whether a face was faked, but as it improves, it will also be able to determine whether an object in a video or picture was faked. A neat feature of the tool is that it can identify which GAN model was used to create a fake face.
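That attribution feature can be thought of as a multi-class version of the same classifier: one output per known generator family rather than a single fake/real score. A hedged sketch, with a hypothetical class list:

```python
# Sketch of GAN-source attribution as multi-class classification.
# The generator families listed here are assumed labels for illustration.
import torch.nn as nn
from torchvision import models

GAN_FAMILIES = ["StyleGAN", "StyleGAN2", "PGGAN", "other/unknown"]

attribution_model = models.resnet18(weights=None)
attribution_model.fc = nn.Linear(attribution_model.fc.in_features,
                                 len(GAN_FAMILIES))
# Trained with nn.CrossEntropyLoss over these classes, the softmax output
# gives a per-family probability, i.e. "which GAN likely made this face".
```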
We tested the tool ourselves, using photos created by several different AI fake face generators. In every case, the detection tool worked as designed, recognizing the fake faces with 99.9% confidence. We also used several of our own photos as a control sample, and the tool correctly recognized that the faces were, in fact, real.
Results for face swaps were not as straightforward. For example, we used an excerpt from the YouTube show “Sassy Justice,” in which the South Park creators used Mark Zuckerberg’s face to portray a salesman. The tool correctly recognized that the face was not GAN-generated, but the confidence level for the face swap stood at only 64.7%.
Patrini explained that the detector can reason about its own accuracy and uncertainty. If there are clear signs of manipulation, such as the visual artifacts left behind by a deepfake generator, the tool reports a confidence level of over 90%.
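In practice, that means the score is meant to be read in bands. A trivial sketch of how a consumer of the detector’s output might interpret it; the threshold values here are our assumption, not Sensity’s documented cutoffs:

```python
# Interpreting a detection confidence score in bands (assumed thresholds).
def interpret(confidence: float) -> str:
    if confidence >= 0.90:
        return "clear signs of manipulation (likely deepfake)"
    if confidence >= 0.50:
        return "some signals of manipulation, but inconclusive"
    return "no strong evidence of manipulation"

print(interpret(0.999))  # the fake-face generators in our test
print(interpret(0.647))  # the "Sassy Justice" face swap
```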
“Otherwise, if confidence is low, some signals of manipulation are found, but they are inconclusive to classify the media as a deepfake. We are continuously monitoring performance and working on R&D to obtain higher accuracy and confidence as shown to our users,” said Patrini.
He explained that Sensity already has technical users integrating its technology, but since deepfakes are increasingly becoming a global problem, there are more and more everyday applications for a deepfake detector.
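For those technical users, integration typically means sending media to a detection service over HTTP. The sketch below is purely illustrative: the endpoint, field names, and response schema are invented for the example and are not Sensity’s actual API.

```python
# Hypothetical example of submitting an image to a detection endpoint.
import requests

with open("my_photo.jpg", "rb") as f:
    resp = requests.post(
        "https://api.example.com/v1/detect",  # hypothetical endpoint
        files={"image": f},
    )
resp.raise_for_status()
print(resp.json())  # e.g. {"fake": true, "confidence": 0.999} (assumed schema)
```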
There have been several notable instances of bad actors using GAN-generated images. For example, last July, Reuters uncovered a fake journalist named Oliver Taylor, whose avatar used an AI-created face. The persona, which was used to attack activists, managed to attract attention from major Israeli news outlets.
In another instance, The Financial Times reported that GAN-generated faces were used in influence campaigns linked to China, which pushed pro-Beijing talking points, and to Russia, which used them to create fictional editors.
Back in August, UCL published a report ranking deepfakes as the most severe AI crime threat to date. Apart from the obvious dangers of shaming and fake revenge porn, experts point to fake audio and video content that could be used for extortion.