Using our pulse to spot deepfakes
Deepfake production has progressed significantly in recent years, with researchers now able to produce high-quality fakes from far less training data than before. This growth is widely expected to have significant implications for media and society, so it's important that our ability to detect deepfakes keeps pace with our ability to create them.
New research from Binghamton University suggests the changes in our face as a result of our pulse could play a crucial role. The researchers have been working with computer scientists from Intel to develop a deepfake detection tool, called FakeCatcher, which they believe has a detection rate of around 90%.
The system works by monitoring our skin color, with a particular focus on the subtle changes in color caused by our heartbeat.
The approach relies on a process known as photoplethysmography (helpfully abbreviated as PPG!), which is more commonly used in the pulse oximeters found in doctors' offices, as well as in a wide range of wearable fitness trackers, such as the Apple Watch, which measure your heartbeat as you work out.
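To make the idea concrete, here is a minimal sketch of how a PPG signal can be recovered from video: average the green channel over a patch of skin in each frame, then find the dominant frequency in the plausible heart-rate band. The region bounds, frame rate, and green-channel averaging are illustrative simplifications of my own, not FakeCatcher's actual pipeline:

```python
import numpy as np

def extract_ppg_signal(frames, roi):
    """Average the green channel over a skin region in each frame.

    frames: array of shape (n_frames, height, width, 3), RGB
    roi: (top, bottom, left, right) bounds of the skin patch
    """
    top, bottom, left, right = roi
    return frames[:, top:bottom, left:right, 1].mean(axis=(1, 2))

def estimate_heart_rate(signal, fps):
    """Estimate pulse in beats per minute from the dominant frequency
    in the plausible human heart-rate band (0.7-4 Hz, i.e. 42-240 bpm)."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])] * 60

# Synthetic demo: tiny frames whose green channel pulses at 1.2 Hz (72 bpm).
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = np.full((len(t), 8, 8, 3), 128.0)
frames[:, :, :, 1] += 2.0 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]

signal = extract_ppg_signal(frames, roi=(0, 8, 0, 8))
print(round(estimate_heart_rate(signal, fps), 1))  # → 72.0
```

Real skin-tone variations are far subtler than this synthetic signal, which is why production systems need careful filtering and region selection.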
Checking the pulse
The technology measures PPG signals from various parts of the face and then analyzes the spatial and temporal consistency of that data. The rationale is that deepfake videos contain no real pulse information, so there is no consistent heartbeat signal across the face. This contrasts with authentic videos, in which the blood flow in, say, our right cheek and our left cheek is driven by the same heartbeat.
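A hedged sketch of what such a consistency check could look like, using Pearson correlation between PPG signals from two facial regions (the function names and synthetic signals here are illustrative assumptions, not the team's actual method):

```python
import numpy as np

def ppg_consistency(signal_a, signal_b):
    """Pearson correlation between two regional PPG signals.
    Authentic video: one heartbeat drives both regions -> high correlation.
    Deepfake: no shared underlying pulse -> correlation near zero."""
    a = signal_a - signal_a.mean()
    b = signal_b - signal_b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
t = np.arange(300) / 30.0                    # 10 s of video at 30 fps
pulse = np.sin(2 * np.pi * 1.2 * t)          # shared 72 bpm heartbeat

# "Real": both cheeks carry the same pulse plus independent sensor noise.
real_left = pulse + 0.2 * rng.standard_normal(len(t))
real_right = pulse + 0.2 * rng.standard_normal(len(t))

# "Fake": each region is just noise, with no underlying pulse.
fake_left = rng.standard_normal(len(t))
fake_right = rng.standard_normal(len(t))

print(ppg_consistency(real_left, real_right))   # close to 1 (shared pulse)
print(ppg_consistency(fake_left, fake_right))   # near 0 (no shared pulse)
```

A detector can threshold this kind of score, or feed many such pairwise scores into a classifier.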
The team developed a highly sophisticated physiological capture laboratory, complete with 18 cameras and infrared functionality. Each volunteer is fitted with devices to monitor both their breathing and heart rate. A single 30-minute session captures so much data that roughly 12 hours of computer processing time is required to crunch through it.
The team is able to capture images not just in 2D and 3D, but also via a range of thermal cameras and physiology sensors.
The idea is to use the physiological data to generate a signature that can be compared with previous data to help spot fakes.
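The article doesn't detail what such a signature looks like, but one plausible form, sketched below under assumptions of my own (the band choices, normalization, and distance metric are all illustrative), is a normalized vector of per-region spectral energies in the heart-rate band, compared against reference captures by distance:

```python
import numpy as np

def ppg_signature(region_signals, fps, bands=np.arange(0.7, 4.0, 0.3)):
    """Illustrative 'signature': per-region spectral energy in coarse
    heart-rate bands, normalized to be amplitude-invariant."""
    features = []
    for sig in region_signals:
        sig = sig - sig.mean()
        spectrum = np.abs(np.fft.rfft(sig))
        freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
        for lo in bands:
            mask = (freqs >= lo) & (freqs < lo + 0.3)
            features.append(spectrum[mask].sum())
    features = np.array(features)
    return features / max(np.linalg.norm(features), 1e-12)

def signature_distance(sig_a, sig_b):
    """Euclidean distance between signatures; small when two captures
    share the same physiological structure."""
    return float(np.linalg.norm(sig_a - sig_b))

# Two captures of the same pulse vs. a pulse-free fake, two regions each.
rng = np.random.default_rng(1)
t = np.arange(300) / 30.0
pulse = np.sin(2 * np.pi * 1.2 * t)
take1 = [pulse + 0.1 * rng.standard_normal(300) for _ in range(2)]
take2 = [pulse + 0.1 * rng.standard_normal(300) for _ in range(2)]
fake = [rng.standard_normal(300) for _ in range(2)]

d_real = signature_distance(ppg_signature(take1, 30), ppg_signature(take2, 30))
d_fake = signature_distance(ppg_signature(take1, 30), ppg_signature(fake, 30))
print(d_real < d_fake)  # genuine captures sit closer together
```

The appeal of a compact signature like this is that it can be stored and compared cheaply, without retaining the raw video.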
It's important to note that the fakes generated in the lab are generally of a much higher standard than those found in the wild, but the team believes the high benchmarks they set themselves make the tool that much more potent.
The lab also generates its "fake" videos using composites built from real people. By contrast, deepfakes use data taken from other people to generate their videos.
A hive of activity
The original findings from the FakeCatcher project have spawned a wave of activity, with nearly 30 other researchers from around the world using the technology for their own analyses.
This hive of activity has caused concern that researchers might be helping cybercriminals to understand the methods of detection being developed, and therefore how to circumvent them.
It's a concern the team doesn't share, as the science behind the system is a substantial barrier that prevents the uninitiated from copying it. It's not a case of taking something "off the shelf" and using it.
Intel's involvement in the project has been key, as the company has an intense interest in both augmented/virtual reality and volumetric capture. As such, Intel Studios has arguably the biggest volumetric capture facility in the world, with an array of 100 cameras inside a 10,000-square-foot dome. The stage is capable of capturing around 30 people simultaneously.
The company intends to use this facility for a wide range of volumetric-capture use cases, especially in areas such as sport and entertainment, where it would be possible for the audience to immerse themselves in whatever they're watching.
In the cybersecurity domain, however, the team hopes to continue refining the FakeCatcher technology by drilling deeper into the data to better understand how deepfakes are made. As well as strengthening the fight against deepfakes, the team believes this could also benefit adjacent domains, such as telemedicine. It's still relatively early days in the battle against deepfakes, but the project is a great example of the work being done to help us retain confidence in what we see.