Deepfake videos are becoming increasingly pervasive, with an estimated 1,000 such videos uploaded to porn sites per month in 2020. Data from the deepfake detection company Sensity suggests that deepfake videos uploaded to sites such as xHamster have been viewed millions of times. The videos often feature celebrities, such as Emma Watson or Natalie Portman, but it’s also increasingly common for videos with a revenge motive to appear.
The owners of the porn sites themselves suggest that they try to make it as easy as possible for people to take such non-consensual content down, but the ease of creating the videos suggests that it’s a problem that isn’t going away any time soon.
For instance, a video published at the start of August 2020 outlining how to create deepfake videos received over 400,000 views in a matter of weeks. And while not all deepfake videos are as malicious as those uploaded to porn websites, their use in criminal activity is nonetheless significant. There is also particular concern about their potential to disrupt elections.
Ease of creation
Creating deepfake videos became considerably easier after a new algorithm was released at the NeurIPS conference last year. The algorithm greatly simplifies the process: with just a couple of lines of code, creators need only a video of one person's face to animate a photo of another person's face.
As the Kapwing tutorial illustrates, creating a deepfake video via this method doesn’t require any elaborate programming skills.
This is likely to fuel a surge in deepfake videos, which, judging by the #deepfake hashtag on TikTok, have already generated over 120 million views.
At the moment, the algorithm doesn’t produce a flawless end product; the faked faces have a certain wonkiness that has become part of the surrealist culture of memes produced with it. That quirk prevents the videos from being mistaken for reality, but the algorithm indicates the progress being made, and the likelihood that realistic deepfakes will soon be within the reach of amateurs and highly skilled professionals alike.
Indeed, companies such as Tencent have already publicly declared their intention to pursue the technology, suggesting a variety of commercial applications for deepfake videos.
Deepfake elections on the horizon?
As fake news has become more and more prevalent, researchers have gained a growing understanding of who is most susceptible. For instance, a recent study from the University of Delaware found that Republicans were more vulnerable to fake news and conspiracy theories than Democrats, with men also more vulnerable than women.
“During a global pandemic, it’s kind of the perfect storm of uncertainty,” the researchers explain. “And so when we feel a lack of control, uncertainty or powerlessness, we seek out explanations for why the event occurred that’s causing us to feel that way. And what this can do is it can lead us to connect dots that shouldn’t be connected because we’re trying to seek out answers. And sometimes those answers are conspiracy theories.”
If deepfakes become realistic enough that we can no longer distinguish fact from fiction, not only will it be possible to attribute statements to candidates that they never made, but candidates will also be able to dismiss genuine footage of themselves saying the wrong thing as fake. For instance, the last election was defined in part by a recording of Hillary Clinton describing many of Donald Trump’s supporters as deplorables.
Should we no longer be able to trust what we see and hear, the impact on our democracy could be profound.
A report published earlier this year from the University of Cambridge reminds us that democracy is in no fit state to take such a beating, with 57.5% of respondents globally saying they’re unhappy with democracy.
“Across the globe, democracy is in a state of malaise,” the researchers say. “We find that dissatisfaction with democracy has risen over time, and is reaching an all-time global high, in particular in developed countries.”
Tipping the scales
Elections around the world have been susceptible to malign interference in recent years, with deepfake videos already linked to an attempted coup in Gabon and to efforts to discredit a Malaysian cabinet minister. Indeed, even a diplomatic spat between Qatar and Saudi Arabia may have been sparked by fake news.
It’s perhaps no surprise, therefore, that last year the House Intelligence Committee convened a hearing on the threat posed by deepfakes. The committee raised the prospect of a grim, “post-truth” future, with potential nightmare scenarios for upcoming elections.
For its part, Facebook recently hosted a Deepfake Detection Challenge, whose winning entry was capable of detecting deepfake videos with an accuracy of 82.56%. Tools such as this have the potential to be both automated and mass-produced, and therefore highly scalable. The project team urged a degree of caution, however: while every effort was made to simulate the kinds of situations in which deepfake videos might appear in the real world, the simulation fell short, and when the technology was tested on previously unseen videos, its accuracy plunged to 65.18%.
With elections around the globe on the horizon, it’s likely that any defences democracies do have will be wholly inadequate against the threat posed by deepfakes. Quite how influential these videos prove to be will be borne out over the coming months. The battleground, though, has been firmly set.