For anyone holding a grudge, it’s never been easier to stage compromising or even incriminating pictures to bring down a rival, destroy reputations, and kill military morale.
A report from University College London (UCL) ranked deepfakes as the most severe AI-enabled crime threat to date. Beyond spoofing, experts point to fake audio and video content being used for extortion.
In this article, the Cybernews team aims to explore what a deepfake is and how it can completely distort the reality we live in.
How deepfakes are made
Deepfakes are videos in which a person’s face has been replaced by a computer-generated face that closely resembles someone else. The term was born on Reddit, where members of a group called “Deepfake” used AI to put celebrities’ faces on porn actors.
Take a look at this Tom Cruise video. Looks real, doesn’t it? Yet it’s a deepfake.
The programs that create deepfakes usually run two AI models at once. The first scans images, videos, and audio of the victim and generates a tampered image or video. The second compares that output to real images and reports any differences. The first model then takes this feedback and repeats the process until the second can no longer tell a fake from the real thing.
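This forger-versus-detective loop is the core idea behind generative adversarial networks (GANs). Below is a minimal sketch of it on toy one-dimensional data rather than images: the “real” samples, network sizes, and learning rates are all illustrative assumptions, not taken from any actual deepfake tool, but the two-model feedback loop is the same in principle.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the forger wants to imitate: numbers centred on 4.0.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator (the forger): turns random noise z into a fake g(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator (the detective): logistic regression D(x) = sigmoid(w*x + c).
w, c = 0.0, 0.0

lr_d, lr_g, steps, batch = 0.05, 0.01, 5000, 32
for _ in range(steps):
    # --- Discriminator step: learn to tell real samples from fakes ---
    x_real = real_batch(batch)
    x_fake = a * rng.normal(0.0, 1.0, batch) + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradient ascent on log D(real) + log(1 - D(fake))
    w += lr_d * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr_d * np.mean((1 - d_real) - d_fake)

    # --- Generator step: use the discriminator's feedback ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # Gradient ascent on log D(fake): nudge fakes toward "looks real"
    grad = (1 - d_fake) * w
    a += lr_g * np.mean(grad * z)
    b += lr_g * np.mean(grad)

# After training, the fakes should cluster near the real data's mean of 4.0.
fakes = a * rng.normal(0.0, 1.0, 1000) + b
print(f"mean of fakes: {fakes.mean():.2f} (real data mean: 4.0)")
```

Real deepfake systems replace these two tiny linear models with deep convolutional networks and train on thousands of face images, but the feedback loop (detective improves, forger improves against it, repeat) is identical.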
This technology has evolved to the point that it is accessible to nearly everyone, and it’s likely to get easier as computing power improves. But the power of deepfakes is even more sinister than it might first seem.
Channel 4, in England, made a statement in 2020 with a video known as “Queen Elizabeth’s alternative Christmas Speech.” The video began with the Queen addressing the British people as she does every year.
Things took a twist when she began joking about that year’s royal scandals. Only when she stood up to dance did people start to realize it wasn’t really her. The video raised awareness, as well as some eyebrows, and opened a conversation about the seriousness, and perhaps the dangers, of deepfake technology.
Meant to destroy reputations and undermine morale
And that’s just the beginning. Deepfakes can be used in warfare to raise or kill military morale.
In March, Meta removed a video posted on Facebook of President Zelensky encouraging Ukrainians to lay down their arms after realizing it was a deepfake. The video went viral on the platform and was shown on TV24’s hacked website as breaking news. The Ukrainian president had to debunk the scam by posting a video of himself, on his own Telegram channel, supporting Ukrainian troops.
Not everything is merely misinformation, though. Deepfakes can target pretty much anyone, from celebrities and political figures to regular citizens. Just two years ago, Sensity, an intelligence company specializing in visual threats, recorded 85,000 deepfake videos, most of them meant to destroy reputations.
These videos ranged from non-consensual pornography to incendiary speeches put in the mouths of targeted politicians to incite fear and violence. Such incidents are getting worse and increasingly involve digital abuse: harassment, blackmail, and public shaming.
Of course, the ability to put someone’s face or voice onto another person presents a huge opportunity to unscrupulous people. In 2019, The Wall Street Journal reported that the CEO of a UK-based company believed he was on the phone with his boss and immediately transferred €220,000 on his order, only to find out he had actually been speaking to scammers who imitated his employer’s voice using AI technology.
Many experts call deepfakes the biggest cybersecurity threat. Recent tech developments have pushed toward biometric technology, from cellphone locks to bank accounts and passports. Security measures that rely heavily on face recognition look increasingly at risk as deepfakes evolve at such a fast pace.
Big tech companies are aware of this harmful potential and are building powerful, cutting-edge tools to detect such videos.
Wondering how to recognize a deepfake video? Some experts say the blinking gives fakes away. It’s hard for AI to closely recreate natural human behaviour, and as a result, deepfakes blink much less frequently than humans do.
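The blink-rate idea can be sketched with the “eye aspect ratio” (EAR), a standard measure from facial-landmark research: the ratio of an eye’s height to its width drops sharply when the eye closes, so counting dips over time gives a blink rate. The landmark coordinates, threshold, and frame counts below are illustrative assumptions; a real detector would get the six eye landmarks per frame from a face-landmark model.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered
    corner, upper-lid, upper-lid, corner, lower-lid, lower-lid."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count dips of the per-frame EAR below `threshold` lasting
    at least `min_frames` consecutive frames."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# A wide-open synthetic eye: 3 units across, 2 units tall.
open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
print(round(eye_aspect_ratio(open_eye), 2))  # 0.67: eye is open

# Synthetic EAR series: an open eye (~0.30) with two brief closures.
ears = [0.30] * 10 + [0.10] * 3 + [0.30] * 10 + [0.08] * 3 + [0.30] * 5
print(count_blinks(ears))  # 2 dips below the threshold -> 2 blinks
```

A suspiciously low blink count over a minute of video (humans typically blink well over a dozen times) is then one signal, among others, that the footage may be synthetic.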
Maybe one of the most talked about dangers of deepfake is related to politics. In the 2020 US presidential elections, many feared that deepfakes could influence the voting results. Fortunately, these fears didn’t materialize, with the only deepfakes related to the elections being satirical. However, the US government established a Deepfake Task Force, which aims to safeguard the population from scammers who want to defraud America using artificial intelligence.
There are already fears that if people don’t get better at telling fakes from authentic videos, the consequences could include false accusations, prison sentences, or even genuine footage dismissed as fake. Just to give you an example, three years ago in Malaysia, a politician caught on several videos of sexual misconduct maintained that the footage was manipulated, when it actually wasn’t.
Good or bad technology?
Not everything is grim, though. There have already been positive uses of deepfake technology. For example, it is being used in training videos for companies, and there are TV shows like “Sassy Justice” that create characters’ faces using AI. There are even cases like the “Welcome to Chechnya” documentary, in which the creators used deepfake technology to protect the identity of the LGBTQ+ community they were portraying in Southern Russia.
The technology behind deepfakes is very new, and maybe we shouldn’t judge it just yet. Like any technology, it can bring positive innovations despite its dark side. One thing I’m certainly worried about is what is called “the liar’s dividend”: the idea that deepfakes might allow criminals to walk free by letting them dismiss genuine evidence as fake. This is already happening, and it’s hindering legal processes along the way. Experts suggest that in the coming years, we should expect more and more of this.