The volume of deepfake videos, synthetic media typically created with an AI technique known as a Generative Adversarial Network (GAN), is growing at a staggering rate, with reputation attacks topping the list, according to a report by Sensity, an Amsterdam-based visual threat intelligence company.
Over 85,000 harmful deepfake videos, crafted by expert creators, were detected up to December 2020, according to the recently published report ‘The State of Deepfakes 2020’. The number of expert-crafted videos has been doubling every six months since observations started in December 2018.
The statistics refer to verified deepfake videos that are either used to harm public figures or have the potential to do so. Data on attacks against private individuals was excluded from the report.
According to Giorgio Patrini, CEO and co-founder of Sensity, existing communities of deepfake tech developers and deepfake content creators are expanding, while new communities are popping up worldwide.
“Reputation attacks by defamatory, derogatory, and pornographic fake videos still constitute the majority by 93%. The West, and in particular the United States, is still the main target when considering attacks on public figures,” Patrini told CyberNews.
As in 2019, only 7% of expert-crafted deepfake videos last year were made for comedy and entertainment.
Double-digit growth was registered in all countries that made it into the report, with some countries seeing the number of people targeted more than triple. The US tops the list, with well over a thousand targets detected.
Unsuspecting individuals, however, are increasingly targeted by defamatory attacks. This trend indicates a shift in the modus operandi of threat actors, as this form of abuse was previously reserved for famous public personas.
The technology behind deepfakes is data-hungry: even a short deepfake video requires thousands of real pictures. But with advances in AI, anyone can become a target.
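The adversarial idea behind a GAN can be made concrete with a toy example. The sketch below is purely illustrative and is not Sensity's tooling or any real deepfake pipeline: it reduces the setup to one dimension, where a linear "generator" learns to mimic a target distribution and a logistic "discriminator" learns to tell real samples from generated ones. All parameter names and values are invented for illustration.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# "Real" data the generator must learn to imitate (stand-in for real photos).
REAL_MEAN, REAL_STD = 4.0, 0.5

w_g, b_g = 1.0, 0.0   # generator: g(z) = w_g*z + b_g, z ~ N(0, 1)
w_d, b_d = 0.1, 0.0   # discriminator: d(x) = sigmoid(w_d*x + b_d)
lr = 0.05

def generate():
    return w_g * random.gauss(0.0, 1.0) + b_g

for step in range(2000):
    # Discriminator step: ascend log d(real) + log(1 - d(fake)).
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    x_fake = generate()
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    w_d += lr * ((1.0 - d_real) * x_real - d_fake * x_fake)
    b_d += lr * ((1.0 - d_real) - d_fake)

    # Generator step: ascend log d(fake) (non-saturating loss),
    # i.e. try to make the discriminator call fakes real.
    z = random.gauss(0.0, 1.0)
    x_fake = w_g * z + b_g
    d_fake = sigmoid(w_d * x_fake + b_d)
    w_g += lr * (1.0 - d_fake) * w_d * z
    b_g += lr * (1.0 - d_fake) * w_d

gen_mean = sum(generate() for _ in range(1000)) / 1000
print(f"generator mean after training: {gen_mean:.2f} (real mean {REAL_MEAN})")
```

The two networks improve each other in alternation, which is why GANs need so many real samples: the generator only learns the target distribution indirectly, through the discriminator's feedback on large numbers of real examples.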
Last fall, Sensity published a report about a bot network on the Telegram platform where pictures of women, often taken from their social media accounts, were “stripped” of clothing using artificial intelligence.
According to Patrini, over 100,000 women were targeted, primarily by male perpetrators.
Patrini explained that non-English underground communities of developers and creators also exist, run and supported by native speakers of Russian, Korean, Japanese, and Chinese.
According to the report, targets from only three countries, Argentina, Turkey, and Indonesia, were less often subjected to derogatory, defamatory, and pornographic fake videos.
“It appears that creators from these countries are more prone to use deepfakes as an instrument for political commentary, critique, or attack on public personalities,” Patrini explained to CyberNews.
Even though many tech experts expected the 2020 US Presidential election to see a noticeable rise in deepfakes, Patrini says those fears proved unfounded: Sensity did not observe an increase in deepfakes related to the election.
He added that the election-related deepfakes that did appear were satirical.
“This is despite the predictions shared by many experts in 2019-2020. Deepfake videos failed to materialize as a threat to democracy in 2020, while other tools and techniques were used massively to spread disinformation,” Patrini told CyberNews.
However, this doesn’t mean that threat actors did not employ AI to influence election results: entire networks of puppet social media accounts were created, usually with AI-generated profile photos, Patrini said.
Many feared that deepfakes would be employed to cause chaos, as it is becoming ever easier to create convincing videos of political figures. Moreover, the prevalence of deepfakes creates what’s called the ‘liar’s dividend’: the ability to claim that something is a deepfake, even if it isn’t.