The generative AI market is expected to surpass $100 billion by 2030, underscoring synthetic media as one of the most prevalent cyber threats facing organizations.
Due to advances in computational power and deep learning, the creation of fake multimedia is essentially plug-and-play, the National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) warned in a cybersecurity bulletin.
In other words, producing a fake no longer requires much time or many resources – someone with little to no experience can do it in a fraction of the time it once took.
Synthetic media is instrumental in spreading misinformation and propaganda and in enhancing social engineering techniques.
“Many organizations are attractive targets for advanced actors and criminals interested in executive impersonation, financial fraud, and illegitimate access to internal communications and operations,” the US agencies said.
One of the terms that experts use to describe the problem is “shallow” or “cheap” fakes, referring to techniques that don’t require machine learning. A cheap fake could be as simple as a slowed-down video with a few repeated frames to make it look like a person is intoxicated, as the sketch below illustrates.
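To show just how low the bar is, here is a minimal sketch of that technique – a hypothetical Python script using the OpenCV library (opencv-python) that halves a clip’s playback speed by writing every frame twice. The filenames are illustrative, and no machine learning is involved at any step.

```python
import cv2

# Cheap-fake sketch: slow a clip to half speed by duplicating every frame.
# Assumes OpenCV is installed (pip install opencv-python) and an input file exists.
cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

out = cv2.VideoWriter(
    "slowed.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height)
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)  # write the original frame
    out.write(frame)  # write it again: doubles duration, halves apparent speed

cap.release()
out.release()
```

A dozen lines of off-the-shelf code, no training data, no GPU – which is precisely why the agencies treat cheap fakes as part of the same threat picture.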
Another, better-known term is deepfake, which refers to synthetic media created or edited with the help of AI.
For example, several Russian TV channels and radio stations were hacked to show a video of Vladimir Putin supposedly declaring martial law. The video was a deepfake.
Exploitation of the technology has already caused significant harm to organizations. For example, deepfaked audio was used in 2019 to steal nearly $250,000 from a UK-based company.
“Malicious actors may use deepfakes, employing manipulated audio and video, to try to impersonate an organization’s executive officers and other high-ranking personnel,” the agencies said.
The damage malicious actors can do, of course, doesn’t end there. For instance, they could easily hurt a company’s reputation by releasing controversial deepfake footage online. Threat actors also frequently attempt to impersonate high-profile figures like Ukrainian President Volodymyr Zelensky to spread misinformation.
“This technique can have high impact, especially for international brands where stock prices and overall reputation may be vulnerable to disinformation campaigns. Considering the high impact, this type of deepfake is a significant concern for many CEOs and government leaders,” said the bulletin.
Last year, Cybernews ran a story on how hackers deepfaked a top Binance executive to scam crypto projects. Because the technology is now within everyone’s reach, similar scams are becoming more common.
“Dynamic trends in technology development associated with the creation of synthetic media will continue to drive down the cost and technical barriers in using this technology for malicious purposes,” the agencies added.
By 2030, the generative AI market is expected to exceed $100 billion, growing at roughly 35% per year.
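For a rough sense of scale: 35% compound annual growth multiplies a market by about 1.35^8 ≈ 11 over eight years, so a hypothetical baseline of around $10 billion today would put the market near $110 billion by 2030 – consistent with the figure the projection cites.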
Of course, the technology will become more affordable for defenders, too. For recommendations on how to resist deepfakes, read the report in full here.