While dark web dealers peddling AI-altered content of military personnel are a reality, we still haven’t reached the peak of AI-based threats.
Large language models and image-generating tools are slowly becoming the norm in everyday life, with most malicious uses focusing on scams and fraud. For example, according to Michael Price, CTO of digital protection company ZeroFox, AI-based content such as deepfakes has already permeated ad fraud.
In these cases, attackers place a digital ad featuring a well-known face or brand. Once victims click on the ad, they’re taken to a website where the face from the ad appears in an AI-generated video inviting them to part with their funds.
“Essentially, the video is pitching something fraudulent. What we've seen are largely cryptocurrency scams or ‘pump and dump’ scams for stocks. We've seen this done a number of times targeting organizations and countries around the world, not just the US, but in other parts of the world, too,” Price told Cybernews.
However, more advanced attackers are taking note of AI's new technological capabilities to craft more targeted campaigns. Price notes that his team has seen instances where dark web-based malicious actors advertise the ability to produce videos of US military personnel saying whatever a buyer wants.
This type of content could later be used in disinformation campaigns to advance attackers’ goals in a certain information space. Given the tumultuous state of international affairs, these types of videos can be abused in numerous ways, with defenders struggling to find an apt way to combat the rapid spread of misinformation.
“And in this scenario, the content is disseminated on major internet platforms. There's no scam or fraud component to it – that's obvious. And it appears to be designed to influence the viewer of the video,” the ZeroFox CTO explained.
Beware of real-time deepfaking
While there have been a number of cases where malicious actors have managed to trick high-level organization employees into transferring company funds using AI-generated content, these types of attacks are still more of a novelty than an everyday occurrence. However, that doesn’t mean they're not coming.
Price reminded us that every new technology has come with a certain inflection point. Take ransomware, for example. For a long time, it was an outlier threat, something that happened to a very small number of organizations. But as the dark web ecosystem has grown and the barrier to entry has dropped, we’ve reached a point where hundreds of thousands of organizations are impacted every year.
“I think in the case of using deepfakes to support attacks, it's not hit that inflection point yet. So far, there's a sort of peppering of attacks that certainly are of high interest to the press and security folks,” Price said.
One trend to look out for, Price says, is real-time deepfaking. In these types of attacks, malicious actors can mimic the voices and appearances of company employees, such as CEOs. Once technology allows for flawless video mimicking, it will be much harder even for people with good cybersecurity awareness to differentiate between what’s real and what’s not.
“The ability to call somebody in real-time will be dangerous. It’s pretty risky because I think folks’ minds are not yet trained to validate who they're speaking with. Especially as long as the validation is automatic: if I can see this person and they look like that person and they sound like that person, then it must be that person,” Price explained.
Situational awareness
Even though it’s impossible to dissuade malicious actors from learning new attack techniques, defenders can also tap into AI capabilities to counter them. For example, AI can greatly reduce the time needed for analysis and interpretation.
For one, LLM-based tools will help people better understand a wider media landscape – be it an underground forum or a lengthy analysis. Coupled with AI-enhanced translation tools and speech-to-text software, users may get a clearer picture of what’s actually happening around them.
“If you had access to radio from police or something, you would be in a better position to translate that to text and identify whether something was happening of relevance to an area that you're concerned about, and then you would have a better capability to take a look at images to understand,” Price said.
“Does it reference something of interest to me? You can perform better automated analysis on media now, based on what's available, than you could one or two years ago.”