The use of artificial intelligence (AI) in cyberattacks is still quite limited, but that could change soon, with intrusions becoming far more advanced than today's incidents, a new report warns.
The report, co-created by WithSecure, a Helsinki-headquartered cybersecurity and privacy company, the Finnish Transport and Communications Agency, and the Finnish National Emergency Supply Agency, analyzes current trends and developments in AI, cyberattacks, and areas where the two overlap.
According to its authors, cyberattacks that use AI are currently very rare and limited to social engineering applications, or are used in ways that aren't directly observable by researchers and analysts.
In other words, most current AI techniques come nowhere close to human-level intelligence and cannot automatically craft or launch cyberattacks.
However, within the next five years, attackers will likely develop AI capable of autonomously finding vulnerabilities, planning and executing attack campaigns, using stealth to evade defenses, and collecting or mining information from compromised systems and open-source intelligence.
“Although AI-generated content has been used for social engineering purposes, AI techniques designed to direct campaigns, perform attack steps, or control malware logic have still not been observed in the wild. Those techniques will be first developed by well-resourced, highly-skilled adversaries, such as nation-state groups,” said WithSecure Intelligence Researcher Andy Patel.
“After new AI techniques are developed by sophisticated adversaries, some will likely trickle down to less-skilled adversaries and become more prevalent in the threat landscape.”
Current defenses can address some of the challenges posed by attackers’ use of AI, but the report notes that others require defenders to adapt and evolve.
The report says new techniques are needed to counter AI-based phishing that utilizes synthesized content, the spoofing of biometric authentication systems, and other capabilities on the horizon.
That’s because “AI-enabled attacks can be run faster, target more victims and find more attack vectors than conventional attacks because of the nature of intelligent automation and the fact that they replace typically manual tasks,” the report says.
AI-enabled cyberattacks will probably be most effective at impersonation, a tactic commonly used in phishing and vishing (voice phishing) attacks.
“Deepfake-based impersonation is an example of new capability brought by AI for social engineering attacks. No prior technology enabled to convincingly mimic the voice, gestures and image of a target human in a manner that would deceive victims,” say the authors of the report, who predict that AI-enabled impersonations will be taken to another level.
Many experts consider deepfakes to be among the biggest cybersecurity threats. Recent technology has shifted toward biometric authentication, from cellphone locks to bank accounts and passports, and security measures that rely heavily on face recognition appear to be at risk as deepfakes evolve at such a fast pace.