While even the most casual observer of the media over the past few years could be forgiven for thinking AI is everywhere, the reality is that, apart from a few leading-edge companies, deployment has been much slower, and confined to much narrower domains, than the popular perception would suggest.
It would be a mistake, therefore, to believe that our organizations have mastered the AI universe. So it’s perhaps to be expected that among those who do grasp the technology are some who wish to wield it for nefarious purposes. A recent study, conducted by Forrester Consulting on behalf of Darktrace, revealed that around half of executives are worried about the use of AI to attack their digital and cyber-physical systems.
“Artificial intelligence (AI) is no longer a tool only for the ‘good guys’; malicious actors now use it as a force multiplier as well,” the report warns. “This new era of offensive AI leverages various forms of machine learning to supercharge cyberattacks, resulting in unpredictable, contextualized, speedier, and stealthier assaults that can cripple unprotected organizations.”
A complex environment
The respondents bemoan the growing complexity of their working environment, with an expanded infrastructure leading to a significant growth in security challenges that are only compounded by the speed and sophistication of attacks.
It’s increasingly common for organizations to have a multi-faceted digital infrastructure that spans hybrid, multi-cloud, and IoT environments. This brings a multitude of operational benefits, but it also creates an ever-growing expanse to protect and secure from attack. Indeed, 83% of executives said that their digital infrastructure had expanded in ways that make developing a unified security strategy significantly harder.
Nowhere is this complexity more evident than in the growth of AI-enabled attacks. The report highlights that machines already routinely attack machines, but that it is increasingly common for machines to successfully attack humans, a shift that businesses aren’t ready for.
“Trust plays a big part in this, and Forrester estimates that AI-enabled deepfakes will cost businesses a quarter of a billion dollars in losses in 2020,” the report says. “It’s no surprise that 86% of cybersecurity decision makers are concerned about threat actors leveraging AI to supercharge attacks, and a further 88% believe it is inevitable that AI-driven attacks will go mainstream.”
A losing battle
In military circles, a common warning is to avoid ‘fighting yesterday’s battles’, yet that is largely what many cybersecurity teams are doing today. Traditional defenses rely on prior assumptions, and they are being outgunned by AI-powered attacks. There are signs that things are changing, however, or at least that there is an appreciation of the need for change, with a growing recognition that speed matters both in identifying attacks and in responding to them. Despite this, fewer than 25% of businesses said they could recover from an attack in under three hours.
Organizations need to develop the capability to detect, interpret, and respond to attacks as quickly and nimbly as the attackers themselves. As the breadth of infrastructure grows, the number of vulnerabilities grows with it, so organizations have to adopt an agile approach to keeping their infrastructure safe.
As well as the lightning speed of AI-driven attacks, executives are also worried about the nature of the attacks themselves, with two-thirds expecting offensive AI to conduct attacks that no human could conceive of. They expect these attacks to be unpredictable and stealthy, evading traditional security measures that reference historical attacks. Many expect to use AI in a defensive capacity, with machine learning deployed to upgrade defenses far faster than humans could manage.
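To make the idea of a learning-based defense concrete, the toy sketch below shows the simplest form of the approach: learn a statistical baseline of normal behavior, then flag activity that deviates sharply from it, rather than matching against a list of historical attack signatures. This is an illustrative simplification, not any vendor's method; the data and function names are hypothetical, and real products model far richer behavioral features.

```python
# Illustrative sketch only: a toy anomaly detector of the kind that
# learning-based defenses generalize. All data here is hypothetical.
from statistics import mean, stdev


def fit_baseline(samples):
    """Learn a simple baseline of normal behavior (mean and std dev)."""
    return mean(samples), stdev(samples)


def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` deviations from the baseline."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma


# Hypothetical hourly login counts observed during normal operation.
normal_hours = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
baseline = fit_baseline(normal_hours)

print(is_anomalous(14, baseline))  # a typical hour → False
print(is_anomalous(90, baseline))  # a burst suggesting automation → True
```

The key contrast with signature-based tools is that nothing here encodes what an attack looks like; the detector only encodes what normal looks like, so it can flag novel attacks it has never seen, which is why the report frames machine learning as the natural counter to unpredictable offensive AI.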
“Organizations need to bite the bullet and be honest about the fact that AI is just another digital capability in the ever-evolving cyber realm, and just as with every other innovation in this space, AI too will be manipulated for nefarious purposes,” the report says. “The real issue with this is that because AI moves faster and better than current legacy defenses, the ‘evil AI’ will win in most instances.”
The use of AI to both protect and attack digital systems is an inevitability of the modern age, but the lack of strategic focus on cybersecurity leaves many organizations more vulnerable than they need to be. In a digital arms race, AI tooling and capabilities are no longer nice-to-haves but fundamental parts of the security toolkit. While awareness of this requirement is growing, it remains to be seen how many cybersecurity managers are given the tools and resources they need to maintain the security of their systems.