The use of solutions powered by Artificial Intelligence (AI) and Machine Learning (ML) is rapidly increasing in the cybersecurity industry. According to a recent market report, the market for AI in cybersecurity is expected to grow at an annual rate of 23.6% and reach $46.3 billion by 2027.
Does this mean AI will radically revolutionize the cybersecurity industry to the point of it being unrecognizable in the near future? And will cyber threats evolve even faster than AI-powered security solutions?
The promise of AI in cybersecurity
AI and ML algorithms could power security solutions that analyse massive amounts of data collected from previous cyberattacks, use it to identify potential new threats, or simulate the behavior of attackers approaching a target network.
Security solutions can also implement AI algorithms to efficiently analyse attack patterns and identify anomalies or irregularities in the network they defend. Such AI-powered solutions can even help uncover zero-day vulnerabilities in an organization's infrastructure.
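Anomaly detection of this kind is typically built on statistical baselining of network telemetry. As a minimal sketch (not any vendor's actual method), the following flags traffic measurements that deviate sharply from a robust baseline, using the median absolute deviation so that an extreme value cannot inflate the baseline it is judged against:

```python
from statistics import median

def find_anomalies(samples, threshold=5.0):
    """Flag values that deviate sharply from the median of the series.

    Uses the median absolute deviation (MAD) as the spread estimate, so a
    single extreme outlier cannot inflate the baseline used to judge it.
    """
    med = median(samples)
    mad = median(abs(x - med) for x in samples)
    if mad == 0:  # perfectly flat traffic: nothing stands out
        return []
    return [(i, x) for i, x in enumerate(samples) if abs(x - med) / mad > threshold]

# Per-minute traffic volumes with one obvious spike (e.g. an exfiltration burst).
traffic = [120, 115, 130, 125, 118, 122, 9500, 121, 119]
print(find_anomalies(traffic))  # -> [(6, 9500)]
```

Real products baseline many signals at once (connection counts, destinations, timing), but the principle is the same: learn what "normal" looks like, then flag large deviations.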
One of the most interesting uses of AI in cybersecurity is the implementation of predictive systems that could allow organizations to identify emerging threats and neutralize them.
As malware continues to evolve, signature-based malware detection systems fail to detect most threats. For this reason, a new generation of anti-malware solutions combines static analysis with behavioral analysis to increase detection capabilities.
Some endpoint solutions implement ML algorithms for malware classification as part of static analysis. These systems analyse the features of a suspicious file and compare them with those extracted from huge archives of known malware, building their "experience" from massive databases of samples.
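As a toy illustration of such feature-based comparison (the feature names and the similarity threshold below are invented for the example, not taken from any real product), one could score a sample's static traits against feature sets extracted from known malware:

```python
def jaccard(a, b):
    """Jaccard similarity between two feature sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def classify(sample_features, malware_db, threshold=0.6):
    """Label a sample malicious if its static features closely match
    any known-malware feature set in the database."""
    best = max((jaccard(sample_features, known) for known in malware_db), default=0.0)
    return ("malicious" if best >= threshold else "unknown"), best

# Hypothetical feature sets (imported APIs, packer traits) from known samples.
malware_db = [
    {"CreateRemoteThread", "WriteProcessMemory", "VirtualAllocEx", "packed_upx"},
    {"RegSetValueEx", "InternetOpenUrl", "CreateService"},
]

# Static features extracted from the file under analysis.
sample = {"CreateRemoteThread", "WriteProcessMemory", "VirtualAllocEx", "LoadLibrary"}
label, score = classify(sample, malware_db)
print(label, round(score, 2))  # -> malicious 0.6
```

Production classifiers use far richer feature vectors and trained models rather than raw set similarity, but the core idea of matching extracted features against a labeled corpus is the same.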
Cybersecurity researcher Ryan Permeh explained in a 2017 online interview with CSO Online that "historically, an AV researcher might see 10,000 viruses in a career. Today, there are over 700,000 per day." (And that was 2017; the situation has only worsened since.)
AI-based solutions could also help implement efficient anti-spam systems, overcoming the limitations of older techniques such as simple word filtering, IP blacklists, and basic content filtering. Content-based filtering (CBF) techniques could benefit from ML algorithms when creating automatic filtering rules and classifying email messages. An ML-based CBF system checks the grammar of the content and analyses the words and phrases in an email, including their occurrence and distribution, to generate a set of rules for filtering inbound spam.
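A classic ML approach to this kind of content-based classification is a multinomial Naive Bayes filter over word counts. The sketch below (training messages invented for the example) learns word statistics from labeled spam and ham and classifies new messages by comparing log-probabilities:

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Minimal multinomial Naive Bayes classifier over word counts."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}  # word -> count per class
        self.totals = {"spam": 0, "ham": 0}                  # words seen per class
        self.docs = {"spam": 0, "ham": 0}                    # messages seen per class
        self.vocab = set()

    def train(self, text, label):
        words = text.lower().split()
        self.counts[label].update(words)
        self.totals[label] += len(words)
        self.docs[label] += 1
        self.vocab.update(words)

    def classify(self, text):
        words = text.lower().split()
        total_docs = sum(self.docs.values())
        scores = {}
        for label in ("spam", "ham"):
            # log prior + sum of log likelihoods with Laplace (add-one) smoothing
            score = math.log(self.docs[label] / total_docs)
            denom = self.totals[label] + len(self.vocab)
            for w in words:
                score += math.log((self.counts[label][w] + 1) / denom)
            scores[label] = score
        return max(scores, key=scores.get)

f = NaiveBayesSpamFilter()
f.train("win free money now claim prize", "spam")
f.train("cheap meds free offer click now", "spam")
f.train("meeting agenda attached see notes", "ham")
f.train("project status report for review", "ham")
print(f.classify("claim your free prize now"))  # -> spam
```

Real anti-spam engines add many more signals (headers, sender reputation, URLs), but word-level Bayesian scoring of this sort has been a workhorse of spam filtering for two decades.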
AI could also be used to power solutions that help organizations comply with GDPR by tracking data flows within these organizations.
Apart from that, AI can alleviate the skill shortage that the cybersecurity industry is currently facing. AI-based solutions can help existing cybersecurity teams amplify their analysis and detection capabilities and compensate for the lack of security staff.
When highlighting the importance of the role of AI in cybersecurity, however, we cannot consider these technologies a full replacement for human operations and analysis. AI-based security tools can be affected by programming issues that result in incorrect or unexpected behavior or analysis. For this reason, validation by human operators remains important.
An arms race against cybercriminals
Another problem related to the use of AI-based solutions is that threat actors could understand how these systems build their “experience” and trick them into having a wrong perception of the phenomena they monitor.
At DEF CON 2017, security expert Hyrum Anderson demonstrated how threat actors could use AI to carry out an attack.
The team demonstrated an intelligent application that can re-engineer a malware program and make it undetectable to next-generation antivirus solutions. The researchers successfully circumvented the protective layers of the AI-powered antivirus with their AI-modified malware 16% of the time. The study was conducted to show that even AI can have blind spots that could be used to compromise systems.
Law enforcement agencies state that threat actors are beginning to use AI-based tools in their attacks.
According to a report by Europol’s European Cybercrime Centre, AI is one of the emerging technologies that could dramatically improve the efficiency of a broad range of cyberattacks.
“Criminals are likely to make use of AI to facilitate and improve their attacks by maximizing opportunities for profit within a shorter period, exploiting more victims, and creating new, innovative criminal business models — all the while reducing their chances of being caught,” reads the report published by Europol.
"Consequently, as “AI-as-a-Service” becomes more widespread, it will also lower the barrier to entry by reducing the skills and technical expertise required to facilitate attacks. In short, this further exacerbates the potential for AI to be abused by criminals and for it to become a driver of future crimes,” the report continues.
The report concludes that threat actors will use AI both as an attack vector and an attack surface. The report urges the adoption of new detection solutions to mitigate the risk of AI-based attacks, such as disinformation campaigns and extortion.
The Europol report states that AI and ML solutions are already part of the threat actors’ arsenal and will rapidly evolve. According to the report, AI could be used to support:
- Convincing social engineering attacks at scale
- Document-scraping malware to make attacks more efficient
- Evasion of image recognition and voice biometrics
- Ransomware attacks (via intelligent targeting and evasion)
- Data pollution (by identifying blind spots in detection rules)
A double-edged sword
At the time of writing, AI-based cybersecurity solutions continue to evolve as researchers and security firms develop new tools to support human security teams in the fight against cyber threats. These solutions are meant to support cybersecurity operators, not to fully replace them.
AI can greatly improve the overall cybersecurity of our society, but we cannot forget that it could be abused by threat actors to carry out new waves of sophisticated cyberattacks. It’s a double-edged sword that could become dramatically more dangerous if it gets into the wrong hands.
The introduction of “AI-as-a-Service” into the threat landscape will be a game changer, lowering the barrier to entry into the cybercrime ecosystem.
To mitigate exposure to this new generation of threats, we need a hybrid approach combining AI and human expertise, one that allows us to predict the evolution of threats and detect them in the early stages of malicious campaigns.
I have no doubt: AI- and ML-based solutions will be the fulcrum of the cybersecurity of the future.