How AI is changing the way cybersecurity professionals work


After decades in which AI technologies seemed like science fiction, the public release of ChatGPT in late 2022 introduced radical changes to the computing profession.

With ChatGPT now joined by Google Bard (since renamed Gemini), Microsoft Copilot, and Claude, large language models (LLMs) have come to dominate the market, performing various tasks that were – until recently – limited to humans.

On the cybersecurity front, organizations have rushed to adopt this new technology across their operations to cut costs and increase efficiency. But how is it impacting the cybersecurity profession?

Using AI technologies to execute cybersecurity tasks

The rapid progress of AI and machine learning (ML) is most evident in the rise of generative AI, which allows people to apply the massive capabilities of these technologies to a wide range of computing tasks.

In the IT security arena, organizations leverage these technologies to enhance threat detection, summarize log data, and automatically generate compliance reports.

For instance, AI-powered security solutions can analyze network traffic to identify abnormal activities that might indicate a cyberattack. They can process millions of log entries – from firewalls, intrusion detection systems (IDS), and other network devices – to highlight potential security incidents and generate detailed reports that meet regulatory requirements such as the General Data Protection Regulation (GDPR) or the Health Insurance Portability and Accountability Act (HIPAA).
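
As a minimal illustration of this kind of traffic analysis, the sketch below fits an unsupervised outlier model to simple per-flow statistics and flags flows that deviate from the baseline. The feature set, sample values, and thresholds are all hypothetical assumptions, and real products work with far richer telemetry:

```python
# Minimal sketch: flag anomalous network flows with an unsupervised outlier
# model. Features, sample values, and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-flow features: [bytes_sent, bytes_received, duration_s, dst_port]
baseline_flows = np.array([
    [1200, 3400, 0.8, 443],
    [900, 2100, 0.5, 443],
    [1500, 4000, 1.1, 80],
    [1100, 2800, 0.9, 443],
    # ... in practice, thousands of flows from known-normal traffic
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline_flows)

new_flows = np.array([
    [1150, 3100, 0.7, 443],         # resembles the baseline
    [9_000_000, 500, 120.0, 4444],  # huge upload to an unusual port
])

for flow, verdict in zip(new_flows, model.predict(new_flows)):
    if verdict == -1:  # IsolationForest labels outliers as -1
        print(f"Potential incident - review flow: {flow}")
```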

However, as with every new technology, threat actors can also utilize it for malicious purposes, such as crafting convincing phishing emails, creating sophisticated malware that evades traditional antivirus programs, and automating different cybercrime activities.

AI can generate highly personalized spear-phishing emails or create deepfake voice and video content to be used in various social engineering attacks.

Returning to the benefits of AI in cybersecurity, let us discuss how organizations now leverage the technology and how each use case will affect the recruitment of human employees.

Automate repetitive and routine tasks

In cybersecurity, many activities are routine: detecting and monitoring threats and keeping an eye on firewalls, IDS, intrusion prevention systems (IPS), and other security tools.

AI-powered security solutions can automate most repetitive tasks executed by cybersecurity professionals. For example, an AI-powered Security Information and Event Management (SIEM) system can analyze various system logs and flag potential threats. This will effectively reduce the need for junior analysts to perform manual system/security log reviews.
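
To make the idea concrete, here is a toy sketch of one log-review task such a system automates: spotting a burst of failed logins from a single source. The log format and threshold are invented for illustration; a production SIEM correlates far more signal than one regex rule:

```python
# Toy sketch of one log-review task an AI-assisted SIEM automates: flagging
# a burst of failed logins. The log format and threshold are invented here.
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"FAILED LOGIN .* from (\d+\.\d+\.\d+\.\d+)")

sample_logs = [
    "2024-05-01T10:00:01 sshd FAILED LOGIN for root from 203.0.113.7",
    "2024-05-01T10:00:02 sshd FAILED LOGIN for admin from 203.0.113.7",
    "2024-05-01T10:00:03 sshd FAILED LOGIN for guest from 203.0.113.7",
    "2024-05-01T10:00:04 sshd ACCEPTED LOGIN for alice from 198.51.100.2",
]

# Count failed logins per source IP
failures = Counter(
    m.group(1) for line in sample_logs if (m := FAILED_LOGIN.search(line))
)

THRESHOLD = 3  # illustrative; real systems tune this per environment
for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {ip} - possible brute force")
```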

Improved threat detection

AI-powered User and Entity Behavior Analytics (UEBA) solutions can track anomalies in user behavior more effectively than human employees. For instance, these tools can better detect abnormal employee activity on organizational computing resources, such as unusual access to sensitive documents and applications or attempts to capture sensitive information displayed on screen.
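
A minimal sketch of the underlying idea – baseline each user's normal behavior and flag statistical outliers – might look like the following. The users, counts, and the three-sigma threshold are synthetic examples, not a real UEBA implementation:

```python
# Minimal UEBA-style sketch: baseline each user's access rate to sensitive
# documents and flag statistical outliers. Users, counts, and the threshold
# are synthetic examples.
import statistics

# Hypothetical history: sensitive-document accesses per day, per user
history = {
    "alice": [3, 4, 2, 5, 3, 4, 3],
    "bob": [10, 12, 9, 11, 10, 13, 11],
}

todays_accesses = {"alice": 4, "bob": 45}  # bob suddenly pulls 45 documents

for user, today in todays_accesses.items():
    mean = statistics.mean(history[user])
    stdev = statistics.stdev(history[user])
    z_score = (today - mean) / stdev if stdev else 0.0
    if z_score > 3:  # illustrative: >3 standard deviations above baseline
        print(f"ALERT: {user} accessed {today} sensitive documents today "
              f"(baseline ~{mean:.0f}) - possible insider threat")
```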

AI-based UEBA can also be used to monitor employee productivity during work hours and automatically generate productivity reports. In addition to monitoring, these systems can ensure adherence to regulatory compliance requirements, flagging any activity that violates these rules or company usage policies.

Introduce predictive security

Predictive security is an emerging field that combines AI-driven data analytics with cybersecurity to strengthen detection, prevention, and response. For instance, instead of relying solely on humans to identify security vulnerabilities in software applications, AI-powered tools can detect likely vulnerabilities in new software before threat actors exploit them.
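
As a rough sketch of the approach, the example below trains a simple classifier on code metrics to rank functions by vulnerability risk. The features, training labels, and function names are entirely synthetic; real predictive tools rely on much deeper program analysis:

```python
# Rough sketch of predictive vulnerability ranking: a classifier trained on
# simple code metrics scores new functions by risk. Features, labels, and
# function names are entirely synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per function:
# [lines_of_code, cyclomatic_complexity, unvalidated_input_calls]
X_train = np.array([
    [40, 3, 0], [25, 2, 0], [300, 18, 4], [220, 15, 3],
    [60, 5, 1], [500, 25, 6], [35, 4, 0], [180, 12, 2],
])
# 1 = a vulnerability was later found in the function, 0 = none found
y_train = np.array([0, 0, 1, 1, 0, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

new_functions = {"parse_request": [350, 20, 5], "render_footer": [30, 2, 0]}
for name, features in new_functions.items():
    risk = model.predict_proba([features])[0][1]
    print(f"{name}: predicted vulnerability risk {risk:.0%}")
```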

While predictive security will not eliminate the need for cybersecurity professionals, it will open new opportunities for those who can work collaboratively with data scientists to improve the accuracy and coverage of the ML models powering these AI-based predictive security tools.

Automate penetration testing tasks

Penetration testing is critical in securing computerized systems because it uses the same techniques hackers leverage to infiltrate IT systems. The introduction of AI technology will significantly enhance penetration testing exercises.

AI-based penetration testing tools can execute repetitive tasks more efficiently and adapt to different scenarios. For example, during the development of a new software solution, AI penetration testing tools can adjust their tests as new features are incorporated into the application – a capability that is very difficult to achieve manually.

Penetration testing powered by ML models can sift through large datasets of known vulnerabilities far more efficiently than human testers, helping identify security flaws more quickly and resolve them sooner.
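
A stripped-down sketch of that matching step might look like this: discovered service banners are checked against a local vulnerability dataset. The services, versions, and CVE identifiers are placeholders, not real advisories:

```python
# Stripped-down sketch of automated vulnerability lookup during a pen test:
# discovered service banners are matched against a local vulnerability
# dataset. Services, versions, and CVE identifiers are placeholders.
known_vulns = {
    ("openssh", "7.2"): ["CVE-XXXX-0001 (user enumeration)"],
    ("apache", "2.4.49"): ["CVE-XXXX-0002 (path traversal)"],
}

discovered_services = [
    {"host": "10.0.0.5", "product": "openssh", "version": "7.2"},
    {"host": "10.0.0.8", "product": "nginx", "version": "1.25"},
]

for svc in discovered_services:
    findings = known_vulns.get((svc["product"], svc["version"]), [])
    for vuln in findings:
        print(f"{svc['host']}: {svc['product']} {svc['version']} -> {vuln}")
    if not findings:
        print(f"{svc['host']}: no known issues for "
              f"{svc['product']} {svc['version']}")
```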

Automate incident response

AI-powered solutions excel in certain aspects of incident response compared to humans. For instance, AI-powered incident response tools can (a minimal sketch of the triage step follows the list):

  • Provide 24/7 monitoring and response, whereas human attention is inherently limited.
  • Process and analyze large volumes of data much faster than humans.
  • Detect threats automatically and prioritize them by severity and potential impact, limiting the damage they cause.
  • Gather information from various sources and correlate it to identify the best response to a particular incident.
  • Execute immediate actions to prevent an incident from spreading to other areas of the target IT environment.
  • Learn from past incidents, enhancing their response capabilities over time – the most prominent advantage of AI-powered incident response tools.
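
The sketch below illustrates the triage-and-containment step referenced above: alerts are scored by severity, and the most critical ones trigger automatic isolation. All hosts, alert types, scores, and the isolate_host() helper are hypothetical:

```python
# Minimal sketch of an automated triage step an AI-driven responder might run:
# score alerts by severity and auto-contain the worst ones. All hosts, alert
# types, scores, and the isolate_host() helper are hypothetical placeholders.
SEVERITY = {"malware_beacon": 9, "failed_login_burst": 6, "policy_violation": 3}

def isolate_host(host: str) -> None:
    # Placeholder: a real playbook would call an EDR or firewall API here.
    print(f"[containment] isolating {host} from the network")

alerts = [
    {"host": "ws-112", "type": "policy_violation"},
    {"host": "ws-041", "type": "malware_beacon"},
    {"host": "srv-07", "type": "failed_login_burst"},
]

# Triage: handle the most severe alerts first, auto-containing critical ones.
for alert in sorted(alerts, key=lambda a: SEVERITY[a["type"]], reverse=True):
    score = SEVERITY[alert["type"]]
    print(f"triage: {alert['host']} {alert['type']} severity={score}")
    if score >= 8:  # illustrative containment threshold
        isolate_host(alert["host"])
```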

AI incident response tools will also reshape the roles of the cybersecurity professionals involved:

  • Freed from routine incident response tasks, cybersecurity professionals can focus on complex issues that AI cannot resolve.
  • Cybersecurity professionals will need to develop skills in understanding and validating AI-generated alerts and recommendations.
  • There will be an increased demand for cybersecurity professionals with AI/ML knowledge to monitor and run these tools. This will give rise to new professions, such as AI security specialists, AI threat hunters, AI bias auditors, and AI model validators.

Why AI can’t fully replace human cybersecurity professionals

Now that we have a fair idea of the main use cases where AI and ML technologies could replace human employees, let’s discuss why AI cannot fully replace the human element.

Context understanding

Humans are better at understanding context than machines. For example, a cybersecurity professional at a bank will pay closer attention to novel attack techniques targeting their industry. They can recognize relevant patterns, such as emerging phishing schemes that exploit recent financial trends or new social engineering tactics leveraging current events.

AI tools might miss such indications if they were trained on narrow datasets and lack the broader context. Similarly, a healthcare security expert may notice unusual data access activity that points to an insider threat, whereas AI might overlook it because it lacks the human's insight into the industry's specific operational environment.

The human ability to connect disparate pieces of information and draw on real-world experience allows for more realistic threat detection and risk assessment. Humans can also adapt quickly to evolving situations, whereas AI systems often require retraining to understand new threat types.

Creative thinking

AI-powered tools use ML models trained on massive datasets to derive their results. This makes them excellent at detecting repeating threats and finding solutions to recurring problems. However, when it comes to resolving sophisticated incidents, human creativity in finding appropriate solutions remains unmatched.

Ethical considerations

Some cybersecurity decisions carry legal and ethical implications. For example, leaving it to AI tools to decide whether to notify law enforcement, victims, or the public following a data breach can have legal consequences, such as breaching privacy laws or damaging the affected company's reputation.

Adaptability

Cyber threats are evolving rapidly. Threat actors develop new, sophisticated methods to infiltrate even the most protected networks every day. While humans can adapt quickly to understand new attack vectors, AI tools may struggle and require retraining to respond effectively to threats they have not encountered before.

Emotional understanding

Cybersecurity is not merely about tools and techniques. For instance, cybersecurity professionals must interact with top management and stakeholders to describe potential threats and request budgets to mitigate these risks before they materialize. So far, AI tools lack the emotional intelligence that humans excel at, making human expertise crucial in these interactions.

Strategic thinking

Strategic thinking is an essential element when developing a comprehensive cybersecurity plan. As cyber threats continually evolve, the best way to mitigate the ever-increasing number of cyberattacks is through better planning and collaboration. As we have already mentioned, AI excels at discovering vulnerabilities and abnormal network traffic using the datasets it was trained on. However, when it comes to building a strategic plan to secure an organization's digital assets in the long term, human expertise remains crucial.

Accountability

Businesses operating in highly regulated environments, such as the finance and healthcare sectors, require human accountability for protecting customers' sensitive information, such as personally identifiable information (PII), healthcare records, and banking information. AI cannot provide this accountability to regulatory bodies.