© 2023 CyberNews - Latest tech news,
product reviews, and analyses.


AI in cybersecurity – more than a buzzword

AI is frequently harnessed for malicious purposes, but it has a positive side, too. In cybersecurity, it can become an essential tool for identifying and mitigating risks.

Criminals can turn any legitimate tool against us, with ChatGPT being the latest example. Researchers claim that it can write deployable malware, assist hackers in finding a website’s vulnerabilities, and perform less complicated tasks like crafting a phishing email.

Threat actors might already be toying with ChatGPT to hack into systems with less effort. They have relied on AI tools for years, and many experts argue that the increased adoption of AI and machine learning (ML) tools has led to growth in both the scope and sophistication of cyberattacks.

How criminals abuse AI

There are many different ways for cybercriminals to exploit legitimate tools. They’ve been employing AI to create bots, draft social engineering strategies, easily mimic users on social media platforms, and weaponize AI frameworks for malicious hacking.

Cybersecurity firm Trend Micro expects criminals to exploit AI in various ways to scale their attacks and evade detection. AI, the company argued two years ago, would be abused as both an attack vector and an attack surface.

Some of the attacks might be scarier and less ordinary than others. “AI could also be used to harm or inflict physical damage on individuals in the future. In fact, AI-powered facial recognition drones carrying a gram of explosives are currently being developed. These drones, which are designed to resemble small birds or insects to look inconspicuous, can be used for micro-targeted or single-person bombings and can be operated via cellular internet,” Trend Micro argued.

But defenders are not sitting idle. The cybersecurity industry is also betting big money on AI-powered tools to fight cybercrime.

Human’s best friend

According to Pillsbury Winthrop Shaw Pittman LLP, an international law firm with a particular focus on technology, the value of AI in the cybersecurity industry is set to grow exponentially: it accounted for over $10 billion in 2020 and is expected to exceed $46 billion by 2027.

Threat intelligence firm Cyble argues that AI could be a huge help in analyzing the multitude of cybercrime threats, given that the internet holds an estimated 5 million terabytes of data (per a 2021 Google estimate).

“While AI is poised to outright replace certain tasks and functions traditionally performed by an organization’s workforce, the true benefit of AI only comes through when it is used in concert with expert human analysts who can leverage AI insights to perform their tasks better,” Cyble said.

AI could automate time-consuming tasks, allowing people to focus on more pressing concerns and creative endeavours.

The firm listed some of the use cases for AI in cybersecurity:

  • Rapid curation of threat intelligence from countless research papers, blogs, news stories, etc.
  • Machine learning can help filter, sort, and cut through the noise of constant, voluminous alerts, claims, and news to get to the meat of the matter, greatly reducing a firm’s response time to any given threat.
  • Through the use of adaptive machine learning algorithms, AI is constantly “learning,” making it even easier to sort through data and pinpoint areas where cybersecurity action or remediation is most required.
  • By analyzing patterns in previously observed attacks, AI can help identify trends, operating patterns, and SOPs that can help mitigate attacks.
  • 24/7 Availability – AI/ML functions can be programmed to run constantly and provide insights immediately, eliminating the need for human intervention or availability during off-hours/weekends/different time zones, etc.
  • AI can process a higher volume of data than any comparable team of research analysts, freeing those analysts to add human insight and expertise to AI’s findings.
  • AI can even find unknown threats. When threat actors deploy attacks using multiple vectors and samples, they may be missed by a human analyst. However, AI can immediately identify and flag threats such as these.
  • AI is immune to human error. Fatigue, complacency, or execution mistakes can cause a research analyst to miss something; AI is a good way to ensure all bases are covered.
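To make the anomaly-spotting use case above concrete, here is a minimal sketch in plain Python (a toy statistical baseline, not any specific vendor’s product) of how a system might flag activity that deviates sharply from an established pattern — the kind of signal ML-driven detection tools surface automatically. All data and names here are hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observations deviating from the baseline by more than
    `threshold` standard deviations -- a toy stand-in for the
    statistical models real AI-driven detection tools use."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in observed if abs(x - mu) > threshold * sigma]

# Hypothetical data: failed-login counts per hour during a normal week,
# then today's counts, which include a sudden spike worth investigating.
normal_week = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4, 6, 5]
today = [4, 5, 180, 3]

print(flag_anomalies(normal_week, today))  # the spike stands out: [180]
```

A production system would learn far richer baselines (per user, per asset, per time of day), but the principle is the same: model “normal,” then surface deviations for a human analyst to judge.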
