Notes from Black Hat USA 2024: attacks on AI could soon become deadly


While threat actors utilize large language models (LLMs) and other forms of artificial intelligence, attacking models themselves could soon lead to tragedies.

A staggering 98% of IT insiders consider various AI models crucial to business success, a recent AI Threat Landscape Report from cybersecurity firm HiddenLayer discovered.

Financially or otherwise motivated attackers are fully aware of businesses depending on AI and are actively developing ways to exploit it, Chloé Messdaghi, the head of threat intelligence at HiddenLayer, told Cybernews.

ADVERTISEMENT

“We’re playing a little game of catch-up,” Messdaghi explained at the Black Hat conference in Las Vegas.

While the majority of companies employ AI to manage their daily activities, not all Chief Information Security Officers (CISOs) are aware of it, leaving a gray area in organizations’ security posture.

For example, Messdaghi spoke of a case in which an organization’s chief of cybersecurity was unaware that the company employs nearly two thousand AI model variations.

Meanwhile, attackers have crafted numerous ways to leverage AI for malicious intent. For one, malicious actors can target AI algorithms with data poisoning, model evasion, or model theft attacks, with motivations ranging from intellectual property theft to hindering competitor advancements.

Even though generative AI has only been at the forefront for a couple of years, there’s no shortage of examples of companies leveraging AI to outsmart competitors. For instance, TikTok’s owner, ByteDance, utilized ChatGPT’s application programming interface (API) to develop its LLM, codenamed Project Seed.

Meanwhile, malignant actors, such as financially motivated cybercriminals, may also target generative AI filters with prompt injection or code injection attacks or even subvert AI artifacts in supply chain attacks via code execution, malware delivery, and lateral movement. It’s not difficult to imagine how dangerous it would be to disrupt an AI mode responsible for guiding a self-driving vehicle.

However, according to Messdaghi, healthcare, military, and finance organizations stand to lose most if their AI models are turned against their masters. For example, intentionally biased or corrupted decision-making in loan approvals could have severe economic and societal consequences.

Worse, a compromised AI model with a focus on diagnoses or treatment could easily misdiagnose a patient, potentially leading to loss of health or even death.

ADVERTISEMENT

Similarly, morbid results could come from attackers’ gaining access to military-grade AI-powered systems used for drone systems.

“At least in theory, there could be potential cases where someone could access someone’s model, like one used to guide drones. And that could be pretty scary. Even lethal,” Messdaghi explained.

With more companies utilizing the benefits of AI models, Messdaghi predicts a significant increase in adversarial attacks against AI. Attackers are hardly willing to pass upon the opportunity to hit companies via a vector, and few of them are aware that it is exploitable.

Meanwhile, businesses that want to protect their clients and end users will have to adapt to the evolving threat landscape. Messdaghi claims the first step is to become aware of AI-based exposure, actively implement red team training, scan for discrepancies in AI output, and improve communications between data scientists, developers, and security teams.