While droves of organizations are eager for the competitive edge that artificial intelligence promises, few understand that they’re jumping into uncharted waters with both feet.
With OpenAI’s flagship chatbot, ChatGPT, inching closer to its second birthday, virtually everyone who’s anyone is pouring millions into AI adoption. Attackers aren’t far behind, adopting large language models (LLMs) to devise attacks and find ways to target AI itself.
What cybersecurity pros have noticed first is how fast AI has leveled the playing field for malicious actors. Malware and tools that once required real skill to craft can now be made by people who barely know how to code, Chuck Herrin, Field CISO at cybersecurity firm F5, told Cybernews.
He recalled a recent case in Texas: researchers investigating a compromised server discovered a sophisticated exploit chain. However, logs uncovered during the investigation indicated that the attackers didn’t know what commands to use to activate the remote shell.
Even though there’s little novelty in cyberthugs renting malicious tools made by professionals, there are indicators that other forces are at play. Traditionally, in areas such as bot management, 10% of attackers caused 90% of the damage. Now, however, attackers are getting more and more efficient.
“The attack velocity and the complexity of the attacks just seem to be growing disproportionately, pointing to the offensive side making use of the new tools on hand,” Herrin said.
Unprecedented challenges ahead
Even though the race for artificial general intelligence (AGI) stirs the imaginations of tech evangelists worldwide, a far more down-to-earth technology, Agentic AI, will come to dominate the tech landscape in the near future. Agentic AI systems act without human oversight and are focused on autonomously completing tasks and solving problems.
Real-world applications of Agentic AI include self-driving vehicles and AI assistants that manage schedules and make appointments. Combined with other AI systems, such as LLMs, Agentic AI could, in theory, automate complex processes in large organizations.
Herrin thinks that many companies will start thinking hard about the benefits of cutting “expensive people” from the payroll, swapping them for custom-built or rented AI agents to slash costs on human resources (HR) tasks such as onboarding, offboarding, and payroll management.
“Imagine that a company automates a lot of its HR functions. And then, an attacker is able to take control of these AI agents. Not only would they have data that they could put up for ransom, but they would also control the entire HR department, and the people who are the company’s fallback would be gone,” Herrin theorized.
Given how fast humanity has adopted the likes of ChatGPT, Agentic AI adoption will likely accelerate visibly in 2025 and 2026, opening up unprecedented challenges for defenders, who usually lag behind attackers.
“I don't think that's going to be the majority of cases, but I definitely think that there are going to be companies that just jump in with both feet without understanding the risks and the resiliency that they're lacking now,” Herrin said.
Starting with the basics
The key problem with AI is the same as with every other new technology: the pace of adoption outmatches how quickly defenders can secure it. Defenders simply don’t have enough time to prepare for every new attack surface that groundbreaking developments such as AI create.
“You get in a room with CISOs, and they don't know how many servers they have or are struggling with expiring certificates. That's the stuff that defenders are challenged with: the basics, like keeping up with the attack surface. And that doesn't bode well for continued acceleration,” Herrin explained.
It’s not that the people behind cybersecurity are incompetent. Herrin notes that two new papers on AI are published every day, and it would take superhuman capabilities to stay up to date with every development in AI and its security.
The good news is that attackers who leverage novel tech don’t seem to change their own behavior. Herrin points out that threat actors always look for the easiest way in, the simplest exploit. That’s why companies should reinforce the fundamentals: understanding their assets, actors, interfaces, and actions.
“What I'm worried about with AI is that not only are companies going to be chasing the shiny thing and not focusing on the fundamentals, but when they're implementing their own models, they're changing where their data lives. They're changing how their data is exposed. They're changing who the data is available to. And anytime there's a changing attack surface, defenders tend to miss things,” Herrin explained.
However, the changes AI could spur all but guarantee that humanity will open every Pandora’s box available, because the benefits seem to outweigh the costs.
“The power that we're unlocking here as a species is going to do more good than harm. But it's going to be a little rocky. We've never seen this kind of democratized power before. I think we'll land okay, but it's going to be an interesting ride, that's for sure,” Herrin concluded.