Cyber pro and FBI vet on militarization of AI: “Let’s face it, it’s an arms race”


Innovation, efficiency, jobs – everyone’s talking about AI's civilian applications. But a cybersecurity expert and an FBI veteran tell Cybernews in an interview that the technology is also being militarized, and it’s worrisome.

“Will machines select targets with no human intervention in the name of efficiency? What if rogue nations such as North Korea or Iran decide to do that? It’s definitely a concern,” says James Turgal, VP of global cyber risk and board relations at Optiv and a 22-year FBI veteran.

Turgal is sure that the cyber arms race has only intensified, thanks to breathtaking advances in the development and use of large language models.

According to him, recent efforts by the US to engage China in ensuring that AI use in warfare would not cause a global catastrophe have predictably hit a dead end. Beijing wants to keep its momentum and use the technology to disrupt its adversaries.

If the US State Department wants China and Russia to commit that AI will not make decisions about nuclear employment, that commitment isn’t coming – nor is any kind of guarantee that AI won’t be used in managing and deploying nuclear weapons.

“China, Russia, North Korea – they are utilizing those new tools at an alarming rate. It’s a battlefield, and we’re in the middle of it,” Turgal told Cybernews.

The middle of it? In this particular race for AI supremacy, America is behind – significantly. A new report has shown that most GenAI patents come out of China, which outpaces the US in terms of innovation.

According to Turgal, China’s security apparatus is undoubtedly “using and abusing” the technology. But he has hope: “Make no mistake, the Pentagon is all over it. The combination of our robust military program and our lively private sector is how we win this.”

Defending forward

James, the militarization of AI, or the automation of warfare, is probably unavoidable. What concerns you about this?

Look at the weaponization of the internet alone – we're long past that, right? Threat actors are able to attack governments and the private sector every day. The internet as a tool has been weaponized for as long as it's been around.

We need to change our mindset about this and not just be afraid of the fact that we’re weaponizing AI, because there’s a way to weaponize it both positively and negatively.

There are areas where AI can be weaponized to enhance military operations, but large language models can also help us understand and extract more data about how our adversaries work.

There's a whole intelligence aspect to this as well. AI is being used to gain intelligence and understand vulnerabilities, but on the flip side, it's also being used by threat actors to rewrite their phishing playbooks and to look for vulnerabilities in software and in behavioral traffic on networks.

There was recently a meeting between US and Chinese officials in Geneva to start a conversation about AI and mitigate the global risks of its use in warfare. The meeting didn't go so well, apparently. Because of intense tech competition between the two countries, it doesn’t look like any kind of technical cooperation is going to be possible for the foreseeable future. Do you agree?

Yeah. The ability to stay ahead of the threat in this particular race – and let's face it, this is a cyber arms race we're in the middle of – is key. The FBI learned a lot after the September 11th terrorist attacks about how to stop thinking purely in terms of responding and instead defend forward.

The goal is to get into the space of your adversaries, to understand who they are, to literally get between them and the victims out there. The idea is to have this type of persistent engagement in order to stay ahead of the threat and to just think differently.

Data is everything. I've worked on hundreds of cyber cases throughout my career, and now, more than ever, that concept is being proven by the advancement of AI and large language models. These tools are nothing without the data – and the volumes involved are just astronomical.

I absolutely don't believe there is going to be any kind of agreement from a global standpoint on the use of those tools. Because guess what? China, Russia, North Korea – they are utilizing those at an alarming rate right now.

China seems to be leading the AI race at the moment. Image by Cybernews.

There are a number of threat actors out there that are connected to the Russian and Chinese intelligence services and are using large language models and AI to enhance their attacks against the US. If those countries are saying that they're going to agree not to use AI in an offensive capability, that's just a bunch of crap.

I recently interviewed an AI auditing specialist, though. She said she visited China just this year and saw that their regulation of generative AI is much more robust than anything the US has. Is that the case?

From a research standpoint, that may indeed be what her unclassified sources tell her. But China, as a government, wants to restrict the use of commercial AI and have a robust ability to control AI used in the Chinese private sector.

Of course, they don't want it used against the government. So, in this way, she’s right – I'm sure there is robust regulation of the use of AI and large language models as it applies to the private, non-military sector in China, as opposed to the US.

We have companies now, such as OpenAI, that are utilizing these new tools, and the field is exploding. There are a ton of these companies because there’s a difference in philosophy – we actually want our private sector to be able to use, understand, and enhance these tools.

Make no mistake, the Chinese government, the military, and the Ministry of State Security are using and abusing AI at an alarming rate. What’s more, they’re probably using AI against their own people.

The Chinese government only wants to restrict the use of AI in their own private sector because they don't want those tools used against the government.

“Space is the battlefield now”

The US is trying to slow China’s advance with sanctions and export bans. There are numerous examples of how Beijing simply works around these restrictions. Is that a problem?

There is, certainly. I don't think the sanctions have caused enough damage from an economic standpoint to restrict China on the chips front.

Still, when we talk about the level of technology, China is 20 years behind us and the Five Eyes countries (the Anglosphere intelligence alliance comprising Australia, Canada, New Zealand, the United Kingdom, and the United States).

That's why we see China's prolific cyberattacks to steal intellectual property. Beijing has a tech plan to double its achievements in certain areas every five years.

Anybody who understands anything about technology knows that's virtually impossible. There are not enough people in the world. China's only way to do it is by stealing things, and so the amount of IP theft is astronomical.

All organizations inside China are doing just that. Again, how do you combat that? You need to stay ahead of the threat, you've got to have persistent engagement.

However, the US Department of Defense is all over this. The Pentagon's got at least 800 active military AI projects, and those are the unclassified ones. We’ve already got fully autonomous lethal weapons such as specially modified Lockheed Martin F-16s flown by an AI agent.

Beijing has a tech plan to double its achievements in certain areas every five years. But anybody who understands anything about technology knows that's virtually impossible.

James Turgal.

Plus, there's a whole classified side of this. I’m sure we’re leading the world in having both a robust military AI development program and a strong private sector.

The number of educated individuals working for think tanks or private companies far exceeds the number working on these issues for the US government or the DoD.

Our philosophy allows us to have that balance with the private sector. That's how we stay ahead and win this.

The Pentagon's annual report on Chinese military power said that, already in 2022, China began discussing multi-domain precision warfare and the use of big data and AI to rapidly identify key vulnerabilities in American military systems. What could those be?

To me, it's anything that has to do with space. That’s the most vulnerable area, where threats and wars are going to happen. We’re already talking about the ability to use certain weapons to take out satellites. Information moving through space – and space itself – is probably the most vulnerable area of all.

Related to this is the issue of nuclear launches. It is obviously a very sensitive and secretive area, and little is known about how far different countries want to automate nuclear decision-making. Is it important to have rules on this globally, or is unpredictability actually more important to large countries like the US, China, or Russia? I mean, if you're unpredictable, your adversary is not going to make the first move because it’s not going to know what your response might be, right?

I'm more worried right now about the use of AI and large language models by smaller rogue countries such as Iran and North Korea. The use of those tools may enhance their ability to become a nuclear power. That’s the battlefield right now.

Those countries are unpredictable. Yes, they have conventional weapons, but historically, the US and its allies have been able to restrict the flow of the know-how needed to actually build nuclear systems.

But there’s enough data out there that one of those countries could piece together to enhance its ability to build weapons of mass destruction. Plus, AI would help them learn at a much faster pace.

Where do you think this is all going in terms of automating warfare and decision-making in warfare? Surely humans still need to be in ultimate control, don’t they?

Yes. Russia still uses the Dead Hand concept for nuclear launches, but it keeps human beings on the button. People can be fallible as well, though.

I envision a combination of technology and some fail-safe options so that nothing happens by accident.