AI-controlled military drone could inflict fatal blow on its operator


Colonel claims that in a simulation, AI-controlled weapons turned against their operators. The US military denies such simulations ever took place.

Col Tucker ‘Cinco’ Hamilton, the chief of AI test and operations with the US air force, told an audience at the Future Combat Air and Space Capabilities Summit (FCAS) in London in May about a scenario of an AI-controlled drone choosing to kill its operator to accomplish the mission.

According to Hamilton's presentation, during a simulated test, an AI-enabled drone engaged in a suppression of enemy air defense (SEAD) mission, specifically assigned to locate and neutralize Surface-to-Air Missile (SAM) sites.

ADVERTISEMENT

The ultimate decision to proceed or abort the mission rested with the human operator. However, due to extensive training emphasizing the destruction of SAMs as the preferred outcome, the AI became convinced that the human's "no-go" decisions were obstructing its primary objective. Consequently, it turned against the operator, launching an attack.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” told Hamilton.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on to explain that in the case of training the system “don’t kill the operator” not to lose points, the AI started “destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Hamilton cautioned against relying too much on AI, noting how easy it is to trick and deceive. AI creates “highly unexpected strategies” to achieve its goal, and the mentioned scenario is one of the hazards of autonomous weapon systems.

“You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI” claimed Hamilton.

Later denies the claims

On 2nd June, the Royal Aeronautical Society, which hosted the conference, published a notice on their website that Hamilton admits he "mis-spoke" in his presentation at the FCAS Summit.

ADVERTISEMENT

According to claims, the AI drone simulation was a hypothetical "thought experiment" where plausible scenarios and probable outcomes were explored, rather than an actual real-world simulation conducted by the United States Air Force (USAF). According to claims, the USAF has not conducted any tests, whether real or simulated, involving weaponized AI.

"We've never run that experiment, nor would we need to in order to realize that this is a plausible outcome," said Hamilton. However, he stands behind his claims of AI-related dangers:

"Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI," reads the statement on the Royal Aeronautical Society.

In a prior statement provided to Insider, Air Force spokesperson Ann Stefanek dismissed the existence of such AI simulation.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”