We’re all aware by now that AI can both help and hurt. It promises to streamline tedious processes, but also places millions of jobs at risk. And that’s just the beginning. Much worse is to come if the machines are taught to kill.
Unknown: Killer Robots is a new documentary on Netflix. It’s certainly different from the other films in the Unknown series and, to be fair, from most of the other documentaries on the platform.
On Netflix, we’re pretty accustomed to seeing myriad clone-like true crime miniseries or historical expeditions filled with bizarre claims about lost legends. Yep, we’re talking about Ancient Apocalypse.
Killer Robots feels authentic: it’s fresh and asks genuinely relevant questions about the use of artificial intelligence in the military. It’s like watching a terrible beauty being born.
This is because, beneath all the fanfare and love for ChatGPT, there’s an international AI arms race going on. The greatest tech innovator has always and inevitably been the military – so we should watch it closely.
Worms squirming in the can
The arms race is quite fun: the US, China, and partly Russia (partly, because it’s been struggling at conventional warfare in Ukraine) are all competing to come up with the fastest and most efficient ways to destroy their enemies without losing any manpower of their own.
And if you teach a machine to do it autonomously – all the better. What’s not to like? Well, it turns out a lot can go wrong. I personally think it will – because the innovations described in the film are not speculative.
The can of worms has been opened, and what’s inside is chilling. Isaac Asimov must surely be turning in his grave and ripping his Three Laws of Robotics to pieces – it turns out they were fictional for a reason.
Asimov’s first law states: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Rubbish, says Brandon Tseng, a former US Navy SEAL and co-founder of Shield AI, a company developing AI tech for military use. Tseng served in Afghanistan and knows firsthand how deadly close-quarters combat inside a building can be.
Shield AI is working on AI-powered drones and fighter jets that could operate autonomously without GPS, communications, or a human pilot – inside the aircraft or on the ground.
Super-efficient, fast drones will help troops avoid ambushes and booby traps inside buildings, and the fighter jets, piloted by a computer, are shown beating an experienced US Air Force pilot in successive simulated dogfights.
It’s safe to say Tseng is a proponent of the use of AI in the military. He’s eager for others to realize that if the US doesn’t innovate, then China most definitely will. Russia is in the game, too – as are smaller but wealthy countries that can afford to be ambitious, and need to be.
Dual use everywhere
This is not a pro-AI film, though. Neither is it technophobic, to be fair, but it’s immediately refreshing – if challenging – to hear what the other side has to say. The other side here being AI scientists and thinkers.
Or even other soldiers. Here’s Paul Scharre, a former US Army Ranger, who remembers that he and his men agreed not to shoot when insurgents sent an eight-year-old girl, effectively a human shield, to scout for danger.
The low-hanging question here is this: a robot would have seen the girl as a legal and legitimate target, wouldn’t it? That feels like a stretch – AI could undoubtedly be taught to exclude children as adversaries. The machines are still learning, after all, even if they aren’t likely to become as self-aware as humans.
Other examples, illustrating the “dual use” nature of AI (and, really, of everything – fire and even human intelligence are dual use), work better.
The Massachusetts Institute of Technology has built a robot dog that uses machine learning to quickly navigate places too dangerous for humans – but, if armed with a machine gun, it could just as quickly eliminate multiple enemies. A weaponized, autonomous, headless robot killer dog would be an amazing warrior – what about a battalion of them?
That’s a bit telegenic, yes. The stories of Sean Ekins and Fabio Urbina, a pair of pharmaceutical developers, are more down-to-earth – but no less scary.
Ekins and Urbina use AI to tweak molecules and create tremendously specific drugs to fight a variety of diseases. This is obviously great, but the scientists then decided, as an experiment, to reverse the AI procedure – to “just flip a 0 to a 1.”
For fun, sure. Except the result wasn’t funny at all: the program produced chemical formulas for 40,000 deadly toxins overnight, on a six-year-old Apple Mac. You know VX, the extremely lethal nerve agent? Think bigger, much bigger, the scientists say.
“We were totally naive … Anyone could do what we did. How do we control this technology before it is used to do something totally destructive?” said Ekins.
Ground Control to Major Tom
I recently rewatched an older HBO documentary cut from the same cloth. The Truth About Killer Robots was released in 2018 and shows how humans are becoming more dependent on automation.
Back then, the movie talked about a single case of a robot being used to kill someone – in Dallas, the police strapped C4 onto a bomb-detecting robot and triggered it next to a mass shooter who had barricaded himself in a building.
Five police officers had been killed before the robot was deployed but, thanks to the machine, further casualties were avoided. Tellingly, the bomb squad that lent the robot to the cops refused to take on the mission themselves.
An interesting point, even more valid today, was raised by the director of the HBO film himself, Maxim Pozdorovkin. He said that sending a robot to kill someone feels uncomfortable: “You can’t quite pinpoint it but it touches into some kind of fundamental, uncanny, discomfort.”
I don’t know about that, of course. The robot didn’t decide to strap explosives onto itself and neutralize a suspect on its own; plus, we could talk to hundreds of CIA drone operators who kill brown people in the Middle East via remote control and with impunity.
The point somehow sticks, though. Obviously, autonomous robots and smart drone swarms are still a near-future phenomenon, but the mere possibility of them wreaking havoc while we’re having a beer in the back garden feels a lot like losing control.
Emilia Javorsky of the Future of Life Institute, the very same think tank that initiated an open letter calling on all AI labs to pause giant AI experiments, says in the Netflix film that AI is one of the existential risks facing humanity.
“We’ve spent the last 70 years building the most sophisticated military on the planet, and now we’re facing the decision as to whether we want to cede control over that infrastructure to an algorithm, to software,” says Javorsky.
Her Future of Life Institute also has a new film out, by the way. Artificial Escalation depicts a world where AI is integrated into nuclear command, control, and communications systems.
And when disaster strikes, military commanders around the world discover that, thanks to the new AI system, everything has sped up, and there’s not enough time to prevent a major catastrophe. Yes, it’s fiction – but is it?