The wars of the future will hinge on technology, with human ethics taking a back seat.
This is the central thesis of the new episode of our podcast, “Through A Glass Darkly.” The wars in Ukraine and, more recently, between Israel and Hamas have sparked renewed interest in the use of AI in warfare. This has notably led the US and 30 other nations to prescribe boundaries for the military use of the technology.
In today’s 45-minute episode, we discuss the opportunities that AI brings to warfare and the threats it poses to humanity.
- The militarization of AI will surely face ethical and moral pushback. For example, Project Maven, the Defense Department's most visible AI tool, is designed to process imagery and full-motion video from drones and other surveillance assets and to detect potential threats. It hit the headlines when Google staff protested being asked to build software that would improve drone targeting. Google listened to its rebellious employees and pulled out of the project.
- The cost-efficiency argument and AI’s potential to avert wars. AI tools can already determine whether a building is civilian, predict enemy troop movements, suggest how to take out a given target, and estimate collateral damage, among other things. This could make wars cheaper and keep collateral damage to a minimum. AI might even recommend against starting a war when the odds of winning are slim. Why not use it, then, if war can be short and collateral damage minimal?
- The question of responsibility. The AI system guides the hand, but it’s a human being who pulls the trigger. However, people tend to overtrust machines, indirectly giving them too much power in war. Legally, if the wrong target is hit, or there’s unforeseen collateral damage – in other words, civilian casualties – who takes the blame? Will it be the soldier, the actor who gets paid the least, or someone higher up the food chain?
- How would AI erode the autonomy of human decision-making? Militaries plan to stitch many of these individual tools into an automated network – a kill web, if you will. When relying on such a web, it’s unclear whether a human decision is much of a decision at all, since all of the factors have been calculated by the machine rather than by a human. An anonymous army operative called this choice between “approve” and “abort” a “believe” button.
- Would AI lead to a more cold-blooded military? Deploying that so-called kill web – all the AI tools available – might create a fiercer army. If all you have to do is decide whether to press the “believe” button, you, as a soldier, are relieved of the duty to weigh all the factors and judge for yourself. Does this give you peace of mind and, therefore, a Bondian license to kill?
- Bringing AI to battle may mean fewer civilian casualties. However, there’s a hidden cost to conjoining human judgment and mathematical reasoning. What’s more ethical or moral – killing people by accident (collateral damage in war), or planning and knowing with precision who is going to die and when?
- Should the world agree on AI parity – perhaps something similar to nuclear weapons parity?