OpenAI removes ban on military and warfare applications

The Pentagon must be pleased with OpenAI’s decision to quietly remove language from its usage policies that banned military and warfare uses of the company’s tools.

OpenAI, probably the most famous AI research company in the world, will now allow more flexibility for military applications of its technology. With little fanfare or even explanation, the firm updated its usage guidelines.

The guidelines no longer include a prohibition on “military and warfare” uses of OpenAI’s technology. The policy now bans only the use of the company’s tools, such as its large language models, to “develop or use weapons.”

The previous policy effectively barred any government or military agency from using OpenAI’s services for defense or security purposes.

Coincidentally or not, the change in wording comes just as military agencies around the world are showing an interest in using AI. The Israeli military has said it is using AI to select targets in Gaza in real time, and AI tools have helped Ukraine in its war against Russia.

Defense departments, including the Pentagon, which is awash with money and lucrative contracts for suppliers, are seeking to use generative AI in administrative and intelligence operations.

In other words, OpenAI’s tools could help governments run kinetic military operations more efficiently even if the technology itself were not directly used to, say, eliminate targets. ChatGPT cannot fire a missile or maneuver a drone.

In November, the Pentagon issued a statement on its mission to promote the responsible military use of AI and autonomous systems.

The statement explained: "Military AI capabilities include not only weapons but also decision support systems that help defense leaders at all levels make better and more timely decisions, from the battlefield to the boardroom, and systems relating to everything from finance, payroll, and accounting, to the recruiting, retention, and promotion of personnel, to collection and fusion of intelligence, surveillance, and reconnaissance data.”

However, Sarah Myers West, managing director of the AI Now Institute, told The Intercept, the outlet that first broke the story, that the language of the updated policy is unclear and leaves room for interpretation and abuse.

“Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” she said.

“The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement.”

In 2023, on our podcast “Through A Glass Darkly,” we discussed how the militarization of AI would surely face ethical and moral pushback. On the other hand, AI tools might also help warring sides minimize civilian casualties.