OpenAI’s CEO, Sam Altman, recently told his employees that the company doesn’t influence how the Pentagon uses its artificial intelligence (AI) products. This statement comes at a time when there’s a lot of concern about the military’s growing reliance on AI and the ethical implications of its usage.
“You don’t get to make operational decisions,” Altman explained, emphasizing that opinions on military actions, such as those in Iran or Venezuela, aren’t OpenAI’s to weigh in on. His remarks reflect the tension surrounding the deployment of AI in military contexts, especially given strong feelings among AI workers about ethics and responsibility.
In recent weeks, the Pentagon has been asking AI companies, including OpenAI, to remove certain safety measures from their models to broaden their military use. This has raised alarms: reports suggest that AI systems have already played a role in military operations, influencing decisions in both Venezuela and Iran.
Rival AI company Anthropic, known for its Claude chatbot, recently turned down a deal with the Pentagon, citing concerns that its technology could be used for mass surveillance or fully autonomous weapons. In a surprising move, U.S. Defense Secretary Pete Hegseth labeled Anthropic a “supply-chain risk,” a rare classification that could significantly impact the company.
On the same day, the Pentagon struck a deal with OpenAI meant to replace Anthropic’s involvement in military projects. This sudden shift generated backlash from the public and OpenAI employees, who felt uneasy about the ethical lines being crossed.
In response, Altman and OpenAI have stressed that their technology will comply with legal standards. Altman even conceded that the deal was rolled out too quickly, creating a perception that the company was being “opportunistic.”
Anthropic’s CEO, Dario Amodei, criticized Altman, suggesting he had been dishonest in his comments and accusing OpenAI of compromising its ethical standards for military contracts. The tension between the two companies highlights a broader debate in the tech industry. Many experts argue that as AI continues to integrate into military operations, it’s essential to prioritize ethical concerns.
Surveys indicate that the public is increasingly skeptical of using AI in warfare. One recent study found that about 60% of respondents are uncomfortable with military uses of AI, fearing it could lead to unintended consequences.
As this conversation unfolds, it’s clear that the choices made by AI firms like OpenAI and Anthropic could shape the future of technology and its role in conflict. Keeping dialogue open and prioritizing ethical standards will be crucial as these companies navigate this complex landscape.
For a deeper understanding of the implications of AI in military settings, see the Brookings Institution’s research on the challenges and ethical considerations surrounding AI technology in defense.

