The U.S. military is reportedly using Anthropic's Claude AI model in its operations against Iran. This is notable because the Pentagon recently announced a ban on this kind of technology. The specifics of how Claude is being used remain unclear, but there are public concerns about its potential misuse in surveillance or autonomous weaponry.
The situation escalated after a disagreement between Anthropic and the Pentagon. Anthropic sought strict guidelines to ensure its technology would not be used for mass surveillance of Americans or to control autonomous weapons, and the company's CEO, Dario Amodei, emphasized the importance of upholding American values when deploying its technology.
Despite these concerns, Pentagon officials believe they should be allowed to use Claude for “all lawful purposes.” They argue that it’s already illegal to surveil Americans and that internal military policies limit the use of fully autonomous weapons. The Pentagon’s chief technology officer, Emil Michael, stated that trusting the military to act responsibly is essential.
Meanwhile, President Trump has mandated that federal agencies stop using Anthropic's technology, giving them six months to phase it out, after Defense Secretary Pete Hegseth labeled the company a supply-chain risk. Reports suggest that finding an alternative AI platform could take several months.
The Israel Defense Forces (IDF), also involved in the conflict, use AI technology as well, though it is unclear whether they rely on Claude specifically. The IDF deployed its own system, "Lavender," during the Gaza War.
With the growing role of AI in military operations, experts warn of the potential ethical implications. Many call for clearer guidelines and policies to ensure responsible use of such powerful technology. As AI becomes more integrated into warfare, discussions about its use are more important than ever.
Recent surveys reflect public unease: a Pew Research Center study found that about 60% of Americans worry AI could exceed human control in defense applications, underscoring the need to balance innovation with ethical considerations.
As this situation evolves, it raises critical questions about technology, safety, and accountability in the modern military landscape. The conversation around AI and its implications for national security is just beginning, and it’s a topic worth watching closely.