Anthropic, a leading AI company, recently faced accusations from the Trump administration over its generative AI model, Claude. In a court filing, an Anthropic executive clarified that once Claude is deployed by the U.S. military, the company has no control over it. Thiyagu Ramasamy, head of Anthropic's public sector business, stated that the company can neither shut Claude down nor change how it functions during military operations.
The Pentagon has expressed concerns about the risks of using Claude for national security work. Secretary of Defense Pete Hegseth labeled Anthropic a "supply-chain risk," barring the Department of Defense from using its software. The designation has left some federal agencies reconsidering their partnerships with the company.
In response, Anthropic filed lawsuits challenging the legality of the ban. A court hearing is scheduled for March 24 in San Francisco, where a judge is expected to take up the dispute.
Government lawyers emphasized national security, arguing that the Pentagon cannot accept risks that might jeopardize military operations. The department has relied on Claude for vital tasks such as data analysis and battle planning, and officials fear Anthropic could disrupt those operations by altering or shutting down Claude if it disagreed with particular uses.
Ramasamy firmly rejected these concerns. He insisted there is no “kill switch” or way for the company to alter Claude remotely, meaning military operations remain secure from external interference. He explained that any updates to Claude would need government approval and that Anthropic does not have access to sensitive military data.
This tension highlights broader questions about the role of AI in defense. Experts note that as AI technology rapidly evolves, issues of control and responsibility become increasingly complex. According to a recent survey by the Pew Research Center, nearly 60% of Americans are wary of AI in military applications, fearing it could lead to unintended consequences.
There’s also a historical context here: past military conflicts have seen the introduction of new technologies, often met with skepticism and calls for regulation. In the 1940s, radar technology faced similar scrutiny over its military applications. Today, as AI becomes more integrated into defense, these debates are likely to become even more critical.
As negotiations continue, Anthropic says it remains committed to ensuring its technology is used responsibly, underscoring its ethical standards in AI deployment. Despite the tension, both sides acknowledge the potential benefits AI can offer national defense, a relationship that will need careful management as the technology advances.

