Nearly 150 retired judges have weighed in on a legal battle between AI company Anthropic and the Trump administration. They filed an amicus brief to back Anthropic, challenging the government’s designation of the company as a “supply chain risk.”
These retired judges, drawn from both Republican and Democratic appointments, are part of a growing chorus of support for Anthropic. Their allies include prominent industry figures, former national security officials, and tech giants such as Microsoft.
The brief centers on the repercussions of government interference with private companies. A ruling against Anthropic, the judges argue, could set a chilling precedent: the designation might jeopardize contracts across the vast network of businesses that work with the military.
The judges pointed out that Anthropic isn’t forcing the military to work with it; the company simply wants to avoid being penalized for its stance on ethical concerns. The Pentagon had previously claimed Anthropic posed a risk after negotiations over using its AI models in classified systems broke down. The company had refused to allow its technology to be used for military drones or for surveillance of American citizens.
The “supply chain risk” label is alarming because it has traditionally been reserved for companies tied to foreign adversaries, not American firms. If the military continues to categorize Anthropic this way, contractors could be forced to strip Anthropic’s technology out of their work, complicating numerous business relationships.
President Trump also ordered all federal agencies to stop using Anthropic’s AI assistant, Claude. In response, Anthropic CEO Dario Amodei said the company had no choice but to take the government to court. The White House framed the dispute as a matter of principle, insisting that no “woke” company would dictate the standards of military operations.
Anthropic’s chief financial officer warned that the company could lose hundreds of millions of dollars in revenue. Recent reports suggest the firm faces significant financial instability in 2026 as its government contracts dwindle.
In a court filing, the administration argued that Anthropic was trying to compel the government to keep using its products despite the risk designation. It emphasized that the Pentagon’s actions were meant to safeguard national interests, citing concerns about how Anthropic might behave with access to government IT systems.
A hearing on Anthropic’s motion for a preliminary injunction is approaching. The legal battle also raises broader ethical questions. Experts such as Irina Raicu of Santa Clara University worry about the implications for businesses: if companies resist government demands on the basis of their own ethical standards, how safe are they?
The case illustrates a critical intersection of technology, ethics, and law. As companies navigate government relationships, understanding these tensions will be essential for the future of innovation and ethical responsibility in the tech sector.

