Anthropic’s CEO, Dario Amodei, is back in talks with the U.S. Department of Defense (DoD) after a recent breakdown in negotiations over the military’s use of the company’s AI tools. The discussion is crucial because it will determine how the military can access Anthropic’s Claude models. The Financial Times reports that Amodei is engaging with Emil Michael, a key defense official, to reach a new agreement.
Last week, tensions escalated when President Trump ordered federal agencies to stop using Anthropic’s technology. Defense Secretary Pete Hegseth labeled the company as a risk to national security. This fallout came after Michael criticized Amodei harshly on social media, calling him a “liar” with a “God complex.”
A new contract would allow the military to continue using Anthropic’s tech, which has already been instrumental in U.S. operations, including reported activities in Iran. Claude, Anthropic’s AI model, was first integrated into classified defense networks through a $200 million contract. However, Anthropic wants to ensure its tools aren’t used for domestic surveillance or autonomous weapons. The Pentagon, on the other hand, insists the military needs flexibility to use the technology legally.
In an internal memo, Amodei expressed concern that the DoD wanted to remove a critical clause restricting the use of “bulk acquired data” — precisely the kind of surveillance application Anthropic had sought to rule out. He also criticized messaging from the Pentagon and OpenAI, implying it was misleading.
This turbulent situation comes on the heels of OpenAI securing its own deal with the DoD, which raised eyebrows among Anthropic supporters. Following the announcement of OpenAI’s contract, there was a noticeable backlash, with many users opting to uninstall ChatGPT. In contrast, Claude saw a significant uptick in downloads. OpenAI’s CEO, Sam Altman, later admitted that they may have rushed their agreement and promised revisions in their terms.
Anthropic was founded in 2021 by former OpenAI staff who sought a safer approach to AI. Critics have argued that the company is overly cautious about AI safety. A tech industry group, which includes major players like Nvidia and Google, recently voiced concerns over the designation of Anthropic as a supply-chain risk.
The conversation surrounding AI’s role in defense continues to evolve, reflecting broader societal concerns about technology and security. As these discussions unfold, the balance between innovation and safety remains a pressing issue.
For further reading on AI’s implications for national security, check out the Department of Defense’s AI Strategy.