OpenAI’s CEO, Sam Altman, recently admitted the company rushed into a deal with the U.S. Department of Defense (DoD). Speaking at an event in New Delhi, he acknowledged the misstep and said the contract will need to be revised. Altman emphasized that the updated agreement will ensure OpenAI’s AI systems are not used for domestic surveillance of U.S. citizens.
The shift follows controversy over how quickly OpenAI struck the deal, which coincided with political tensions and a change in government directives. Hours before the agreement was announced, President Trump urged federal agencies to stop using tools from rival AI company Anthropic, and the U.S. was preparing to strike Iran.
Altman also made clear that OpenAI’s tools will not be employed by intelligence agencies such as the NSA, citing concerns about the technology’s current capabilities and the need for a better understanding of its safety trade-offs: “There are many things the technology just isn’t ready for.”
The deal also follows a recent falling-out between Anthropic and the DoD, centered primarily on safety measures for AI systems. Defense Secretary Pete Hegseth labeled Anthropic a supply-chain threat, raising further questions about why OpenAI received preferential treatment. Anthropic’s AI had previously been used in a military operation to capture Venezuela’s president, Nicolás Maduro, which sparked debate over the ethical use of AI.
The abrupt announcement of OpenAI’s deal drew backlash online, with many users switching from ChatGPT to Claude, Anthropic’s AI assistant. Reports indicated a notable rise in ChatGPT uninstalls soon after the deal became public.
Responding to the criticism, Altman said he wants Anthropic to be treated fairly, adding that he had conveyed this in discussions over the weekend. In a landscape where AI is evolving rapidly, these events highlight the ongoing tension between innovation, ethics, and government oversight.
According to a recent survey by Stanford University, 62% of Americans believe AI tools should be subject to stricter regulations. As the technology grows, the need for governance and safety considerations in AI partnerships with government agencies becomes increasingly critical.
For further insight into the potential effects of AI on privacy, see the recent research findings published by the National Institute of Standards and Technology (NIST).