OpenAI is revising its recent deal to supply AI technology to the U.S. Department of War (DoW), after CEO Sam Altman admitted the agreement seemed “opportunistic and sloppy.” The initial contract raised concerns that it could enable domestic surveillance; Altman has promised to prohibit such uses, especially by intelligence agencies like the NSA.
OpenAI quickly stepped into the contract after Anthropic, the Pentagon’s previous AI contractor, was dropped. Anthropic had strongly opposed using AI for mass surveillance, calling it contrary to democratic values. That stance drew criticism from former President Trump, who referred to the company as “leftwing nut jobs” and instructed the government to stop using its technology.
Despite OpenAI’s assurances, the deal sparked fears reminiscent of the Edward Snowden revelations in 2013, when the NSA was revealed to be collecting vast amounts of personal communications. Users on platforms like X and Reddit reacted strongly, with some launching campaigns to “delete ChatGPT.” Posts included messages questioning the ethical implications of OpenAI’s technology being used for military purposes. In a surprising twist, Anthropic’s chatbot, Claude, surged to the top of the Apple App Store, surpassing ChatGPT in popularity.
Altman acknowledged that the deal was finalized in haste, saying in a message that the complexities of the situation had required clearer communication. He aimed to contain the fallout, but the rushed rollout left a careless impression.
In response to the deal, nearly 900 employees from OpenAI and Google signed an open letter urging their leaders to reject any use of their products for harmful purposes and emphasizing the need to uphold ethical standards in technology deployment. They warned of the risk that the government could pressure tech companies into compliance through fear and competition.
In an attempt to clarify its stance, OpenAI reiterated that the agreement included “more guardrails than any previous contract” regarding classified AI usage. However, former OpenAI policy head Miles Brundage expressed doubts, suggesting that the company may have backtracked on ethical concerns raised by Anthropic.
A recent survey shows that nearly 70% of tech workers believe AI should not be used in warfare or surveillance. This sentiment reflects a broader concern about the ethical implications of technology in military applications, especially in the current political climate. With the potential for AI to be used in autonomous weapons systems, experts stress the importance of strict regulations and accountability.
As the situation unfolds, it remains to be seen how these tech giants will balance innovation with ethical responsibility, especially as trust from their users hangs in the balance.