Hegseth’s Bold Warning: Will Anthropic Face a Blacklist Over ‘Woke AI’ Issues?



The U.S. military is clashing with AI company Anthropic over ethical standards. Defense Secretary Pete Hegseth has threatened to blacklist Anthropic over its refusal to relax safety restrictions on its models. The threat reportedly came during a recent meeting with Anthropic’s CEO, Dario Amodei, who has held firm against allowing the company’s AI to be used for mass surveillance or as a weapon.

During their discussion, Amodei reiterated that he believes these uses are “illegitimate” and could lead to serious abuses of power. Hegseth, on the other hand, insists that the military should be able to use Anthropic’s AI for “lawful purposes,” which could potentially include military operations and surveillance.

Pentagon sources have indicated that they plan to continue using Anthropic’s tools despite the dispute. If Anthropic holds its position, Hegseth may invoke the Defense Production Act, a law dating back to the 1950s, which could compel the company to make its technology available for military use regardless of its wishes.

The standoff fits a broader pattern in the tech industry: other AI firms, including OpenAI and Google, have recently agreed to let their tools be used in a wider range of scenarios, including military ones. Meanwhile, the term “woke AI” has gained currency among Trump administration officials as a label for safety restrictions on powerful AI systems.

Experts argue that “woke AI” is a vague label, often used to disparage safety protocols and to allege bias in AI chatbots. The controversy adds pressure on Anthropic, which is preparing to go public this year, raising the question of how investors will react to its friction with the administration.

Amodei has noted that Anthropic’s valuation and revenue have continued to grow even as the company refuses to weaken its ethical safeguards. He has voiced concern about concentrating control over AI in too few hands, stating, “My main fear is having too small a number of ‘fingers on the button’,” pointing to the dangers of autonomous weapons without adequate oversight. That caution may resonate as debates over AI safety continue to intensify.

For further insight, see Dario Amodei’s essay on his perspective on AI technology.


