Pentagon, Anthropic
Anthropic said Thursday that “virtually no progress” had been made in the company’s talks with the Pentagon over the terms of use for its AI models ahead of a Friday afternoon deadline.
Debate has long swirled around the use of AI in weapons targeting, with the prospect of removing human involvement remaining an uncomfortable one.
Until this week, Anthropic was the only AI company cleared to deploy its models on classified networks. Elon Musk's xAI is now the second.
Defense Secretary Pete Hegseth gave Anthropic until Friday at 5 p.m. to grant the military unrestricted use of its AI technology.
The company's Claude chatbot is one of the few AI systems cleared for use in classified settings. But a standoff between Anthropic and the Trump administration is putting its government work at risk.
Anthropic fears that unrestricted military use of its AI systems by the U.S. government could harm democracy. Military officials have threatened to invoke Cold War-era legislation to force Anthropic to comply.
If the Pentagon carries out its threat to blacklist Anthropic’s Claude AI platform, it could be three months or even longer before the U.S. military regains access to such a powerful tool on its classified networks, according to multiple sources familiar with the fight between the Defense Department and the AI maker.
Anthropic CEO Dario Amodei said on Thursday the company "cannot in good conscience accede" to the military's terms over the use of Claude.