The Pentagon and Anthropic
The Defense Department has been feuding with Anthropic over military uses of its artificial intelligence tools. At stake are hundreds of millions of dollars in contracts and access to some of the most advanced AI on the planet.
If the Pentagon carries out its threat to blacklist Anthropic’s Claude AI platform, it could be three months or even longer before the U.S. military regains access to such a powerful tool on its classified networks, according to multiple sources familiar with the fight between the Defense Department and the AI maker.
Defense officials in the Trump administration also warned they could designate Anthropic, which makes the AI chatbot Claude, as a supply chain risk — or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn’t approve.
Tensions between the Pentagon and Anthropic (ANTH.PVT) are escalating. Axios tech policy reporter Maria Curi joins Market Domination host Josh Lipton to break down the conflict: how it began and its historical precedent.
Anthropic said Thursday that “virtually no progress” had been made in the company’s talks with the Pentagon over the terms of use for its AI models ahead of a Friday afternoon deadline.
Anthropic insists on limits on how its technology can be used and could be labeled a supply chain risk if it does not accept the military’s demands.
Anthropic CEO Dario Amodei says the artificial intelligence company “cannot in good conscience accede” to the Pentagon’s demands to allow wider use of its technology.
Anthropic’s negotiations with the Pentagon over AI safeguards have stalled, with CEO Dario Amodei expressing concerns about the Department of Defense’s final offer. The company is unwilling to accept terms that could allow its Claude model to be used for mass surveillance or autonomous weapons.