Pentagon, Anthropic
Anthropic said it sought narrow assurances from the Pentagon that Claude won’t be used for mass surveillance of Americans or in fully autonomous weapons.
The Pentagon previously requested that Anthropic, OpenAI, Google, and xAI allow the use of their AI models for “all lawful purposes.” Anthropic put up the most resistance, citing fears its models could be used for autonomous weapons systems and mass domestic surveillance.
The Pentagon may decide to officially designate Anthropic a "supply chain risk" to push it out of government work, sources say.
A senior official in the Department of Defense accused Anthropic of "lying" about how the U.S. military intends to use the private tech firm's Claude AI system.
Amodei’s release makes clear that Anthropic is not against the use of its AI by the U.S. military, explaining that “Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications.”
Defense Secretary Pete Hegseth gave Anthropic until Friday at 5 p.m. to grant the military unrestricted use of its AI technology.
In January, Anthropic “retired” Claude 3 Opus, which at one time was the company’s most powerful AI model. Today, it’s back — and writing on Substack.
Claude Sonnet 4.6 is more consistent with coding and is better at following coding instructions, Anthropic said.
Claude Code receives new Remote Control features for long-running tasks; start a session with /remote-control and open the session URL on mobile.
Anthropic, the AI company behind the Claude chatbot, was founded with a focus on safe technology. But after amending a set of self-imposed guidelines aimed at preventing the development of potentially dangerous AI, the company appears to be scaling back its safety commitments to stay competitive.