Pentagon, AI and Anthropic
The Pentagon wants Anthropic to remove some of the limits on how Claude can be used by the U.S. military.
Anthropic said it sought narrow assurances from the Pentagon that Claude won’t be used for mass surveillance of Americans or in fully autonomous weapons.
As high stakes as it gets. From Futurism: “Anthropic Blowout With Military Involved Use of Claude for Incoming Nuclear Strike.”
The Pentagon previously requested that Anthropic, OpenAI, Google, and xAI allow the use of their AI models for “all lawful purposes.” Anthropic put up the most resistance, fearing its AI models could be used for autonomous weapons systems and mass domestic surveillance.
The Pentagon may decide to officially designate Anthropic as a "supply chain risk" to push it out of government, sources say.
Anthropic has pressed for assurances that its Claude AI won't be used for mass surveillance of Americans or in autonomous weapons without human oversight.
Amodei’s release makes clear that Anthropic is not against the use of its AI by the U.S. military, explaining that “Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications…”
Defense Secretary Pete Hegseth gave Anthropic until Friday at 5 p.m. to grant the military unrestricted use of its AI technology.
In January, Anthropic “retired” Claude 3 Opus, which at one time was the company’s most powerful AI model. Today, it’s back — and writing on Substack.
Claude Sonnet 4.6 is more consistent at coding and better at following coding instructions, Anthropic said.
Claude Code receives new Remote Control features for long-running tasks: start with /remote-control and open a session URL on mobile.