Anthropic rejects latest Pentagon offer
The Pentagon previously requested that Anthropic, OpenAI, Google, and xAI allow the use of their AI models for “all lawful purposes.” Anthropic put up the most resistance, citing fears that its AI models could be used for autonomous weapons systems and mass domestic surveillance.
The Pentagon may decide to officially designate Anthropic as a "supply chain risk" to push it out of government work, sources say.
Anthropic CEO Dario Amodei, whose company makes Claude, advised students and young professionals to focus on human-centered skills, as software engineering and math work is rapidly being taken over by AI. His remarks came after he warned that white-collar jobs could disappear within the next five years.
In January, Anthropic “retired” Claude 3 Opus, which at one time was the company’s most powerful AI model. Today, it’s back — and writing on Substack.
Anthropic is locked in an escalating public dispute with the United States Department of Defense — recently rebranded by the administration as the “Department of War” — over restrictions on how its AI model, Claude, may be deployed for military purposes.
Anthropic CEO Dario Amodei said on Thursday the company "cannot in good conscience accede" to the military's terms over the use of Claude.
A hacker exploited Anthropic PBC’s artificial intelligence chatbot to carry out a series of attacks against Mexican government agencies, resulting in the theft of a huge trove of sensitive tax and voter information.
DeepSeek, Moonshot and MiniMax created more than 16 million interactions with Claude using roughly 24,000 fake accounts, the U.S. company said in a blog post.
Washington DC: US-based artificial intelligence company Anthropic has said it had cut off access to its AI model, Claude, for firms linked to the Chinese Communist Party (CCP), forgoing several hundred million dollars in revenue as part of efforts to safeguard America's technological lead.