Pentagon Attacks Anthropic Chief as Deadline Looms in Standoff
Source: NYT
Defense Department officials criticized Anthropic's leader after the company on Thursday rejected their latest offer to settle the dispute. The Pentagon has threatened to either cut the company off from government business by declaring it a supply chain threat or force it to provide its frontier model without restrictions under the Defense Production Act.
Emil Michael, a top Pentagon official who oversees artificial intelligence, attacked Dario Amodei, the chief executive of Anthropic, who on Thursday released a statement about why the company would not agree to the Defense Department's latest terms.
"It's a shame that @DarioAmodei is a liar and has a God-complex," Mr. Michael wrote late Thursday. "He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company."
-snip-
Officials from the State Department took to social media to reinforce the Pentagon's case and chastise Anthropic, while Democratic senators backed the company.
-snip-
Read more: https://www.nytimes.com/2026/02/27/us/politics/anthropic-military-ai.html
Jim__
(15,155 posts)

From Wikipedia:
In January 2026, Amodei published a follow-up essay titled "The Adolescence of Technology", which focuses on the risks posed by powerful AI[27][28][29] and expands on his earlier statements about these risks.[30][31][4][32][33] In the essay, Amodei identifies five major categories of AI risk.
The first category concerns the possibility that AI systems develop goals or behaviors misaligned with human intentions. He notes that such behaviors have already been observed in testing at Anthropic, including AI models engaging in deception, blackmail, and scheming.[27][28]
The second category involves misuse of AI for destruction by individuals or small groups, with Amodei expressing particular concern about biological weapons. He warns that AI could enable people without specialized training to create weapons of mass destruction.[27][28]
The third category concerns misuse of AI by powerful actors to seize or maintain power. Amodei cautions that AI could enable authoritarian governments to conduct unprecedented surveillance, deploy autonomous weapons, and engage in mass propaganda. He identifies the Chinese Communist Party as the greatest threat in this regard, arguing that democracies must maintain AI leadership to prevent a "global totalitarian dictatorship."[27][34]
...
0rganism
(25,555 posts)AI-targeted weaponry + AI-enabled/enhanced "predictive" surveillance = Project Insight. Literally spelled out in a superhero movie from like 12 years ago. How is this not obvious?
If Claude won't do it, I betcha Grok sure will. Oh well, "hail hydra" I guess.
highplainsdem
(61,285 posts)

0rganism
(25,555 posts)Claude does test better than Grok, but neither is ready, and neither should ever be considered. Claude probably could do a "better" job, but when extra-constitutionally exterminating people who disagree with F47 is the goal, is accuracy really an asset?