General Discussion
Anthropic Isn't a #Resistance Hero (Slate, March 3, 2026)
https://slate.com/technology/2026/03/ai-anthropic-openai-pentagon-resistance.html

The hullaballoo around Anthropic's fight overshadowed another major development last week: The company was ditching its responsible scaling policy, a safeguard, unique within the sector, meant to prevent it from developing risky A.I. tools too quickly. It's not the first time Anthropic has been so flexible with its self-imposed rules. In 2024, it scrapped its blanket ban against selling Claude products to government spy agencies; just after Trump's reelection, it also partnered with Palantir and Amazon to sell their tools to U.S. military customers. This year, the Pentagon made use of the Palantir-Anthropic suite in planning the kidnapping of Venezuelan President Nicolás Maduro, a campaign that killed dozens of locals. Even after the capture, Anthropic participated in a Pentagon bidding contest, proposing a system whereby Claude would interpret voice commands so as to guide offensive, semi-autonomous drone swarms that would employ some human backup.
In the most technical sense, none of this violates the red lines that Amodei outlined around surveilling Americans or allowing his tech to power fully autonomous killing machines. But those lines appear all the thinner when you consider that Anthropic willingly outsourced Claude use to two corporations, Palantir and Amazon, that are actively enthusiastic about both applications, especially in partnership with this administration.
That kind of convenient ethical punt has been a constant of Anthropic's brief life span. Long before it reneged on its promise of responsible and careful A.I. development, Anthropic used the same unethical shortcuts that have invited so much opprobrium upon competitors like Meta and OpenAI: mass-pirating copyrighted books and songs to speed up model training, allegedly circumventing Reddit's anti-A.I.-crawler protections, and extending its timeline for retaining users' private chats and Claude sessions. For a company founded by ex-OpenAI executives disaffected with Sam Altman's business practices, it seemingly has little compunction about the aggressive tacks it's already taken to shore up its $380 billion bottom line.
-snipping paragraph saying Anthropic does deserve credit for standing up to Hegseth's demands last week-
But to celebrate Anthropic's move through a mass virtuous-capitalism campaign is to give it too much credit; the company did, after all, willingly lend itself to this administration and its most openly craven partners until the final minute. And considering Anthropic's lifelong track record of forgoing the principles that supposedly animate its existence (including the responsible development ethos it cast off last week), no one with any standards should expect this conscientious objection to last either. Enjoy Claude if you want; it's a remarkable chatbot. Just don't expect it to do anything further to preserve our democracy, or anyone's life, or your efforts to prevent A.I. from ruining everything.
OC375
(683 posts)
Like I said, everyone wants what the military has, which is ironic in many instances, and on several levels.
As an aside, it's fascinating watching people root for and against corporations now. I'm team Weyland-Yutani.
highplainsdem
(61,398 posts)
I haven't seen anyone on DU post that they want something because the military has it, and I have no idea which people you mean by "everyone."
OC375
(683 posts)
It's used, directly or indirectly, to market everything from the general to the specific. Tents, software, training, boots, sunglasses, flashlights, radios, etc... "Everybody wants what the military has."
The Anthropic software just very publicly, and allegedly, accomplished some major military feats (firsts?), i.e.: The government uses this stuff. For real!!! Anthropic wasn't front-page news to the majority of people until that, IMHO. It's the most downloaded after all the military use.
As such... It's ironic when a company objecting to military use is now benefiting from that specific use, at least to some extent, if you accept my premise, which you are obviously free to reject. It's also ironic, to me, given the military runs hot and cold with procurement of quality/crap systems, ethics aside entirely. Also ironic to me is willfully choosing to jump into a data pool and algorithm that's already been optimized for government use, and that will likely have to continue participating, at least for the time being, as the war rages. Etc... "Ironic on many levels."
I certainly didn't intend any offense to anyone here. My apologies if I offended.
highplainsdem
(61,398 posts)
Claude was very well known and highly regarded before this disagreement with the Pentagon. Especially for coding. I honestly don't think the military use of it made it more desirable to most (if any) of the people switching to Anthropic now. What you're seeing is liberals who'd been using ChatGPT switching to Anthropic because they consider Anthropic more ethical. I doubt any of them are thinking Anthropic must be better to use because Hegseth and the Pentagon had wanted it.
More people would have heard of Anthropic because of the headlines. You're right about that.
But most people who'd be using Claude, ChatGPT, Gemini or Grok - all of which are being used by the military - would be aware of all four, and some might be using all four for various reasons. There have been lots of news stories about the military using all four. I've literally never seen a social media post suggesting they're better AI models because the military uses them. Maybe other countries' governments evaluate them that way, but I've never seen typical AI users do so.
This subscription surge for Claude is a thumbs-up for Anthropic taking an ethical stand, as opposed to OpenAI being less ethical.