
Amaryllis

(11,206 posts)
Wed Mar 4, 2026, 10:59 AM 6 hrs ago

Anthropic: The AI Company That Told Trump's War Machine to Go F**k Itself

https://deanblundell.substack.com/p/anthropic-the-ai-company-that-told

Anthropic: The AI Company That Told Trump's War Machine to Go F**k Itself
The complete story of Anthropic's Pentagon war, Sam Altman/OpenAI's opportunistic betrayal, and why 1.5 million people deleted ChatGPT in 48 hours.
Dean Blundell

March 4, 2026

In a week that felt like a Tom Clancy novel written by someone who actually understands ethics, Anthropic — the company that makes the AI you might be reading this on — went to war with the Pentagon, got blacklisted by Trump, watched its rival stab it in the back, and somehow ended up #1 in the App Store. This is the full story. Buckle up, fellow nerds. It’s a good one…

The Setup: What Is Anthropic, and Why Should You Care?
Most people know ChatGPT. Fewer know Claude. That’s about to change.

Anthropic was founded in 2021 by Dario Amodei and his sister, Daniela Amodei, along with a crew of seven former OpenAI researchers who looked at where OpenAI was heading and collectively said, “absolutely not.” They walked out of what would become the most profitable AI company on earth because they believed the race to commercialize AI was outpacing the ethical guardrails. So they built their own company — structured not as a profit-maximizing corporation, but as a Public Benefit Corporation (PBC), with a legally binding obligation to balance profit with its mission: ensuring AI benefits humanity.

Here’s why that corporate structure matters more than you think: Anthropic also created something called the Long-Term Benefit Trust (LTBT), which holds special Class T shares with the power to elect the company’s directors. That means no single investor — not Amazon, which has invested $8 billion, nor Google with its $3 billion — can override the company’s safety mission. The founders deliberately gave up long-term financial control to an independent mission guardian. In Silicon Valley terms, that’s basically insane. In democratic terms, it’s exactly what you’d want.

The board includes Dario and Daniela Amodei, alongside Netflix co-founder Reed Hastings, venture capitalist Yasmin Razavi, Confluent CEO Jay Kreps, and Chris Liddell. This isn’t a MAGA boardroom. This is a group of people who built safeguards into the company’s DNA before the company even had a product. That matters because of what just happened.

more at link
4 replies

highplainsdem

(61,398 posts)
1. Nice recap of what happened last week, but ignores ways in which Anthropic, like other AI companies,
Wed Mar 4, 2026, 11:36 AM
5 hrs ago

has been less than ethical. See this:

Anthropic Isn't a #Resistance Hero (Slate, March 3, 2026)
https://www.democraticunderground.com/100221069072

https://slate.com/technology/2026/03/ai-anthropic-openai-pentagon-resistance.html

The hullaballoo around Anthropic’s fight overshadowed another major development last week: The company was ditching its “responsible scaling policy,” a safeguard, unique within the sector, meant to prevent it from developing risky A.I. tools too quickly. It’s not the first time Anthropic has been so flexible with its self-imposed rules. In 2024, it scrapped its blanket ban against selling Claude products to government spy agencies; just after Trump’s reelection, it also partnered with Palantir and Amazon to sell their tools to U.S. military customers. This year, the Pentagon made use of the Palantir-Anthropic suite in planning the kidnapping of Venezuelan President Nicolás Maduro, a campaign that killed dozens of locals. Even after the capture, Anthropic participated in a Pentagon bidding contest, proposing a system whereby Claude would interpret voice commands so as to guide offensive, semi-autonomous drone swarms that would employ some human backup.

In the most technical sense, none of this violates the red lines that Amodei outlined around surveilling Americans or allowing his tech to power fully autonomous killing machines. But those lines appear all the thinner when you consider that Anthropic willingly outsourced Claude use to two corporations—Palantir and Amazon—that are actively enthusiastic about both applications, especially in partnership with this administration.

That kind of convenient ethical punt has been a constant of Anthropic’s brief life span. Long before it reneged on its promise of “responsible” and careful A.I. development, Anthropic used the same unethical shortcuts that have invited so much opprobrium upon competitors like Meta and OpenAI: mass-pirating copyrighted books and songs to speed up model training, allegedly circumventing Reddit’s anti-A.I.-crawler protections, and extending its timeline for retaining users’ private chats and Claude sessions. For a company founded by ex-OpenAI executives disaffected with Sam Altman’s business practices, it seemingly has little compunction about the aggressive tacks it’s already taken to shore up its $380 billion bottom line.


Every single generative AI company that trained its AI on data sets of stolen intellectual property - and I'm not aware of any that didn't - made a deliberate unethical choice to steal IP and harm the owners of that IP. The genAI industry is built on theft.

Anthropic is slightly less unethical than other AI companies working with Trump and the Pentagon. And I'm glad they made that decision last week.

Amaryllis

(11,206 posts)
2. Thanks for the additional info. I basically know nothing about this so happy to have more input.
Wed Mar 4, 2026, 11:44 AM
5 hrs ago

highplainsdem

(61,398 posts)
3. You're welcome! I just found the thread on
Wed Mar 4, 2026, 12:30 PM
5 hrs ago

what Amodei had written in 2021. Artist Karla Ortiz posted about it on both X and Bluesky. The easiest way to show you the document she wanted people to see is to copy the images from X, then her posts from Bluesky going into specifics.

[images from X not reproduced]
Her first post below shows Anthropic's complaint last month that a Chinese AI company had ripped it off - a complaint all the human creatives ripped off by AI companies found laughably hypocritical.

1/7 Recent unsealed documents from Bartz v Anthropic showed an internal essay “An Economic Model for Compensating Data Producers” by Anthropic’s CEO, 2021.

Anthropic knew of the importance and cost of our works. They then willfully decided to steal it. They do what they condemn

Let's break it down 👇

Karla Ortiz (@kortizart.bsky.social) 2026-02-24T21:52:49.220Z


2/7 The Anthropic CEO essay begins with plain acknowledgement of the wholesale theft of works across multiple industries (creative, technological, scientific etc) to train GenAi models.

He also notes concentration of wealth, inequality and making labor obsolete as outcomes.

Karla Ortiz (@kortizart.bsky.social) 2026-02-24T21:52:49.221Z


3/7 After clearly describing the major theft issue for ALL GenAi companies (calling it an extractive economy) Anthropic’s CEO begins to describe possible consequences to the GenAi industry due to this theft.

Consequences to theft that later, Anthropic had no issue engaging in.

Karla Ortiz (@kortizart.bsky.social) 2026-02-24T21:52:49.222Z


4/7 Anthropic’s CEO then states that Anthropic should find alternatives to compensate those who create the works they need. Works from places like Github, AO3 and so on.

He admits our works are valuable. He admits our works are important. His company steals it all anyway.

Karla Ortiz (@kortizart.bsky.social) 2026-02-24T21:52:49.223Z


5/7 After stating potential ways to compensate those who create the works (data) Anthropic desperately needs, the Anthropic CEO ends his essay by reiterating the need of a system that acknowledges and compensates those his tech is solely dependent on.

He steals our works anyway.

Karla Ortiz (@kortizart.bsky.social) 2026-02-24T21:52:49.224Z


6/7 So to summarize: In 2021 the Anthropic CEO knew full well that taking works they did not own or have rights to, to train their models, was wrong. Yet they willfully did it anyway.

But this isn’t exclusive to Anthropic. *ALL* GenAi companies do this.

This theft MUST stop.

Karla Ortiz (@kortizart.bsky.social) 2026-02-24T21:52:49.225Z


7/7 Anyway read the essay and side commentary in full. You’ll see how Anthropic was well aware of the deception, the theft and harm they willfully engaged in.

As you read remember, they ignored everything they spoke about and stole our works anyway.

www.courtlistener.com/docket/69058...

Karla Ortiz (@kortizart.bsky.social) 2026-02-24T21:52:49.226Z

Kid Berwyn

(24,006 posts)
4. Better than a stick in the AI
Wed Mar 4, 2026, 01:03 PM
4 hrs ago

While Peter Thiel has his fingers in every pie, both fresh and rotten, he seems to be outvoted at Anthropic.
