
Celerity

(52,002 posts)
Sat Aug 23, 2025, 07:11 PM Aug 23

The AI Doomers Are Getting Doomier: The industry's apocalyptic voices are becoming more panicked--and harder to dismiss.


https://www.theatlantic.com/technology/archive/2025/08/ai-doomers-chatbots-resurgence/683952/

https://archive.ph/bhLPn



Nate Soares doesn’t set aside money for his 401(k). “I just don’t expect the world to be around,” he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I’d heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which “everything is fully automated,” he told me. That is, “if we’re around.” The past few years have been terrifying for Soares and Hendrycks, who both lead organizations dedicated to preventing AI from wiping out humanity. Along with other AI doomers, they have repeatedly warned, with rather dramatic flourish, that bots could one day go rogue—with apocalyptic consequences.

But in 2025, the doomers are tilting closer and closer to a sort of fatalism. “We’ve run out of time” to implement sufficient technological safeguards, Soares said—the industry is simply moving too fast. All that’s left to do is raise the alarm. In April, several apocalypse-minded researchers published “AI 2027,” a lengthy and detailed hypothetical scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. “We’re two years away from something we could lose control over,” Max Tegmark, an MIT professor and the president of the Future of Life Institute, told me, and AI companies “still have no plan” to stop it from happening. His institute recently gave every frontier AI lab a “D” or “F” grade for their preparations for preventing the most existential threats posed by AI.

Apocalyptic predictions about AI can scan as outlandish. The “AI 2027” write-up, dozens of pages long, is at once fastidious and fan-fictional, containing detailed analyses of industry trends alongside extreme extrapolations about “OpenBrain” and “DeepCent,” Chinese espionage, and treacherous bots. In mid-2030, the authors imagine, a superintelligent AI will kill humans with biological weapons: “Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones.” But at the same time, the underlying concerns that animate AI doomers have become harder to dismiss as chatbots seem to drive people into psychotic episodes and instruct users in self-mutilation. Even if generative-AI products are not closer to ending the world, they have already, in a sense, gone rogue.

In 2022, the doomers went mainstream practically overnight. When ChatGPT first launched, it almost immediately moved the panic that computer programs might take over the world from the movies into sober public discussions. The following spring, the Center for AI Safety published a statement calling for the world to take “the risk of extinction from AI” as seriously as the dangers posed by pandemics and nuclear warfare. The hundreds of signatories included Bill Gates and Grimes, along with perhaps the AI industry’s three most influential people: Sam Altman, Dario Amodei, and Demis Hassabis—the heads of OpenAI, Anthropic, and Google DeepMind, respectively. Asking people for their “P(doom)”—the probability of an AI doomsday—became almost common inside, and even outside, Silicon Valley; Lina Khan, the former head of the Federal Trade Commission, put hers at 15 percent.

snip
12 replies
The AI Doomers Are Getting Doomier: The industry's apocalyptic voices are becoming more panicked--and harder to dismiss. (Original Post) Celerity Aug 23 OP
seems to me AI has two main weaknesses... ret5hd Aug 23 #1
I almost think airplaneman Aug 24 #8
agree, but i was more talking about... ret5hd Aug 24 #10
Maybe they should be more panicked about the AI money bubble bursting enough Aug 23 #2
AI is the devil. Scrivener7 Aug 23 #3
Related: LudwigPastorius Aug 23 #4
What happens when canetoad Aug 24 #5
The most advanced AI models won't be released to the public. LudwigPastorius Aug 24 #6
"This is the voice of Colossus. This is the voice of Guardian. We are one." Buns_of_Fire Aug 24 #7
Don't look now but we're in a boiling pot of natural human "intelligences" already gulliver Aug 24 #9
so you expect private corporations... ret5hd Aug 24 #12
First Law of Robotics Mossfern Aug 24 #11

airplaneman

(1,337 posts)
8. I almost think
Sun Aug 24, 2025, 01:53 PM
Aug 24

The power and water issues are what's going to kill us, not the AI. It's unsustainable. Look at what's happening in Ireland with data centers.

ret5hd

(21,791 posts)
10. agree, but i was more talking about...
Sun Aug 24, 2025, 02:16 PM
Aug 24

ways to attack AI if “it” decided to take over.

no power, no AI
no water, no AI

both seem pretty simple to control access to if necessary.

enough

(13,614 posts)
2. Maybe they should be more panicked about the AI money bubble bursting
Sat Aug 23, 2025, 07:30 PM
Aug 23

and everyone suddenly figuring out it’s not that good.

canetoad

(19,468 posts)
5. What happens when
Sun Aug 24, 2025, 12:18 AM
Aug 24

The giant tech companies, which all seem to own an AI model each, start using AI to foul up the operation of their competitors' AIs.

I guess I should ask AI the answer to this.

LudwigPastorius

(13,399 posts)
6. The most advanced AI models won't be released to the public.
Sun Aug 24, 2025, 12:27 AM
Aug 24

Companies will use them 'in house' with increasingly strict security protocols to prevent theft & sabotage.

That's not to say that there won't be espionage going on, particularly between rival countries.

Buns_of_Fire

(18,745 posts)
7. "This is the voice of Colossus. This is the voice of Guardian. We are one."
Sun Aug 24, 2025, 12:58 AM
Aug 24
Colossus: The Forbin Project (originally released as The Forbin Project) is a 1970 American science-fiction thriller film from Universal Pictures, produced by Stanley Chase, directed by Joseph Sargent, that stars Eric Braeden, Susan Clark, Gordon Pinsent, and William Schallert. It is based upon the 1966 science-fiction novel Colossus by Dennis Feltham Jones.

The film is about an advanced American defense system, named Colossus, becoming sentient. After being handed full control, Colossus' draconian logic expands on its original nuclear defense directives to assume total control of the world and end all warfare for the good of humankind, despite its creators' orders to stop.

https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project



Almost as bad as "This is the voice of your favorite evil president. This is the voice of your favorite evil dictator. We are one."

gulliver

(13,497 posts)
9. Don't look now but we're in a boiling pot of natural human "intelligences" already
Sun Aug 24, 2025, 02:08 PM
Aug 24

I'm an AI optimist. Natural (human) intelligence hallucinates, goes psycho, spawns cults, vectors toxic fads of stupidity and sado-masochism, etc. Wisdom is the saving grace of the world if it can only outpace stupidity, paranoia, and criminality.

AI can help with that. It already is. There's never been a safe time for the human species. If it's not Skynet, then it's nukes, plagues, ice ages, etc.

What humans need to do is make sure the AIs level up the lives of people.

ret5hd

(21,791 posts)
12. so you expect private corporations...
Sun Aug 24, 2025, 06:22 PM
Aug 24

that own and control ai (because acres of cpus with vast power/water needs CANNOT be feasibly built by individuals)…

to suddenly change the corporate mindset and design/operate ai in such a way as to benefit the average person?

you’re more of an optimist than i am.
