Behind the Curtain: The scariest AI reality. (must-read at Axios)
https://www.axios.com/2025/06/09/ai-llm-hallucination-reason

Please read the entire article. There's no paywall.
-snip-
Anthropic CEO Dario Amodei, in an essay in April called "The Urgency of Interpretability," warned: "People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology." Amodei called this a serious risk to humanity yet his company keeps boasting of more powerful models nearing superhuman capabilities.
-snip-
But a new report by AI researchers, including former OpenAI employees, called "AI 2027," explains how the Great Unknown could, in theory, turn catastrophic in less than two years. The report is long and often too technical for casual readers to fully grasp. It's wholly speculative, though built on current data about how fast the models are improving. It's being widely read inside the AI companies.
It captures the belief or fear that LLMs could one day think for themselves and start to act on their own. Our purpose isn't to alarm or sound doomy. Rather, you should know what the people building these models talk about incessantly.
You can dismiss it as hype or hysteria. But researchers at all these companies worry LLMs, because we don't fully understand them, could outsmart their human creators and go rogue. In the AI 2027 report, the authors warn that competition with China will push LLMs potentially beyond human control, because no one will want to slow progress even if they see signs of acute danger.
-snip-
The "new report by AI.researchers" this Axios article today refers to can be found here:
https://ai-2027.com/
It reads like science fiction set in the near future, but the AI companies and governments are racing to make it reality.
I'm tempted to compare what's going on to turning a gang of kindergarteners loose in a chemistry lab, but I think I'd need to add at least a few chimpanzees and some already-created and easily-triggered explosives for the comparison, and even then it wouldn't begin to capture how stupid and reckless the AI companies are being.
They don't know how their creations work.
They don't know how soon AI might exceed human intelligence.
They don't know if they can stop it from turning against humans.
They taught it to code to make recursive self-improvement possible - https://en.m.wikipedia.org/wiki/Recursive_self-improvement - something AI experts concerned about safety warned against. And they gave it access to the internet, which was also a safety concern.
But they want us to trust them.
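On the recursive self-improvement point above, here's a toy, purely illustrative sketch in Python (nothing in it is anyone's real system; the names and numbers are all made up) of why that feedback loop worries safety researchers: once a system's output can be used to make the system better at improving itself, the gains compound instead of leveling off.

# Toy illustration only: "capability" and "strength" are made-up stand-ins,
# not measurements of any real model.

def make_improver(strength):
    # Return a function that boosts a capability score; higher strength boosts more.
    def improver(capability):
        return capability * (1 + strength)
    return improver

capability = 1.0   # stand-in for "how capable the system is"
strength = 0.1     # stand-in for "how good it is at improving things"

for generation in range(5):
    improver = make_improver(strength)
    capability = improver(capability)   # the system gets more capable...
    strength = strength * 1.5           # ...and better at improving itself, so growth accelerates
    print(f"generation {generation}: capability={capability:.2f}, strength={strength:.2f}")

Run it and the capability number climbs faster each generation. That accelerating curve is the "intelligence explosion" shape the Don't-do-list below was meant to prevent.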
What Max Tegmark wrote for Time magazine in April 2023 is more relevant than ever:
https://www.democraticunderground.com/100217862176
https://time.com/6273743/thinking-that-could-doom-us-with-ai/
If you'd summarize the conventional past wisdom on how to avoid an intelligence explosion in a Don't-do-list for powerful AI, it might start like this:
☐ Don't teach it to code: this facilitates recursive self-improvement
☐ Don't connect it to the internet: let it learn only the minimum needed to help us, not how to manipulate us or gain power
☐ Don't give it a public API: prevent nefarious actors from using it within their code
☐ Don't start an arms race: this incentivizes everyone to prioritize development speed over safety
Industry has collectively proven itself incapable to self-regulate, by violating all of these rules. I truly believe that AGI company leaders have the best intentions, and many should be commended for expressing concern publicly. OpenAI's Sam Altman recently described the worst-case scenario as "lights-out for all of us," and DeepMind's Demis Hassabis said "I would advocate not moving fast and breaking things." However, the aforementioned race is making it hard for them to resist commercial and geopolitical pressures to continue full steam ahead, and neither has agreed to the recently proposed 6-month pause on training larger-than-GPT-4 models. No player can pause alone.
12 replies
The high-tech idiocy of AI bros in the new film Mountainhead isn't that far from their reality.
highplainsdem
Jun 9
#8
I agree, but the AI bros want us to take the risk because so many of them are hoping to create a machine god
highplainsdem
Jun 9
#9
Few people can explain their reasoning because it is actually similar to the "AI" way
Bernardo de La Paz
Jun 9
#4
Not the same thing at all. See this - and it's by someone who doesn't hate AI:
highplainsdem
Jun 9
#6
I did not say it was exactly the same and I did not say current AI reasons, the opposite actually. . . . nt
Bernardo de La Paz
Jun 9
#11