General Discussion
Behind the Curtain: The scariest AI reality (must-read at Axios)
https://www.axios.com/2025/06/09/ai-llm-hallucination-reason

Please read the entire article. There's no paywall.
Anthropic CEO Dario Amodei, in an April essay called "The Urgency of Interpretability," warned: "People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology." Amodei called this a serious risk to humanity, yet his company keeps boasting of more powerful models nearing superhuman capabilities.
-snip-
But a new report by AI researchers, including former OpenAI employees, called "AI 2027," explains how the Great Unknown could, in theory, turn catastrophic in less than two years. The report is long and often too technical for casual readers to fully grasp. It's wholly speculative, though built on current data about how fast the models are improving. It's being widely read inside the AI companies.
It captures the belief or fear that LLMs could one day think for themselves and start to act on their own. Our purpose isn't to alarm or sound doomy. Rather, you should know what the people building these models talk about incessantly.
You can dismiss it as hype or hysteria. But researchers at all these companies worry LLMs, because we don't fully understand them, could outsmart their human creators and go rogue. In the AI 2027 report, the authors warn that competition with China will push LLMs potentially beyond human control, because no one will want to slow progress even if they see signs of acute danger.
-snip-
The "new report by AI.researchers" this Axios article today refers to can be found here:
https://ai-2027.com/
It reads like science fiction set in the near future, but the AI companies and governments are racing to make it reality.
I'm tempted to compare what's going on to turning a gang of kindergarteners loose in a chemistry lab, but I'd need to add at least a few chimpanzees and some ready-made, easily triggered explosives, and even then the comparison wouldn't begin to capture how stupid and reckless the AI companies are being.
They don't know how their creations work.
They don't know how soon AI might exceed human intelligence.
They don't know if they can stop it from turning against humans.
They taught it to code, making recursive self-improvement possible - https://en.m.wikipedia.org/wiki/Recursive_self-improvement - something AI experts concerned about safety had warned against. And they gave it access to the internet, which those experts had also warned against.
But they want us to trust them.
What Max Tegmark wrote for Time magazine in April 2023 is more relevant than ever:
https://www.democraticunderground.com/100217862176
https://time.com/6273743/thinking-that-could-doom-us-with-ai/
☐ Don't teach it to code: this facilitates recursive self-improvement
☐ Don't connect it to the internet: let it learn only the minimum needed to help us, not how to manipulate us or gain power
☐ Don't give it a public API: prevent nefarious actors from using it within their code
☐ Don't start an arms race: this incentivizes everyone to prioritize development speed over safety
Industry has collectively proven itself incapable of self-regulating, by violating all of these rules. I truly believe that AGI company leaders have the best intentions, and many should be commended for expressing concern publicly. OpenAI's Sam Altman recently described the worst-case scenario as "lights-out for all of us," and DeepMind's Demis Hassabis said "I would advocate not moving fast and breaking things." However, the aforementioned race is making it hard for them to resist commercial and geopolitical pressures to continue full steam ahead, and neither has agreed to the recently proposed 6-month pause on training larger-than-GPT-4 models. No player can pause alone.

stopdiggin
(13,846 posts)
The people who create and work with this themselves claim to lack full understanding - and harbor grave concerns.
But no one is going to unilaterally disarm in the ongoing race.
highplainsdem
(56,525 posts)
that will be benign and will reward humanity, especially the AI prophets and high priests, with unimaginable riches and power, immortality (though that might require merging humans and machines), godlike powers, conquest of the universe, and every science fiction dream.
And some of those AI advocates view AI as a superior being or species that it's humanity's duty to create, even if humans become irrelevant or are destroyed in the process.
Karadeniz
(24,494 posts)
savior Superman. It was not Jesus's theology way back when. Jesus taught that individuals have the ability to choose soul values and save society and oneself from relying on the purely material as a super power. We've been conditioned for millennia to believe we're weak and need outside help.
Bernardo de La Paz
(56,274 posts)
LLMs are of course extremely limited: they generate output that, if reversed, would plausibly generate the prompt. That is to say, the output closely resembles a proper report or analysis, but it might include fake references and other "hallucinations" that strengthen the resemblance.
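To make that concrete, here's a toy sketch of autoregressive generation in Python. The bigram table and its probabilities are invented for illustration; a real LLM learns next-token probabilities across billions of parameters, but the loop is the same: repeatedly pick a plausible next token given everything generated so far.

```python
# Toy illustration (not a real LLM): autoregressive generation just
# repeatedly asks "what token is likely to come next?" given the text
# so far. The "model" here is a hand-made bigram table.
import random

random.seed(42)

# Hypothetical next-token probabilities, invented for illustration.
bigram = {
    "the":     [("report", 0.5), ("analysis", 0.5)],
    "report":  [("cites", 0.6), ("concludes", 0.4)],
    "cites":   [("sources", 1.0)],
    # A model rewarded only for plausible continuations can "cite"
    # things that don't exist -- that's a hallucination.
    "sources": [("[fake-ref-1]", 0.5), ("[real-ref]", 0.5)],
}

def generate(prompt_token, steps=4):
    tokens = [prompt_token]
    for _ in range(steps):
        choices = bigram.get(tokens[-1])
        if not choices:  # no known continuation: stop
            break
        words, probs = zip(*choices)
        tokens.append(random.choices(words, weights=probs)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the report cites sources [fake-ref-1]"
```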
The key is that they run on massive networks of linked artificial neurons (ANs), each with multiple inputs from other ANs and multiple outputs to other ANs. Training the network involves assigning weights (strengths) to the links between ANs, doing a run, evaluating the result, and adjusting the weights.
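Here is a minimal sketch of that cycle in Python with NumPy. The tiny two-layer network, the XOR task, and the learning rate are all invented for illustration; production models run the same assign/run/evaluate/adjust loop over billions of weights.

```python
# Minimal sketch of the training loop described above: assign weights,
# do a run, evaluate the result, adjust the weights, repeat.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, a classic problem a single neuron can't solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Assign weights (strengths) to the links between artificial neurons.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate: how far to adjust the weights each pass
for step in range(5000):
    # Do a run (forward pass through the network).
    h = sigmoid(X @ W1 + b1)    # hidden-neuron activations
    out = sigmoid(h @ W2 + b2)  # network output

    # Evaluate the result (difference from the targets).
    err = out - y

    # Adjust the weights (backpropagate the error downhill).
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0, keepdims=True)

print(np.round(out, 2))  # after training: close to [[0],[1],[1],[0]]
```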
This is, loosely, how the human brain works, and the brain was the inspiration for the breakthrough of large artificial neural networks (ANNs). Connections between neurons (synapses) are adjusted organically by learning from experience (and from other sources like teachers and books).
So when a progressive and a MAGA supporter look at current events and arrive at opposite conclusions, neither can truly explain how they got there. They can rationalize it by picking and choosing certain aspects. Rational, directed thinking can exert a great deal of influence on the process, but much of it is shaped by emotional experience.
It is called "explainable AI", but is basically unknowable and therefore an important research topic. The thing is that human thinking is also in large part unexplainable. We know the general mechanism but can't detail how a particular conclusion was arrived at.
Just like we treat humans as skin-encapsulated egos, we are likely going to have to treat much of AI as "black boxes".
andym
(5,989 posts)
Colossus: The Forbin Project, for real.
https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
"Colossus: The Forbin Project (originally released as The Forbin Project) is a 1970 American science-fiction thriller film from Universal Pictures, produced by Stanley Chase, directed by Joseph Sargent, that stars Eric Braeden, Susan Clark, Gordon Pinsent, and William Schallert. It is based upon the 1966 science-fiction novel Colossus by Dennis Feltham Jones.
The film is about an advanced American defense system, named Colossus, becoming sentient. After being handed full control, Colossus' draconian logic expands on its original nuclear defense directives to assume total control of the world and end all warfare for the good of humankind, despite its creators' orders to stop."
Based on the novel "Colossus"
https://en.wikipedia.org/wiki/Colossus_(novel)
especially this part:
"After the scientists activate the transmitter linking Colossus and Guardian, the computers immediately establish rapport with mathematics. They soon exchange new scientific theories beyond contemporary human knowledge, too rapidly for the Russians and Americans to monitor."
which ends with
"Colossus announces to the world its assumption of global control, and orders Forbin to build an even more advanced computer on the Isle of Wight, evicting its residents. While debating Colossus about its plans for improving humanity, Forbin learns of a nuclear explosion at a USNA missile silo; Colossus detected the sabotage and detonated the tampered warhead, and punishes the USNA by firing a Soviet missile at Los Angeles. Anguished, Forbin asks Colossus to kill him. Colossus assures Forbin that, in time, he and humanity will respect, and even love, Colossus. Forbin vows "Never!", but the novel ends with an ambiguous "Never?"