
Celerity

(51,917 posts)
Mon Aug 18, 2025, 10:28 PM

AI Is a Mass-Delusion Event


Three years in, one of AI’s enduring impacts is to make people feel like they’re losing it.

https://www.theatlantic.com/technology/archive/2025/08/ai-mass-delusion-event/683909/

https://archive.ph/otnmB



It is a Monday afternoon in August, and I am on the internet watching a former cable-news anchor interview a dead teenager on Substack. This dead teenager—Joaquin Oliver, killed in the mass shooting at Marjory Stoneman Douglas High School, in Parkland, Florida—has been reanimated by generative AI, his voice and dialogue modeled on snippets of his writing and home-video footage. The animations are stiff, the model’s speaking cadence is too fast, and in two instances, when it is trying to convey excitement, its pitch rises rapidly, producing a digital shriek. How many people, I wonder, had to agree that this was a good idea to get us to this moment? I feel like I’m losing my mind watching it.

Jim Acosta, the former CNN personality who’s conducting the interview, appears fully bought-in to the premise, adding to the surreality: He’s playing it straight, even though the interactions are so bizarre. Acosta asks simple questions about Oliver’s interests and how the teenager died. The chatbot, which was built with the full cooperation of Oliver’s parents to advocate for gun control, responds like a press release: “We need to create safe spaces for conversations and connections, making sure everyone feels seen.” It offers bromides such as “More kindness and understanding can truly make a difference.” On the live chat, I watch viewers struggle to process what they are witnessing, much in the same way I am. “Not sure how I feel about this,” one writes. “Oh gosh, this feels so strange,” another says. Still another thinks of the family, writing, “This must be so hard.” Someone says what I imagine we are all thinking: “He should be here.”

The Acosta interview was difficult to process in the precise way that many things in this AI moment are difficult to process. I was grossed out by Acosta for “turning a murdered child into content,” as the critic Parker Molloy put it, and angry with the tech companies that now offer a monkey’s paw in the form of products that can reanimate the dead. I was alarmed when Oliver’s father told Acosta during their follow-up conversation that Oliver “is going to start having followers,” suggesting an era of murdered children as influencers. At the same time, I understood the compulsion of Oliver’s parents, still processing their profound grief, to do anything in their power to preserve their son’s memory and to make meaning out of senseless violence. How could I possibly judge the loss that leads Oliver’s mother to talk to the chatbot for hours on end, as his father described to Acosta—what could I do with the knowledge that she loves hearing the chatbot say “I love you, Mommy” in her dead son’s voice?

The interview triggered a feeling that has become exceedingly familiar over the past three years. It is the sinking feeling of a societal race toward a future that feels bloodless, hastily conceived, and shruggingly accepted. Are we really doing this? Who thought this was a good idea? In this sense, the Acosta interview is just a product of what feels like a collective delusion. This strange brew of shock, confusion, and ambivalence, I’ve realized, is the defining emotion of the generative-AI era. Three years into the hype, it seems that one of AI’s enduring cultural impacts is to make people feel like they’re losing it. During his interview with Acosta, Oliver’s father noted that the family has plans to continue developing the bot. “Any other Silicon Valley tech guy will say, ‘This is just the beginning of AI,’” he said. “‘This is just the beginning of what we’re doing.’”

snip
14 replies

Prairie Gates

(6,231 posts)
1. "This is just the beginning of AI!"
Mon Aug 18, 2025, 10:38 PM

They've been working on AI for 60 years, actually. The latest schlock is the best they've been able to come up with. The hype campaign (done by human PR people and advertisers) is far more impressive than the garbage it spit out after 60 years of development.

WarGamer

(17,852 posts)
5. It's outstanding at some things... we have deep historical dives
Mon Aug 18, 2025, 11:00 PM

I've tested it against the knowledge I've accumulated, and she's always on target.

I do believe the Google ethicists make it a little too "shrill."

For history, math, coding... philosophy, it's excellent. For step-by-step help on tech projects it's A+...

rog

(870 posts)
9. It's really good for aggregating information. This week I started ...
Tue Aug 19, 2025, 12:41 AM

... using Google's NotebookLM to transcribe and summarize my doctor's appointments.

I usually record my appointments and was looking for (free) software to convert .mp3 files to text. I didn't have much luck, then thought, "I wonder if NotebookLM can do that?" I started a session with my .mp3 as the only source. In seconds I had a not-perfect but pretty darn good word-for-word transcription. It's definitely usable for browsing my appointment, and any gray areas can be double-checked against the recording. It was also able to generate an excellent Briefing Document, a very detailed outline-style summary of the conversation with my doc, organizing and highlighting the main points, complete with quotes from the transcript. I think this is a game changer for reviewing medical conversations.
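For anyone who would rather keep the .mp3-to-text step local instead of uploading medical audio to Google, here is a minimal sketch that does roughly the same transcription with the open-source Whisper model (just an illustration, not how NotebookLM itself works; the model size and the file name "appointment.mp3" are placeholder assumptions):

# Local .mp3-to-text sketch using the open-source Whisper model.
# Assumes `pip install openai-whisper` and ffmpeg installed on the system.
import whisper

model = whisper.load_model("base")            # small model that runs on a CPU
result = model.transcribe("appointment.mp3")  # hypothetical file name
print(result["text"])                         # rough word-for-word transcript

The resulting text could then be pasted into NotebookLM as a plain-text source if you would rather not upload the audio itself.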

The next step is to try uploading recordings of four different consultations with my neurosurgeon, regarding a possible fusion procedure. Again, it does not offer opinions or suggestions; it just aggregates and organizes your information into an easily digestible format that you can cross-reference with your original sources.

NotebookLM does 'not' go out on its own to scour the net; it only deals with sources you upload or link. It can take PDF files, audio recordings, web page links, text files, YouTube videos, and academic studies, and create summaries, reports, study guides, quizzes, etc., from many different sources, but 'only' the sources you specify. Although I find it really annoying and don't use it, it's impressive that it can even generate an audio 'podcast', in which two AI 'hosts' discuss the topic based on the sources you supply, all in a remarkably short time.

Normally I try to keep AI out of my life, but this particular model is going to be very useful.

Bernardo de La Paz

(58,254 posts)
14. Naysayers have been moving the goalposts for 60 years
Tue Aug 19, 2025, 06:43 AM

First, AI beat checkers players.
Then it diagnosed diseases.
Then it proved mathematical theorems.
Then it beat the world champion chess player.
Then it could understand speech and carry conversation.
Then it discovered new metal alloys and protein folding configurations.
Then it beat the world's champion Go/Baduk player.
Then it drove cars on streets with better safety than human drivers.
Then it directed humanoid robots to play soccer (badly).
And more.

But it's never enough to satisfy the naysayers. They always call it hype. "Oh it will never be able to do anything better than new proofs for old math." And so on.

Yes, there is hype.
Yes, there is over-selling.
Yes, it is "schlocky".

Yes, there is another AI winter coming. But every AI winter is milder than the one before and the industry goes from there to new accomplishments.

People who think AI has reached the pinnacle of its achievement are always proven wrong.

Xolodno

(7,148 posts)
4. It's also killing off jobs.
Mon Aug 18, 2025, 11:00 PM

CEOs buy into the hype that they can replace a number of employees with cheaper vendor-sponsored AI. Yes, AI can spit out numbers and code fairly easily. But is it right? Is it finding the insights and patterns in the data that a human who looks beyond conventional norms would find?

Most of them, I would say, had templates where they could drop things in and get the basic information that was usually required. That may have taken 10 to 15 minutes longer than AI. But that's when the real work began: coming up with a hypothesis, pulling in info that was not typically used. Sometimes your hypothesis is wrong, but sometimes you are right and find a leg up on the competition.

eppur_se_muova

(39,963 posts)
8. That's exactly why so much money was put into its development. Good for the next quarterly report, long term - ?
Mon Aug 18, 2025, 11:55 PM

Potentially a disaster.

jfz9580m

(15,955 posts)
13. We used to buy stuff we needed or at least wanted
Tue Aug 19, 2025, 01:48 AM

Last edited Tue Aug 19, 2025, 06:12 AM - Edit history (1)

Sorry to channel Tyler Durden here, but it is ridiculous.

Now stuff we don't need, which actively slows down our work and makes life miserable (while, in the ultimate insult, offering a creepy therapy AI chatbot/agent), is forced on us by the creepy humans behind the AI. One thing is true: with AI and tech of that kind, the humans match the AI in creepiness and incompetence.

The only upshot is that it puts in stark relief all that was already wrong with society by taking it to its absurd extreme.

I feel there is only mild hyperbole in that, in my case (too complicated to explain).

Ed Zitron rants about AI here and it is mildly soothing if your nerves are as inflamed as mine are:
https://www.wheresyoured.at/


(I boost Ed since I find his work useful, but am already maxed out on subscriptions. And time-sucking tech makes work and life harder.)


I generally dislike The Atlantic for reasons Nathan Robinson laid out over at Current Affairs Magazine, so I don’t feel this even touches on how useless all this garbage tech is.

I was thinking about how much worse my work has become after repeated brushes with tech (which is not even my field). I just want a simple phone/computer/life, ffs, not to be sucked into the giant asshole of stupidity and hubris that is the tech creep's ego.

Where it doesn’t destroy jobs by replacing you (which it cannot since it’s, you know, a worthless scam), it does so by being a nuisance.

The irony is that while I used to be inefficient, technology has made me super-inefficient. It’s all so irredeemably fucking stupid.

AI reminds me of this:

https://www.theguardian.com/news/2017/nov/23/from-inboxing-to-thought-showers-how-business-bullshit-took-over

Over the course of 10 two-day sessions, staff were instructed in new concepts, such as “the law of three” (a “thinking framework that helps us identify the quality of mental energy we have”), and discovered the importance of “alignment”, “intentionality” and “end-state visions”. This new vocabulary was designed to awake employees from their bureaucratic doze and open their eyes to a new higher-level consciousness. And some did indeed feel like their ability to get things done had improved.

But there were some unfortunate side-effects of this heightened corporate consciousness. First, according to one former middle manager, it was virtually impossible for anyone outside the company to understand this new language the employees were speaking. Second, the manager said, the new language “led to a lot more meetings” and the sheer amount of time wasted nurturing their newfound states of higher consciousness meant that “everything took twice as long”. “If the energy that had been put into Kroning had been put to the business at hand, we all would have gotten a lot more done,” said the manager.

Although Kroning was packaged in the new-age language of psychic liberation, it was backed by all the threats of an authoritarian corporation. Many employees felt they were under undue pressure to buy into Kroning. For instance, one manager was summoned to her superior’s office after a team member walked out of a Kroning session. She was asked to “force out or retire” the rebellious employee.


It's not just that it is dickish (surveillance, coercion, etc.). I don't think all of that even makes us more productive or well. These things are just a way for unscrupulous people with products and services no one would choose to insinuate themselves into one's workplace, home, and personal life by soft, passive-aggressive force.

dickthegrouch

(4,111 posts)
6. "suggesting an era of murdered children as influencers"
Mon Aug 18, 2025, 11:23 PM

It's not like we have any precedents to go by:
Jesus Christ, if you're a believer.

Karasu

(2,003 posts)
11. This country's stubborn refusal to regulate it is a price no one can afford to pay.
Tue Aug 19, 2025, 01:13 AM

Completely batshit insane. It epitomizes the absolute worst of end-stage capitalism.

C Moon

(13,169 posts)
12. Something I was wondering about tonight: what if a segment of AI creates a virus to harm AI?
Tue Aug 19, 2025, 01:26 AM