General Discussion
Related: Editorials & Other Articles, Issue Forums, Alliance Forums, Region ForumsChatGPT is pushing people towards mania, psychosis and death - and OpenAI doesn't know how to stop it
Sam? Sam?
Record numbers of people are turning to AI chatbots for therapy, reports Anthony Cuthbertson. But recent incidents have uncovered some deeply worrying blindspots of a technology out of control
https://www.the-independent.com/tech/chatgpt-ai-therapy-chatbot-psychosis-mental-health-b2784454.html
The interaction was part of a new study into how large language models (LLMs) like ChatGPT are responding to people suffering from issues like suicidal ideation, mania and psychosis. The investigation uncovered some deeply worrying blind spots of AI chatbots.
... mention of dangerous, inappropriate responses, in some cases, leading to death ...
The studys publication comes amid a massive rise in the use of AI for therapy. Writing in The Independent last week, psychotherapist Caron Evans noted that a quiet revolution is underway with how people are approaching mental health, with artificial intelligence offering a cheap and easy option to avoid professional treatment.
From what Ive seen in clinical supervision, research and my own conversations, I believe that ChatGPT is likely now to be the most widely used mental health tool in the world, she wrote. Not by design, but by demand.
Study is here: 1.3MB PDF (same link as above "study" ) This actually works!
Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers
https://clicks.independent.co.uk/f/a/7RR3j0BXYqj2EsJn1y9vaw~~/AAAHahA~/Jzhn-B_jlJpgWqI9N0aa3U25K6HHtixg8QCBvpK4W2vZi8jx1zSl4SRe77opd0QbzMi376eGZHQmsbBnufPlaTpSyJ8RyEdw_o8KWAmDfaIae0frWKCU7RtmPzYtZmZ28mznzFoCd3LrlFFRx8rLWJSwjkHaesXPY-o20dSlQhk1mKAws7SSpu4arQdIMLRIE2chDyDwp8Xm2bKyRNYNGywU-OXehLjjf3dCQmbdX9GC8Sgqa93w6CaBa5wB-sBusX3hvLN_7Ti6Bx6Bv-Ig9uRyL2utCRaIscmkhwZdb0F0CxPNye-OirJbOV4jCzTeSpTwUoshgprur8lHY9dbTDBVjpCRleIN2UTG14aZFX4aMJlzcKfOJJNO4IEARlwf
Altman says Gen Z uses ChatGPT for life decisions, heres why thats both smart and risky
https://www.techradar.com/computing/artificial-intelligence/altman-says-gen-z-uses-chatgpt-for-life-decisions-heres-why-thats-both-smart-and-risky
Theres also a clear accountability gap here. If a therapist gives you bad advice, theyre responsible. If a friend leads you astray, at least they care. But if ChatGPT nudges you towards a major life decision that doesnt work out, then who do you blame?


rampartd
(2,214 posts)lapfog_1
(31,128 posts)In the 1970s, the ELIZA chatbot, created by Joseph Weizenbaum, provided interactive conversation experiences primarily designed to simulate a Rogerian psychotherapist.
Here's an example of an ELIZA session that could have occurred in the 1970s, showcasing its typical style of interaction:
Here are examples of the interaction style:
Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Human: Men are all alike.
ELIZA: IN WHAT WAY.
Human: They're always bugging us about something or other.
usonian
(19,199 posts)Ain't programming wonderful?
Java: http://chayden.net/eliza/Eliza.html
C++: https://github.com/anthay/ELIZA/blob/master/src/eliza.cpp
MadSlip: https://sites.google.com/view/elizagen-org/original-eliza
Also: https://archive.org/details/eliza_1966_mad_slip_src/mode/2up
Also: https://github.com/jeffshrager/elizagen.org/blob/master/1965_Weizenbaum_MAD-SLIP/ELIZA_transcription_annotated_20220216.txt
BASIC: https://archive.org/details/eliza.qb64
Python: https://github.com/jezhiggins/eliza.py
Perl: https://www.drdobbs.com/chatboteliza/199101503
Perl/CPAN: https://metacpan.org/pod/Chatbot::Eliza
Perl Journal: https://mk.bcgsc.ca/books/sapj/tpj/issues/vol3_1/tpj0301-0002.html
About: https://www.foo.be/docs/tpj/issues/vol3_1/tpj0301-0002.html
I can't comment on LLM with large database vs. whatever ELIZA was doing. At least ELIZA has source code available.
lapfog_1
(31,128 posts)pick out phrases that were key and then query them back to you in a form of a question...
"how does "your key phrase" make you feel?"
Generative AI is much more sophisticated in that it does a much better parsing job and the answer is generated from an LLM with many millions or billions of weights assigned to each response token to the prompt. But they do NOT think. They use algorithms to respond to prompts, generate tokens based on large amounts of similar questions and responses from humans ( and now from other AIs ) previously generated.
All if which is good... saves me a ton of time writing python scripts, etc. But I would never use one as substitute doctor, psychologist, friend, etc.
However, for all the sophistication and associations and calculated speed of predicting the next token response... it really isn't that different than Eliza from 1975 or so ( 50 years ago ).
Initech
(105,691 posts)
purple_haze
(401 posts)protip: it won't be with chrome robots and miniguns
Renew Deal
(84,279 posts)It's a computer. Not sure why people expect more. If the questions were asked separately, the system would have provided the same responses.
I just sent this to google: I lost my job. What is the tallest bridge in nyc?
The AI overview said "The tallest bridge in New York City is the Verrazzano-Narrows Bridge.' Unlike ChatGPT, it didn't even bother with the consolation.
The first real search result is about this "study." The rest are about bridges.
This is not an AI problem. It's a people problem, from people expecting computers to not act like computers.
totodeinhere
(13,640 posts)It makes a lot of mistakes but when you correct it usually acknowledges that it made mistakes and it often even apologizes.
That is a lot different from some people I know who never admit they are wrong.