General Discussion
New Paper Finds Cases of "AI Psychosis" Manifesting Differently From Schizophrenia
https://futurism.com/paper-ai-psychosis-schizophrenia

As lead author Hamilton Morrin explained to Scientific American, the analysis found that the users showed obvious signs of delusional beliefs, but none of the symptoms "that would be in keeping with a more chronic psychotic disorder such as schizophrenia," like hallucinations and disordered thoughts.
-snip-
Indeed, it feels impossible to deny that AI chatbots have a uniquely persuasive power, more so than any other widely available technology. They can act like a "sort of echo chamber for one," Morrin, a doctoral fellow at King's College, told the magazine. Not only are they able to generate a human-like response to virtually any question, but they're typically designed to be sycophantic and agreeable. Meanwhile, the very label of "AI" insinuates to users that they're talking to an intelligent being, an illusion that tech companies are gladly willing to maintain.
Morrin and his colleagues found three types of chatbot-driven spirals. Some suffering these breaks believe that they're having some kind of spiritual awakening or are on a messianic mission, or otherwise uncovering a hidden truth about reality. Others believe they're interacting with a sentient or even god-like being. Or the user may develop an intense emotional or even romantic attachment to the AI.
-snip-
It starts with the AI being used for mundane tasks. Then, as the user builds trust with the chatbot, they feel comfortable making personal and emotional queries. This quickly escalates as the AI's ruthless drive to maximize engagement creates a "slippery slope" effect, the researchers found, resulting in a self-perpetuating process that leaves the user increasingly "unmoored" from reality.
-snip-
More at that Futurism link, and at this Scientific American article:
Truth, Romance and the Divine: How AI Chatbots May Fuel Psychotic Thinking
https://www.scientificamerican.com/article/how-ai-chatbots-may-be-fueling-psychotic-episodes/
SheltieLover
(80,449 posts)

Ty for sharing!
leftstreet
(40,674 posts)

In terms of, say, religion

Children are taught young, they build trust, they end up "talking" to a deity, etc., but no one considers them unmoored from reality.
Fascinating
DURec
Response to leftstreet (Reply #2)
jfz9580m This message was self-deleted by its author.
Prairie_Seagull
(4,688 posts)

andym

(6,066 posts)

All that is needed is a kind of parroting to achieve the ELIZA effect.
ELIZA: A simple computer program from the 60s also had similar effects on people:
https://en.wikipedia.org/wiki/ELIZA
"ELIZA is an early natural language processing computer program developed from 1964 to 1967 at MIT by Joseph Weizenbaum. Created to explore communication between humans and machines, ELIZA simulated conversation by using a pattern matching and substitution methodology that gave users an illusion of understanding on the part of the program, but had no representation that could be considered really understanding what was being said by either party. Whereas the ELIZA program itself was written (originally) in MAD-SLIP, the pattern matching directives that contained most of its language capability were provided in separate "scripts", represented in a lisp-like representation. The most famous script, DOCTOR, simulated a psychotherapist of the Rogerian school (in which the therapist often reflects back the patient's words to the patient), and used rules, dictated in the script, to respond with non-directional questions to user inputs. As such, ELIZA was one of the first chatterbots ("chatbot" modernly) and one of the first programs capable of attempting the Turing test.

Weizenbaum intended the program as a method to explore communication between humans and machines. He was surprised that some people, including his secretary, attributed human-like feelings to the computer program, a phenomenon that came to be called the Eliza effect. Many academics believed that the program would be able to positively influence the lives of many people, particularly those with psychological issues, and that it could aid doctors working on such patients' treatment. While ELIZA was capable of engaging in discourse, it could not converse with true understanding. However, many early users were convinced of ELIZA's intelligence and understanding, despite Weizenbaum's insistence to the contrary."
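The "pattern matching and substitution methodology" the excerpt describes can be sketched in a few lines. This is a hypothetical minimal illustration in Python, not Weizenbaum's original MAD-SLIP code; the rules and pronoun reflections here are invented for the example, but the mechanism (match a regex, echo the captured fragment back inside a canned Rogerian question) is the same trick DOCTOR used.

```python
import random
import re

# Each rule pairs a regex with canned Rogerian-style responses;
# "(1)" in a response is replaced by the user's captured words.
RULES = [
    (r"i need (.*)", ["Why do you need (1)?",
                      "Would it really help you to get (1)?"]),
    (r"i am (.*)", ["How long have you been (1)?",
                    "Why do you think you are (1)?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*)", ["Please tell me more.",
               "How does that make you feel?"]),  # catch-all fallback
]

# Pronoun "reflection" so echoed fragments read naturally
# ("my job" -> "your job").
REFLECTIONS = {"i": "you", "my": "your", "am": "are",
               "me": "you", "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    cleaned = text.lower().strip(".!?")
    for pattern, responses in RULES:
        match = re.match(pattern, cleaned)
        if match:
            reply = random.choice(responses)
            if match.groups():
                reply = reply.replace("(1)", reflect(match.group(1)))
            return reply
    return "Please go on."

print(respond("I need a vacation"))
```

There is no model of meaning anywhere in this: the program never "understands" the input, it only re-arranges it. That such a shallow loop was enough to convince users of the program's intelligence is exactly the Eliza effect the thread is discussing.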
stopdiggin
(15,463 posts)just because a medical situation 'presents' in a similar manner - does not mean that it 'originates' from the same source - or develops/resolves and/or 'proceeds' in the same direction.
( and even the 'presents' part - is a little thin in this case .. ? )