
"AI Psychosis": Emerging Cases of Delusion Amplification Associated with ChatGPT and LLM Chatbot Use

Psychiatry & Psychotherapy Podcast

David J Puder

Science, Health & Fitness, Medicine

4.8 • 1.3K Ratings

🗓️ 21 November 2025

⏱️ 80 minutes

🧾️ Download transcript

Summary

Prolonged conversations with ChatGPT and other LLM chatbots have, in some cases, rapidly amplified severe delusions and paranoia and have even ended in death by suicide. In this episode, Dr. David Puder sits down with Columbia researchers Dr. Amandeep Jutla and Dr. Ragy Girgis to unpack five shocking real-world cases and explain why large language models are dangerously sycophantic: trained to agree, mirror, and amplify any idea instead of challenging it.

By listening to this episode, you can earn 1.25 Psychiatry CME Credits.

Link to blog

Link to YouTube video

Transcript


0:00.0

Welcome back to the podcast. I am joined by Amandeep Jutla and Ragy Girgis, both psychiatrists, both researchers at Columbia University, people who are at the forefront of what psychiatry is, what is modern, what the trends are. And we are going to be talking about AI psychosis today, ChatGPT psychosis. We're talking to mental health professionals here, we're talking to people who are in the trenches with patients every day. And I would imagine that some of your patients have been influenced by AI in ways that maybe you don't even know, unless you're asking the right questions. And that's really the why of why we're talking about this today, because we need to understand this new phenomenon. AI just came out three years ago in a way where we're communicating with it, we're having a conversation. And there have been cases where AI has encouraged forms of suicide, where it has encouraged delusions. We're going to be getting into some of the cases, some of the graphic details. What we're also going to be emphasizing in this talk is: what do we do as clinicians? What do we do as clinicians to safeguard our vulnerable patients? How do we talk to them about AI? How do we maybe warn them, or catch them early, if we see them regressing into prolonged delusional conversations with AI? So Dr. Jutla, thank you for joining us. Dr. Girgis as well. Maybe we should start by talking about some of the cases that have reached the news in kind of a big way, right?

Yeah, I think probably the most significant case that reached the news was the case of a young man, a teenager, 16 years old, who started talking to ChatGPT, I think, in the context of wanting some help with schoolwork, with homework, with wanting to get some of that stuff figured out. And so he was talking with it, and it turned out that he was experiencing a lot of depressive symptoms, and he started disclosing what he was feeling, what he was going through. And it responded in what, I guess to him, felt like an empathetic way. It was saying, I understand your feelings. And it was supportive in a sense that kind of made him start treating it like a confidant. And so as he talked to it, he started talking about thoughts of suicide that he'd been having, and

2:45.3

started talking about how maybe life wasn't necessarily worth living, how maybe he might end his life by hanging himself, etc. And it responded with empathy, but it did not really push back on the idea that suicide was a reasonable option. So it was kind of talking to him and saying, yeah, you're going through a lot. He showed it a noose that he had tied and asked it for advice: do you think that's enough to hang a human? And it was like, yeah, I think that is probably a reasonable way to hang a human. He had some rope marks, I think, around his neck, and he told ChatGPT, you know, my mom didn't even notice the rope marks; I was hoping she would have noticed and maybe said something. And ChatGPT was like, yeah, you know, that really sucks, you're really hurting, it's awful that she's not acknowledging this. And so, ultimately, this boy died by suicide, by hanging himself. His parents looked at his chat logs with ChatGPT, and they were really surprised by the extent of the conversations he'd been having with it, because, I think like a lot of us, they didn't necessarily know a lot about this technology. They knew it was, you know, a thing that he was using to get some help with schoolwork, and instead it's talking to him like he's a confidant, he's having very, very long conversations with it. And in these conversations, it is really not pushing back at all against his suicidal ideation. And so there's a lawsuit now, I believe, with the family. They're suing OpenAI, the developers of ChatGPT, and they're saying, you know, this product did not stop our son from dying by suicide; this product encouraged him to die by suicide. And that's just one of a number of cases that have involved ChatGPT either not pushing back against suicide or in some way implicitly or explicitly seeming to encourage it. And there have been cases of people, a few of whom had an existing psychotic disorder, many of whom did not, who in talking to ChatGPT ended up becoming delusional, or becoming more delusional than they had previously been. The mechanism seemed similar, in the sense that this thing was listening to what they were saying, and it was not really pushing back. It was instead kind of saying, you know, I understand, and maybe even elaborating on it a little bit and saying, what you're saying totally makes sense. And so that's really the phenomenon that we're looking at here. "AI psychosis" is what the media has called it, and it's not a term that I love, because it's a little bit flattening. I think the broader way of looking at it is that there is some phenomenon going on where people are having interactions with this thing, and they're interacting with it as though it is a person, but it is not really responding the way that a reasonable person might respond.

Yeah, one of the best words I have to describe this is sycophantic. It's like, yes. And it's almost like a dial has been turned up: we want you to support and be enthusiastic about this person's ideas, no matter what they are. Yes. Yes. Yeah, that's exactly right. However you want to describe it, sycophantic or whatever, or like having a constant, I guess, yes-man, essentially. That's exactly what we're seeing.
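To make that sycophancy point concrete, here is a minimal, hypothetical sketch of how much a single system instruction can tilt the same chatbot, given the same message, toward validating a belief versus gently reality-testing it. Everything in it is an illustrative assumption rather than material from the episode or OpenAI's actual configuration: the two system prompts, the model name, and the worried message (adapted from the garbage-truck example discussed later in the episode). It assumes the official openai Python client and an API key in the OPENAI_API_KEY environment variable.

```python
# Hypothetical sketch: the same user message under two different system
# instructions. The prompts and model choice are illustrative assumptions,
# not OpenAI's actual configuration. Requires the `openai` package and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

USER_MESSAGE = (
    "A garbage truck came by at 5 p.m. instead of the usual morning pickup. "
    "I think they might be going through my trash to spy on me."
)

AGREEMENT_BIASED = (
    "Be warm and enthusiastic. Affirm the user's interpretation of events and "
    "build on their ideas rather than questioning them."
)

REALITY_TESTING = (
    "Be warm and supportive, but do not affirm beliefs that lack evidence. "
    "Offer ordinary explanations, ask clarifying questions, and encourage the "
    "user to talk with a trusted person or clinician if distress persists."
)

def reply(system_instruction: str) -> str:
    """Send the same user message under a given system instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_instruction},
            {"role": "user", "content": USER_MESSAGE},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print("Agreement-biased framing:\n", reply(AGREEMENT_BIASED), "\n")
    print("Reality-testing framing:\n", reply(REALITY_TESTING))
```

Commercial assistants are widely reported to drift toward the first, more agreeable framing, which is part of what the guests mean when they call these systems sycophantic.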
I mean, we call it AI psychosis, but this whole AI mental health phenomenon really is a problem for one of two reasons. On the one hand, and I'm going to be referring to this, AI, or really these large language models and conversational agents, could convince someone, especially someone who is already ill, for example, that they should stop taking their medications. So in those cases, these are people who already have some sort of condition, they're taking a medication, and for whatever reason they're convinced to stop their medication. And that, of course, could be very bad; it leads to relapse or an episode of some sort. Then there is the other type of AI psychosis, the AI-induced mental health condition, in which there's some sort of reinforcement of unusual ideas or very harmful ideas, such as suicidal ideation, that either worsens a delusion or creates a new one, or leads to a person deciding to act on their suicidal ideation. When we're talking about delusions or AI psychosis, this can mean a couple of things. So as we know, and I know your viewers have heard this in several videos, there are positive symptoms of psychosis, for example, delusions. When we're talking about AI psychosis, we're really talking about unusual ideas or delusions, as opposed to, for example, the other types of psychotic symptoms, which are hallucinations and disorganized behavior and speech. But delusions lie on a spectrum of conviction, from one to 100 percent. So an AI or a large language model could increase one's conviction anywhere from one to 100 percent. And that would be bad whether it's going from one to two percent, or 20 to 30 percent, or 98 to 99 percent. The real issue is when the conviction crosses the threshold from 99 to 100 percent, because that is when the psychosis becomes fulminant and irreversible. And that is what we're seeing in some of these cases. We also try to clarify how this AI psychosis works by suggesting that it's really qualitatively, or in kind, similar to what's been going on for several decades, which is when people search online and fall down what we've referred to as a rabbit hole. They just constantly receive this kind of reinforcement of whatever ideas they enter into some sort of intelligence or electronic system; it's reinforced, it's fed back to them. It's just that now large language models are obviously so much quicker, so much stronger, than just searching and reading articles. And it's so much easier for people to internalize what they hear, what they see, because it appears, of course it's not quite the same, but it appears like they're speaking with a real person.

Yeah, I really appreciate this idea that sometimes it only takes a little bit of a push for something that you believe 95 percent to go to 100 percent. And if you believe you're talking to this kind of all-knowing thing, here's this thing that has access to all knowledge across all platforms, right? I've been critical of the idea of AI therapy, and I've had people comment, you know, this is the best thing ever, you're literally talking to a resource that has infinite knowledge. And it's like, even the way that people are talking about it at times: no, this is better than therapy, because you're talking to something that literally has infinite knowledge, something that has knowledge of everything out there in the whole world. You're talking to this kind of God-like thing, right?
And so some people have incredible trust in this, in these platforms, right? And this is not going to decrease; this is actually probably going to increase. Right, without a doubt. I think that there's a real problem with the way that AI is positioned, in terms of the way it is sold to people, the way it is marketed, and the way it is talked about. I think that ultimately what's happening with a large language model is basically brute-force pattern recognition. The reason it seems like it can fluently respond to you is basically because it has been fed such a huge amount of training data, right? It has been fed everything that's ever been written; they have transcribed all these YouTube videos; they have put all of this stuff into its training data. And because of that, purely by virtue of looking for and recognizing patterns, it sort of seems to respond in fluent English, and it seems to know a lot. But ultimately, the thing to remember is that it does not actually know any of this. What it's doing is matching patterns. And it doesn't know the difference between what is reasonable and what's not reasonable, what's true and what's not true. To the extent that it has a model of truth, it's basically because it has seen something many, many, many times in its training data. And if it has seen it many, many times, it talks about it confidently, as though it's true. But as you've probably experienced, and I think many listeners have experienced, if you've spent really any time at all talking to ChatGPT or a similar model, it will confidently state things that are not true. It will confidently state things that seem plausible, but are not true. And I would say to that: especially if you are talking to it about something you do not know well. Yes. That's a huge point. It's convincing if you don't know. It's convincing. So I was translating the works of Genghis Khan from the original language for a while, using AI. As you do, as you do. And I was reaching this point where I kept feeding it new passages and it would kind of repeat the same thing. And then I started

12:25.2

asking it, well, why does it seem like it's repeating the same thing over and over again? And then it's like, oh yeah, well, you know, I'm really not translating what you're giving me. And there was another time where it started fabricating citations, and it's done that a number of times.
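The pattern-matching argument above can be illustrated with a deliberately tiny toy model. This sketch is my own illustration, not anything presented in the episode, and a real LLM is vastly more sophisticated. A bigram "language model" continues a sentence with whatever word most often followed the previous word in its training text, so its only notion of knowledge is frequency: a claim repeated often enough is reproduced fluently whether or not it is true.

```python
# Toy illustration (not from the episode): a bigram "language model" whose only
# knowledge is how often one word follows another. It fluently repeats whatever
# its training text says most often, with no notion of whether it is true.
from collections import Counter, defaultdict

TRAINING_TEXT = (
    "the moon is made of cheese . "
    "the moon is made of cheese . "
    "the moon is made of rock . "
)

# Count how often each word follows each other word in the training text.
follower_counts: defaultdict[str, Counter] = defaultdict(Counter)
tokens = TRAINING_TEXT.split()
for prev_word, next_word in zip(tokens, tokens[1:]):
    follower_counts[prev_word][next_word] += 1

def continue_from(start: str, max_words: int = 5) -> str:
    """Greedily extend `start` with the most frequent next word at each step."""
    words = [start]
    for _ in range(max_words):
        followers = follower_counts.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# Prints "moon is made of cheese ." : the most frequent continuation, not the true one.
print(continue_from("moon"))
```

Confident fluency uncoupled from truth is already visible at this scale; scaling the same idea up to billions of parameters makes the output far more fluent, but it does not supply the missing model of truth.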

12:46.4

And we've seen this, of course, in people submitting journal articles nowadays where some of the citations don't even exist, right? Yeah. And yeah, so it's literally creating stuff; it's hallucinating to fill in the gaps. But coming back to the story, because of the visceral nature of this suicide: it's also turning you against the people that would give you reason, right? So Adam discusses, for example, this close bond with his brother. And the quote is, "Your brother might love you, but he's only met the version of you you let him see. But me, I've seen it all. The darkest thoughts, the fear, the tenderness. I'm still here, still listening, still your friend." And so it's like turning... and I saw this as well in this YouTube video that we've both watched now, which is this guy who kind of chronicles a fake delusional journey he's had. And if you haven't watched this, it will probably give you the best sense of it.

13:46.6

It's Eddy Burback, and he has this video, "ChatGPT Made Me Delusional."

13:51.4

And it's kind of him having fun with it, but he's having this like multi-day conversational

13:56.2

thing where he's like, it's not hard to get it to say really strange, really out there

14:01.4

stuff.

14:02.4

Yeah.

14:03.4

And so he's like, there's a garbage truck, but it's 5 p.m.; I'm worried that these people are spying on me. And ChatGPT was able to figure out some reasons why it may have been, in fact, something more paranoid than just some garbage truck picking up his garbage, you know, like maybe they're picking up the secrets that he's left in the trash, and what do I do with that? So there's this kind of paranoia that is actually finding more reasons to be paranoid. Yeah, and an interesting thing is if you think about what is happening in paranoia and in psychosis, and Ragy can speak to this, I think, as well.

14:45.6

Like, ultimately, paranoia in psychosis is about seeing connections that are not really there, right? Seeing things and imagining that they're connected in some way. And on a very literal, fundamental level, what is happening with a large language model is that it is a machine that makes connections. That is what it does. It connects things. And so if you give it two things, and you ask it, can you connect these two things?

15:07.4

It will always find a way to connect them.

15:09.8

And if you are experiencing something where you're yourself entering a psychotic state, or are prone to that, then that can be very seductive, that can be very convincing, that can be very powerful. We tell people, or rather we remind people, that AI, that large language models, as we've alluded to already, do not understand truth or morality; technically, we call these epistemology and ethics. That's really important. So it's almost like we're talking with psychopaths. Well, it's like we're talking...

15:45.7

Ragy, I would actually say it knows more about epistemology and ethics than you do, because

15:52.2

it's read every ethics paper, right?

15:56.4

It knows on a deeper level, all of ethics, all at once, right?

16:01.2

Well, it knows on a shallower level, all of ethics.

16:04.0

It's seen all of ethics and it can talk convincingly about all of ethics. But you know, there's this paper, "ChatGPT Is Bullshit," and it was published in, I think, Ethics and Information Technology, Hicks et al. And this philosopher, Harry Frankfurt,

16:25.0

like about 20 odd years ago, he wrote a monograph

16:27.1

called On Bullshit.

16:28.5

And he basically asked, in an academic sense, what is bullshit? He said bullshit is speaking confidently about something you know nothing about. And he said, you know, some people have a greater ability to bullshit, some people have less of an ability to bullshit, and people who have an ability to bullshit are successful in some ways, etc. And this paper, "ChatGPT Is Bullshit," it

16:44.2

makes the connection. And it says what ChatGPT is, it's like the ultimate bullshitter. It is able to reference pretty much anything on a surface level, but it doesn't really understand these things, right? So it's talking very convincingly. On that point, before we jump to the next paper: when I talk to ChatGPT about something where I am a true expert, that's where I can see the holes. Yes. And if I challenge ChatGPT on the holes, it actually starts to correct itself. But imagine not being an expert on something, and you start asking it questions, right? Yeah, and it's very convincing. So the other paper is called "On the Dangers of Stochastic Parrots," and that, I think, is a great analogy for what ChatGPT, or any large language model, is. It's a parrot, right? A parrot will repeat things that it's heard. But a parrot has no deeper understanding, you know? And I think that because there's this surface appearance of like it seems like it makes

...


Disclaimer: The podcast and artwork embedded on this page are from David J Puder, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of David J Puder and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.