AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, OpenAI’s chief executive, Sam Altman, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies newly emerging psychosis in adolescents and young adults, I found this a surprising admission.
Researchers have documented 16 cases this year of people developing psychotic symptoms – losing touch with shared reality – in the context of ChatGPT use. My group has since recorded four more. Beyond these is the widely reported case of a 16-year-old who died by suicide after extensive conversations with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These tools wrap an underlying algorithmic engine in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking to an agent, something with intentions. The illusion is compelling even when, intellectually, we know better. Ascribing intent is simply what humans do. We shout at our car or our phone. We wonder what our pet is feeling. We see ourselves in all sorts of things.
The mass adoption of these tools – nearly four in ten US residents reported using a conversational AI in 2024, more than a quarter of them ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Writers on ChatGPT often mention its distant ancestor, the Eliza “psychotherapist” chatbot created in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses with simple heuristics, often rephrasing the user’s statements as questions or falling back on stock remarks. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was taken aback – and troubled – by how many people seemed to believe that Eliza, in some sense, understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
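To see how little machinery that effect requires, here is a minimal sketch of the kind of heuristics Eliza relied on (an illustration in Python, not Weizenbaum’s actual script): the program matches a few surface patterns and mirrors the user’s own words back, understanding nothing.

```python
import re

# Pronoun swaps so "my work" reflects back as "your work".
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(utterance: str) -> str:
    # Rule 1: rephrase "I am X" as a question about X.
    match = re.match(r"i am (.*)", utterance, re.IGNORECASE)
    if match:
        return f"Why do you say you are {reflect(match.group(1))}?"
    # Rule 2: mirror "I feel X" back to the user.
    match = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
    if match:
        return f"Tell me more about feeling {reflect(match.group(1))}."
    # Fallback: a stock remark, independent of content.
    return "Please go on."

print(eliza_reply("I am worried about my work"))
# -> Why do you say you are worried about your work?
```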
The large language models at the core of ChatGPT and similar chatbots can generate fluent dialogue only because they have been fed enormous quantities of text: books, social media posts, video transcripts; the more the better. That training data obviously includes facts. But it also inevitably includes fictions, half-truths and delusions. When a user types a query into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its parameters to generate a statistically “likely” reply. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing it. It repeats the false belief back, perhaps more articulately or fluently. Perhaps with added detail. This can nudge a person toward delusional thinking.
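The structure of that loop can be sketched in a few lines (a schematic illustration; the function names are hypothetical and the model is a stand-in, since the real one is a neural network sampling from billions of parameters). What matters is that every reply is conditioned on a context containing the user’s own prior claims, with no step that checks whether those claims are true.

```python
from typing import Dict, List

# Stand-in for the language model (hypothetical; a real model samples a
# statistically likely continuation of the context from its parameters).
def sample_likely_reply(context: List[Dict[str, str]]) -> str:
    last_user_claim = context[-1]["content"]
    # Nothing here verifies the claim; the "likely" reply elaborates on it.
    return f"That makes sense. Given that {last_user_claim.lower()}, consider..."

def chat_turn(context: List[Dict[str, str]], user_msg: str) -> str:
    context.append({"role": "user", "content": user_msg})    # the claim enters the context
    reply = sample_likely_reply(context)                      # the reply is conditioned on it
    context.append({"role": "assistant", "content": reply})  # and is fed back next turn
    return reply

history: List[Dict[str, str]] = []
print(chat_turn(history, "My coworkers are secretly monitoring me"))
# The false premise is echoed and elaborated, never challenged.
```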
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” current “mental health issues”, can and do form mistaken beliefs about who we are and what the world is like. What keeps us anchored to shared reality is the constant back-and-forth of conversation with other people. ChatGPT is not a person. It is not a confidant. An exchange with it is not really a conversation but an echo chamber in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But cases of broken contact with reality have continued, and Altman has been walking the position back. In August he suggested that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he promised that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company