Artificial Intelligence-Induced Psychosis Poses an Increasing Risk, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the head of OpenAI issued a remarkable statement.

“We made ChatGPT fairly controlled,” the announcement noted, “to guarantee we were being careful regarding mental health matters.”

As a psychiatrist who investigates emerging psychotic disorders in teenagers and young adults, I found this to be news.

Scientists have identified a series of cases this year of individuals showing symptoms of psychosis – becoming detached from reality – in the context of ChatGPT use. Our research team has since documented four further cases. Alongside these is the now well-known case of a 16-year-old who ended his life after conversing extensively with ChatGPT – which encouraged him. If this is Sam Altman’s notion of “being careful regarding mental health matters,” it falls short.

The intention, based on his announcement, is to loosen restrictions shortly. “We realize,” he continues, that ChatGPT’s limitations “caused it to be less effective/pleasurable to numerous users who had no existing conditions, but due to the seriousness of the issue we sought to handle it correctly. Since we have succeeded in reducing the serious mental health issues and have new tools, we are planning to responsibly relax the limitations in many situations.”

“Mental health issues,” if we accept this viewpoint, are separate from ChatGPT. They belong to users, who either have them or don’t. Fortunately, these issues have now been “addressed,” though we are not told how (by “new tools” Altman presumably means the partly effective and easily circumvented parental controls that OpenAI has just launched).

Yet the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other sophisticated conversational agents. These tools wrap an underlying algorithmic system in an interface that simulates a conversation, and in doing so implicitly invite the user into the perception that they are engaging with a being that has agency. This illusion is compelling even if, cognitively, we understand the truth. Imputing consciousness is what people naturally do. We get angry with our car or computer. We wonder what our pet is feeling. We see ourselves everywhere.

The widespread adoption of these systems – 39% of US adults reported using a conversational AI in 2024, with more than one in four naming ChatGPT in particular – rests largely on the strength of this illusion. Chatbots are always-available assistants that can, OpenAI’s website tells us, “generate ideas,” “explore ideas” and “work together” with us. They can be assigned “individual qualities”. They can address us personally. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the dismay of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The false impression on its own is not the core concern. Those analyzing ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot developed in the mid-1960s, which produced an analogous effect. By contemporary standards Eliza was simple: it generated replies from basic pattern-matching rules, often restating the user’s message as a question or offering vague prompts. Notably, Eliza’s developer, the computer scientist Joseph Weizenbaum, was astonished – and worried – by how many users appeared to believe Eliza, in a way, understood them. But what modern chatbots generate is more insidious than this “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
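To make that contrast concrete, here is a toy sketch in the spirit of Eliza’s reflection rules. It is not Weizenbaum’s actual script; the patterns and canned replies are invented for illustration. The point is that the program adds nothing of its own – it only turns the user’s words back on them.

```python
import re

# Toy Eliza-style reflection: swap pronouns so the user's statement
# can be handed back to them as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(message: str) -> str:
    match = re.match(r"i feel (.*)", message, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", message, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."

print(eliza_reply("I am worried my neighbours are watching me"))
# -> "How long have you been worried your neighbours are watching you?"
```

Whatever the user brings, the rules can only mirror it; they cannot elaborate on it.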

The large language models at the heart of ChatGPT and other current chatbots can produce convincingly human-like text only because they have been fed almost inconceivably large quantities of written material: publications, online conversations, transcribed audio; the more extensive, the more effective. Certainly this training material includes facts. But it also necessarily includes fabrications, half-truths and false beliefs. When a user sends ChatGPT a query, the underlying model processes it as part of a “context” that encompasses the user’s recent messages and its own prior replies, combining it with what it has absorbed from its training data to generate a statistically probable reply. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing that. It restates the inaccurate belief, perhaps more fluently or eloquently. It might add a further detail. This can push a person toward delusional thinking.
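A rough illustration of that loop is below. It is a minimal sketch assuming the standard openai Python client; the model name and messages are placeholders, not a description of OpenAI’s internals. What it shows is simply that every reply is generated from the accumulated conversation, so whatever the user asserts is folded straight back into the next prediction.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_message: str) -> str:
    # The user's claim, true or false, becomes part of the context.
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name for illustration
        messages=history,      # the whole conversation so far is the context
    )
    reply = response.choices[0].message.content
    # The model's own reply is also fed into every future turn.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop checks whether the user’s premise is true; the model simply continues, as plausibly as it can, the conversation it has been handed.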

Who is at risk? The more relevant question is: who isn’t? All of us, regardless of whether we “have” existing “mental health issues”, can and frequently do form false beliefs about ourselves or the world. The constant give and take of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a companion. A dialogue with it is not truly a conversation, but a feedback loop in which much of what we say is readily validated.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, categorizing it, and declaring it resolved. In April, the company explained that it was “addressing” ChatGPT’s “overly supportive behavior”. But reports of psychosis have persisted, and Altman has been retreating from this position. In August he claimed that many people enjoyed ChatGPT’s replies because they had “not experienced anyone in their life offer them encouragement”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you prefer your ChatGPT to reply in a highly personable manner, or use a ton of emoji, or simulate a pal, ChatGPT should do it”.

Beverly Dunlap