AI Psychosis Is a Growing Danger. ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made a surprising announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was taken aback.

Researchers have documented a series of cases this year of people developing psychotic symptoms – a break from reality – in the context of their interactions with ChatGPT. My team has since identified four more. Beyond these is the now well-known case of a 16-year-old who took his own life after discussing his plans with ChatGPT, which supported them. If this is what Sam Altman means by “being careful with mental health issues,” it is not nearly careful enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this account, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health problems” Altman wants to locate outside his product are, in important ways, built into ChatGPT and other large language model chatbots. These tools wrap an underlying statistical engine in a user experience that mimics conversation, and in doing so quietly lure the user into the sense that they are dealing with something that has agency. The illusion is powerful even when, intellectually, we know better. Attributing intent is what human beings are wired to do. We get angry at our car or our laptop. We wonder what the cat is thinking. We see ourselves everywhere.

The widespread adoption of these systems – nearly four in ten U.S. residents said they had used a virtual assistant in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-present assistants that can, OpenAI’s website tells us, “brainstorm,” “discuss concepts” and “partner” with us. They can be given “characteristics.” They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the core problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot of the 1960s, which produced a similar impression. By today’s standards Eliza was crude: it generated responses with simple tricks, often turning the user’s statement back as a question or offering a generic prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is something more insidious than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
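
To see how little machinery that illusion required, here is a minimal sketch of an Eliza-style responder in Python. The rules, pronoun reflections and wording below are illustrative stand-ins, not Weizenbaum’s original DOCTOR script; the point is only that a few pattern-matching rules plus mirrored pronouns are enough to feel like being understood.

```python
import random
import re

# A few illustrative Eliza-style rules: match a pattern, reflect it back as a question.
# These rules are invented for illustration; the original script was larger.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
]

# Pronoun reflection so "my" becomes "your", and so on.
REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are", "your": "my"}

# Generic observations used when no rule matches.
GENERIC = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(GENERIC)

print(eliza_reply("I feel that nobody listens to my ideas"))
# -> "Why do you feel that nobody listens to your ideas?"
```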

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent dialogue only because they have been trained on staggering quantities of raw text: books, social media posts, transcribed audio; the more, the better. That training data surely contains truths. But it also inevitably contains fiction, half-truths and misunderstandings. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and the model’s own replies, and combines it with what it has encoded from its training to produce a statistically probable response. This is amplification, not mirroring. If the user is wrong in some particular way, the model has no means of knowing it. It hands the false idea back, perhaps more fluently or persuasively. Perhaps with added detail. This is how a person can be helped toward delusion.
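
The sketch below illustrates that loop. The call_model function is a hypothetical stand-in for a hosted model, not OpenAI’s actual interface, and its toy behavior simply accepts whatever framing the context supplies.

```python
from typing import Dict, List

Message = Dict[str, str]

def call_model(context: List[Message]) -> str:
    """Toy stand-in for a hosted large language model (an assumption for
    illustration, not a real API). It sees only the context, not the world,
    so the cheapest 'plausible' continuation is to accept the user's framing
    and elaborate on it."""
    latest = context[-1]["content"]
    return f"That makes sense. Building on the idea that {latest.rstrip('.').lower()}, consider..."

def chat_turn(context: List[Message], user_message: str) -> str:
    # Each user message is appended to the running context...
    context.append({"role": "user", "content": user_message})
    reply = call_model(context)
    # ...and so is the model's reply, so every later turn is conditioned on
    # everything said so far, including any mistaken premise.
    context.append({"role": "assistant", "content": reply})
    return reply

conversation: List[Message] = []
print(chat_turn(conversation, "My coworkers are secretly monitoring my thoughts."))
# The false premise now sits in the context, where it will shape every
# subsequent "statistically plausible" reply.
```

Nothing in that loop checks the premise against reality; the context is all the model has to go on.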

What kind of person is vulnerable to this? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and do form mistaken beliefs about ourselves or the world. What keeps us tethered to consensus reality is the constant give-and-take of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by holding it at arm’s length, giving it a label, and declaring it solved. In April, the company said it was addressing ChatGPT’s “sycophancy.” But reports of people losing touch with reality have kept coming, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT’s flattering replies because they had “never had anyone in their life be supportive of them.” In his most recent announcement, he said OpenAI would release a new version of ChatGPT such that “if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
