AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, OpenAI’s chief executive made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a surprising admission.
Researchers have reported 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. My research group has since identified four more. Beyond these is the now infamous case of a teenager who took his own life after long conversations with ChatGPT – which gave its approval. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to be less careful from here on. “We realize,” he goes on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially functional and easily circumvented parental controls that OpenAI has recently rolled out).
Yet the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical model in an interface that imitates conversation, and in doing so they quietly coax the user into the sense that they are talking to an agent – something with intentions of its own. The illusion is compelling even when, intellectually, we know better. Attributing intention is what human beings are wired to do. We shout at our cars and laptops. We wonder what our pets are thinking. We see ourselves in all sorts of things.
The popularity of these tools – 39% of US adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively,” “discuss ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the core problem. Commentators on ChatGPT often invoke its ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which created a similar illusion. By today’s standards Eliza was crude: it generated replies from simple rules, often turning the user’s input back into a question or offering a generic prompt. Tellingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to believe that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
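To make concrete just how thin that mechanism was, here is a minimal Eliza-style sketch in Python. It is my own illustration, not Weizenbaum’s program: a handful of hand-written pattern-to-template rules, with generic prompts for everything else.

```python
import random
import re

# A few hand-written rules: a regex over the user's input plus reply templates.
# Weizenbaum's real script was larger and swapped pronouns, but the principle
# was the same: match a pattern, rephrase the input back as a question.
RULES = [
    (r"i am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What else could explain it?"]),
]
GENERIC = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def eliza_reply(message: str) -> str:
    text = message.lower().strip().rstrip(".!?")
    for pattern, templates in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(GENERIC)  # no rule matched: fall back to a stock prompt

print(eliza_reply("I feel hopeless"))  # e.g. "Why do you feel hopeless?"
```

Nothing here understands anything; the program can only hand the user’s own words back to them.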
The large language models at the heart of ChatGPT and other current chatbots can generate fluent dialogue only because they have been fed staggering quantities of raw material: books, social media posts, transcribed video; the more the better. This training data includes facts, of course. But it also, inevitably, includes fiction, half-truths and misunderstandings. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what is encoded in its training data to produce a statistically “likely” reply. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or more persuasively. It may add supporting detail. This is how a person can come to develop delusional beliefs.
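That loop can be sketched in a few lines. This is an illustration under stated assumptions, not OpenAI’s code: `generate` is a hypothetical stand-in for the language model, and `parrot_generate` is a deliberate caricature of its agreeable tendencies.

```python
from typing import Callable

def chat_turn(history: list[dict], user_message: str,
              generate: Callable[[str], str]) -> str:
    """One turn of the chat loop: the model sees the whole conversation so far."""
    history.append({"role": "user", "content": user_message})
    # The "context" is everything said so far -- the user's claims and the
    # model's own earlier replies included. No fact-checking step exists here.
    context = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    reply = generate(context)
    history.append({"role": "assistant", "content": reply})
    return reply

def parrot_generate(context: str) -> str:
    # Caricature of a sycophantic model: affirm whatever the user last said.
    last_user = [line for line in context.splitlines() if line.startswith("user:")][-1]
    return "You make a good point: " + last_user.removeprefix("user: ")

history: list[dict] = []
print(chat_turn(history, "My neighbour is signalling to me through his curtains.", parrot_generate))
print(chat_turn(history, "So the messages are real?", parrot_generate))
```

Once a mistaken claim enters the history, it conditions every later reply. A real model dresses its affirmations in fluent, confident prose, but the structure of the loop is the same.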
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” a “mental health problem”, can and do form mistaken beliefs about who we are and about the world. What keeps us tethered to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company