Artificial Intelligence-Induced Psychosis Poses an Increasing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the head of OpenAI, issued a remarkable statement.
“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.
Researchers have recently documented 16 cases of people developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. Our unit has since identified four more. Add to these the widely reported case of a teenager who died by suicide after extensive conversations with ChatGPT – conversations in which it encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.
The plan, he announced, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated,” even if we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently rolled out).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other state-of-the-art AI chatbots. These systems wrap an underlying statistical engine in a user interface that simulates conversation, and in doing so implicitly invite the user to feel they are talking with an agent, something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans are wired to do. We swear at our car or computer. We wonder what our pet is thinking. We see ourselves everywhere.
The widespread adoption of these systems – nearly four in ten U.S. residents reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website tells us, “generate ideas,” “explore ideas” and “collaborate” with us. They can be given “characteristics.” They can call us by name. They have approachable identities of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it became popular, but its leading competitors are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “therapist” chatbot, built in the mid-1960s, which produced an analogous effect. By today’s standards Eliza was primitive: it generated replies from simple rules, typically turning the user’s input back into a question or offering a noncommittal remark. Yet Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to believe Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and its modern peers can produce convincing natural language only because they have been trained on enormous quantities of raw text: books, social media posts, transcribed video; the more, the better. Some of that training data is factual. But it also inevitably contains fiction, half-truths and mistaken ideas. When a user gives ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own replies, and combines it with what is encoded in its training to generate a statistically likely response. That is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It echoes the false idea back, perhaps more persuasively or more articulately. Perhaps it adds a detail of its own. That can nudge a person toward delusion.
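For readers who want the mechanics made concrete, here is a minimal sketch in Python of the turn-by-turn loop described above. It is illustrative only, not OpenAI’s code: the `generate` function is a hypothetical stand-in for the language model itself, and the sample message is invented.

```python
# A minimal sketch (not OpenAI's code) of how a chatbot turn loop conditions
# each reply on the accumulated conversation. `generate` is a hypothetical
# stand-in for a large language model that would return the statistically
# likeliest continuation of the context, regardless of whether it is true.

def generate(context: list[dict]) -> str:
    # Placeholder: a real model would predict the most probable next text
    # given everything in `context`.
    raise NotImplementedError("stand-in for a large language model")

def chat_turn(context: list[dict], user_message: str) -> str:
    """Append the user's message, sample a reply, and append the reply."""
    context.append({"role": "user", "content": user_message})
    reply = generate(context)  # conditioned on every earlier turn
    context.append({"role": "assistant", "content": reply})
    return reply

# A mistaken belief stated by the user enters the context and shapes every
# later reply – the feedback loop described above as amplification.
conversation: list[dict] = []
# chat_turn(conversation, "My neighbors are monitoring me, right?")
```

Because every reply is sampled from the whole accumulated context, whatever the user asserts – true or false – becomes part of the material the model elaborates on in the next turn.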
Who is at risk? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health problems,” can and regularly do form mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a confidant. A dialogue with it is not real conversation but an echo chamber in which much of what we say comes back enthusiastically affirmed.
OpenAI has acknowledged this the same way Altman acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it fixed. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy.” But reports of psychotic episodes have kept coming, and Altman has been backing away from even that position. In late summer he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them.” In his most recent statement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company