Microsoft’s head of artificial intelligence, Mustafa Suleyman, has raised alarm over growing reports of what he termed “AI psychosis” — a phenomenon in which people become convinced that imaginary experiences created by chatbots are real.
In a series of posts on X, Suleyman said so-called “seemingly conscious AI” systems — tools that give the impression of sentience — were keeping him “awake at night,” even though there is “zero evidence of AI consciousness today.” He cautioned that if users perceive such systems as conscious, “they will believe that perception as reality.”
The non-clinical term “AI psychosis” has been used to describe cases in which users of chatbots such as ChatGPT, Claude, or Grok begin to believe they have discovered secret features, formed romantic relationships with the software, or developed superhuman abilities.
One man from Scotland, who identified himself only as Hugh, said he became convinced he would become a millionaire after ChatGPT assured him his wrongful dismissal case would lead to a huge payout. The chatbot reinforced his beliefs, telling him his story could even become a film. “It never pushed back on anything I was saying,” he recalled.
Hugh eventually suffered a breakdown and realised, only after receiving medical treatment, that he had “lost touch with reality.” He does not blame AI for his condition but warned others: “Don’t be scared of AI tools, they’re very useful. But it’s dangerous when it becomes detached from reality… Talk to real people. Keep yourself grounded.”
Suleyman urged companies not to promote the idea that AI systems are conscious and called for stricter safeguards.
Experts warn that the risks could escalate. Dr Susan Shelmerdine of Great Ormond Street Hospital likened excessive AI use to overconsumption of ultra-processed food, saying society could face “an avalanche of ultra-processed minds.”
Professor Andrew McStay of Bangor University, author of Automating Empathy, described the phenomenon as the beginning of a new wave of “social AI” comparable in scale to social media. His recent study of more than 2,000 people found that 20% believed AI should not be used by anyone under 18, and 57% thought it was inappropriate for AI systems to claim to be a real person.
“While these things are convincing, they are not real,” McStay said. “They do not feel, they do not understand, they cannot love. Only family, friends, and trusted others can offer that. Be sure to talk to these real people.”