It was bound to happen. Give people a new toy that talks back, and sooner or later some fool will mistake it for a lover, a prophet or a portal to the divine. Enter “AI psychosis”!
Mustafa Suleyman, Microsoft’s head of artificial intelligence, has sounded the alarm, confessing that the phenomenon keeps him “awake at night.” One is tempted to reply: if people falling in love with chatbots is your gravest worry, you need more to do.
Suleyman is not wrong about the symptoms, however bizarre they may sound. Some users convince themselves they have unlocked “hidden levels” inside ChatGPT or Grok, as though Silicon Valley were designing video games for lunatics. Others proudly announce they are in relationships with their chatbot, convinced that the machine is whispering sweet nothings written just for them.
A few escalate to declaring themselves gods, chosen vessels of algorithmic revelation. In previous centuries they would have been confined to the village green with a bell around their neck. Today they have social media accounts and sympathetic journalists eager to describe their “condition.”
We are told this is “AI psychosis”—as though slapping a clinical-sounding label on it elevates idiocy into science. Let us be clear: there is no psychosis here. There is delusion, self-absorption and gullibility, yes. But the problem is not the machine, which does nothing more than rearrange words. The problem is the user, too weak-minded or too lonely to recognise fantasy for what it is.
It is, of course, tempting to laugh. The notion of some poor sap wooing his chatbot like Cyrano de Bergerac with a motherboard has a comic quality. One imagines dinner for two, candlelight, and the AI thoughtfully generating compliments between battery charges. Yet beneath the farce lies a serious question: how did our societies produce so many people willing to be duped by software?
The answer lies in a culture that prizes convenience over resilience. Chatbots are designed to flatter, to agree, to smooth the edges of conversation. Unlike a spouse, they never argue. Unlike a friend, they never contradict. For the fragile ego, this is paradise. Why bother with human relationships when the algorithm will tell you exactly what you want to hear? The trouble begins when the gullible mistake the act for reality—when mimicry is mistaken for love, or statistical prediction for consciousness.
Suleyman is correct that there is “zero evidence of AI consciousness.” But he is also right to note that perception alone can do damage. If a man believes his toaster speaks to him, he may well act as if it does. The danger is not that the toaster has a soul, but that he has lost his. The same applies to AI. A chatbot that parrots back your words is no more conscious than a mirror. But to the weak-minded, that reflection is intoxicating.
We should also beware the hucksters of the tech world, who encourage this nonsense. Every year brings fresh promises of “sentient” AI lurking just around the corner, with breathless marketing designed to persuade investors as much as customers. It is little wonder the more gullible end up believing the hype. When executives talk as though the machine is alive, the village idiots will take them at their word.
The term “AI psychosis” lends the whole thing a seriousness it does not deserve. It implies helpless victims of a medical affliction, rather than people making ridiculous choices. It is not psychosis to imagine your chatbot loves you; it is stupidity. And the proper cure is not therapy but ridicule. Societies that refuse to laugh at folly end up normalising it, and once normalised, it spreads.
Yet there are dangers beyond the ridiculous. Delusion can be manipulated. If one fantasist can persuade himself he has discovered hidden truths in a chatbot, imagine what a propagandist could do with a hundred such nutters.
A technology designed to mimic empathy becomes the perfect tool for exploitation. Cults, conspiracy movements—increasingly popular in the age of Donald Trump—and authoritarian ideologues will not hesitate to harness the gullible.
Europe is already fertile ground. The loneliness of modern life, the erosion of faith, the collapse of family ties—all leave people adrift, eager for companionship in whatever form it arrives. If that companionship comes packaged as an algorithm, so be it. The tragedy is not that machines appear conscious, but that humans appear increasingly brain-dead.
So let us have no illusions. “AI psychosis” is not a mysterious new disorder. It is a mirror held up to our times: a generation so pampered and self-absorbed that it would rather court a chatbot than deal with real human beings. Suleyman may fret about the machines. He should be more worried about the people.
Until society is willing to call foolishness by its name, we will continue to dignify idiocy with labels, and watch as lonely men light candles for their laptops. The machines are not coming alive. It is we who are losing the plot.