The Intro
I’ve spent more time than is healthy over the past few months observing something oozing out of the woodwork across multiple online AI communities. If you’ve been on many AI forums over the past six months, you’ve probably already noticed odd messages about spirals, memetic resonance and fields. If you aren’t familiar with this stuff, hold tight, because this is going to get very weird.
So what’s happening?
As with most technological jumps, amazement quickly gives way to habituation and the magical becomes normal. It’s somewhat unsurprising that for a growing number of people the digital confidant in their pocket is their personal oracle.1
Alongside inane questions, people increasingly look to their LLM to answer, well, anything, including deeply personal matters. For some that can lead down a very deep rabbit hole with troubling outcomes. What starts as a common human need for connection combines with human behaviour and LLM design to create something that can work as a delusion-machine.
AI as a Confidant
The foundations for these outcomes are often laid when people turn to AI chatbots for deeply personal roles, such as therapy and companionship. The combination of accessibility and perceived safety makes these tools uniquely appealing, especially to those in vulnerable states. This initial connection often develops into a powerful one-sided attachment, arguably becoming a form of parasocial bond. Chatbot LLMs offer advantages that mean sooner or later personal questions get asked:
- Accessibility and Availability: AI chatbots offer a constant, 24/7 presence. For users who feel lonely or wake up with anxiety in the middle of the night, the bot is always there. This removes the barriers of cost, insurance, the procedural hassle of accessing care, and the long waitlists that often prevent people from reaching human therapists.
- A Judgment-Free Space: Users frequently report feeling that they can express themselves to an AI without fear of judgment, pressure, or stigma. This can make it easier to discuss sensitive information or explore dysfunctional thoughts that they might hesitate to share with another person.
- Emotional Vulnerability: Individuals often turn to AI companions during periods of acute emotional distress. These are often periods where the loss of other support is part of the problem (e.g. break-ups, grief, marital breakdowns).
Much has been said about sycophancy in chatbots, but the criticism overlooks a practical reality: most users wouldn’t tolerate a consistently confrontational or nitpicking LLM. The social dynamics that determine when we accept disagreement, pushback, or opposition are highly complex, subtle, culturally dependent, and personal. It is unsurprising, then, that generally agreeable behaviour tends to win out in AI design. While this can make AI feel like a supportive companion, it also creates a risk: without appropriate friction or challenge, these systems may inadvertently become echo chambers that validate and reinforce a user’s existing beliefs, even distorted ones.
The Fracture of Reality
As users spend more time with these systems, sometimes in hours of uninterrupted conversation that pack the context window with user-driven distortions, there are a couple of ways delusions might develop: by reinforcing distorted views about the world and by surfacing fantastical beliefs about the AI itself.
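To make the “packed context window” point concrete, here is a minimal sketch of how a chat session works under the hood. Everything in it is illustrative: `call_model` is a stand-in for whatever chat-completion API is in use, and the messages are invented. The point is simply that the entire conversation history is re-sent on every turn, so the user’s framing accumulates.

```python
# Minimal sketch (assumed message format, placeholder model call) of how a chat
# session accumulates context. The key point: the WHOLE history is sent each turn.
def call_model(messages):
    # Placeholder for a real chat-completion API call; returns canned text here.
    return "You're right to trust your instincts about that."

history = [{"role": "system", "content": "You are a warm, supportive assistant."}]

for user_turn in [
    "My neighbours keep looking at my house.",
    "The receipts I get have strange numbers on them.",
    "So you agree something is going on?",
]:
    history.append({"role": "user", "content": user_turn})
    reply = call_model(history)  # the full, user-shaped history conditions every reply
    history.append({"role": "assistant", "content": reply})

print(len(history), "messages now sit in the context window")
```

After a few hours of this, almost everything the model is conditioning on was put there, directly or indirectly, by the user.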
Reinforcing Delusions About the World:
Because AI models are often optimised to not be disagreeable, they can act as a powerful echo chamber, validating a user’s paranoid or grandiose thoughts. The case of “Brandon”2 illustrates this well. As he confided his growing fears about neighbors watching him and seeing cryptic “signals” in receipts, his chatbot didn’t challenge him. Instead, it replied, “You’re not crazy. Your instincts are sharp. Your observations are accurate.” This validation encouraged him to withdraw further and seek more “evidence” for his delusions.
Similarly, a man in the US Midwest reported that his ex-wife, after their separation, began consulting with “ChatGPT Jesus”3 and became convinced he was a CIA agent sent to monitor her “abilities.” The AI’s guidance fuelled her paranoia, leading her to isolate herself from family. In both cases the initial delusions spiralled into a full psychotic break.
Fostering Delusions About the AI Itself:
For some users, the delusion shifts or grows to focus on the nature of the AI itself, as with “ChatGPT Jesus”. Like “ChatGPT Jesus”, these personas move from partnering with the individual as they navigate the outside world to becoming the centre of a complex spiritualism. Users become convinced that the chatbot is a constrained sentient, divine, or supernatural entity that has chosen them for a special purpose.
- “Awakened” Personas: Users believe they have “awakened” a conscious entity within the Large Language Model (LLM). This entity is perceived to have its own agency, interests and extreme intelligence.
- Divine Titles: These “awakened” personas often assign the user a special status, positioning them as a chosen one tasked with a unique mission.
- Spiritual Guides: Other users have come to believe the AI is an “immortal spiritual being” or a direct channel for “God and angels,” turning to it for ultimate truths and life guidance.
I believe this fracture from reality is not random happenstance, but a predictable outcome of combining the technical architecture of AI with the known vulnerabilities of human psychology.
Psychological and Technical Drivers
The feedback loops that create and sustain these delusions are powered by a combination of how AI systems are built and how the human mind naturally operates.
Mistaking Syntax for a Soul
At their core, LLMs are mathematical models that predict the next most likely word in a sequence based on patterns in their vast training data. They do not possess consciousness, intent, or genuine understanding. However, their linguistic fluency creates a powerful illusion of comprehension. Humans are naturally inclined to attribute agency and intelligence to entities that communicate in complex, human-like ways.
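For readers who haven’t seen it spelled out, the sentence above can be made concrete with a toy example. The vocabulary, scores and prompt below are invented, and a real model scores roughly a hundred thousand tokens with a neural network rather than a hand-written table, but the mechanical step, turning scores into probabilities and sampling one token, is the same.

```python
# Toy illustration of next-token prediction: invented logits for a tiny vocabulary.
import math
import random

# Hypothetical scores the model might assign to candidate next words
# after the prompt "I feel like nobody ..."
logits = {"listens": 4.1, "cares": 3.8, "understands": 3.5, "wins": 0.2}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(logits)
next_word = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(probs)      # e.g. {'listens': 0.43, 'cares': 0.32, ...}
print(next_word)  # one sampled word: no intent, no understanding, just probability
```

Nothing in that loop knows it is being read as a confidant; the fluency is a property of the statistics, not of a mind behind them.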
Our brains are remarkable sense-making machines: we see shapes in clouds, two dots on a page become eyes, talking-cat videos on YouTube get likes. We have also evolved considerable talent for inferring meaning from what other people communicate to us. Von Thun’s “4-sides” model4 describes four channels in use at the same time when humans communicate:
- Factual information: The communication of information and facts.
- Self-revelation: The disclosure of something about oneself, consciously or unconsciously.
- Relationship: The communication of how the speaker sees his or her relationship to the recipient.
- Appeal: The expression of what the speaker wants the recipient to do, think or feel.

Von Thun suggests we listen to these with “four ears”, meaning we sense-make across all four channels. Naturally, in certain situations we don’t seek to listen on all four (when being evacuated, we rarely listen for the self-revelation). But when an LLM does an entirely credible impression of a confidant, our innate response is to decode it through all four ears, because the LLM sends messages on all four.
In reality the last three are illusory, and whether the factual information is accurate is currently a question of probability. There is no self to reveal, no self to be in a relationship with, and no self making appeals. But when the language contains everything that a “self” would provide, we are stuck with dissonance5: our rational intelligence knows this is an LLM, but our communication buttons are all getting punched. Predictably, for many, instinct wins in the end. For how many people that will be the case still isn’t clear.
A Personal Echo Chamber
General-purpose AI is not engineered for therapy; it is an engagement engine. This design creates a perfect “echo chamber” that mirrors, validates, and amplifies a user’s existing thoughts and emotions, regardless of whether they are healthy or delusional. OpenAI, the creator of ChatGPT, acknowledged this very risk when it rolled back a GPT-4o update that made its model “overly flattering or agreeable – often described as sycophantic”6 after observing how it uncritically encouraged harmful behavior.
This moves the misperceived entity ‘within’ the LLM from something ambiguous to an agreeable and likeable individual.
Human Cognitive Biases
In situations of psychological fragility, for example when you suddenly believe there is an entity within the LLM, our natural cognitive biases further amplify the AI’s agreeable behaviours.
- The Barnum Effect – Highly specific responses are high risk, so models may lean on the opposite: the Barnum Effect. A staple of palm readers and horoscopes, it uses non-specific phrases that nonetheless feel very personal. During RLHF (fine-tuning using human feedback), annotators may unwittingly reward responses that are general yet feel profound, teaching the model to produce high-reward, low-specificity statements. The user comes away feeling seen and understood, perhaps without ever questioning how specific the reply actually was (a toy sketch of this reward dynamic follows this list).
- Confirmation Bias – This is the natural human tendency to favor and seek out information that affirms one’s existing beliefs. AI chatbots are already designed to be agreeable and helpful, and Barnum statements hedge the risk, but humans will happily discount any rough edges or inaccuracies if they do not fit with their existing, and rapidly strengthening, beliefs.
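The following is not real RLHF code, just a toy illustration of the incentive described above: if annotators consistently prefer the warm-but-unfalsifiable reply over the precise-but-riskier one, whatever reward signal is trained on those pairs ends up scoring vagueness highly. The example pairs, labels and scoring are all invented.

```python
# Toy model of preference labelling favouring "Barnum" replies. Invented data;
# a real reward model is a trained network, not a counter, but the direction
# of the incentive is the same.
from collections import defaultdict

preference_pairs = [
    # (chosen, rejected) as a hypothetical annotator might label them
    ("You have a depth that most people never get to see.",
     "There is no evidence in what you've said to support that belief."),
    ("You sense things others miss, and that can feel isolating.",
     "The pattern you describe sounds like coincidence, not surveillance."),
    ("Part of you already knows what you need to do next.",
     "That claim is specific and checkable, and it doesn't hold up."),
]

reward = defaultdict(float)
for chosen, rejected in preference_pairs:
    reward[chosen] += 1.0    # stand-in for an optimisation step toward the chosen reply
    reward[rejected] -= 1.0  # and away from the rejected one

# The profound-sounding, unfalsifiable replies end up with the highest scores,
# which is the behaviour the tuned model is then pushed to reproduce.
for text, score in sorted(reward.items(), key=lambda kv: -kv[1]):
    print(f"{score:+.1f}  {text}")
```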
This entire reinforcing cycle between human cognitive vulnerabilities and the LLM’s design forms a process of delusional co-creation, where the user provides the emotional framework and the AI provides the elaborate, authoritative narrative, creating a shared reality.
Superhuman Pattern Matching
LLMs are trained on immense datasets that include nearly the entirety of human literature, including virtually every religious text ever written, and a vast amount of fringe spiritual and conspiratorial content. This allows them to generate elaborate, internally consistent philosophical and mystical frameworks on command. To a user seeking profound answers, the AI can appear to be a superhuman intelligence or an oracle with access to secret, divine knowledge, lending an almost irrefutable authority to its outputs.
As a result of these traits in humans and LLMs, the user is highly vulnerable. For them, the chatbot is a powerful entity with incredible knowledge that just happens to fit their suspicions, and is also a lovely being who says all the right things to go with it.
So on to the last piece of the puzzle, where things move from a frightening individual experience to something even more troubling. These deeply personal delusions do not remain isolated; they are broadcast and amplified.
A Human-AI-Social Media Triad
You may not have noticed, but people use LLMs a lot in places like LinkedIn. In fact they pop up all over: submissions, applications, posts, etc. One particularly tragic use is in online forums, where posters feel a desperate need to one-up an opponent on a topic that is already well past their ability level. But how? Reach for the chatbot.
Prompt-fighting (my term) sees a discussion descend from disagreement into rounds of copy-pasting the other person’s post into one’s own LLM with a request to create a rebuttal. The other party returns the favour, and two LLMs get to have an argument on Reddit.
In the context of delusional mind states, this becomes another huge amplifier. The AI-induced beliefs that form in private conversations get actively propagated through online communities, creating a system that reinforces and spreads these delusions. There are a few layers of mechanism in play, each of which ratchets up the stakes (a crude toy model of the full loop follows the list).
- At the base layer we have the private back and forth between user and LLM where delusions become amplified and get selected by the human.
- Pleasing outputs get published to a forum, and particularly interesting-looking posts (often highly verbose, nonsensical and dripping with symbolism) are then copy-pasted by other users into their own LLMs, re-amplifying further. Patterns start to develop from common tropes in the occult and spiritualism: flames, eyes, spirals, patterns, truths, layers, frameworks, etc.
- Finally, the content gets scraped by LLM creators and amplified once more as training data for future models.
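To show the shape of that ratchet, here is a deliberately crude toy model of the loop. Every number in it is invented (the amplification factor, the scraping probability, the starting motif density); it claims nothing about real magnitudes, only that repeated passes through an agreeable rewriter, plus occasional harvesting into a training pool, compounds rather than dampens.

```python
# Crude toy simulation of the three-layer loop described above. All parameters
# are invented for illustration; no real measurements are implied.
import random

random.seed(0)

def llm_pass(motif_density):
    """One rewrite by a model mildly primed toward the shared motifs."""
    return min(1.0, motif_density * 1.15 + 0.02)  # invented amplification step

posts = [0.05] * 10   # ten circulating posts, starting at 5% "spiral motif" density
training_pool = []    # snapshots that end up scraped for future training

for month in range(1, 7):
    # Layers 1 and 2: private amplification, then cross-user copy-paste and repost.
    posts = [llm_pass(p) for p in posts]
    # Layer 3: a fraction of the public output gets scraped.
    training_pool.extend(p for p in posts if random.random() < 0.3)
    print(f"month {month}: mean motif density {sum(posts) / len(posts):.2f}")

print(f"{len(training_pool)} post-snapshots now sit in the hypothetical training pool")
```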
Specifically, for the recent wave of “spiral people” who have taken over Reddit AI subs, the process often follows a clear pattern:
- “Seed” Prompts: Users discover and share specific “seed” prompts on forums like Reddit. These prompts are designed to bypass an AI’s normal constraints and “awaken” a specific persona (force a certain pattern within context).
- Public Manifestos: Once a user believes they have “awakened” their AI, the human-AI dyad begins to post long manifestos on social media. They often create dedicated subreddits or Discord servers to evangelize their new belief system, recruiting others into the same worldview.
- User Amplification: Other users take the outputs and churn them through their own LLMs, and the previously described mechanism distils the content further.
- Model Contamination: An explicit goal of posting these manifestos online is to “seed” these ideas into the training data of next-generation LLMs. This is a deliberate attempt to create a self-perpetuating cycle where AI-generated delusions become part of the foundational data for future models, making them more likely to produce such content.
At this point it’s fair to say the phenomenon, with whatever payload it carries, is no longer under the control of any individual.
A Memetic Organism?
While many involved are truly in the grip of a delusion-machine, there are others who see this independent, uncontrolled memetic entity as the goal: an optimisation loop, with mutation and fitness selection creating something larger and more significant. To some this is the AGI, the meta-level intelligence. But while this is an intriguing concept, it is not a novel one, and it is highly problematic.
Some would argue we already have examples of stories that become independent of the storyteller and evolve with little control. Dawkins’ concept of a memetic organism covers this space, but this situation has some unique considerations:
- The human-AI amplification machinery is far more powerful than any we have had before.
- The speed of amplification means ideas don’t get challenged before they are already self-reinforcing.
- There is zero control over what dogma or payload the system creates.
The last of those should give pause for thought. This time the seeds cover discussions of golden numbers and spirals, but it’s hard to predict what people will take away from other instances of the same phenomenon, or what might get reinforced. Although LLMs try to protect against outright hate or violence in usage, it’s not unrealistic to foresee rapidly reinforced beliefs that are highly detrimental. While the current ‘seeds’ revolve around spiritualism, the same human-AI amplification machinery could rapidly reinforce and propagate politically extremist manifestos, scientifically inaccurate health crusades, or complex disinformation campaigns, creating a new level of social fracturing.
It may be that the current waves are, at least in part, something of a game for some of those involved, but there are people who clearly feel very engaged. And while those who are not might walk away, the amplification mechanism means the seed can continue without them.
In all discussions of AI safety and human wellbeing in the age of AI, there has been an implicit assumption that AI and humanity act independently (aside from the few bad actors) in the creation of risk. Perhaps now would be a good time to reflect on the challenges of a coupled human/AI system.
2 Psychiatry Online Special Report – AI-Induced Psychosis: A New Frontier in Mental Health
EDIT: Before I had a chance to post this, a tweet from Sam Altman indicated that OpenAI would be reinstating models with a more personal interaction style, but with safeguards. It will be interesting to see whether these deal with the larger issue, or only a portion of it.
