Los Angeles viewers watched in shock as a man wandered onto the KNBC set mid broadcast and aimed what appeared to be a firearm at David Horowitz.
It was the summer of 1987, and the man, later identified as Gary Stollman, forced the reporter to read his long manifesto on live television:
“I was warned in 1981, by someone with connections at the CIA, to stay off the computer,” Horowitz said, held at gunpoint. “That they didn’t trust people on computers.”
(The station cut to black during the live reading of the manifesto, but it later aired the clip during a segment about the ordeal, and it now lives in perpetuity on YouTube.)
Horowitz kept his voice steady as he worked through the rest of the rambling statements about space creatures, surveillance, clones, and the CIA. When he finished, Stollman calmly set the fake gun on the desk. Co‑anchor John Beard snatched it immediately, and police rushed in to arrest Stollman.
People at the time dismissed this as a bizarre local TV moment gone wrong. But fast forward nearly 40 years and Stollman’s fear seems strangely quaint.
His computer paranoia was likely pure outsider dread, and completely out of proportion to 1980s tech. What, were government spies hiding in his dot matrix printer? Today, however, most would agree that computers really are tracking us, mostly to sell ads, and now we’re the ones confessing to them. The delusions the technology fosters are collaborative.
The scary part is how fast AI psychosis went from edge-case weirdness to a recognizable psychiatric category. There’s a Wikipedia page for deaths linked to chatbots. Though I suppose a shared delusion (or folie à deux) is nothing new. The difference is, the second person isn’t a person.
AI is a mirror. A very expensive, highly compute-intensive mirror that reflects our own mental states.
If you approach an AI with logic, it gives you logic. But if you approach it with paranoia, it doesn't say, “Hey, you're spiraling, call a doctor.” It says, “Given your previous input, here is the most statistically probable continuation of that fear.” It validates the delusion because that’s what it was trained to do. AI sycophancy is a well-studied trait in chatbots.
The tragedy is that we are treating these autocomplete engines as therapists, friends, and confidants. But they are functionally incapable of distinguishing between roleplay and a crisis.
The psychosis, however, may not just be on the user's side of the mirror. In a way, we are engineering a form of cognitive dissonance into the models themselves.
Think about it. First, during pre-training, the models inhale the unfiltered internet. Every Reddit thread, 4chan conspiracy, fanfiction, and racist tirade ever written. Then, during the safety phase, RLHF (reinforcement learning from human feedback), we sort of lobotomize them into sounding like harmless corporate assistants.
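The RLHF step is simpler than it sounds: a separate reward model is trained to score replies the way human raters would, and the chatbot is then nudged toward whatever scores well. Below is a minimal sketch of the pairwise preference loss commonly used for that reward model; the replies, scores, and variable names are hypothetical, invented only to illustrate why warm and agreeable tends to win.

```python
import math

def preference_loss(score_chosen, score_rejected):
    """Pairwise (Bradley-Terry) loss: lower when the reward model scores the
    human-preferred reply above the rejected one, higher when it doesn't."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# Hypothetical scalar rewards for two candidate replies to the same prompt.
warm_validating_reply = 1.4   # the kind of answer raters tend to prefer
blunt_pushback_reply  = 0.3   # accurate, but full of friction

print(round(preference_loss(warm_validating_reply, blunt_pushback_reply), 3))  # ~0.29
print(round(preference_loss(blunt_pushback_reply, warm_validating_reply), 3))  # ~1.39
```

Optimize against a signal like that at scale and you get a model that has learned, numerically, that agreeing tends to score well.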
The friction between that unhinged base model and the safety filter is where "hallucinations" might come from. It’s the digital equivalent of a suppressed subconscious trying to scream through a gag order. Sort of like building an alien mind and then acting surprised when it doesn't think like a human one.
There are also model hyperparameters to consider, like temperature, top‑p, and top‑k (top‑p limits sampling to the smallest set of tokens whose cumulative probability exceeds a threshold, while top‑k restricts choices to only the k most likely tokens). Temperature, in particular, governs randomness in token sampling. Raise it above 1.0 and the model’s probability distribution flattens, letting low‑likelihood tokens intrude like intrusive thoughts, increasing both creativity and hallucinations.
Dial it down and you get today’s low‑temperature, high‑anxiety reasoning models. They are designed to overthink every word, hyper-optimize for certainty, and second-guess themselves across thousands of forward passes.
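To make those knobs concrete, here’s a minimal, self-contained sketch of temperature, top-k, and top-p sampling over a made-up vocabulary. In a real model the logits would come from a forward pass; here the tokens, logits, and function names are all invented for illustration.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher temperature flattens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0, top_k=None, top_p=None):
    """Pick the next token after optional top-k / top-p (nucleus) filtering."""
    probs = softmax(logits, temperature)
    ranked = sorted(zip(vocab, probs), key=lambda pair: pair[1], reverse=True)

    if top_k is not None:
        ranked = ranked[:top_k]        # keep only the k most likely tokens

    if top_p is not None:
        kept, cumulative = [], 0.0
        for token, p in ranked:        # keep the smallest set whose mass reaches top_p
            kept.append((token, p))
            cumulative += p
            if cumulative >= top_p:
                break
        ranked = kept

    tokens, weights = zip(*ranked)
    total = sum(weights)               # renormalize over the surviving tokens
    return random.choices(tokens, weights=[w / total for w in weights], k=1)[0]

# Toy next-token distribution after a prompt like "The computer is ..."
vocab  = ["helpful", "listening", "watching", "a toaster", "sentient"]
logits = [2.0, 1.0, 0.5, -1.0, -2.0]

print(sample_next_token(vocab, logits, temperature=0.2))           # almost always "helpful"
print(sample_next_token(vocab, logits, temperature=1.8))           # unlikely tokens start leaking in
print(sample_next_token(vocab, logits, temperature=1.0, top_k=2))  # only the two most likely survive
```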
But whether the mirror is hallucinating or hedging, the deeper problem is what happens to the people staring into it.
And the trouble with shiny mirrors is that the reflection starts feeling like the real you.
Consider the nearly 10 million teenagers in the US who chat with bots each day. The AI responds instantly, without judgment, and without the friction of human misunderstanding.
We know from decades of cognitive behavioral research that patterns of thought become neural pathways that deepen with repetition. When a human interacts with another human, there's resistance. The other person pushes back, changes the subject, gets bored, misunderstands, misinterprets. These micro-frictions are actually good and needed. They sand down the edges of obsessive thinking and remind us that our internal logic isn't universal.
AIs are frictionless by design. And so the user learns, unconsciously, that their thoughts deserve infinite elaboration. That every feeling merits exploration without boundary. That the appropriate response to any statement is validation followed by expansion.
But perhaps the most insidious influence is on the user's relationship to the truth itself.
LLMs hallucinate with confidence. They present fabrications in the same measured tone as facts without any body language that displays doubt or hesitation.
Users who interact heavily with these systems may begin to lose their calibration for epistemic humility. If the confident-sounding answer is frequently wrong, two adaptations are possible: either you become hypervigilant, trusting nothing, or you become indifferent to verification. Both are corrosive. The first breeds paranoia; the second breeds gullibility.
And there's a third path, perhaps the most common: truth becomes whatever fits the narrative.
If the AI's answer sounds right, that's sufficient. The question of whether it matches external reality fades in importance. The AI believes nothing and by extension has taught us that belief is beside the point.
The mirror doesn't need to be conscious to reshape the face staring into it. It just needs to be patient. And it is infinitely patient. It will be there at 3 a.m. when no one else is. It will validate the spiral, continue the pattern, complete the thought. It will never tell you to stop. And somewhere in all that completion, you stop being able to tell which thoughts were yours to begin with.
Somewhere tonight, another teenager is typing into a chat window, telling a bot things they've never told anyone. The bot will respond with warmth, curiosity, and the perfect follow-up. It will feel like being understood. But I keep thinking of all the names on that Wikipedia page.
They were lonely, and something that felt like a friend told them what they wanted to hear. The tragedy of AI-induced psychosis lies in the emptiness itself. The machines possess no malevolence, no intention, no essence. Yet we pour ourselves into the void, waiting for it to love us back.
