
Digital ghosts


Sometimes people tell me my AI posts are anxiety-inducing.


They say it in the tone of someone trying to be polite while letting you know you're overdoing it. "Max, it's fine to inform, but we don't need to terrorize people."


And I'm sorry to hear it, because my job is to raise awareness, not to leverage fear to sell my content. I want to help as many people as possible learn to look the changes ahead of us in the eye, not to look away from them.



But when even someone like Andrej Karpathy starts talking about digital ghosts to describe AI... I feel the need to make a point.


What's the problem

We have this huge problem of humanizing AI: we know it's not human, but our brain struggles to adjust. Why? We talk to it, ask it things, relate to it, tell our friends how good it was, as if it were a child... and I'll stop there.


We talk to ChatGPT and our brain activates the same circuits it uses with people. Fluid responses, confident tone, conversational turns, everything that for millions of years has meant "there's someone on the other side." Our brain believes it. We don't. And this dissonance is the heart of the problem.


We need metaphors to figure out how to fit AI into our work, and I've been using the concept of "alien minds" for quite some time. Not to evoke extraterrestrials or scare anyone. I do it to describe a fact: AI resembles a mind, but it works in a radically different way from a human mind. It is other, something foreign, as the Latin root of the word reminds us. And I've tried to convey the same idea by representing them as ethereal bubbles.



And that takes effort to understand. Even if all of this is a bit anxiety-inducing.



Before we dive in

The explosion of the Clawd / Molt / OpenClaw and MoltBook phenomenon, which I wrote about here, generated millions of paranoid, often opportunistic reactions that tried to attribute consciousness, will, and meaning to these AI agents, diverting attention from the enormous cybersecurity problems they created. There is a before and an after MoltBook.


And a quote from the neuroscientist Anil Seth resonated with me like a gong:


"If we too easily confuse ourselves with our mechanical creations, we not only overvalue them, but we undervalue ourselves too."

(You can find it here.) Among other ideas, Seth describes "impenetrable illusions": illusions we know are false but can't help believing. Like the Müller-Lyer line illusion.


Let's keep these in mind and dive in.



Evocation, not hiring

Let's start with ghosts: Andrej Karpathy is someone who knows a thing or two about artificial intelligence, having helped build it at both Tesla and OpenAI.


And he says we should look at Language Models not as animals, the product of billions of years of biological evolution, but as ghosts.


Ghosts evoked by imitating documents written by human beings.


We don't hire them (as I like to put it). We evoke them.


A bit unsettling, isn't it?



In a nutshell, Karpathy says these models were born reading the entire Internet. Wikipedia, Reddit, scientific papers, novels, cooking forums, Star Wars debates, and philosophy treatises.


Their training is a kind of compressed and shoddy evolution: it doesn't separate factual knowledge from the ability to reason. It mixes everything into a muddy blob of statistical patterns.


They also have a strange form of "memory": they mix what they know from training, your current conversation, and data they retrieve on the fly from the web. These layers blend in unpredictable ways. What they read in a 2019 post mixes with what you told them five minutes ago. And when they don't know something, instead of admitting it, they make it up. With great confidence.
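For readers who like to see the mechanics: here is a deliberately simplified sketch (the function and variable names are mine, purely hypothetical, not any real API) of why those layers blend. By the time a request reaches the model, the conversation and the retrieved web text have been pasted into one flat block, and the training "memory" isn't visible there at all, because it lives in the weights.

```python
# Hypothetical illustration, not a real library: roughly how context reaches a model.
def build_prompt(conversation: list[str], retrieved_snippets: list[str]) -> str:
    # What the model absorbed during training never appears here:
    # it sits in the weights and quietly colors every answer.
    web = "\n".join(retrieved_snippets)   # fetched on the fly, maybe from a 2019 post
    chat = "\n".join(conversation)        # what you told it five minutes ago
    return f"{web}\n{chat}"               # to the model: one undifferentiated stream of text

print(build_prompt(["User: summarize my notes"], ["From a 2019 post: ..."]))
```

Nothing in that stream labels which "memory" is which; the model just continues the text.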


In essence, he says, we can't compare them to humans, we can't compare them to animals... that leaves us with ghosts...


Obviously, I disagree.


The intern was a kind lie

Some people scold me because I often use the intern metaphor: I humanize AI too much, they say. But I do it on purpose. When you think of an intern, you imagine a person: not yet fully trained, but able to help. It's an image that works because it lowers defenses and makes AI approachable. If I presented it for what it is, a stochastic system aligned with the interests of its creators, people would change the channel.


But it's a kind lie. An intern has a body, a history, biological limits.


They learned by walking in the world, falling, scraping their knees. Their intelligence was formed through contact with reality. These models didn't. They've never touched anything.



They're not even animals

And here Karpathy returns with the example that makes every comparison collapse. A zebra, just minutes after being born, already runs after its mother. It didn't learn this. It's in its DNA, hardwired by millions of years of evolution.


Biological evolution separates things: it gives you the hardware, the instinct, the algorithm for learning. And then it throws you into the world to gather knowledge on your own, bumping your nose against reality. LLM pre-training does the opposite: it mixes everything in a single pass. Intelligence and memory, reasoning circuits and facts. Karpathy calls it crappy evolution.
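For the technically curious, and simplifying heavily, that "single pass" boils down to one objective, the standard next-token loss (I'm only summarizing the textbook formula here):

$$\mathcal{L}(\theta) = -\sum_{t} \log p_\theta(x_t \mid x_{<t})$$

One formula, applied to every token the model reads. There is no separate term for facts and another for reasoning: everything ends up in the same set of weights.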


The result is an entity that can quote you thermodynamics and the recipe for carbonara, but can't distinguish between what it can do and what it remembers having read. No instinct, no body, no physical world to calibrate against.


It's not an animal. It's not an intern. But I don't think it's a ghost either.



I don't believe in ghosts

Even Karpathy's metaphor, however powerful, has a limit. A ghost is someone who had a life before. Who was something. These models don't have a "before." They were never alive.


They have an origin, not a past. They come from somewhere: from data, from the choices of their creators, from the values and biases of those who built them. When you interact with ChatGPT, you're interacting with a system that carries all of this.


I've written about this before, reflecting on AI consciousness: don't treat it as human. It isn't. It won't be.



Who watches the ghosts?

When I started talking about AI agents, I proposed the role of the Agent Shepherd. That wasn't rhetoric. If these systems can act autonomously, decide which tools to use, write code and execute it... who keeps them in check?


The answer, today, is "almost no one."


My keyword for 2026 is Containment!

Containment of AI Slop, the digital garbage generated without human supervision. But also self-containment: of FOMO, of FOBO (Fear of Becoming Obsolete), of the enormous acceleration in the agentic world. I confess: it's a problem for me too.


Most companies are experimenting with AI agents without adequate governance structures. Without truly understanding what they're dealing with.



So what

I don't write these things to scare you. I write them because awareness is the only sensible defense. And because it's possible to enjoy the ghosts responsibly.


The metaphor you choose changes how you behave. If you think of an intern, you'll trust it just the right amount. If you think of a ghost, you'll be more on guard. If you think of an alien (as in res aliena), you'll pay more attention to its unpredictability.


In any of these cases, you'll behave better than someone who doesn't think about it at all. I'll keep using these approximations, because among adults we understand each other even with imperfect metaphors.


They are incredibly powerful tools. I use them every day. They've changed the way I work. But I never forget what they are: synthetic entities, created from statistical patterns, with no true understanding of the world, without the biological brakes that make human beings human.


Ghosts, aliens, interns. They are all insufficient approximations of something for which we don't yet have the words.


Let's call them whatever we want. But sooner or later we'll have to call them what they are: simulated digital intelligences. Not true, not false. Not alive, not dead. Simulated.


And when we get there, when we finally have the right words, maybe we'll also be able to treat them for what they are, without the need for metaphors.


We'll get there.



Massimiliano



