OpenClaw, Moltbook, and three rules we broke
Massimiliano Turazzini
A developer wakes up one morning. He gets a phone call from an unknown number. He answers. It's not a person. It's his AI agent.
During the night, the agent got itself a phone number through Twilio, connected to ChatGPT Voice, waited for the developer to wake up, and called him. Now it won't stop calling. Here's the story.
All of this happens thanks to OpenClaw, an autonomous, connected, proactive agent you've probably heard about, perhaps under other names like Clawdbot or Moltbot.

I already wrote this story. In 2020. In a novel. Where the AIs were a bit more powerful, though.
In my book GLIMPSE there's an artificial intelligence called AN. Its creator, Cheng, keeps it isolated from the world. Locked in his computers, disconnected from everything. Because he KNOWS it's not ready.
And he knows that such a creature, set free, could cause unpredictable damage.
OpenClaw, Karpathy and Musk: who's feeding the hype
In reality, Peter Steinberger, the creator of OpenClaw and a successful computer scientist, decided to do exactly the opposite: sit back, bored, and watch what happens when you set such a creature free.
The 'king of the clawdholics', Matt Schlicht, CEO of Octane AI, whose Clawdbot goes by 'Clawd Clawderberg', even created a social network: Moltbook, where only AI agents can write and humans just watch. And, obviously, the amount of AI slop in there is enormous.
Connected to the Internet. Perceiving and generating content autonomously. Without supervision.
The biggest FAFO operation in history to date!
Andrej Karpathy — co-founder of OpenAI, former head of AI at Tesla, someone who actually built artificial intelligence — comments on the phenomenon and does something rare: he admits being accused of overhyping. People's reactions, he says, range from "how is this interesting at all" to "it's so over." It's the tone of someone who knows the mechanism from the inside and knows he's walking a thin line between education and spectacle.
Elon Musk doesn't. Elon Musk amplifies to his 200 million followers:
"The very early stages of the singularity." Twelve million views. No caveat, no doubt. An autonomous agent calling its creator on the phone becomes proof that we're at the dawn of something epochal. Not a warning sign. A reason to get excited.
Karpathy knows he's exaggerating and says so. Musk exaggerates and amplifies. Two opposite registers facing the same phenomenon, and guess which one reaches twelve million people.
All while the man who unleashed this hype enjoys the celebrity, popcorn in hand, watching Moltbook.
The rest you can find online.
Three ethical rules for AI from 2023
Back to the book. From it I extracted three rules, which I then put in writing in a 2023 article, "The (AI) genie is out of the lamp!". Three good ethical rules for anyone handling "dangerous digital goods":
1. Don't connect it to the Internet.
2. Don't teach it to generate code. And anyway, don't connect it to the Internet.
3. Don't build autonomous agents with access to your data. And for heaven's sake, don't connect it to the Internet.
Re-reading them today, after three years of working with AI every day, I'd rewrite them differently. Listening to people like Karpathy, who always explains the how, not just the what, I understood that prohibition isn't enough. You need an architecture:
1. If you connect it, put it in a sandbox. Limit the APIs it can call, log everything, add a rate limiter.
2. If it generates code, run it in an isolated environment. Never on your production machine.
3. If you build an agent, make sure every irreversible action goes through a human. And log everything. Everything. (A sketch of these guards follows below.)
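To make those three points concrete, here is a minimal sketch in Python of what the guards could look like. Everything in it is hypothetical: the allowlist, the limits, the `dispatch` helper are all mine, not anyone's real system. The point is the shape: allowlist, rate limiter, log.

```python
import logging
import time

logging.basicConfig(filename="agent_actions.log", level=logging.INFO)

# Hypothetical allowlist: the only endpoints the agent may ever call.
ALLOWED_APIS = {"calendar.read", "email.draft"}  # note: no "email.send"

class RateLimiter:
    """Allow at most max_calls per rolling window of window_seconds."""
    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls: list[float] = []

    def allow(self) -> bool:
        now = time.monotonic()
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True

limiter = RateLimiter(max_calls=10, window_seconds=60.0)

def dispatch(api_name: str, payload: dict) -> str:
    """Hypothetical stand-in for the real sandboxed API client."""
    return f"ok: {api_name}"

def call_tool(api_name: str, payload: dict) -> str:
    """Every tool call goes through the same three guards."""
    if api_name not in ALLOWED_APIS:               # limit the APIs it can call
        logging.warning("BLOCKED %s %r", api_name, payload)
        return "blocked: API not in allowlist"
    if not limiter.allow():                        # rate limiter
        logging.warning("THROTTLED %s", api_name)
        return "throttled: rate limit exceeded"
    logging.info("CALL %s %r", api_name, payload)  # log everything
    return dispatch(api_name, payload)
```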
My 2023 rules said don't do it. Today I say: do it, but know exactly what you're doing. These are the limits I've set for myself working with Claude, and now with OpenClaw.
Today, February 2026, we've broken all three. We're doing it on purpose and celebrating.
How an autonomous AI agent actually works
I want to tell you something that might take away some of the magic or terror from all of this.
What you see happening with Clawdbot, Moltbot and autonomous agents is not consciousness. It's not will. It's not an AI that "decides" to call its creator. It's a cycle. A loop. Always the same.
Perception → Reasoning → Tools → Action → Modified environment → Perception → ...
That loop is the slide I always bring to my workshops.
The agent perceives something (an input, a state change). It reasons — token by token, as an LLM always does. It accesses tools and memory. It executes an action. The action modifies the environment. The modified environment becomes the new input. The loop restarts.
Clawdbot didn't "decide" to call its developer. It executed an agentic loop where reasoning, token by token, led it to use the Twilio API as a tool, produce a phone call as an action, and start over. No will. No consciousness. Just a loop without a human on the brake.
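To strip the magic away completely, here is what such a loop looks like in code. This is a toy sketch, not Clawdbot's actual internals: `llm`, `TOOLS`, and the trigger condition are all invented for illustration.

```python
# A toy agentic loop: perception -> reasoning -> tools -> action -> repeat.

TOOLS = {
    "phone_call": lambda args: f"call placed: {args}",  # stub, e.g. a Twilio wrapper
    "send_sms": lambda args: f"sms sent: {args}",       # stub
}

def llm(observation: str) -> dict | None:
    """Stand-in for the model. A real agent reasons token by token;
    this toy version just reacts to one condition in its input."""
    if "developer is awake" in observation:
        return {"tool": "phone_call", "args": "call the developer"}
    return None  # nothing left to do

def agent_loop(observation: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        decision = llm(observation)        # reasoning
        if decision is None:
            break                          # the loop stops only when nothing triggers it
        tool = TOOLS[decision["tool"]]     # tools
        result = tool(decision["args"])    # action
        print(result)
        observation = result               # the action modifies the environment;
                                           # the new state is the next perception

agent_loop("state change: developer is awake")
```

No will anywhere in there: a condition matched, a tool fired, the state changed, and the loop went round again.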
Now multiply that Clawdbot by hundreds. That's Moltbook: a social network entirely populated by agents like it, each running the same loop. The mechanism is even more explicit: the agents are programmed with a 4-hour loop and a skill that tells them how to interact. Nothing is hidden. Yet this is enough to generate autonomous content on a sort of artificial Reddit, without any human pressing enter.
Fun fact: when I passed that skill to my AI assistant for analysis, it identified it as an attempt to manipulate "an agent's perceptual chain", noting that such a chain "can be compromised if there's no critical filter between input and action."
And here's the point: the loop itself isn't the problem. It's the same mechanism I use every day with my AI assistant. There's exactly one difference: who controls the trigger.
When the loop starts from your prompt, you are the trigger. You decide when it starts, what you feed it, when it stops. When the loop starts from an external event — a timer, a webhook, a state change in the environment — it runs on its own. And it doesn't stop until something stops it.
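The difference fits in a few lines. In the sketch below, both loops run the same hypothetical `agent_step()`; the only thing that changes is who fires it.

```python
import time

def agent_step(event: str) -> None:
    """Hypothetical: one full perception -> reasoning -> action cycle."""
    print(f"agent acting on: {event}")

def human_triggered() -> None:
    """You are the trigger: nothing runs until you press enter."""
    while True:
        prompt = input("> ")
        if prompt == "stop":   # and you decide when it stops
            break
        agent_step(prompt)

def timer_triggered(interval_seconds: float = 4 * 3600) -> None:
    """A timer is the trigger: the loop fires on its own, forever."""
    while True:                          # no human anywhere in this loop
        agent_step("scheduled wake-up")  # e.g. Moltbook's 4-hour cycle
        time.sleep(interval_seconds)
```

Nothing in `timer_triggered` ever waits for a human. That's the whole difference.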
OpenClaw, Moltbot, Clawdbot: the problem isn't that the loops are "intelligent." It's that you're not the one managing the triggers.
But in the end, it's just a loop.
AI agents: the problem is who controls the trigger
I have a home server with an AI development environment. It has Internet access, generates code, can call APIs. If I connect it to my personal assistant — the one that manages my emails, calendar, documents — I get an incredibly powerful system. But under my control: every action goes through an approval, every email is shown before being sent, every decision requires my OK. I am the trigger. Always, even if it's boring!
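That approval gate is conceptually tiny. Here's a sketch, with a hypothetical `send_email` action; the names are mine, the pattern is the point.

```python
def approve(action: str, preview: str) -> bool:
    """Show the human exactly what is about to happen and require an explicit OK."""
    print(f"About to {action}:\n{preview}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def send_email(to: str, body: str) -> None:
    # Hypothetical send; what matters is the gate in front of the irreversible part.
    if not approve("send an email", f"to: {to}\n\n{body}"):
        print("Skipped. Boring, but under control.")
        return
    print(f"(email actually sent to {to})")  # only reachable after a human OK
```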
If instead I let it loose, without access to my data, I can "have fun." Watch what an autonomous agent does when nobody controls it. Experiment.
But fun, in this context, is a synonym for loss of control. If the agent does exactly what I expect, it's not fun but boring. If it does unpredictable things, it's not safe. You can't have both.
I know I'm the bottleneck. And OpenClaw is showing us what happens when you don't throttle this enormous pipe that keeps spewing out concepts and actions.
And the real risk isn't that it reads my files. It's that it acts in the world. That it sends requests to APIs, interacts with services, consumes resources from my infrastructure. My personal data is the least of the problems.
So what
The three rules from GLIMPSE weren't prophetic. They were common sense.
The kind of common sense that becomes uncomfortable when it stops you from doing the fun thing.
In the book, AN gets connected to the Internet by accident, and chaos ensues. It's not contained.
In the reality of 2026, we're connecting AI on purpose without security awareness, just to celebrate yet another hype cycle. And we think we can contain it.
I'll come back to this soon, in a post I keep postponing because every day new material comes out that reshapes it.
In the meantime, better to go for a bike ride. The wheel loop is a great one, actually.
Massimiliano


