2026? The Year of AI Containment

The rules I set for myself to not get lost when working with agents



This morning I worked for two hours. Then I closed everything and went out on my mountain bike. No anxiety. No thoughts about what the AI was doing in the background, no urge to check my phone to see if an agent had finished.


Two hours. Done. Clean.


It used to be very different. And it took me a while to get here.


As I've been telling my colleagues and saying in my workshops for a few months now, I decided to dedicate 2026 not to agents, robots, or OpenClaw, but to containing agents. Now I'll explain why.



The friction that's gone


There was a time when working — not just with AI — required a ritual. Turning on the computer, opening the terminal, setting up the environment, sitting down at the desk. The friction was physical, real: you had to be in the right place, with the right tools, at a dedicated moment.


That friction was protecting you, and you didn't know it.


There's the physical friction I'm talking about here. And there's another type of friction, more subtle: the cognitive kind, and the kind between colleagues working with AI at different speeds. But that's a different chapter.


Today those limits have disappeared. And I'm not just talking about ChatGPT from the couch. I'm talking about systems like Claude Code, Cowork, and OpenClaw, which work directly in your development environment. They write code, test it, commit it. They send emails, create documents, modify files while you do something else. AI is no longer a chatbot you ask questions to: it's an autonomous colleague working in parallel with you. In fact, it works even when you're not there.


These tools are now completely pervasive, connected to mobile through various 'remote' or 'channel' systems, through which — by spinning up many agents in parallel — you can work anywhere, at any time. No more need to be at your desk, turn on the computer, open the terminal. AI is always there, ready to work, ready to produce. And in return, it asks that you keep watch over it.


I've compared it to the e-cigarette: born to help people quit smoking, it became the way to smoke everywhere. AI was supposed to make you work better. Instead it made you work everywhere, always — even when you're not actually working.



When friction disappears, so does the boundary. And when the boundary between work and life disappears, it's not that you work more: it's that you never stop. Not really. Because there's always an agent to launch, another iteration to run, another automation to trigger. AI doesn't get tired, it doesn't tell you "that's enough for today", it doesn't close up shop at six. On the contrary: it sends you a notification at eleven at night to tell you it's done, and you open the laptop "just to check".


The problem isn't the tool. It's that the tool has removed the only things that kept you from working all the time: the effort of starting and the need to carve out dedicated focus time. Friction was a guardrail, a boundary, a protective barrier. Without it, you're free to work all the time. But you're also free to never stop.


AI's output is not yours


There's something many people pretend not to notice when it comes to AI output: what it produces is not yours. It's stuff made by a third party. Like a text written by a ghostwriter, like code written by a freelancer, like a slide made by an intern.


The problem is that it doesn't look that way. It looks like yours because it came out of your computer, because you asked for it, because you wrote the prompt. But, especially if you didn't spend time on the design thinking, it isn't. It's alien text. Alien code. Alien actions.


In English they call it "slop": output generated without oversight, without real intention, without anyone taking responsibility for it. And in 2026 slop is no longer just a badly written article from ChatGPT. It's code that no one has read. It's entire applications generated in an afternoon that no one knows how to maintain. It's emails sent by an agent with your name at the bottom but words you would never have chosen. It's solutions to problems that AI itself created for you, in a self-referential loop that would be comical if it weren't your actual work.


Slop has left the chat and entered the world. It does things. Sends messages. Writes code that writes more code. And the unsettling part is that most people produce slop and don't know it. They think they've written an application, when instead they just clicked "approve all" on an agent that didn't understand what they wanted to build.


This is why removed friction is dangerous: not only do you work all the time, but you produce stuff that isn't yours while thinking it is. A double illusion: you believe you're working all the time, and you believe you've produced something. Neither is true.


The real risk is getting lost


AI's mistakes — you see them, correct them, work on them. The real risk is something else entirely: getting lost.


Here's how it goes: you sit down in front of the computer without a clear idea. You open the terminal, launch an agent, give it a vague task. The agent starts, generates files, asks you to confirm things, you approve without reading. Meanwhile you launch another one. Two hours later you have forty new files, three open branches, an email sent that you didn't reread, and zero things you actually understand.


You didn't work. You delegated into the void.


It's the same mechanism as infinite scrolling on social media, but dressed up as productivity. Because at least with scrolling you know it's wasted time. Delegation without direction gives you the illusion of having done something. In fact, it gives you the illusion of having done a lot, because the files are there, the code compiles, the email was sent.


Now combine that with the absence of friction: you can delegate into the void anywhere, at any moment. The agent works while you sleep. And before you've even decided to start, the agent has already produced. Already done. Already delivered. You just don't know what.


I only start what I know I can finish


This is the rule, and it's just one.


Before sitting down in front of AI, I already know what I want to achieve. I already know how much time I have. I already know what I consider "done".


If I have two hours, I do something that wraps up in two hours. If the project takes a week, today I do the piece I can close today. Not "start working on it". Close something.


Because the point isn't productivity. The point is peace of mind. If I don't close something, I stay hanging. If I stay hanging, I check my phone at dinner. If I check my phone at dinner, I'm working at dinner. And at that point I'm neither working nor having dinner.


So I put friction back. Deliberately. Not because the tool forces it on me, but because I decide when work starts and when it ends.


My containment rules


Over the past few months I've developed a small set of rules. These aren't rules for AI: they're rules for me. AI doesn't need discipline. I do.


1. Define the outcome, the result, before you start. Not "I'm working on project X". But "I'm writing the introduction to article Y" or "I'm finishing function Z". If you can't name the result, you're not ready. And if you're not ready, the agent won't save you: you'll get lost faster. In fact, you'll get lost at industrial speed, because an agent without direction produces output regardless. And output without direction is slop.


2. If you can't finish it, don't start it. Better to do one small thing and close it than to start three big things and leave them half-done. Half-finished things take up mental space. Finished things don't. This rule existed before AI, but with AI it's become urgent, because AI gives you the illusion that you can do everything. And the illusion that you can do everything is the best way to finish nothing.


3. AI's output is a third-party draft until you make it yours. Read it. Rewrite it. Cut it. This applies to text, but even more so to code, to emails, to any action an agent takes in your name. If you don't put your hands on it, it's not yours. And if it's not yours, don't publish it, don't send it, don't deploy it. Slop doesn't become gold just because you put your name on it. And in the agentic world, slop has legs: it walks, it acts, it sends messages to real people.


4. Close the session. When you're done, close. Not "just one last look". Not "I'll launch one more agent while I'm at it". Close. The next piece of work deserves its own session, with fresh ideas and a newly defined deliverable. This is the friction you put back on yourself: the deliberate act of closing. Especially when the tool never closes.


5. If you can't close, write down what's missing. Sometimes you can't finish. It happens. But at least write down exactly where you left off and what's needed to close. That way the next session picks up from where it should, not from scratch.


So what...


This morning I went out on my mountain bike with no anxiety. Two hours of work, everything closed, mind free. Some agent was working in the meantime, but I disabled notifications and focused on my ride.


The rules you set for yourself are not there to limit you. They're there to free you. Because real productivity isn't how many agents you have running in parallel. It's how many times you close the computer knowing you're done.


That's why 2026 for me is the year of AI containment. The tools are already incredibly powerful. The agents are already here. The question now is deciding where they end and where you begin.


Massimiliano
