The Full Stack Human is in the meeting room. Three Italian stories
For years I have been observing and describing how AI is changing the way we work. I am often "forced" to argue the point to make myself understood by those who have not yet been through it - people who look at me surprised, perplexed, worried.

But lately I have also been flooded with enthusiastic accounts from people living this change in ways we are still discovering.
In particular, over the past few days, several pieces of feedback came in very fast - from three different meeting rooms, in three different companies, over three consecutive days.
And I could no longer keep putting off this article that I have been carrying around for months.
Three scenes, one week
First scene. A manufacturing company. A developer on the team, someone who had always considered himself "vertical", specialized in a single language, had to deliver an executable to a client to explain how an algorithm worked. He thought of building a web application but did not know where to start. So he did this: first, a web prototype in thirty minutes with Claude. Then a full rewrite so it could be distributed to the client more easily. The executable came out practically perfect. Total: two hours. Manual estimate: one week of work. "Incredible how he did it," said a colleague during the debrief meeting. Struck not just by the how, but above all by the time, and by how different the result was from what he himself would have done.
Second scene. A marketing manager did four different things in one week with AI. She analyzed competitor ads together with it. She structured a database to monitor them over time. She built a promotional calendar starting from an internal template that the AI returned in the right format. And she started prototyping an internal newsletter filtered by topic. Four jobs that two years ago would have required four different people, or weeks of handoffs between them. And no, she had never programmed in her life.
Third scene. Here I pause longer, because this is the most instructive. A large company. An engineer keeps a diary of the work we are doing with the AI Team that is taking shape there. I have his permission to quote passages.
Week 1: he discovers outcome-oriented prompting - defining the expected result first, then building the prompt accordingly. He structures his prompts in markdown. He creates dedicated Claude Projects for each operational area. He starts building his agentic environment. Good.
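To make "outcome-oriented prompting" concrete: a minimal sketch of what such a markdown-structured prompt might look like. The task, headings, and constraints here are invented for illustration, not taken from his diary:

```markdown
## Expected result
A one-page table of this week's regulatory changes, with columns:
regulation, effective date, impact on our product line.

## Context
You are assisting a product safety engineer in a manufacturing company.

## Input
<paste the raw news items here>

## Constraints
- Cite the source for each row
- Flag anything with an effective date within 90 days
```

The point is the order: the expected result comes first, and everything else in the prompt exists to serve it.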
Week 2: he discovers metaprompting (having AI generate optimized prompts for specific tasks). He has AI analyze his writing style to maintain consistency in content. And, while he is at it, he dusts off his data analyst skills and starts training a convolutional neural network on laboratory analysis images for a use case in his department.
Week 3: he introduces the operational distinction between inference and execution (when you need AI's probabilistic output, when you need deterministic code). He prototypes a dashboard with a no-code tool to read a CSV generated from a weekly web scraping of sector regulatory news. He starts a collaboration with the AI colleague from another department, because the topic is shared and there is no point in duplicating work. And he writes one line worth more than a hundred papers:
"The bottleneck is no longer the prompt, but the data."
Three weeks. Starting from basic AI use.
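The inference/execution distinction from week 3 is easy to sketch in code. A minimal, hypothetical example, assuming a CSV of scraped news with invented column names: filtering rows is execution (deterministic code, no AI needed), while summarizing them is inference (probabilistic, so the model call is only stubbed here):

```python
# Hypothetical sketch of the inference/execution split; column names,
# keywords, and sample data are invented for illustration.
import csv
import io

def filter_regulatory_news(csv_text: str, keyword: str) -> list[dict]:
    """Execution: deterministic code. Same input, same output, every time.
    Filtering rows from a weekly scraping CSV needs no AI."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if keyword.lower() in row["title"].lower()]

def summarize_with_ai(rows: list[dict]) -> str:
    """Inference: where a probabilistic model belongs. Summarizing
    heterogeneous news items has no single correct output, so an LLM
    call would go here. Stubbed, since no specific API is assumed."""
    return f"[LLM summary of {len(rows)} items would go here]"

sample = (
    "title,date\n"
    "New EU safety directive,2025-01-10\n"
    "Market report Q4,2025-01-11\n"
)
hits = filter_regulatory_news(sample, "safety")
print(len(hits))  # deterministic: always 1 for this input
```

The engineer's line about data follows directly from this split: once the deterministic half is in place, the quality of what flows through it matters more than how the prompt is phrased.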
A pattern, beyond the anecdote
Two years ago I described six levels of AI Fluency: the curious, the active, the experimenter, the reflective, the architect, the fluent. It was a map to tell people: do not worry about using AI today, worry about understanding which level you are at for your domain of expertise, and what you need to move to the next one.
Are you starting to see how people genuinely move through those levels?
Are you starting to imagine how organizations will change if even twenty percent of their people begin evolving this fast?
I discussed this in my book "Assumere un'intelligenza artificiale in azienda", when these things were only beginning to take shape.
With the evolution of agentic environments, culture, and data availability - with increasing AI Fluency, in short - people are ceasing to be what they were. And they are becoming something new: Full Stack Humans. Do not worry, no destructive cyborgs here. Just an inspiration that I hope will help us understand things a little better.
The Full Stack Humans
The term comes from Mohanbir Sawhney, a professor at the Kellogg School of Management at Northwestern University. He borrowed it from software jargon: the full-stack developer is the one who can work on both the front-end and the back-end of an application.
The Full Stack Human is its non-technical equivalent in the AI era: a professional who can work across the adjacent skills AI allows them to acquire, without losing their vertical competence.
Sawhney's definition is easily understood through two words: anchor + radius.
The anchor is the deep competence - the real craft built over years of experience.
The radius is the adjacent skills that AI allows you to expand credibly. It does not make you an expert in everything, but it lets you do everything that revolves around your main competence.
An HR manager summed it up very clearly for me:
"AI does not make you an expert in procurement, sales, or marketing. It helps you consolidate a position you already have."
Exactly. The person from the third scene is a Full Stack Human precisely because their anchor is product safety, and everything else (deep learning, dashboards, web scraping, cross-department coordination) grows around it without replacing the core. Their domain judgment decides what is worth keeping and what is not. Without them at the helm, that CNN would be a toy.
The other paths - wrong ones?
Be careful - there is a misunderstanding risk worth clarifying. The Full Stack Human is not the generalist of thirty years ago, the one who knew a little about everything and nothing well. That profile, in the nineties, was seen as "the one who had not chosen." Today it is different: you start from vertical excellence and from there you expand the radius.
It is not the modern fluff guru who has AI do everything. That is nothing more than a showy bush that blooms in every direction but falls at the first gust of wind...

Depth remains fundamental. Judgment, and taste, beat prompts. Because AI gets you to 70% of the solution, but the final 30% - the decision, the angle, the taste, the rejection of an output that does not convince - remains human. And without an anchor you cannot even evaluate the 70%. You just make noise (read: AI slop).
People tend to take other paths that I often encounter, which I would place between these two extremes:
Rejection. The executor who needs clear inputs and deterministic outputs. At the first probabilistic AI output they get irritated, throw away the tool, say "it does not work." In reality it does work - it just works differently from what they are used to. They do not have the time or inclination to explain their work to an AI. Or, understandably, they are afraid of not knowing how to use it.
Faith. The one who waits for the oracle. They think AI will give them the truth. They delegate judgment. If AI says it, it must be right. If AI does it, it is wonderful. They sigh with wonder at every result their agents produce.
The Full Stack Human goes in a third direction: neither rejection nor faith. Constant judgment. They use AI as a starting point, not a substitute. And they know when to stop.
Why it is happening now
Before, friction dominated movement between competencies: jumping from one role to an adjacent one required months of training, translation, learning. Skills behaved like solids, hard to reshape. AI has drastically reduced this friction. Skills now behave like liquids: they flow between roles that were once separate.
And with them, professions - destined inexorably to modify their form, structure, and scope. As I started describing here.
In the three scenes from this week, what I saw is exactly this: professions changing. A developer who steps into a new stack in two hours and delivers. A marketer who touches databases, editorial calendars, and newsletter filters without stepping outside her role. A domain engineer who trains a neural network because the use case calls for it, not because someone decided it in an org chart.
This works under one condition only: the anchor must be solid. For those who already have a craft, AI's 70% is 70% of a solution. For those who do not, AI's 70% is 100% of an illusion.
And what about organizations?
This is what is happening inside people. Inside them, in the way they think about their work, in the way they reformulate a task.
The problem is that people work inside organizations.
And organizations, in my experience, are not moving at the same speed. Shadow AI exploding because official channels cannot keep up.
Policies arriving six months after the work has already changed. IT waiting for direction from the business, business waiting for direction from IT. Corporate tools locked down so thoroughly for security that they have become unusable, and the skilled professional keeps their personal account and works from home.
So what?
Three scenes, one name, one condition.
The Full Stack Human is not a theoretical American profile. It is what I see emerging when a person with a real craft genuinely gets to work with AI. A developer who in two hours delivers in a craft they had never tackled, with awareness and responsibility. A marketer who does the work of four people. An engineer who in three weeks expands their range without losing their core. It is happening now, in Italian companies, at a speed I would not have predicted two years ago.
If you have a craft you identify with, you are no longer choosing between going deeper and going broader. You can have both. AI gets you to 70%, you decide the 30% that matters. Under one condition: that judgment remains yours, always. The moment you passively accept what AI proposes, the anchor is dissolving. And without an anchor, the radius is noise.
Let us start getting used to it. To roles that break off like icebergs and become liquid. To people who deliver work they never would have imagined producing in their lives.
To asking ourselves how the balance will shift when the legal department produces a software tool with AI inside and hands it to the sales team to use in negotiations. Or when post-sales looks at a machine's firmware, finds the bug, and proposes the fix to the programmers who wrote it.
People are moving. Their organizations, largely, are not. And the distance between these two things is exactly the problem that deserves its own dedicated piece. That is the topic for next time.
Massimiliano
"The illiterates of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn and relearn." Alvin Toffler
AI times - I thought a recap of questions at the bottom might be useful. 100% AI Made and 100% Human Edited. Let me know if they make sense.
Related questions
What is the Full Stack Human? A professional with deep competence in a specific domain (the anchor), capable of expanding their range of action to adjacent skills through AI (the radius). The term comes from Mohanbir Sawhney (Kellogg School of Management). They always have a strong vertical craft that gives them the judgment to evaluate what AI is returning to them.
Is the Full Stack Human the same as a T-shaped professional? No. The T-shaped profile had vertical depth plus shallow horizontal knowledge: they knew how to talk to other functions. The Full Stack Human does those functions, at least in first draft. The horizontal bar of the T extends thanks to AI providing auxiliary skills on demand. It is a T-shaped profile 2.0.
How do you become a Full Stack Human? You need an AI Fluency journey - in my framework it crosses six levels (curious, active, experimenter, reflective, architect, fluent), fifteen minutes a day of real practice on your own work, and an environment (corporate or personal) that allows experimentation without being punished for the first mistake. In realistic terms: three years of maturity in three months, if the person is on board.
Does the Full Stack Human replace specialists? Sawhney says it explicitly: "specialization won't die, but integration will become a key advantage." Specialists remain essential for deep domain decisions. What changes is that the Full Stack Human arrives at the table with a credible first draft, and the specialist validates, refines, and decides. The bottleneck shifts from "starting the work" to "deciding well."
Can every professional become a Full Stack Human? Yes, under two conditions. First: already having a solid anchor - a real craft in which they are credible. Without it, the radius becomes noise and you end up in the group of those who "use AI to do everything badly." Second: accepting that their way of working will change. Those who want AI as a "shortcut without changing anything" become the secret cyborg Ethan Mollick talks about.


