From AI Agents to Agentic Systems: deep dive & the future - Part 2
- Massimiliano Turazzini
- Jan 25
- 12 min read
In this era of a Cambrian explosion of models and AI, many of them are presented as AGENTS, sometimes in a somewhat forced way. I hope you had the chance to read the first part of this post, in which I try to explain, as simply as possible, what an AI Agent is and what we mean today when we talk about Agentic AI.
Now I'll go into more detail on some concepts, technical as well as practical and organizational, to better understand how this technology will soon reshape our teams.

Let's start with a 'review' of the five levels of automation that we can imagine with agents. This is my own classification, built by trying to synthesize the tons of material available online.
It starts from level 0, that of the LLM in a standard chatbot, which cannot be defined as an agent until it starts to have tools (access to external memory, web search, other models).
Summary of levels of automation with AI
I tried to summarize everything in a table that simplifies, without (I hope) trivializing, the various levels.

What level is ChatGPT at?
The current ChatGPT and Gemini, in case you're wondering, are Level 2 AI Agents, because it is the LLM that decides to use the tools at its disposal in the context of a conversation.
But if we think about the 'other tools' category, things get interesting.
With GPTs it becomes Level 3, again according to my nomenclature, because they allow us to work with third-party tools.
Claude.ai, instead, acts as a Level 4 agent when it decides to use Artifacts, which let it generate code, execute it, and bring you the results. ChatGPT sits at the same level when it uses Code Interpreter to answer you after building a small application, as you see here.
Generative AI for reasoning
Generative AI with Language Models is essential, at this point in time, when we talk about AI agents. It is what gives them the ability to reason about the task they are given, to break it down into sub-parts, and to evaluate the best way to perform them. Here is a sample of a 'simple' thought.
We now have several dozen models that can reason about various topics 'above average', slice the elephant into pieces and start working on it one piece at a time.
Want more complex examples? You can find them here... Just to answer the first question, it took a few seconds using O1, the version of ChatGPT that dedicates time, precisely, to thinking. In the previous example the reasoning was 'instantaneous'.
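If you want to try this decomposition yourself, here is a minimal sketch, in Python, of asking a reasoning model to slice the elephant explicitly. It assumes the official OpenAI Python SDK and an API key in the environment; the model name and the prompt wording are illustrative choices, not a recommendation.

```python
# A minimal sketch: ask a reasoning model to break a task into sub-steps.
# Assumes the official OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

task = "Write a launch post for my new article about AI agents."

response = client.chat.completions.create(
    model="o1",  # a reasoning model that 'thinks' before answering
    messages=[
        {
            "role": "user",
            "content": (
                "Before answering, break this task into numbered sub-steps, "
                "then execute them one at a time:\n" + task
            ),
        }
    ],
)

print(response.choices[0].message.content)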
But OpenAI doesn't have a monopoly on this!
Deepseek enters the game
Surely few of you know it yet, but it is a brand-new model that is challenging the big American models. What makes it special? It thinks like O1 (but exposes its reasoning in detail), it is Open Source (with models you can install on your phone, notebook, or server), it is Chinese (and has a license for which I invite you, for now, not to enter sensitive information, at least not in the public version), and it acts, as a Level 2 agent, only after a phase of structured thinking, as you can see below.
Keep an eye on it!

And it highlights a key characteristic for higher-level Agents and Agentic AI: the ability to reason.
Important!
The closer the Agent gets to Level 5, the more it must be able to 'think deeply' and put together data collected in real time (results of called tools, feedback from other agents in the loop). Level 4 or 5 agents must be built on an LLM capable of tackling the task, so that they can plan correctly with all the information in hand, with the related impact on costs and response times. O1 and DeepSeek R1 seem optimal for this. We will just have to understand the cost of running this activity (it will not be low).
The Memory of Generative AI
After the first post of this series was published, you asked me to be more specific and detailed about memory. I'll try.
When talking about memory in AI agents, we can divide it into different levels, each with specific characteristics (a small code sketch follows the list):
Long-term memory
Static model memory: It is composed of the information learned during training, which remains immutable.
Additional memory based on user-provided documents: Here the model accesses external information through techniques such as RAG (a system that you can think of as a “semantic database”), integrating data from your knowledge base with pre-existing knowledge.
System Instructions: Includes additional information defined by the model provider, the developer, or strategic storage systems, such as ChatGPT's “memory” function, which can remember personal details (e.g. your name).
Short-term memory
Context Window: Contains information exchanged during the current conversation with the AI, limited to the time and length of the dialogue.
Real-time memory
External Tool Output: Includes data obtained in real time, such as content found on the web, results from linked databases or business systems such as ERP.
Information from other agents: Collects the results of activities performed by external agents, with which the AI collaborates during its operation.
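To make these layers concrete, here is a minimal sketch, in Python, of how an agent's memory could be organized. All the names are hypothetical illustrations of the classification above, not a real framework's API; note that the static model memory (the training weights) lives inside the model itself and therefore does not appear as data here.

```python
# A minimal sketch of the memory layers described above, as a plain data class.
# All field names are hypothetical illustrations, not a real framework's API.
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    # Long-term: provider/developer instructions and externally provided knowledge
    system_instructions: str = ""                            # incl. stored user preferences
    rag_documents: list[str] = field(default_factory=list)   # retrieved via a "semantic database"

    # Short-term: the current conversation, bounded by the context window
    context_window: list[dict] = field(default_factory=list)

    # Real-time: results that arrive while the agent is working
    tool_outputs: list[str] = field(default_factory=list)    # web search, databases, ERP, ...
    agent_messages: list[str] = field(default_factory=list)  # results from collaborating agents

    def build_prompt(self, question: str) -> str:
        """Assemble every memory layer into the prompt the LLM actually sees."""
        parts = [self.system_instructions]
        parts += self.rag_documents
        parts += [m["content"] for m in self.context_window]
        parts += self.tool_outputs + self.agent_messages
        parts.append(question)
        return "\n\n".join(p for p in parts if p)
```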

Autonomous interaction
During reasoning, therefore, an AI Agent has access to different types of memory and can decide to use one or more tools, or create new ones. But it can also ask other agents in its context to perform actions in which they are specialized (a Level 5 Agent).
Agents are then able, in the context provided, to evaluate the resources at their disposal, reason, and choose which steps to take to solve the problem, including accessing other systems.
They do this based on the level of delegation, or agency, that they are provided with. Below is a minimal sketch of what that decide-and-act loop can look like.
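In this sketch, the tools and the llm_choose_action helper are entirely hypothetical placeholders standing in for real model calls and integrations; only the shape of the loop matters.

```python
# A minimal sketch of the decide-and-act loop: at each step the LLM picks a
# tool (or a specialist agent) or declares the task finished.
# TOOLS and llm_choose_action are hypothetical placeholders, not a real API.

TOOLS = {
    "web_search": lambda query: f"search results for {query!r}",
    "ask_specialist_agent": lambda request: f"specialist answer to {request!r}",
}

def llm_choose_action(goal: str, observations: list[str]) -> dict:
    # Stub standing in for a real LLM call: a real agent would show the model
    # the goal plus all observations so far and let it choose the next action.
    if not observations:
        return {"type": "tool", "tool": "web_search", "input": goal}
    return {"type": "final_answer", "content": f"Done after {len(observations)} step(s)."}

def run_agent(goal: str, max_steps: int = 10) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        action = llm_choose_action(goal, observations)
        if action["type"] == "final_answer":
            return action["content"]
        tool = TOOLS[action["tool"]]                 # look up the chosen tool...
        observations.append(tool(action["input"]))   # ...and record its output
    return "Stopped: step budget exhausted."

print(run_agent("Find the launch date of DeepSeek R1"))
```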
What is agency?
"To have agency" means to have the ability to act independently, make autonomous decisions, and actively influence one's 'life' and the surrounding environment. We can think of it as "having decisional autonomy" or "having the capacity to act".
As one LLM told me:
"The concept of "agency" implies:
The ability to make informed choices
The ability to act according to one's own will
The power to influence events and circumstances
Self-determination in one's actions
For example, when we say, “She has agency in her career choices,” we mean that she has the control and autonomy to make decisions about her career.
In the sociological and philosophical fields, the term takes on an even deeper meaning, referring to the individual's ability to act as an independent social actor within society."
The point is that the agent must be given a clear mandate, one that takes the context into account and sets clear limits.
In the absence of such limits on its agency, the more 'intelligent' an agent is, the more it will try to solve what is requested without bounds, without alignment with us.
And here we return to the example of Bostrom's paperclip maximizer (here is the example from the previous article), in which an agent with a generic goal, 'maximize the production of paperclips,' could end up using all the atoms in the universe to do so, without caring about the consequences.
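One practical antidote is to make the mandate explicit in code. Below is a minimal sketch, with entirely hypothetical field names, of a mandate that bounds an agent's agency with an allowlist of tools, a spending cap, a step budget, and actions reserved for human approval.

```python
# A minimal sketch of a 'mandate': an explicit goal plus hard limits that
# bound the agent's agency. All fields and values are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Mandate:
    goal: str
    allowed_tools: frozenset[str]              # the agent may only call these
    max_cost_usd: float                        # stop before spending more than this
    max_steps: int                             # stop runaway loops
    requires_human_approval: tuple[str, ...]   # actions a human must confirm

paperclip_safe = Mandate(
    goal="Maximize paperclip production at the Milan plant this quarter",
    allowed_tools=frozenset({"erp_query", "production_scheduler"}),
    max_cost_usd=50.0,
    max_steps=100,
    requires_human_approval=("purchase_order", "publish_external"),
)

def check(mandate: Mandate, tool: str, cost_so_far: float, step: int) -> bool:
    """Refuse any action outside the mandate, whatever the model 'wants'."""
    return (
        tool in mandate.allowed_tools
        and cost_so_far <= mandate.max_cost_usd
        and step < mandate.max_steps
    )
```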
What do we do with it?
Anyone who uses ChatGPT a lot without automation or well-defined workflows probably finds themselves doing copy/paste, window switching, and mouse clicking here and there: all 'switching time', wasted time.
For example, to write a launch post for this article, I would:
copy and paste this text into ChatGPT, ask it to analyze its content, and extract relevant information
then I should read what it says, maybe start modifying something, and do 2-3 rounds of optimization (then I usually get tired and start rewriting it from scratch, without AI, but that doesn't count)
When I think it's ready, I have an assistant read it, and they give me feedback from a panel of 5 different reader archetypes of mine. Copy and paste, conversation reading, evaluation.
Then, I should open LinkedIn, start the post, arrange everything according to LinkedIn's specifications, take a final look, and click publish.
This is a simple example where the workflow seems pre-set. But if, say, I had the five reader archetypes review the post first and folded their feedback into the very first pass, I would produce better content. Designing this into a workflow is not easy at all; a minimal sketch of what it could look like follows.
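Here is that sketch: the launch-post workflow as an explicit pipeline, so the copy/paste 'switching time' disappears. Every helper below is a hypothetical stub standing in for an LLM call or a platform integration; only the overall shape matters.

```python
# A minimal sketch of the launch-post workflow as an explicit pipeline.
# Every helper is a hypothetical stub for an LLM call or platform integration.

READER_ARCHETYPES = ["manager", "developer", "skeptic", "newcomer", "investor"]

def analyze_article(text: str) -> str:
    return text[:200]  # stub: a real version would ask an LLM for key points

def draft_post(summary: str) -> str:
    return f"Draft based on: {summary}"  # stub for an LLM drafting call

def review_as(archetype: str, draft: str) -> str:
    return f"{archetype} feedback"  # stub: simulated reader-archetype review

def revise(draft: str, feedback: str) -> str:
    return f"{draft} [revised after {feedback}]"  # stub for an LLM revision call

def format_for_linkedin(draft: str) -> str:
    return draft[:3000]  # trim to LinkedIn's ~3,000-character post limit

def write_launch_post(article_text: str) -> str:
    summary = analyze_article(article_text)
    draft = draft_post(summary)
    # Reader feedback folded in from the very first pass, as argued above.
    for archetype in READER_ARCHETYPES:
        draft = revise(draft, review_as(archetype, draft))
    return format_for_linkedin(draft)

print(write_launch_post("This article explains AI agents..."))
```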
In the video, also shown in the last episode, you instead see how I can do it with an agent that I created in 10 minutes: its job is to help me write better launch posts on LinkedIn (no, I'm not really using it, at least for now). It can search for information on the internet, it could publish directly on LinkedIn, and it decides, autonomously, how many and which steps to take for each article, to make sure it's good for my readers and to run the revisions that check it's technically correct, and so on.
Are they really that powerful?
We are lucky; today, even the most powerful agents have many limitations: they make mistakes and are not up to the level of a human being.
Partly because of the still-limited ability of AI models to orchestrate complex autonomous flows, partly because we humans do not yet have the skills to structure them, as the story in the next paragraph shows.
As of today, the situation, as reported by Microsoft (link at the bottom of the article), is this... Human-level accuracy still stands out by far. (But I would be curious to update this graph with DeepSeek R1 or OpenAI O3.)

I think it's fortunate that the results are still low because, as I write at the beginning of "Hiring Artificial Intelligence in Business", while AI runs very fast, we humans have 'standard' times of habituation. We need time to prepare for new things.
But we are headed towards a world where we will definitely see much more productivity from the use of AI, simply because of the 'brute force' we can apply to many work situations where the human limits are not cognitive but a matter of available time and resources.
Letting an agent run for a few hours, setting aside the economic and environmental cost, is not difficult; it is just expensive, and sometimes it does not give the results we would like. But that does not mean it should not be done. We will have to evolve our coordination skills, learn to use these very powerful tools, and be ready for the moment when they reach and exceed our ability to act.
Multi-purpose generalist agent platforms that 'do things' are popping up like mushrooms: spend some time with them. Understanding requires doing. The only way to understand today is to practice!
And who coordinates them?
These days, statements are coming out like Jensen Huang's claim that IT is the new HR for agents. But why?
Because coordinating these resources is not trivial. As a foundation, clear roles are necessary, as I explained in the previous post that actually started this series on Agents.
When faced with a task in the company (think of the simple LinkedIn launch post described above), depending on the resources available (internal skills, time, budget), a manager will set a final objective, "publish a post," leaving the collaborators the task of discovering how best to do it. Without establishing large structured processes, the manager knows that he will be able to exploit the skills of his hybrid team and obtain great results.
To do this, he will have to be clear, specific, and perhaps even kind to ensure that his instructions are understandable. Then, he will have to trust his team during the execution and, as per good practice, be able to periodically check the progress to make sure that everything is going as well as possible.
But what if teams start to be composed of humans interacting with AI Agents? Who will be able to control the AI agents? Do we really want to 'let IT do it' by giving them full agency? Do we really think that the 'average manager' will not need to coordinate resources of this type within a matter of months?
If you had a team of four people to manage your lead flow and 12 highly specialized agents to control, how would your work change? How should your managerial skills evolve? What relationships would be established between human beings and agents?

Who would be able to check that the agents do not 'waste time' chatting among themselves (which costs a lot, a lot), or that they do not start developing code that disregards the limits you have given them? And what will happen when people start to be jealous of the agents' capabilities? How will humans relate to AI agents, and what new organizational dynamics will we have to face?
It takes time to produce these answers. We have less and less of it.
I apologize…
I didn't want to add to the hype that already surrounds AI. Really! My goal remains to "Enjoy AI responsibly", and I believe it is responsible to bring my readers, not always technical, not always up to date on what happens behind the scenes, into the discussions that insiders are having every day.
I truly believe that this is a real issue, one that every responsible manager should somehow address as soon as possible. Maybe by starting to get their hands dirty firsthand; after all, it is enough to talk to the agents and try to stay in control. Probably, we will witness increasingly refined and mediated forms of "human-machine collaboration", where automated agents specialize in analysis, reactions and optimizations, while the final judgment remains in the hands of humans (or at least of a set of values and procedures established by humans).
However, the increasing autonomy of agents will increasingly raise the bar of what they can decide and do on their own, leaving ample room for future tensions and debates. And knowing how to plan and control them will be a necessary skill.
In conclusion, agentic applications represent the most disruptive side of AI, with both beneficial and dangerous potential. The critical issues — security, transparency, accountability, ethics — will not resolve themselves: they require a synergy between everyday activities, technological research, regulation and a culture of awareness. If we want to avoid serious social and political problems, we must design and manage these agents with obsessive attention to the values and consequences they entail.
You can find my small contribution on how to deal with this in the blog articles dedicated to "Hire an AI in your organization" and in the book in which I tried to structure a framework, AI-PLUG, of practical actions to take.
So what?
We are facing a major point of discontinuity in organizations.
We have conversational bots, which allow us to communicate with an LLM (Level 0)
AI workflows, which allow us to put LLMs into the process loop with or without us humans (Level 1)
AI agents that allow us to talk to traditional software tools or other types of AI (Level 2)
Agentic AI, when we delegate workflows and actions to an LLM able to decide which tools to use, generate and execute code, and 'confabulate' to reach a result by deciding the process on its own (Levels 3, 4, 5)
The various levels of agents we have seen bring us to a state of the art in which we can communicate in natural language with 'entities' that will take care of communicating with other software on our behalf, of creating ad-hoc software in real time and executing it, and of deciding how to organize themselves among themselves to give us the result we want.
An example? An agent given all the documentation of an insurance claim might have to understand the text, the attached images, and all the PDFs; search for information about the customer in the insurance database; compare the case with the terms and conditions of the policy; ask the customer for clarifications and wait for a response; and consult with internal product specialists. It can do this for days, without losing focus on its goal and its context.
When Generative AI models are sufficiently capable, we will get closer to the concept of AGI (Artificial General Intelligence) than we have ever been. When we evolve Generative AI, which in my opinion is not, at the current state of the art, able to be 'on par' with us, then we will reach and surpass this level.
We are facing a moment in which a manager cannot fail to be aware of this new type of resource, neither human nor purely software. There is a role issue, as I mentioned in the last post, that requires new levels of understanding, skills, tools and, above all, trust (among us and towards these agents) on our part too. And it is not the future. Right now, thousands of programmers around the world are working on platforms capable of creating agents. Thousands of people are using them. This is the real transformation that AI can bring to the workplace. And the acceleration is constant.
Probably, within 5 years, agents will be the main users of your business systems; they will converse with each other and with those of other actors in your supply chain; they will represent you, your company, your values.
You will probably be building websites optimized for AI agents to read: in the same way that you now take care of the User Experience, you will take care of the Agent Experience, to make it easier to provide information to the agents of the players you interact with. You and your people will spend more time interacting with an agent than you dedicate today to the various business applications.
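What could that Agent Experience look like in practice? Here is a minimal sketch, in Python with Flask, of exposing the same information you publish for humans as clean structured data for agents; the route and the schema are hypothetical illustrations, not an established standard.

```python
# A minimal sketch of 'Agent Experience': alongside the human-facing pages,
# expose the same information in a structured, machine-readable form.
# The route and schema below are hypothetical illustrations.
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/agent/company-info")
def company_info():
    # Agents get clean structured data instead of scraping HTML meant for humans.
    return jsonify({
        "company": "Example S.r.l.",
        "services": ["AI consulting", "training"],
        "contact": {"email": "info@example.com"},
        "policies": {"data_retention_days": 90},
    })

if __name__ == "__main__":
    app.run(port=8000)
```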
And I'm not the only one saying it:
“Transformation is not without its challenges, jobs will evolve, roles will shift, and companies will need to adapt. We will all need to rebalance our workforces as agents take on a larger portion of the workforce, and then we can rebalance and reshape our companies in new ways.”
Marc Benioff, Salesforce CEO
“...And because these systems are designed to learn, as they create new workflows, they can also create pathways for future agent requests. That way, every time an agent thinks about a problem, it ultimately has an effect on the business of addressing problems more broadly.”
Julie Sweet, Accenture CEO (Accenture Tech Vision 2025)
Otherwise... we'll be left watching other people's paperclip maximizers. (And in the meantime, to write these posts, OpenAI has blocked my account for an excessive number of interactions and tokens used by my agents!)
If you want me to return to the topic of agents, let me know what you would like to explore further.
In the meantime, I'll leave you a couple of additional links:
(Did I mention that you need to invest time? 😀)
Massimiliano