
AI Agent Shepherds and 9 other new human jobs of the future

Who will manage AI agents when they make decisions for us?


I recently read another article about future jobs, published in The New York Times by Robert Capps, a former Wired editor.

What's interesting about his article is how he reasons along three competency axes that will 'remain human' - Trust, Integration, and Taste - outlining new roles for each.


I won't comment on the specific roles Capps imagined - I'll leave that assessment to you - but I believe he wasn't thinking about agentic systems and missed a crucial axis: Delegation, or AI Agency, which belongs to the emerging world of AI-based agentic systems.


A modern agent shepherd, as seen by GPT-4o

AI Agency represents the scope of freedom and decision-making power we choose to delegate to agents, and it is from its impact that various new jobs will emerge. Agency differs from pure trust and is, perhaps more than any other factor, a 100% human issue, together with purpose. For example, fully delegating to an AI agent the task of negotiating product purchases with suppliers, without human approval, is a practical case where Agency is needed (and a project I'll be working on soon).

I'll therefore try to combine relational, technical, managerial, legal, ethical, and support roles that will help organizations with these new activities, because it's unthinkable, dangerous, and irresponsible to leave the management of Agentic AI to technical roles alone.

So, for the benefit of humanity 😊 and of the recruiters already organizing for this new world, here's my semi-serious list of ten new agentic professions that could soon become extremely important.


I wanted to approach this with some humor, both because these are speculations that will surely prove wrong in many ways, and because I believe it's important to look change in the eye and face it not with fear, but with wisdom and a touch of irony. I hope it helps everyone reflect a bit.


1) Agent Cultural Mediator

The first role that comes to mind is the Agent Cultural Mediator, or if you prefer, Agent Relationship Manager.

I take for granted, as I discussed in my book, that the presence of AI Agents will generate envy, fears, and tensions among humans who will need to learn to relate to these new entities.

This role will be responsible for facilitating smooth communication between agents and humans, seeking to mediate conflicts and to promote and streamline the new kind of relationship emerging in our organizations: that between humans and agents.

A2A (agent-to-agent) will be important too, though more technical, since it represents a new, emerging cooperative pattern in technology (and also a protocol). A2H and H2A are new situations directly impacting us that we should address soon.


Situations will arise where humans feel mistreated, offended, or mocked by agents. They'll become angry when they don't understand what's being asked of them. And when agents need humans - whether to request help during a project or to dispense orders (task-execution requests) they cannot carry out themselves - there will be additional resistance. Humans won't be collaborative; they'll retreat into their shells, looking for excuses not to support agent requests that might make them work more than before. (Perhaps a dedicated information system, analogous to CRM, will also be needed.)


2) Agent Breeder

The Agent Breeder, or Agentic Engineer, is a hybrid: a software engineer (3.0 + 1.0), a behavioral psychologist expert in human-agent relationships, a software architect extremely competent in prompt engineering, an expert in models, and also a business developer. No big deal 😊.

They'll have a formidable eye for breeding and growing new agents while ensuring the quality of their performance.

If you need a practical idea, check out the profile of Reuven Cohen, certainly a pioneer of agentic engineering - perhaps the most representative in the world today - who is creating agentic systems made up of flocks of hundreds of subjects.

Essentially, this is the one role on the list that already exists - probably the evolution of today's programmers.

After completing the nurturing phase, they'll pass the baton to their shepherd colleague, but will always be in tension with the Agent Dealer, who will continue proposing new subjects from outside the flock.


3) Agent Dealer

The Agent Dealer, or Agent Sourcing Specialist, will have the task of continuously evaluating the best agents on the market. Agent growth will be uncontrolled and difficult to analyze; it will be very important to have someone capable of 'procuring good ones'. Imagine someone who knows how to select the right software for the right activity.

I believe we're heading in a direction where monolithic systems (ERP, CRM) will continue to exist, but will be surrounded by constellations of agents, likely third-party, that will manage and orchestrate them. And we'll need someone who monitors 'certain environments', dialogues with producers to understand their reliability, can estimate their actual capabilities, and above all knows how to find 'trusted' ones that can work on the company's most sensitive data.


Someone who spends their existence in the underworld of the Internet (from Reddit to X, through GitHub) searching for new agents and gathering fresh information about them. Someone who advises colleagues on the best agent in circulation for each role, guaranteeing results. And indeed we're already starting to talk about Results (or Outcome) as a Service.


4) Agent Shepherd

The Agent Shepherd - or, if you want a more modern name to show off at dinners where you need to impress the guests, Chief Agentic Officer - will perhaps be the most important role. They'll be responsible for building and managing teams of agents that work well together, choosing them from those bred in-house and those coming from outside. They'll care for the agents' wellbeing and establish how they should and can group, meet, and... hybridize. They'll therefore be a bit of a marriage counselor, and will need to dedicate time to keeping the agents from fighting each other.

They'll also need to handle the birth and death of agents created by other agents, to ensure traceability, security, adherence to policies, compliance with the breeder's visions.

They'll know agent psychology, model 'grounding' capabilities (implementing assigned tasks) and will care for their efficiency and effectiveness by defining the overall flock strategy.

They'll be responsible for the agents' well-being and functioning, intercepting in their conversations every signal of imbalance among them and with humans. They'll handle the economic sustainability of the agents (ROI), continually selecting new models to insert into the flock, eliminating the slowest and least effective ones, and promoting some healthy competition among them - fundamental for the companies that will employ them.


This will be the central organizational role, coordinating all human functions while bearing ultimate accountability for the entire agent ecosystem: the Agentic System. As the definitive business owner of agent operations, they'll drive profitability and spearhead corporate agentic strategy. Their operational mandate is comprehensive: budget management, risk oversight, ROI measurement, and the transformation of AI teams into strategic business assets.


And it will be an inhuman job because at certain periods the flock will assume impressive and unmanageable dimensions for a simple human. They'll need one or more deputies.


5) Agent Guardian

The Guardian or Agent Operations Manager will be the equivalent of the 'people manager' for Agentic AIs and the de-facto deputy of the Agent Shepherd. They know every agent by name, monitor their behaviors over the long term, and manage their 'professional development'.

Given the growing dimensions of the flock, they'll need help from various 'Shepherd Dog Agents' - AI agents tasked with keeping the flock united, monitoring it 24/7, being ready to immediately signal and bring 'lost sheep' back into the policy pen.
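Reduced to its simplest possible form, a 'Shepherd Dog Agent' is a loop that checks each flock member's recent actions against policy and flags the strays. Here's a minimal sketch; all the names (`AgentAction`, `POLICY_RULES`, the tool and target strings) are hypothetical illustrations, not part of any real framework.

```python
from dataclasses import dataclass

# Hypothetical sketch: the action shape and policy rules below are
# invented for illustration, not taken from a real agent framework.

@dataclass
class AgentAction:
    agent_id: str
    tool: str
    target: str

# Example policy pen: which tools an agent may use, which targets are off-limits.
POLICY_RULES = {
    "allowed_tools": {"search", "summarize", "draft_email"},
    "forbidden_targets": {"payroll_db", "hr_records"},
}

def find_strays(actions: list[AgentAction], rules: dict) -> list[str]:
    """Return ids of agents whose recent actions stray outside the policy pen."""
    strays = set()
    for act in actions:
        if act.tool not in rules["allowed_tools"]:
            strays.add(act.agent_id)
        if act.target in rules["forbidden_targets"]:
            strays.add(act.agent_id)
    return sorted(strays)

recent = [
    AgentAction("agent-7", "search", "product_catalog"),
    AgentAction("agent-9", "delete_rows", "payroll_db"),
]
print(find_strays(recent, POLICY_RULES))  # ['agent-9']
```

In practice the 'dog' would run continuously over live action streams and escalate to the Guardian rather than print, but the core job - compare behavior to policy, signal the lost sheep - is this simple.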

They're responsible for performance management, evaluating policy alignment, identifying new and promising behavioral patterns, and intervening to address problematic ones.

Essentially, they're the HR Manager for agents: following them from onboarding to development to end of 'career', ensuring alignment with company objectives, budgets, and values. Except that their employees will be algorithms.


6) Village Sage

The Village Sage, or Agentic Relations Director, orchestrates collaboration between human roles while simultaneously supervising complex interactions with agents.

Their specialty is resolving deadlocks and conflicts at all levels: when human roles clash with each other, and when agents create tensions by assigning humans unwanted tasks. Imagine what will happen when agents repeatedly assign humans tasks they don't enjoy doing. It's already difficult between humans today; imagine the fights that will break out when an agent is involved! The Sage intervenes in these complex situations, where the Cultural Mediator is no longer sufficient.

The Sage never decides WHAT to do with agents (that's the Shepherd's domain), but coordinates HOW to make the entire ecosystem function when tensions rise. They're the supreme facilitator of complex relationships within the agentic system.


7) Public Defender

The Public Defender for agents, or Agent Ethics Advocate, will be the defender of last resort in a world where agents make millions of daily decisions. When an agent is accused of algorithmic discrimination, privacy or policy violations, or ethically questionable decisions, it cannot defend itself.

We'll therefore need someone capable of analyzing decision-making processes by reading logs, gathering exculpatory evidence, and negotiating alternatives to the definitive deletion of the accused. They'll be in constant dispute with the Scapegoat Manager, who will instead seek culprits to sacrifice to appease angry customers and regulators.


8) Agentic Scapegoat Manager

The Agentic Scapegoat Manager or Agentic Risk & Accountability Manager will handle an uncomfortable truth: when something goes wrong, an identifiable responsible party is needed.

Those who have read Daniel Pennac's Malaussène saga, whose protagonist works professionally as a scapegoat in a department store, know what I'm talking about. In my software maintenance contracts I jokingly told clients that the right to blame our software for errors they themselves had committed was guaranteed.

With agents managing critical processes, errors will be inevitable. System complexity, however, will make it difficult to assign precise responsibility before blame lands on humans. We'll need someone who trains and manages AI Agents that serve as scapegoats: punishing them publicly, modifying them, resetting their memory, and deleting them if necessary - all to avoid blaming humans who, given the complexity of the agents, weren't able to operate correctly.

When things go really wrong and blame becomes unmanageable, they'll provide The Fixer with ready-made scapegoat agents to take the fall, allowing the legal team to point to specific "culprits" while protecting both the organization and human staff from liability.


9) Social Worker

We'll therefore need a Social Worker for agents or Agent Welfare Officer since many agents will take undeserved blame.

When pre-made scapegoats can't be found, it will be necessary to blame the agent that actually made the mistake. The Fixer will do their part from a legal standpoint, but we'll need someone to handle the ethical-existential distress situations that this agent will 'experience' after the beating it takes. Just like a human who suffers an injustice or severe punishment, the agent could remain "traumatized" by the experience: developing erratic behaviors, losing confidence in decision-making processes, or becoming excessively cautious to the point of operational paralysis. In extreme cases, it might manifest self-destructive or conflictual behavioral patterns toward other agents and humans.

The Social Worker intervenes to re-educate "traumatized" agents to corporate life, working closely with the Breeder to reprogram problematic behaviors and with the Guardian to monitor their reintegration into the flock. They'll also need to mediate with the Memory Keeper to decide what to delete or preserve from the traumatic experience in the agentic archive.

Their task is to maintain the psychological balance (if we can call it that) of punished agents, preventing them from developing dysfunctional behavioral patterns that could compromise future performance. Does this seem like science fiction? Anthropic is already talking about AI model welfare.


10) Memory Keeper

The Memory Keeper or Chief Context Officer will have the task of exploring, cleaning, pruning, and caring for that gigantic 'muddy blob' that is agentic memory: a memory made of structured data, prompts, conversation and error logs, feedback, examples, previous experiences, and rules. The theme of memory in agentic systems is fundamental, second in importance only to the role of the Agent Shepherd; I've written extensively about it here and here. This role will be so important that it will require a C-level executive.

In practice, agent memory must be kept alive, usable, and reasoned over, to prevent it from becoming a "huge muddy blob" that is impossible to manage. Andrej Karpathy calls this Context Engineering: finally, a knowledge system officially recognized as a precious corporate asset. Among the precious information (Gold), they'll find customer insights and innovative ideas; among the waste (Garbage), repeated errors and cognitive biases. But they'll know how to make both profitable.

Their task will be to recognize where value lies amid the chaos, separate signal from noise, and transform context into a strategic asset: a new Agentic Corporate Cognitive Capital. They'll live by tagging, rewriting, archiving, or deleting content from agentic memory. They'll work with the Breeder, the Guardian, and the Shepherd to refine flock behaviors, and collaborate with every other role engaged in governing agents. They're not a simple data manager but a true architect of agentic knowledge, the one who will give agents something AI currently lacks completely: experience!
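To make the Gold/Garbage idea a little more concrete, here's a minimal sketch of what memory triage might look like. Everything here is an assumption for illustration: the entry fields (`kind`, `text`, `seen_count`) and the heuristics are invented, not part of any real context-engineering tooling.

```python
# Hypothetical sketch of agentic memory triage. The entry schema and the
# rules below are illustrative assumptions, not a real API.

def triage_memory(entries):
    """Split memory entries into gold (promote), garbage (prune), keep (re-tag later)."""
    gold, garbage, keep = [], [], []
    for e in entries:
        text = e["text"].lower()
        if e.get("kind") == "error" and e.get("seen_count", 1) > 3:
            garbage.append(e)          # repeated errors: candidates for pruning
        elif "customer" in text or e.get("kind") == "insight":
            gold.append(e)             # customer insights: promote as corporate asset
        else:
            keep.append(e)             # undecided: keep alive, revisit on next pass
    return gold, garbage, keep

entries = [
    {"kind": "insight", "text": "Customer churn spikes after pricing emails"},
    {"kind": "error", "text": "timeout calling supplier API", "seen_count": 5},
    {"kind": "log", "text": "drafted weekly report"},
]
gold, garbage, keep = triage_memory(entries)
print(len(gold), len(garbage), len(keep))  # 1 1 1
```

A real Memory Keeper would replace these keyword rules with semantic classification and human review, but the shape of the job - continuously sorting a growing blob into asset, waste, and undecided - stays the same.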


So what...

I feel like I'm back in 1899, when a group of artists imagined the jobs of the future in a series of images that has recently returned to prominence.

This is just to say that it's very difficult to make predictions right now, and many will certainly prove wrong - which is why I wanted to joke about it a bit, without any pretense of clairvoyance.

It will nevertheless be interesting to understand, should this article be read by someone in 5-10 years, what made sense to joke about and what instead required serious consideration.


To recap, I've imagined ten new human professions related to the agentic AI world in roles ranging from strategic management to conflict resolution, from sourcing to ethical-legal control to the crucial maintenance of memories. These are all roles that stem from the central concept of AI Agency: how much decision-making power will we be willing to delegate to agents and how will we manage the consequences of this delegation?


The point is that work is changing, slowly but inexorably. Appealing to old or overly human-centric definitions won't help us embrace this evolution. Constantly observing and interpreting, coining new names, and continually re-evaluating the motivations and methods with which we'll spend our working time could help us stay a little ahead of the times.


Adapting proactively will be fundamental, much more than predicting.

I discuss this, and much more, in my workshops.

Which of these roles seems most realistic to you? Or is there one you're already thinking about that I missed?

Massimiliano

 
 
 


