
AI Fluency: the unresolved issue in our relationship with AI

This post is a LONG personal reflection on the growing discomfort I have felt in my evolving relationship with AI, an exploration that begins with confusion and slowly moves toward clarity. I share it here as a way to speculate not only on what AI can do, but also on what it does to us.


There was something in my relationship with AI that I was missing. It wasn't easy to pin down, and I probably still haven't seen the whole horizon, only a small portion of it.

But I slowly began to focus on a feeling that had been with me for a long time, and in this lengthy post I would like to share where I started from and where I arrived.

There has been so much noise in recent years, too much, and stopping to reflect with a clear mind has not been easy, and still isn't.


Amid this general enthusiasm, something didn't add up...


TL;DR

  • The noise and FOMO around AI make us lose sight of what really matters.

  • Automating is useful, but the real leap is to “think with AI,” not just use it as a digital hammer.

  • “System 0” is the cognitive bridge where the human mind and AI exchange ideas, with the risk of mental delegation.

  • To navigate it, you need “AI Fluency”: knowing how to interact with AI while maintaining control and a critical sense.

  • Six levels of maturity (from Curious to Fluent): without fluency, the AI leads; with fluency, we lead.


When AI Gets Out of Hand

Co-pilots, increasingly capable assistants, and AI-based systems and apps have sprung up like mushrooms (from Gamma to Suno, from ElevenLabs to HeyGen, up to Manus), constantly raising the bar, demanding ever more knowledge, generating ever more FOMO (the fear of being left out), and continually moving the starting point, setting ever-higher bars while selling them as 'solution-oriented and magical'.


I wrote the book "Hiring an AI" precisely with the aim of finding a signal in the deafening noise of these years, focusing immediately on the message of "understanding" AI rather than simply knowing how to use it. And besides, knowing how to use it is something that no one, at this moment, can really measure: all it takes is a model change and we're back to square one.


And I have always tried to test on myself, firsthand, whatever I considered most useful but also most dangerous, to see, along a sort of accelerated path, where we were heading. To try to connect the dots and trace trajectories.


I spent a lot of time trying to understand whether I was a Taker or a Giver with AI. That is, whether I was improving my cognitive abilities or 'outsourcing' them to the model in question. And now it comes naturally to me to think about this step every time I use Generative AI for something serious. At the end, I take a few moments to work out whether I have gained or lost something by using it. (Try it!)


In the companies and projects I have worked with, I have tried to temper the initial enthusiasm of those who ascribe magical, alchemical properties to AI, showing that there are no secret recipes or quick steps to reach what everyone wants: to save time and money and to earn more. Because in the end we are simple beings and, apart from a few enlightened moments in which we think about our cognitive evolution, our well-being, and our responsibilities, we spend most of our time with AI looking for solutions that give us an advantage.


Keeping this equation in mind:

HUMANS + AI > HUMANS (or AI) ALONE

However, as time passed, I realized that something didn't add up in this overly enthusiastic narrative.


Is AI just for automating boring things?

And what could be better, we are constantly told, than handing AI our repetitive and boring tasks? Taking off our hands those activities that we are forced to do and that have no value.

Is this really the goal? Is this really the great possibility of AI? It seems extremely reductive to me.


With the arrival of Generative AI, with the ability to prompt and explain at length and in ever greater detail, with a tool increasingly 'attentive' to what is asked of it, capable of showing us points of view we have never had and giving access to knowledge that used to require a great deal of work, are we really sure we simply want to do automation?

In my journey I have tried, and many others with me, to understand what the right relationship with AI is, what the roles are, and how it is 'right' or 'better' to behave with this strange new tool with unexplored and mysterious properties.


In the book "Hiring Artificial Intelligence in Business" (still available only in Italian), I wrote a long analysis of how Generative AI has practical implications for the organization, the corporate climate, processes, and the way we make decisions. Ethan Mollick's metaphor of the AI intern has been, and still is, a beacon for me in recent years for making sense of the relationships being born with this new technology.

But the relationship has always been HUMAN + TOOL = IMPROVEMENT (net of the problems of cognitive delegation described in another post).

And its ability to 'replicate' (albeit limited and unintentional, and dismissed by many philosophers who do not consider it noble enough to relate to our precious human brains) has made it, little by little, something other than a simple tool.


And the crux of the title was that I had been asking myself the wrong question: not "What can we do with it?" but "How can we reason together with it?"


Thinking with AI

Now I think I've moved on to a new state, where I've understood that it's not enough to use AI, it's possible to learn to think and work with it. And this changes the balance of the relationship a bit.


The Hammer

Let me explain: with a hammer, we define its purpose and uses. It is completely passive and does what we ask of it, and that's it. (The strongest feedback we get is when we hit a finger or miss our aim.) It's a one-sided relationship in which only we have intentions, purposes, and understanding.


The Smartphone

With the advent of smartphones and PCs, we took a step further: we expanded our cognitive capabilities with memory and calculation and, if we include social media, even modified our identity. It is a step that, in terms of cognitive extension (Clark & Chalmers, 1998), could be defined as a relationship of symbiotic interdependence. That is, without these devices we seem to lack some faculties (this is not the place to also discuss the enormous problems they have brought to the human race).


Generative AI

Then came the systems based on Generative AI, from simple AI assistants (ChatGPT, to be clear) to much more advanced agentic systems (combinations of language models, LLMs, with traditional software systems). Systems that perceive the portion of the surrounding environment we put at their disposal (our inputs, sensors, and various events), devise strategies through planning, have memory, and learn from the results. Then they perform actions.
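To make that cycle concrete, here is a minimal sketch of the perceive-plan-act-learn loop described above. Every name in it (the `plan` method, the tool registry) is invented for illustration; real agent frameworks differ in their details.

```python
# A toy perceive-plan-act-learn loop. All names are invented for this
# sketch and do not correspond to any real agent framework's API.

class Agent:
    def __init__(self, model, tools):
        self.model = model    # the LLM used for planning
        self.tools = tools    # named callables: sensors, APIs, actions
        self.memory = []      # naive episodic memory of past steps

    def step(self, observation):
        # Plan: ask the model for the next action, given past experience.
        tool_name, args = self.model.plan(observation, self.memory)
        # Act: execute the chosen tool in the environment.
        result = self.tools[tool_name](*args)
        # Learn: store the outcome so future planning can use it.
        self.memory.append((observation, tool_name, args, result))
        return result
```

Even this toy version makes the key point visible: once triggered, the loop runs on its own, which is what separates an agent from a passive tool.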

For simplicity, I will refer to these systems in different ways in this post; each has its own limitations and impacts. I will not use only "Generative AI" nor only "Agents", because each term carries different nuances.


Even keeping in mind that Agents only simulate intentions, emotions, preferences, and rationality, I don't think we can talk about simple symbiosis here.

A bilateral relationship comes into play with a system that is able to react, to change and modify our way of thinking and acting.


We reason with the agents, ask for advice, express doubts, and try to find solutions to the most disparate problems.

Sometimes we are also 'activated' by them in 'Human in the Loop' processes that start from other triggers.


And this changes us: it modifies the way we think about solving problems and opens us up to many new perspectives, but also to many personal risks, including excessive cognitive delegation (outsourcing our critical thinking), which, little by little, without our realizing it, can atrophy our ability to reason.


A short note on atrophied memory

The most advanced systems, such as ChatGPT, are also starting to keep track of our memories in a more or less understandable, transparent, and editable way, as I wrote here.

And a lot of applications are popping up that, with various devices, will increasingly keep track of our every daily interaction (Plaud, Limitless, Rewind, and other more or less invasive ones).

And this is an upgrade of a delegation we already made in recent years, when we stopped memorizing phone numbers once they lived in our phones. Here, AI remembers for us, and we do not always have access to how it remembers... I will discuss this shortly in the article I promised.


What I discovered

In recent years I have begun trying to understand these mechanisms fully, and I have found several studies that are finally starting to show a bit of what happens behind the scenes, net of the fluff and marketing of the various model makers (who dispense solutions to problems we didn't have before, and that they often created themselves).


And, thanks also to the growing group of people with whom I discuss the uses, effects, problems, and opportunities of this relationship with AI, I have arrived at two fundamental concepts that are more related to each other than they seem: System 0 and AI Fluency.

Two concepts I would like to introduce briefly first; I will then go into more detail, including in subsequent posts.


System 0: A New Shared Cognitive Space

System 0 (Chiriatti et al., 2024) represents an 'exchange zone' between the human mind and artificial intelligence.

In it there is an invisible bridge that connects our thoughts with the AI's inference, with what it generates. Like a silent colleague who gives us ideas, proposals, and shortcuts, and whom we sometimes let decide for us (voluntarily or involuntarily).


According to Chiriatti, the concept of System 0 fits between the two systems of thought described by Daniel Kahneman (2012):

  • System 1 (intuitive and fast thinking)

  • System 2 (slow, reflective reasoning)


Unlike traditional physical or technological tools, GenAI tools stand out for their immediate and intimate impact on cognitive processes, becoming an active component of thought itself, not just an external support.

Vibe Working between me, ChatGPT & Canva

And this bridge continuously and fluidly transfers mental elements such as attention, ideas, intentions, words, and reasoning patterns.


Only when these reach our minds do we have to process them.


And when they leave our minds, they go towards AI, and there is a risk that they will never return.


The flow is positive when new abilities come to us; when it is negative, it is like a hemorrhage (Nigel P. Daly, 2024): it reduces our ability to think, atrophying the brain.

From the article by Nigel P. Daly

Assuming you are at least roughly in agreement, how can we recognize this process? How can we tell whether we are receiving and evolving, or giving and regressing?


If System 0 is the new playground, I believe we need a new skill to navigate it: AI Fluency.


AI Fluency: The Skill That Changes Things


AI Fluency can be defined as the ability to interact effectively with AI while maintaining cognitive autonomy.


It is a term that emerged in several places more or less simultaneously starting in 2020, with varying meanings, and was recently taken up and popularized by Anthropic following a study (Dakan & Feller, 2025) on the consequences of using AI.

There are still competing definitions, as with Vibe Coding or AI Agents; it doesn't matter: what matters is that we have a reference name for exploring new concepts. And of course my version will also differ somewhat from the others, but that's okay, because it takes many minds and much time for a complex concept to be refined and evolve.


AI Fluency is the term I have been using for a few months in my workshops, and it is the term I will use from now on to redefine the concept of the relationship between humans and AI.


And it's a different concept from AI Literacy, which only represents literacy: it describes, at different levels, how much we know about a tool and how well we are able to use it.

As I was saying, AI is no longer, at this point in the discussion, a simple passive tool. It has become a cognitive tool: it has no intentions, it is not perfect, it is not authentic, and it is certainly not human.

And yet, it changes us. More than a symbiont like the smartphone.

For this reason, it is no longer enough to talk about literacy.


When we talk about relationships, there is social intelligence among humans.

But with AI, you need something else: you need AI Fluency. A new form of 'cross-species intelligence,' if we can call it that.

I will try to talk about it without too many academic references that would otherwise require long further investigations.


But what does it mean to 'interact effectively'?

In essence: when we humans relate to each other, we have more or less settled expectations, biases, histories, sensations, and emotions that condition what we can ask of our counterpart, that make us set limits on requests and statements, often so as not to 'offend the sensibilities' of the other party. And this requires certain types of intelligence.


For example, among humans we have a clear concept of what is doable. If I ask a human to write a book in a day, with all the kindness and money in the world, I probably won't get a good result. So System 1 probably helps me self-censor and... I don't ask, because I know I can't get a result.


We are effective, among humans, when we are specific in our requests and in expressing our needs or offers, and we are reciprocated. For example, a manager who obtains alignment with his vision from his collaborators: they follow him, learning what he wants and how he wants it.


When we interact with AI, things are different.

I often notice in my workshops (I have now met and trained more than a thousand managers, entrepreneurs, and professionals) that we turn to AI with requests that are calibrated on human relationships. We anthropomorphize it too much!


These requests are, understandably, based largely on the experience we have and very little on lateral thinking and technical understanding.

In essence, partly due to poor AI literacy, we have misaligned expectations between what we imagine AI can do and what it actually does.


Maybe we take it for granted that it can summarize an entire book, capturing the essential points for us (overestimation: it takes specific techniques and knowledge to do that), or we merely ask it to correct one of our texts (underestimation: it can rephrase them in a thousand different tones and registers, explaining why as it goes).
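To see why the book-summary case is an overestimation when approached naively: an entire book rarely fits into a single context window, so in practice something like a chunk-and-merge pipeline is needed. A minimal sketch, assuming a generic `complete(prompt)` function standing in for any model call:

```python
# Map-reduce summarization sketch. `complete(prompt)` is a hypothetical
# stand-in for any LLM API call; the chunk size is illustrative, not tuned.

def chunk_text(text, max_chars=12000):
    """Split the book into pieces small enough for one context window."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_book(book, complete):
    # Map: summarize each chunk independently.
    partials = [
        complete("Summarize the key points of this passage:\n\n" + chunk)
        for chunk in chunk_text(book)
    ]
    # Reduce: merge the partial summaries into one coherent whole.
    merged = "\n\n".join(partials)
    return complete("Combine these partial summaries into a single, "
                    "coherent summary of the whole book:\n\n" + merged)
```

Even this pipeline loses cross-chapter connections and nuance, which is exactly the kind of limit that AI Fluency should make visible before we trust the output.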


In essence, we look at AI with 'old and wrong' eyes and tools, and this lets us see only a small part of the whole. And it leads us to make a lot of wrong judgments.


Having AI Fluency means understanding, for each model or system we interact with, what it is able to do and what it is not and, at the same time, how it is changing our minds while we use it. And it means getting answers and proposals in line with our expectations.


It's a bit like learning a new language: at first we parrot sentences learned by heart, then we start to understand their structure, then we begin to think in that language and use it to express complex ideas and make decisions and, hardest of all, we only start telling jokes once we are immersed in the culture of that language.


Except that AI Fluency is not about language but about ability: it's a skill that takes root in our brain (in System 0?) and modifies it inexorably, both positively and negatively.

An essential difference? The transfer flows toward us, not toward the AI, which does not change as a result of its interactions with other humans.


The only exception, an important one nonetheless, occurs when the AI system has access to external memories that allow it to modify its behavior based also on the metadata collected during its interactions with us.


Why is it important?


We are the ones who win or lose.

We are the ones who get better or worse.

And, if we have AI Fluency, we have more control.


As I have been telling students for a few years now, using AI to cheat on tests and using it to learn or improve your skills require more or less the same effort. The long-term results are very different.


I was thrilled to find confirmation of this vision in several papers cited by Ethan Mollick (linked at the bottom of this article) that point in the same direction, confirming the dilemma: AI helps when used to learn, but harms when used to cheat, atrophying cognitive abilities.


This capacity for judgment manifests itself at 'high AI Fluency' levels (which I will try to quantify shortly). And in every area in which it is applied, including and especially the workplace, it influences the results obtained.


I'm realizing that I have low AI Fluency in some areas of AI (graphics, video, audio): I don't know how to set the right expectations there, a situation I usually find reversed when I'm showing a novice team how I use AI daily.

And this has concrete implications in life, in study, and in work.


If we don't have AI Fluency, we don't drive; it drives


If we don't have AI Fluency, we won't be able to:

  1. understand when to use AI systems and when not to,

  2. wisely entrust these tools to others,

  3. commission a project based on Generative AI with awareness,

  4. recognize whether we are training or atrophying our cognitive abilities.


If we have it, however, we will be able to:

  1. Choose tools and models with clarity,

  2. Rethink workflows by integrating AI into the way we co-create,

  3. Not be intimidated or afraid to try,

  4. Bring AI into processes as an active member that supports us in decisions, not as a simple shortcut,

  5. Create a new level of language in the team, more precise, shared, and generative,

  6. Clearly grasp limits, risks, ethical aspects, and biases,

  7. Truly build an AI-powered organization,

  8. Constantly read the direction of the cognitive flow between us and the AI.


I know this almost sounds like a tantric enlightenment path, but... this is my experience so far on the journey.


And it's not just about individuals: we can also talk about collective AI Fluency. I've seen many teams make huge progress working together on AI, thanks in part to the AI-PLUG framework I described in my book: a tool designed specifically to increase the AI Fluency of an organization, not just of one person.


Let's get to the end, at least of this stage, and try to quantify this AI Fluency a little.


The 6 Levels of Maturity

I have tried to condense these levels, probably oversimplifying, but I would like to leave them as a starting point. I did not want to talk about scores or benchmarks; measuring AI Fluency will probably be far from easy.


Many are working on it, and I too am trying to define it as well as possible, partly because one non-trivial problem is that it varies with the domain to which we apply it. AI Fluency is not absolute across the AI world: it is relative to your work domain, the set of tools you have available, and the situations you have to solve.


So one scale I'm working on is this:

  1. Curious - Initial stages of discovery and wonder

  2. Active - Regular use for small tasks

  3. Experimenter - Integration into workflows

  4. Reflective - Development of critical thinking

  5. Architect - AI-driven process design

  6. Fluent - Complete mastery and guidance of others

Do you recognize yourself in any of these?
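Purely as a toy illustration (the scale is conceptual, and measuring it is still an open problem, as I note below), the levels could be written down as a simple data structure for self-assessment. Everything here, including the threshold where 'we lead' instead of the AI, is my own assumption:

```python
# A toy encoding of the six maturity levels. Purely illustrative:
# AI Fluency is domain-relative and not yet formally measurable.
from enum import IntEnum

class AIFluencyLevel(IntEnum):
    CURIOUS = 1       # initial stages of discovery and wonder
    ACTIVE = 2        # regular use for small tasks
    EXPERIMENTER = 3  # integration into workflows
    REFLECTIVE = 4    # development of critical thinking
    ARCHITECT = 5     # AI-driven process design
    FLUENT = 6        # complete mastery and guidance of others

def who_leads(level: AIFluencyLevel) -> str:
    """Rough cut: placing the threshold at Reflective is an assumption."""
    return "we lead" if level >= AIFluencyLevel.REFLECTIVE else "the AI leads"
```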


The idea is that making good prompts is different from working with AI while training your judgment. Pure technical knowledge is great, but with this technology, given the way it interacts with us, there is a strong risk of becoming dependent on it. To technical skills we should therefore add the skills and knowledge that tell us when to use it and when not to, when to stop, and when to doubt.

Chart: AI Fluency levels, from Curious to Architect, with the caption "Making good prompts ≠ Training judgment."
A summary made with my lack of Fluency with graphical tools

Both Anthropic and Ringling, far more authoritative than I am, are working in different directions that I will not go into now, but which, in my opinion, may be too complex. So I will return soon to the topic of measuring AI Fluency, grafted onto the cognitive reasoning around System 0.


So What?


So, what can we take home?

  • That relating to AI is a question of mindset, not just knowing how to use tools, as we do, for example, with Excel.

  • That training, awareness, and discernment are needed. Perhaps the real revolution is not technological, but cognitive.

  • That, perhaps, before launching big projects, organizations should raise their level of AI Literacy and AI Fluency. A group of people who know how to collaborate with AI, even on non-repetitive activities, to solve everyday challenges is certainly more interesting than one that cannot. In organizations, therefore, we should aim for group AI Fluency.


It would be extremely arrogant of me to tell you that I have 'found the key' to interacting with AI systems perfectly. And if I had found it... maybe I wouldn't give it to you 😀.


I will continue to think about how to improve the reasoning behind AI Fluency and how to measure it. In the meantime, despite the FOMO mentioned at the beginning, kept high by plenty of actors, I suggest you pause a bit and reflect while you interact with AI.

Maybe in a few months I'll change my mind, and we'll all change our minds about everything again. It doesn't matter.


I'll need a lot of feedback and discussion to move forward.


Does this AI Fluency exist for you? Have you experienced it? Let me know, even privately.


Should we continue talking about this, or should I focus on finding the mythical AI ROI? 😀


Massimiliano



References and further reading



Notes on ME and AI with this post

AI Fluency also means being transparent. I translated this article (and all previous ones) in Wix and refined it with Grammarly. The original was written in Italian with very, very little AI usage (I'm unable to delegate the writing of my thoughts to AI).


