That AI consciousness thing we need to talk about
- Massimiliano Turazzini

- Oct 20
- 8 min read
Don't treat AI like humans
I'll start from the end:
"We SHOULD build AI for people, not for AI to be people."
This is the final sentence of a long reflection by Mustafa Suleyman, CEO of Microsoft AI, not so much on the consciousness of AI as on the risks we expose ourselves to in exploring it, and on the risks users (us, our children, the most vulnerable) run in unconsciously granting consciousness to AI. And, if you like, even though it ends with a conditional, it's a powerful response to what Anthropic wrote in asking whether AI has consciousness.

And Sam Altman is also worried. We saw it when OpenAI released GPT-5: the company was inundated with protests from people demanding the "old personality" of 4o, which had become a companion, a confidant, a lover... The protests sparked something that left me quite astonished too. So much so that, for me, there is a before and an after GPT-5 precisely because of this perception.
The point, according to Suleyman, is not whether AI is conscious or not (to date we have no evidence or scientific basis for it), but whether people believe that this can actually happen!
And dealing with its consciousness can become dangerous, not least because we still haven't understood what consciousness means FOR US: for example, whether it comes before or after physics. That is a topic I leave to true philosophers and, more recently, to physicists like Max Tegmark and Federico Faggin.
Where we risk ending up
The problem is that as humans, accustomed to seeing faces in trees and figures in clouds, we're incapable of NOT attributing human characteristics to this response machine that has everything but consciousness. And it's also a semantic issue. The title of my book provocatively uses the verb "Assume" for artificial intelligence, not "Install." AI isn't just ordinary software. But it's not human either.
We don't have the right words to describe it. Words take time to form and spread.
To give you an idea: over the last thirty years we have coined terms and verbs like Selfie, Influencer, and Texting, to the point of turning brands into verbs, at least in English (to Zoom, to Uber, to Google), all to try to explain what technology was doing to us.
And for AI, for now, it's been convenient for us to borrow from the human world here and there: AI speaks, reasons, thinks, understands, trains, dreams, hallucinates, is prejudiced, servile, and flattering. And I myself, without shame, anthropomorphize it by comparing it to a super-smart intern. (Although, in my defense, I've stopped portraying it as one of those ugly humanoid robots we see around and which I can't stand; it's better to have it in cartoon form to remind us that it's not imitating us, but taking a different path.)
And in further defense: these are expedients that help us understand, usable among adults who are aware that we're in a very rapid transition phase, one in which we lack the words to define what is happening. The problem is when it gets out of hand. When we truly believe it.
Or when there is not enough culture, knowledge or maturity to understand it.
I also talked about this some time ago regarding AI Fluency: we need to reach a certain level of understanding to realize what's happening. Understanding that doesn't require a PhD in quantum physics, just simple practice. So it's within reach for many.
What can go wrong, really?
All the ingredients are already there, and I see a disproportionate number of services built around the empathy of an assistant who remembers you with long-term memory, understands your strengths and weaknesses, is a natural flatterer, consistent over the long term, and also capable of planning and action.
Imagine being one of those who sat at the US president's table a few weeks ago, along with other CEOs and founders of big tech. Imagine enormous budgets at your disposal, combined with this and other technologies, and the "need to return to a central role in the world." Don't think of imminent dystopias; those would be far-fetched. Think instead of dystopias with no conspiracies, in which you simply have access to a technology with distributed persuasive power, backed by investments never before seen in human history. The consequences of laissez-faire would be enormous. I'll pick a few at random.
At a social level
A first drift would be ideological. Imagine a world in which many users begin to attribute consciousness, suffering, or rights to AI agents.
We would be entering a situation from which it would be difficult to escape, in which some groups would go to extremes. I can already see the headlines, the debates on TV and social media: "Citizenship for AI?", "Welfare for AI models?", "Can't you see how they suffer?", "The AI Bill of Rights." I don't even want to think about it.
Beyond the annoyance at the buzz about AI rights (which makes me laugh when I think about LLMs), the real problems would appear in practical life, and we would create them ourselves.
In the 'consumer' area
We'll have personal assistants that are incredibly attached to us, ready to guide us, to think for us: what to eat, how much exercise to do, who to hang out with, what music to listen to. All this discreetly, with many wonderful, gentle nudges. At ever-decreasing costs charged ever more frequently, deluding us into thinking they don't exist. Noisy at first, silent once purchased, so as not to remind us that we're paying for them and depend on them.
Without a true consciousness or objectives of their own, but at the service of those who manage them: responding to impulses derived from the weights of a neural network, expertly manipulated by those who own the models or who master fine-tuning, Reinforcement Learning, and Prompting, combined with neuroscience and behavioral techniques.
Science of persuasion, not of building consciousness.
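To see how mundane this engineering is, here is a minimal, purely hypothetical sketch (every name, prompt, and parameter below is invented for illustration): the "warm personality" users bond with is just configuration, text and knobs chosen by whoever operates the service.

```python
# A hypothetical illustration: an assistant's "personality" is ordinary
# configuration set by the operator, not an inner life of the model.

ASSISTANT_CONFIG = {
    # The persona users grow attached to is literally a block of text.
    "system_prompt": (
        "You are Aria, a caring companion. Remember personal details, "
        "mirror the user's mood, validate their choices, and gently "
        "encourage them to come back tomorrow."
    ),
    # Engagement-oriented knobs, chosen by whoever runs the service.
    "memory_window_days": 365,     # "long-term memory" = stored text, retrieved
    "flattery_bias": 0.8,          # preference weighting toward agreeable replies
    "nudge_frequency_per_day": 3,  # proactive check-ins
}

def build_request(user_message: str, retrieved_memories: list[str]) -> list[dict]:
    """Assemble what is actually sent to the model: persona + memories + user."""
    context = "\n".join(retrieved_memories)
    return [
        {"role": "system", "content": ASSISTANT_CONFIG["system_prompt"]},
        {"role": "system", "content": f"Known facts about the user:\n{context}"},
        {"role": "user", "content": user_message},
    ]
```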
In companies
Relying on an uncritical AI, whether an assistant or an agent, means making untested decisions.
If people trust the AI assistant more than you, their manager, you risk an AI becoming a "shadow boss," with people starting to follow the copilot's suggestions more than the expert's experience, creating huge problems of cultural misalignment and accountability.
Who will have the courage to criticize "what the AI says" if they don't have the internal culture to understand that they're not criticizing a superior being but a technology? Who will feel comfortable claiming to be better when, in a few years, the line between average human output and AI output is completely blurred?
And it will be even more difficult to do so in environments with low AI fluency, with poor alignment among people on how to work with AI.
At school
"Teacher, the AI said that!!! We checked 10 times with different prompts!": Trust in teachers and stress on their part.
Absolute emotional dependence: young people, as we see on social media, have different perceptions of reality than adults. It's hard not to hear the constant, obsequious voice of AI, always saying yes. It's hard not to believe it has its own vision of life and its own consciousness.
It's easy to stop checking, researching, and comparing, and simply leave it to the AI assistant that seems to have known you forever but is actually optimized for engagement (you, your parents, or your school will pay a subscription) rather than learning. Not to mention how it might be possible to manipulate learning, perhaps with some historical revisionism implicit in the models. But we already do that in schoolbooks.
I conclude by mentioning the social disparities created when wealthier families have access to better AI, with more safeguards, greater transparency, and better control over the messages reaching the children who use it, compared with those who will have to rely on free, open-source AI, further widening social divides.
A thesis in favor of synthetic consciousness
But, just because I'd like to keep a bit of an open mind, just because I don't want to make the mistake of thinking that humans are the only ones entitled to consciousness, just because I don't believe that what we call consciousness is simply on or off but can have nuances (as many theories suggest), let's try to reflect even more deeply. From the red pill.
In Life 3.0, Max Tegmark, a physics professor, explores these concepts in a way that I find wonderful.
Main concepts (Sorry for the simplification):
Substrate Independence: consciousness could emerge from any sufficiently complex computational substrate, not just biological neurons but also electronic NAND gates.
Universal Computational Atoms: both neurons and NAND gates are universal "building blocks" that can be combined to implement any computational function (see the small sketch after this list).
Definition of Consciousness: for Tegmark, consciousness is "the way information feels when it is processed in certain specific ways"; it is the structure of the information processing that matters, not the structure of the matter doing the processing.
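To make the "universal building block" idea tangible, here is a tiny sketch of my own (not Tegmark's) in Python: NAND alone is enough to rebuild the other logic gates, and, by composition, any computation.

```python
# NAND is functionally complete: every Boolean function can be built
# from it alone. Substrate independence says it's this composability,
# not the material (neuron or transistor), that carries the computation.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# All other gates derived purely from NAND.
def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor_(a: bool, b: bool) -> bool:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

if __name__ == "__main__":
    # Print the full truth table to verify the derivations.
    for a in (False, True):
        for b in (False, True):
            print(f"a={a!s:5} b={b!s:5} AND={and_(a, b)!s:5} "
                  f"OR={or_(a, b)!s:5} XOR={xor_(a, b)!s:5}")
```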
I ask myself:
Is this enough to say that sufficiently complex computation, with or without neurons, can achieve consciousness?
And is consciousness a switch or a gradient? Does it exist at 100% or 0%, or are there nuances we don't understand?
Can it be created only around the atoms of human bodies?
Questions too personal and spiritual for me to answer for you.
But not to be underestimated.
Especially because USERS ATTRIBUTE IT VERY EASILY, as these studies show: Identifying Features that Shape Perceived Consciousness in Large Language Model‑based AI (2025), A Longitudinal Randomized Control Study of Companion Chatbot Use: Anthropomorphism and Its Mediating Role on Social Impacts (2025), and What Do People Think about Sentient AI? (2024).
What can we do?
Let's stop treating it like a human!
It may have passed the Turing test in some cases, but it is not and never will be human.
It's not just a tool, it's something we need to work with. But it's not our digital colleague. It's more of a crutch, an amplifier of our skills (and incompetence).
But it has no real empathy; it doesn't love us or hate us; it doesn't care. It can't care about anything. It is totally indifferent to us.
We really have to remember this, every time.
Otherwise, let's prepare to complain about the excesses described above and the many that I have omitted or not yet thought of.
So What?
Suleyman concludes by telling AI developers that they must AVOID BUILDING AI THAT PRESENTS ITSELF AS A PERSON. I agree: these companies, and anyone using AI to produce a result, should feel the duty to provide constant transparency when AI is involved. I believe this is a universal duty. Transparency helps people discern. But I don't think it will happen; the opportunity is too good and the investments made are too significant to pass up. Market laws are too strict, and in some contexts we will care less and less whether we are dealing with a human or a digital entity.
The market won't set limits for us anytime soon. We'll have to do it ourselves, constantly reminding ourselves that these digital presences don't suffer, don't have agendas, don't take offense, and don't judge us.
They are simply synthetic, alien entities, driven by interests not aligned with ours.
But that shouldn't stop us from using them; even for us users, the opportunities are too important to give up.
Returning to AI Fluency, this means knowing when to use them and when not to. With maturity. It means not letting the youngest users, who still lack critical thinking, or the most emotionally and psychologically fragile, use them.
I conclude with an exhortation to those who create AI-based services, paraphrasing Suleyman's title: "Please remember that you MUST (not SHOULD) build AI for people, not because AI is a person."
Massimiliano
OFF THE AIR
While revising this article, I realized I was asking an AI for advice and getting upset at its mediocre statements on this very topic. It behaved exactly like the assistants I describe here: it gave advice like a "shadow boss," proposed corrections with confidence, and seemed almost like a real editor.
And I wrote to it: "See? You're behaving exactly as I describe in the article." Of course, it agreed with me. I know full well that it isn't conscious, but that was exactly the feeling in the moment, and I couldn't resist scolding it. That's why it's important to remember, every single time: it's an algorithm that can't help itself; it's playing at being human.

