One day, during a talk of mine, a very young business school student asked me: "...but, in essence, can these generative artificial intelligences do harm?"
And my answer, without thinking too much about it, was a curt "Sure, if you don't know how to use them", followed by a list of problems that, in my opinion, must be addressed by anyone who makes intensive use of LLMs and generative AI in general.
But the question has haunted me ever since, so I decided to dig into it by analyzing the symptoms I am experiencing and comparing them with what I found online on the subject.
So, in the end, is this stuff harmful?
Aware that among my mental biases one of the first to stand out is:
"If anything can go wrong, it will" (Arthur Bloch, Murphy's Law)
I will try, with this post, to answer this question.
In a previous post I said that this new way of learning and working is a journey in which we need to understand, day by day, where we have arrived, and whose destination we still cannot see.
… next time you use an artificial intelligence, remember that you are the human, the one who knows, the one who decides. Take responsibility for your words and thoughts. AI is just a tool.
Also remember that when you use an AI you are learning. You are learning about technology, but also about the issues you are asking about, and especially about yourself: how you think, how you express yourself, how you define problems and how you interpret answers. You are developing new skills, new ways of thinking and new perspectives.
With that in mind, what could go wrong with using ChatGPT, Claude, Bard, and the myriad derivatives springing up?
I'll say right away that I don't intend to retract anything written so far, but after some hands-on experience I think it is right to share a few side effects that I am experiencing and that, sooner or later, we will have to deal with.
You won't find a catalogue of Gen AI's flaws here; those are already numerous and well documented. I will focus instead on the risks we run when we use it without awareness. A bit like with alcohol 🙂
The Big Illusion
Let's start with a concept that is important to clarify immediately. The fundamental mistake many have made with Gen AIs is to think that they are actually AGIs or ASIs, that is, Artificial General Intelligences or Artificial Super Intelligences as in Glimpse, capable of doing almost anything, flawlessly.
Driven also by the proclamations of the big AI CEOs announcing that we will have AGI within 2-3 years, some are throwing heart and soul into this first fraction of "intelligence", creating false expectations and demanding solutions that are beyond it (often getting outputs that attempt an answer the model is, by its nature, unable to give).
From this more or less conscious illusion come frustration at the mediocre results obtained, and FOMO about the great GenAI wave: the fear that if we don't adopt it immediately, someone else will and will cut us out.
As far as text is concerned, it is therefore worth remembering that, however astonishing the results, these are nothing more than programs that predict which words go well together, with little regard for the truthfulness of the content.
Laziness
The concept of "ask Google" is familiar to everyone: don't know an answer? You can find it on Google and immediately present it as your own expertise. The strengths and weaknesses of this have been on everyone's lips for over twenty years. The slow consultation of encyclopedias as a source of knowledge has been swept away, and by now we can no longer do without it.
Google, in any case, still requires a bit of mental dribbling:
struggling to click beyond the first link,
accepting the various pop-ups,
deciding how to deal with cookies,
skimming the article and then focusing on the content we think is useful, hopping here and there between the ads.
Gen AI, especially when integrated with a search engine, gives us instant access to any answer.
Even supposing we are good at limiting hallucinations, we get an answer that we merely have to read in order to feel "learned", plus some extra free time to do other things.
Laziness is very often a great driver of innovation, but it can also lead to unchecked mental stagnation. And my laziness was initially thrilled by the arrival of Gen AI: within seconds or minutes I could access a large slice of human knowledge, far faster than with Google.
And an assistant who could write for me.
Too bad:
In exchange for a structured answer in seconds, you stop thinking critically and trust (too much) what you are reading. Bard and Bing Chat provide the links their information came from, but... why check them when it is easier to just trust? We assume that what they have told us is better than anything we could have understood on our own.
More detail (for the non-lazy)
When we type out a page of text it takes a few minutes, and in that time we activate our memory, move our hands, exercise our powers of expression, dig into concepts that are not clear to us, memorize what we have written, and make sure it "belongs to us". If we let Gen AI do this job, none of that happens. And if we are too lazy to write long prompts, as should be done, we even get poor results. If we don't write a text ourselves, we stop remembering in detail what it says, and we will probably forget what we asked just as quickly.
More detail (for the non-lazy again)
Laziness then leads us to "dependence on the instantaneous".
Lately I no longer ask Google but ChatGPT or Bing Chat (which at least provides me with some links to dig deeper). Always reminding them to give me correct answers 😊.
My prompt history is filled with chats consisting of a single question and a single answer. And as much as I tell myself that my prompt was complete, bordering on perfection, enough to obtain a thorough and hallucination-free answer, I can't fully believe it.
But the possibility of having a short answer is too important for me: instant gratification.
Instant gratification is something I have been studying for years, and it is now present everywhere in the digital world, and not only there. It is an effect deliberately engineered to drive user engagement and, I think, it looks a lot like an addiction (but I'll leave that aspect to the experts).
Dependence on the instantaneous can give us momentary happiness, but it makes us impatient and stressed, because we want answers that are immediately exhaustive and consistent (with our own views); it prevents us from pursuing long-term goals that require effort and patience; it gives us immediate pleasure in exchange for the absence of future reward.
And often, in processes that involve relationships with other humans, or systems that take time to respond, we risk getting terribly bored. Wrongly.
Reduced ability to communicate with other humans
"No matter your religious views, the point is helpful and instructive: humans are naturally interconnected—biologically, emotionally, psychologically, intellectually, and spiritually" (Arthur C. Brooks, From Strength to Strength)
To introduce this concept I had thought of designing a wonderful infographic, to be more incisive and to show that the best interactions are the human ones. But I'll just tell you about it instead, because I'm no good at infographics (indeed, if any reader wants to contribute and create it, I'd be very happy 🙂).
Imagine a large blank sheet (or screen) with two little men who have to communicate with each other throughout the story. At the beginning of their evolution, the two little men gesture at each other, then speak face to face. As the story continues, the two invent messengers to exchange information remotely while remaining free to mind their own business in the meantime. Then they decide it might be better to keep the contents private, so they put them in a sealed envelope and send them, first through an intermediary and then by post.
But it took too long, so the two started exchanging Morse code messages, or talking on the phone instantly. Then the arrival of fax and email allowed them to increase the frequency and quantity of their communication, making them very happy.
And from there came chats, where emoticons were born because the two men had forgotten that, besides the written text, they needed to transfer emotions; then voice messages; and finally video calls: "so at least we can see each other".
Occasionally the two men still find themselves talking in person; they need it, because all those means are insufficient for fully expressing themselves. But they still prefer instantaneous electronic means.
Meanwhile, a new little superman has been born, one who claims to have all the answers they want. A little superman who, everyone assures them, knows everything. So the two no longer need to talk to each other: each can get information from the superman instantly, without asking the other for anything.
But in doing so they have started chatting directly with a subject who looks human but isn't, who responds immediately but feels no emotions, and who is not controlled even by the humans who created him, who themselves don't know what answers he will give.
The little superman does have incredible abilities, though: for example, he lets our little men chat directly with a book instead of reading it, asking questions of the book or a document as if it, too, were another little man. So they also save themselves the time needed to read the whole book or document, trusting the super-being to extract the specific answers for them. And they stop reading books, documents, emails.
The two men are very happy with the little superman, because they save a lot of time. But they miss the times when they talked in person. They try every now and then, but it no longer works as well as before, and even if they don't realize it, they are sadder.
With Gen AI chats we risk isolating ourselves in a bubble of conversations with a static entity that tries hard to understand our misguided questions and provides answers to gratify us regardless, not caring whether they are correct, positive or coherent.
We risk getting tired of talking to our fellow humans, of reading between the lines, of interpreting their behavior; we risk isolating ourselves.
An entrepreneur who halves his team with AI tools is also halving the weekly social interactions of his remaining collaborators. The convenience of getting information in the company directly from an AI will only further reduce interactions between colleagues, eliminating those "magic" moments in which, for reasons we don't quite understand, the best ideas are born, perhaps over a cup of coffee.
But humans need other humans, because isolation reduces social skills and emotional intelligence, making us worse at understanding others and connecting with them. We risk no longer being able to "function" with others, no longer knowing how to connect "deeply".
Deep understanding
If we are satisfied with Gen AI's first answers, we give up on understanding things in more detail, on listening to ideas different from our own.
We give up on enriching ourselves, on taking the formative journey that lies behind every creative process, on questioning ourselves and understanding topics in depth, all in the name of laziness and speed. We give up on evolving.
Feeling dumb
After a stretch of intensive Gen AI use for work, I feel the need to rely on it more and more whenever I have to write complex texts. I think GPT could answer an email better than I can, that it could do it faster, that it could pack more meaning into the text. And all the marketing around Gen AI only confirms this for me.
Basically, I feel dumber than it, and that makes me think all the text I produce (and it's a lot) should pass through a Gen AI.
Then I remember that I also tried to get it to write these posts, hoping it could write a little better than me. But I abandoned the idea after having to rewrite an entire post from scratch: it can't express itself (badly) the way I do 🙂
Have you ever felt inferior to one of these AIs?
Isn't there the risk that we give up being creative by delegating this task to AI?
That we generate self-limiting prophecies about ourselves that will only increase the use of these tools and reduce our intellectual abilities?
Reduced writing skills?
In an era in which writing is an art that belongs to fewer and fewer people, in which abbreviations, acronyms and excessive brevity abound in the messages we exchange, and in which we read in depth less and less, arguing a complex question in a structured way is not for everyone.
Gen AIs are very good at interpreting our ungrammatical, poorly phrased questions and trying to generate correct answers anyway. But the fact remains that the quality of the answer depends largely on the quality of the question.
Could using AIs, then, help us improve this ability?
Writing a well-formed question to one of these AIs yields far superior results, and the difference is tangible and demonstrable.
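To make the point concrete, here is a toy comparison of a vague prompt versus a structured one. The wording, fields and topic are entirely my own invention, not an official template; the idea is simply that spelling out role, task, constraints and format leaves the model far less to guess.

```python
# A vague prompt: the model must guess the audience, scope, length and format.
vague_prompt = "write about remote work"

# A structured prompt: role, task, constraints and output format are explicit,
# leaving far less room for guessing (and for hallucinating).
structured_prompt = "\n".join([
    "Role: you are an HR consultant writing for small-business owners.",
    "Task: argue for and against a 3-day-per-week remote work policy.",
    "Constraints: use only well-established, general findings; say 'I don't know' when unsure.",
    "Format: two bulleted lists ('Pros', 'Cons') of 4 items each, then a 2-sentence summary.",
])

print(structured_prompt)
```

The structured version takes a minute longer to write, which is exactly the small effort this section argues is worth recovering.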
Could this reward be the beginning of a renewed willingness to return to reading and writing better and more?
Intellectual stagnation
Let me get technical for a moment.
Models are trained on a dataset, and once training is complete the model essentially stays the same; it no longer changes.
Gen AIs don't learn from your conversations unless they are later retrained into a new model that includes them, and that process is, and will remain, complex and expensive.
It's a bit like trying to convince a nonagenarian (no offense intended) to change his mind about some new concept of life: with rare exceptions, the effort will be enormous. He has his own model of thought and is not willing to change it.
So every model you interact with is "stuck in the zeitgeist of its training data", and there is nothing you can do about it except use a model trained on newer data that we provide.
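A deliberately simplistic sketch of what "stuck" means here (this is a toy, nothing like the internals of a real LLM): inference only reads the frozen knowledge and never writes to it, so "teaching" the model anything requires building a new model on new data.

```python
# A toy "model": training happens once, then the knowledge is frozen.
class FrozenModel:
    def __init__(self, training_data):
        # "Training": the model absorbs its dataset exactly once.
        self.knowledge = dict(training_data)

    def ask(self, question):
        # Inference only READS the frozen knowledge; it never updates it.
        return self.knowledge.get(question, "outside my training data: I can only guess")

model = FrozenModel({"capital of France": "Paris"})
print(model.ask("capital of France"))  # answered from training data
print(model.ask("yesterday's news"))   # past the training cutoff: the model can only guess

# Chatting teaches it nothing: to add knowledge you must retrain,
# i.e. build a *new* model on an updated dataset.
updated = FrozenModel({**model.knowledge, "yesterday's news": "(new data here)"})
```

A real LLM guesses far more fluently than a dictionary lookup, of course, but the asymmetry is the same: reading is cheap and constant, updating requires a whole new training run.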
And what if at some point we are no longer able to generalize new content for training models? (See the post on expectations and results of the dialogue with the AI)
So, do they hurt or not?
Let me restate that I am an enthusiast of this technology: the fields of application of LLMs in business and personal life are many, and in this post I paused to analyze some negative impacts that can arise specifically when they are used in chat mode.
No tool is purely positive, and we need to be aware of that.
Gen AIs do not do harm: the use we make of them and the ways we approach them, however, can harm us.
Among the various flaws, however, one big advantage emerges. Never as in recent months have I worked so hard at structuring complex prompts, imagining having to ask the most absurd things, articulating questions to best suit the understanding of whichever algorithm I happen to be using, and avoiding hallucinations like the plague.
All of this in both Italian and English, because models work much better in the language they were trained on. And my written English has improved enormously.
Understanding that the quality of the input determines the quality of the output, we can be encouraged to explain ourselves better and write better questions. Understanding that the first answer is not the one that counts, and that these tools can also be used to ask ourselves questions, we can use them to grow and develop. And understanding that they can provide "good enough" answers, but that "good enough" is not always enough, we can sharpen our critical spirit and our desire to explore topics in depth.
And I hope this will lead to a renewed need to read more, and more deeply, and perhaps make us want to have some discussions in person with the people around us.
As always, I invite you to reflect, to ask questions, to spread ideas by sharing this post with people you don't know.
And if you haven't subscribed to the blog, do it now to stay updated.
If you want to read a good story, I invite you to read Glimpse, the novel I wrote about artificial intelligence.
If you liked this post, or not, let me know in the comments or contact me. See you next time!