
ChatGPT and Memory: What's Changing

Chapter 1 - Speculations


Have you ever stopped to think about what would happen if ChatGPT remembered everything you said? Well, OpenAI's Memory feature is already active (with the exception of Europe, the UK, and a few other countries), and it raises a lot of interesting questions about the future of our interactions with AI. 🤔



Collaboration between ChatGPT 4o and Midjourney 7 to depict the blob I talk about below

I've started working on it, but I don't yet have all the elements to discuss it in depth. So this first chapter of 'speculations' collects some initial thoughts on a topic that will surely keep attention high in the coming months (and beyond).



In fact, up until now, my favorite slide in my workshops has been this one:


It remains true for LLMs: once trained, their weights are frozen, and they cannot learn new information on their own.

But around LLMs there is now a whole world of software and technology that helps them keep a memory of a thousand things. And, as I wrote back in January, everything around the model contributes content to its context window while we calmly chat away.
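To make this concrete, here is a minimal sketch of that pattern using the OpenAI Python SDK. To be clear, OpenAI has not documented how Memory actually works; the memory notes and prompt wording below are invented, and the only point is that anything the model "remembers" must be re-supplied inside the context window on every call.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical notes distilled from earlier chats (invented for this example)
memories = [
    "The user runs workshops on AI.",
    "The user is writing a book.",
]

def chat_with_memory(user_message: str) -> str:
    # The model's weights are frozen, so anything it should "remember"
    # has to be re-injected into the context window on every request.
    messages = [
        {"role": "system",
         "content": "Known facts about the user:\n" + "\n".join(memories)},
        {"role": "user", "content": user_message},
    ]
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

print(chat_with_memory("Can you suggest a topic for my next workshop?"))
```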



What is the extended Memory feature?

Memory is a new ChatGPT feature launched by OpenAI that allows the AI to remember information from past conversations and use it in future interactions. Unlike plain chat history, which keeps individual conversations separate, Memory builds a persistent model of the user across all interactions.

ChatGPT can remember personal details, preferences, ongoing projects, and other shared information over time, without the user having to repeat it in each new conversation. The system does not literally memorize every word exchanged; rather, it seems to create (there is no documentation about this yet) summaries and syntheses of the most relevant information, gradually building an understanding of the user.
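Since there is no documentation, the following is pure speculation about what such a distillation step might look like: a second model call that condenses a transcript into a short, durable note. The function name and prompt are invented for illustration.

```python
from openai import OpenAI

client = OpenAI()

def distill_memory(transcript: str) -> str:
    """Condense a conversation into one short note worth keeping.
    Purely illustrative: OpenAI has not published how Memory does this."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": ("Extract the single most relevant fact about the "
                         "user from this conversation, in one sentence.")},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

note = distill_memory("User: I'm writing a book and need ideas for a side character...")
# The resulting note (e.g. "The user is writing a book.") would then be stored
# and re-injected into future context windows, as in the earlier sketch.
```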

Unlike the current Memory, which stores a handful of sentences and can be curated by adding and removing individual memories, this one seems more like a muddy blob in which the model remembers whatever it wants. For example, it remembers me asking it to forget a useless conversation from two years ago about an idea for a character in a book I was writing. And it promptly comes back to tell me, "Like the character, GIGI, that you asked me to forget."


In the official demo, meanwhile, everything is smooth.




Thankfully, this feature is optional and can be turned off anytime, allowing users to control what information they want the AI to remember or forget. As a good friend of mine says, “At least that!”

But I'll hold off on disabling it for now.


The substance

I have had over 5,000 conversations with ChatGPT (at least, counting the ones I have saved). These conversations are not "me" in their entirety but fragments of my digital history: little puzzle pieces, each showing some aspect of me based on what I have written.

But what I have told the AI is only a small part of who I am and of everything I have said in my life.

It's not me.



The digital fragments of ourselves

Imagine having a friend who only knows you through your chats. This friend has never seen you in person and doesn't know the tone of your voice or how you gesture when you're excited. All he knows about you is what you've written to him.

When we write to ChatGPT, or to whatever AI we prefer, we share only a select fraction of our experiences. We talk about projects, ask questions, and ask for advice, but this is just a tiny part of who we are. We don't share the emotions we feel watching a sunset, the spontaneous conversations with friends, or the fleeting thoughts that cross our minds during a walk. We don't tell it whom we respect, whom we can't stand, whom we care about, and what leaves us indifferent.

Even if OpenAI recorded our lives as in "The Truman Show," it could only observe what we do and infer some thoughts from what we ask. It's like watching a fish in a tank: we can see what it does, but we don't know what it thinks.

However, the problem is that we will assume that ChatGPT knows us better than anyone else.


The Dark Sides of Memory: A Muddy Blob of Personalization

As fascinating as it is, this feature risks generating an unmanageable blob of data, a muddy pile of information from which it will be almost impossible to understand what ChatGPT actually "sees" about us.

Of course, this process of partial knowledge and interpretation is no different from what happens with anyone who knows us. Still, there is a fundamental difference: ChatGPT is not a person. It has no human intuition, no empathy, and none of the implicit sense of social context that drives human interactions. This blob of artificial memory could create unexpected problems: persistent misunderstandings, faulty inferences that crystallize over time, or awkward moments when the AI unearths information in inappropriate contexts.

It may also influence responses in subtle, hard-to-detect ways, steering conversations in directions we might not have consciously chosen.

