ChatGPT and Memory: What's Changed
- Massimiliano Turazzini

Chapter 2: The Difference Between Hoping and Knowing
A few months ago, I left you with a promise: "I'll be back in a few weeks with more details, concrete observations, and perhaps a few surprises about how this artificial memory is shaping my digital and human experience."
If you haven't read the first article, you can find it here in Italian and here in English.

Well, it took me a little longer than a few weeks. But in the meantime, a lot of things more interesting than a simple test have happened: all the other major players, not just OpenAI, have released features dedicated to long-term memory management, or, as someone more knowledgeable than me puts it, "solutions that automatically enrich context."
And I even managed to build one just for myself.
Let's go step by step.
The muddy blob of ChatGPT memory, months later
Remember that "muddy blob" I was talking about? That shapeless mass of memories ChatGPT accumulates without you being able to figure out what's inside?
After months of using extended memory, I can confirm: this is exactly the case.
What I discovered:
Remembering to forget - I asked ChatGPT to forget a pointless conversation about a character in a book I was writing, Gigi. The result? It keeps coming back with "like the character Gigi, whom you asked me to forget." Thanks, very helpful. In workshops I use the example of liking chocolate bread: I have it memorize the fact, then delete it, and it still can't forget it, because it keeps reading it back from past conversations.
It doesn't remember what it should - When I asked about a specific conversation on "Paraprosdokians" (a figure of speech I love, because they're the only times AI makes me laugh), it didn't remember it at all. It had vanished into thin air. Yet if I search the conversation list, it's right there. Either it isn't indexing everything, or the indexing isn't working properly.
It summarizes in broad strokes - It doesn't store details; it creates a "broad summary" based on criteria known only to the provider's algorithms, and you can't inspect them. At least Claude and Gemini search your conversations: if you really don't want them to remember something, just delete that specific conversation.
Broken timelines - I asked it for a timeline of our exchanges: it remembers something from August 2023, then jumps straight to September 2024. Entire months evaporated.
It feels like having a colleague who "maybe" remembers things, but you're never sure what. And their priorities are set outside of the work environment you're interested in. (Cryptic sentence, I know, reread it.)
The market today: who remembers what and how
While ChatGPT was experimenting with its muddy blob, all the other players were moving. And each one chose a different path.
I did thorough research on how the major providers (OpenAI, Anthropic, Google, Perplexity, Microsoft, Meta, xAI) handle memory and continuity.
The result? There is no standard approach.
Some automate everything (ChatGPT), others give you manual control (Claude), and still others integrate with your social media (Meta) or business data (Copilot). Context windows range from 128,000 tokens to 2 million. Search capabilities are completely different. The philosophies are different too.
I've collected everything in an interactive notebook where you can:
- Explore the features of each provider
- Compare "opaque" vs. "transparent" approaches
- Find resources and links for further information
- Play with the data to understand which solution is right for you
Please note that you will find some inaccuracies in it, due precisely to memory issues: it cannot help but keep remembering one incorrect fact (namely, that Anthropic by default does not use your data to train new models).
The question that always arises is: which of these approaches truly gives us control?
The basic problem: opaque memory vs. visible memory
The problem isn't that AI has memory. In fact, it's incredibly useful and will become increasingly essential. The problem is that you don't know what it remembers, why, and whether that memory will return at the right time.
Think about it:
Ghost information - The AI cites a piece of information it "remembers" from a past conversation. Unfortunately, that information was already incorrect at the time, and now it's repeating it as if it were true.
Project Contamination - You've been discussing two different clients in separate chats. The AI mixes details from one with the other. You don't notice. If you're lucky, you'll just make a fool of yourself.
Unfounded confidence - It confidently answers you using "what it knows about you." But you can't verify what it knows, where it got it from, or whether it's up to date. (See the previous article.)
The silent bias - It "learned" that you prefer a certain style. But it learned it from a conversation where you were joking. Now it applies it all the time, and it stops challenging you or pushing you out of your comfort zone. And that preference is spread across fifty scattered conversations that it uses as the basis for its memory, and that it will take months, perhaps, to forget.
When it's wrong, it's wrong with the same certainty as when it's right. You find yourself hoping it remembers, instead of knowing for sure what it knows.
| | Opaque Memory | Visible Memory |
| --- | --- | --- |
| What's in it | You don't know | You see it in files you control |
| How to change it | Erase and hope | Edit directly |
| Who decides what is important | The provider's algorithm | You |
| When it is used | When the AI decides | When you recall it |
| Errors | Invisible, persistent | Identifiable, correctable |
The solution I'm working on: miniMe
While I was testing the ChatGPT muddy blob, I was building something else.
It's called miniMe, and it's my personal assistant. I explained the theory behind it a short while ago (you can find it here | read it here), and it remembers everything I tell it to remember, but in a completely transparent way.
How does it work? Simply put, it stores everything in folders I control and in Markdown (.md) files.
Those files contain who I am, how I work, my rules, where my projects are, my tone of voice, books, posts, everything. Everything is clear. Everything is editable. Everything is under my control.
When miniMe works with me:
- It already knows the context - I don't have to re-explain who I am in every conversation
- It accesses my files - It reads and writes directly to my computer
- It follows my rules - The ones I wrote, not those of an algorithm
- It learns what I decide - If I want it to forget something, I remove it from the file. Done.
It's like having a colleague share a folder with you. You don't have to explain the context every time. They already know everything. You just have to pick up where you left off yesterday.
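To make that concrete, here is a minimal sketch of what "memory as Markdown files" can look like in practice. It is not miniMe's actual code: the `memory/` folder name and the `load_memory` function are illustrative assumptions. The point is that the context is nothing more than plain files read from disk.

```python
from pathlib import Path

# Hypothetical folder with one Markdown file per topic:
# memory/profile.md, memory/rules.md, memory/projects.md, ...
MEMORY_DIR = Path("memory")

def load_memory(memory_dir: Path = MEMORY_DIR) -> str:
    """Concatenate every Markdown memory file into a single context block.

    Each file stays human-readable and editable: to make the assistant
    "forget" something, you edit or delete the file. Nothing is hidden.
    """
    sections = []
    for md_file in sorted(memory_dir.glob("*.md")):
        sections.append(f"## {md_file.stem}\n{md_file.read_text(encoding='utf-8')}")
    return "\n\n".join(sections)

if __name__ == "__main__":
    # Prepend the visible memory to whatever prompt goes to the model.
    context = load_memory()
    prompt = f"{context}\n\n---\nQuestion: how should I open next week's workshop?"
    print(prompt)
```

Deleting a line from `rules.md` is all it takes to change what the assistant "knows" the next time it runs.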
The slightly more complex part (for those who want to get a bit more technical)
Every time I oversimplify, as I have in this post, some engineer in the world starts to suffer, I know. I apologize.
Working with search agents requires a lot of technical knowledge that's beyond the scope of this post. In miniMe, there are dedicated agents and skills that search for documents within the project, on my computer's hard drive and connected network folders, in conversations with it, or in a vector database, and return fragments of memory that I can decide to use or not.
For those who don't know, a vector database is an archive that transforms texts into numbers (vectors) that represent their meaning. When I search for something, it doesn't look for exact words, but rather similar concepts: if I ask "how to manage a difficult team," it also finds documents that talk about "leadership in complex situations" even if they don't contain those words. It's like having a librarian who understands what you mean, not just what you say.
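If you want to see what that looks like in code, here is a tiny example using the open-source Chroma vector database. It is only an illustration of the concept, not what miniMe actually runs, and the sample fragments are invented:

```python
# pip install chromadb
import chromadb

client = chromadb.Client()  # in-memory instance, enough for a demo
memories = client.create_collection(name="memories")

# Store a few text fragments; Chroma turns each one into an embedding vector.
memories.add(
    ids=["m1", "m2", "m3"],
    documents=[
        "Notes on leadership in complex situations",
        "Draft chapter about Gigi, a character in my book",
        "Timeline of the client project started in August 2023",
    ],
)

# The query matches by meaning, not by exact words: "difficult team"
# retrieves the leadership note even though the words don't overlap.
results = memories.query(query_texts=["how to manage a difficult team"], n_results=1)
print(results["documents"][0])
```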
Last but not least, miniMe uses the Model Context Protocol (MCP), which allows it to communicate with other applications (such as a CRM, an ERP, or your email). By connecting these systems to your preferred AI (not without risk), you gain control over what you need, or at least 80% of it.
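As a rough idea of what an MCP integration involves, here is a minimal sketch of a server that exposes the Markdown memory as a tool, written with the official Python SDK. The server name, the `read_memory` tool, and the `memory/` folder are illustrative assumptions, not miniMe's real implementation:

```python
# pip install "mcp[cli]"  (official Model Context Protocol Python SDK)
from pathlib import Path
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("minime-memory")  # illustrative server name

@mcp.tool()
def read_memory(topic: str) -> str:
    """Return the content of one Markdown memory file, e.g. 'rules' or 'projects'."""
    path = Path("memory") / f"{topic}.md"
    if not path.exists():
        return f"No memory file found for '{topic}'."
    return path.read_text(encoding="utf-8")

if __name__ == "__main__":
    # Runs over stdio; any MCP-capable client can now call read_memory on demand.
    mcp.run()
```

The client only sees what the tool explicitly returns, which is exactly the kind of control the opaque approach doesn't give you.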
The key difference compared to using ChatGPT? I don't hope it remembers. I know what it knows and how it surfaces the information. And if something doesn't add up, I can check it immediately.
Is it perfect? No.
So what...
AI memory isn't a problem; on the contrary, it's a powerful feature that has forever changed the quality of responses and the use cases where it can be applied.
Visible memories you can use without any problems. They are files, folders, documents. You can view them, edit them, organize them as you wish. You can share them, version them, delete them. They're yours.
Opaque memories can cause you problems. You don't know what's inside them. You don't know when they kick in. You don't know whether they're introducing bias into your responses. And when they're wrong, you don't even know they're wrong.
The problem is who controls it and how it affects the way we work with the Assistants.
If you use AI assistants with memory enabled, do so consciously:
- Periodically check what it has memorized (Settings → Personalization → Memory)
- Delete conversations that might mislead it
- Don't assume it "knows" something just because you told it once
- Turn memory on and off between conversations depending on whether you need it or not, and be aware that when it is enabled, it could be quite wrong
But if you want to make the leap, if you want to move from "hoping" to "knowing," the path is different: build or use an environment where you control your memory.
It's not for everyone; it requires some initial setup. But once you've done it, the paradigm shift is radical.
miniMe is proof that it can be done. And that it works.
And the final question is: do you want to hope, or do you want to know? I think it's a question that will keep us busy for years to come.
Enjoy AI Responsibly!
Max
