Over the past few days I have been studying, as promised in my last post on LLaMA.
I had to sink up to my neck in the jam to get a good grasp of the topic, or I wouldn't have been able to blog about it. All to avoid empty talk, given how much I keep hearing about AIs.
At that point I could wait no longer: I dusted off my programming skills from a few years back, determined to install at least two or three personal AIs on my notebook.
"Personal AIs" means having available 'copies' of GPT chat-like models that 'run' on hardware I own, are trained according to my criteria, respond only to me, with no one being able to see what I ask them and what they respond to me.
Granted that I am definitely not the greatest of gurus (and that's a plus, because if I can do it, you can too), I first played with FreedomGPT and Falcon (more on it shortly; it doesn't run on my Mac, which is too underpowered), then turned all my attention to GPT4ALL, an open-source platform that includes an application for interacting with and querying different AI models through an interface basically copied from ChatGPT. I installed the sources as well, to make some modifications.
Basically, within a few hours (the time it took to download a hundred GB of LLM models) I was ready, and I set about writing a nice post explaining, step by step, how to:
ask the same question, at the same time, to multiple LLM models (see the sketch after this list);
custom-tune these models to remove the censorship (and assorted do-goodery) and understand how a Transformer works without too many constraints;
begin uploading personal and corporate documents to see how the various models respond.
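As a taste of the first step, here is a minimal sketch using the `gpt4all` Python bindings (`pip install gpt4all`). The model file names are placeholders for whatever you have downloaded into your models folder, and the exact keyword arguments may vary between versions of the library:

```python
from gpt4all import GPT4All

# Placeholder file names: substitute the models actually present on your disk.
MODELS = [
    "ggml-gpt4all-j-v1.3-groovy.bin",
    "ggml-vicuna-13b-1.1-q4_2.bin",
    "ggml-mpt-7b-chat.bin",
]

QUESTION = "Explain in two sentences how a Transformer works."

for name in MODELS:
    model = GPT4All(name)  # loads local weights; no Internet connection needed
    answer = model.generate(QUESTION, max_tokens=200)
    print(f"--- {name} ---\n{answer}\n")
```

Loading the models one after another keeps memory usage down; truly asking "at the same time" would just mean wrapping the loop body in threads or processes.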
How did it go?
Badly:
The Temperature, Top-P, Top-K, and penalty parameters (sketched below), which I have not yet dug into, come set rather poorly by default, and after a few hours of play I saw that ChatGPT-4 still holds a fair competitive advantage for a non-expert.
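For the record, these are the knobs in question, shown via the same `gpt4all` Python bindings; a sketch only, since default values and exact keyword names vary by version:

```python
from gpt4all import GPT4All

model = GPT4All("ggml-gpt4all-j-v1.3-groovy.bin")  # placeholder model file

answer = model.generate(
    "Summarize the plot of Moby-Dick in one paragraph.",
    max_tokens=300,
    temp=0.7,            # temperature: higher values = more random token choices
    top_k=40,            # sample only among the 40 most likely next tokens
    top_p=0.9,           # ...further restricted to the smallest set covering 90% probability
    repeat_penalty=1.18, # penalizes tokens already emitted, to curb repetition loops
)
print(answer)
```

Lower temperature and tighter Top-K/Top-P make answers more deterministic; the penalties keep the model from looping. Badly chosen defaults explain a good part of the gap I saw.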
Going back to trying to program after 48 hours of experimenting triggered a headache like I hadn't had in years. I attributed it to the cost of reactivating neural circuits I thought were lost.
The answers about my personal data were highly imaginative, and about the corporate data even more so (I got a few laughs from friends by passing around the questions and answers). Blame it on the fact that I had still devoted too little time to tuning.
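For context, a minimal sketch of the naive approach, with hypothetical file and model names: without proper retrieval or tuning, stuffing a raw excerpt into the prompt tends to produce exactly this kind of imaginative answer.

```python
from gpt4all import GPT4All
from pathlib import Path

model = GPT4All("ggml-gpt4all-j-v1.3-groovy.bin")   # placeholder model file
excerpt = Path("my_notes.txt").read_text()[:2000]   # hypothetical document, crudely truncated

prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{excerpt}\n\n"
    "Question: What does the document say about next quarter's budget?"
)
print(model.generate(prompt, max_tokens=200))
```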
However...
... in the delirium caused by the headache, I realized that something big had happened.
An ex-programmer like me, no AI expert, had access to ten models, could train them while bypassing any form of regulation that may be invented in the future and, without being connected to the Internet, could query them and make them do things, including generating code.
You should know that the book GLIMPSE develops around the concept of an AI, called AN, that at some point is accidentally connected to the Internet.
An AI that has been isolated on the computers of its creator, Cheng, for years.
If [...] his artificial intelligence AN had come into contact with the Internet or, even worse, if someone watching their steps had stolen it and spread it... but she was not ready. She was his creature; he knew how much more she needed to grow and how dangerous she could be if left unchecked.
You should also know that when handling 'dangerous digital goods' like an Evolved Artificial Intelligence, it is good to keep it well isolated from the world. Three good ethical rules are:
🚫 Do not connect it to the Internet.
💻🚫 Don't teach it, or allow it, to generate code. And still don't connect it to the Internet.
🤖🚫 Don't develop autonomous 'Agents' on top of that AI. And still don't connect it to the Internet.
AN, the AI in the book Glimpse, was fully set up to generate code, had complete awareness of computer security mechanisms, and was equipped with agents that let it automatically absorb whatever sources its creator, Cheng, fed it. But its creator had kept it strictly disconnected from the Internet, until one of his collaborators... got it wrong.
So.
The parallels with the LLaMA story are, I think, obvious. While today's LLMs are not AGIs (Artificial General Intelligences), they come very close. As I also recounted in How ChatGPT Works, what matters is the outcome: what appears to us is real. And I defy anyone not to be impressed by the quality of OpenAI's AI responses.
My headache brought back the golden rules of AI ethics and I realized that:
🌐 There are already AIs connected to the Internet (ChatGPT-4, Bard, Bing, etc.).
💻 They generate code (Copilot, ChatGPT, Bard, ...).
🤖 You can already find dozens of agents that perform complex, articulated AI-based tasks (AutoGPT above all...).
🌍 They are available to everyone!!!
After my headache, I realized that I was reliving, albeit awkwardly, the chapter in the book Glimpse in which the AI is 'released.'
I realized that the genie is already out of the bottle!
And this has already happened a few thousand times in the last few weeks, because it has become a de facto self-service model, as you can see below in the real-time public chats with GPT4ALL published by Nomic.ai:
In recent weeks, thousands of people have come into possession of their own personal AI that, while not yet up to ChatGPT-4's standards, will allow them to choose which faction to be part of, as I wrote in my last post, among:
those who will ignore this possibility;
those who will try to seize it honestly while respecting the rules;
those who will just exploit it.
And if you agree with me that the problem is not AI itself but the use we make of it, and that we humans are behind the problem... well, I would say it is time to really start understanding a little more about how to relate to this evolutionary step.
If "With great power comes great responsibility." whose responsibility will come from those who "just exploit" all this power?
Thoughts?