Welcome to another edition of our "Almost Monthly" column, where I delve into the fascinating world of AI! With AI technology racing ahead at breakneck speed, we find ourselves amidst a whirlwind of developments. Big tech companies are eager to tout their AI models as the best, governments are in a race to regulate first, and the debate between "open" and "closed" models adds to the complexity. The AI landscape is continuously evolving with new models, a deluge of news, and insightful research papers.
I often wonder whether to continue this column in a world where information ages in just a couple of days. But then, a little synthesis can be beneficial. I'd love your feedback – let me know if you find this useful too 🙂
Today, in no particular order, we're discussing several intriguing developments that caught my eye and might interest you as well:
Pika Labs & the World of Generative Video
Pika is a startup developing a tool for generating and editing videos with AI. They recently unveiled Pika 1.0, a significant upgrade featuring a new AI model capable of generating videos in various styles, including 3D animation, anime, cartoon, and cinematic, along with an improved web experience.
So What?
This means that describing what we want in a video will be enough to have it generated 'on the fly'. Worried?
Channel 1 - AI-Generated News Channel
Related to Pika (logically, not literally) is the birth of Channel 1: a news channel promising automatically generated, personalized video news, without the need for crews, journalists, or editors.
So What?
Is this the end of truth as we know it, or the beginning of a world where facts become news instantaneously?
European AI Act
The European Union has reached a landmark agreement on rules to regulate artificial intelligence, termed the "EU AI Act." It's set to be the world's first comprehensive regulation in this field.
The act categorizes AI systems into four risk levels: minimal, limited, high, and unacceptable. High-risk systems will need to meet special requirements to protect the public.
Notably, the law plans to exempt AI released under free and open-source licenses from regulation, unless it falls into the high-risk category. The act introduces the concept of General Purpose AI Systems (GPAI), suggesting that models like LLMs and their multimodal counterparts could be classified as GPAI.
A Reuters report indicates that the EU has proposed that creators of foundation models clearly document their training data and system capabilities, address potential risks, and submit to external reviews. However, some countries, like France, Germany, and Italy, prefer self-regulation over stringent rules, fearing limitations on European competitiveness against U.S. firms.
Finally, the European Parliament declared that GPAI systems and the models they're based on must adhere to transparency requirements, conduct model assessments, evaluate and mitigate systemic risks, perform adversarial tests, report serious incidents to the Commission, ensure cybersecurity, and report on their energy efficiency.
So What?
So, politics is trying to regulate, in its own time, a world that's still taking shape. Necessary, undoubtedly, but how effective? The horses have already bolted, and the genie is out of the bottle. All we can do is observe and maintain a willingness to experiment (without going overboard) in the meantime.
Mistral AI
Mistral.ai is the first European (French) company to release a series of LLMs with highly interesting performance and features. Why interesting? Because the model weights, together with key technical details, are fully open (released under the Apache 2.0 license). Mistral recently raised 385 million euros at a valuation of over 2 billion euros, earning unicorn status - the only one in European AI.
Have I been too non-technical? Enjoy reading this 🙂
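If you'd rather get hands-on, here's a minimal sketch of running one of Mistral's openly released checkpoints with the Hugging Face transformers library. Treat it as an illustration under my own assumptions (the model ID and generation settings are mine, not an official recipe):

```python
# Minimal sketch: running an openly released Mistral checkpoint locally.
# Assumes the `transformers`, `torch`, and `accelerate` packages are installed
# and that you have enough GPU/CPU memory for a 7B-parameter model.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the publicly available Mistral 7B Instruct checkpoint.
model_id = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a prompt in Mistral's instruction format and generate a completion.
prompt = "[INST] Explain in one sentence why open model weights matter. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That's the practical meaning of "open" here: you can download the weights and run them on your own hardware, no API gatekeeper required.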
So What?
This means Europe is finally joining the global AI race. I'll be dedicating some time to this model because it shows great promise. Let's see what happens.
The AI Alliance vs Frontier Model Forum
Some call it the "Club of Everyone Else": an alliance aiming to "coordinate" the development of standards, benchmarks, and tools, and to pursue Responsible AI through innovation, political influence, and the establishment of ethical rules - something its members believe is achievable only with open models.
Key members include Meta, IBM, AMD, Intel, and many others.
Choosing not to join the AI Alliance, Google, OpenAI, Microsoft, and Anthropic formed the "Frontier Model Forum" around their closed models. Their reasoning? They believe Responsible AI through innovation, political influence, and ethical rule setting (yes, you just read this above) can only be achieved with closed models.
Gemini by Google
Speaking of closed models, Google recently released two of the three Gemini models: multimodal AI models challenging GPT-4's dominance.
The models:
Nano: a Small Language Model ready for your smartphone.
Pro: the middle version, akin to GPT-3.5.
Ultra: the version aiming to rival GPT-4, available in 2024.
So, why is Gemini significant?
Multimodal AI: Gemini is natively trained and works with text, images, audio, and video, potentially leading to more comprehensive AI applications.
Comparable Performance: Its capabilities (in the Ultra version) are similar to or slightly better than GPT-4's, indicating steady progress in AI technology.
Diverse Applications: Its ability to understand and generate different media types could open new possibilities for AI use across various sectors.
Mobile Integration: With Gemini Nano on Pixel 8 Pro, this advanced AI technology becomes highly accessible on smartphones.
Enhanced Understanding: Gemini's improved reasoning and linguistic abilities, given its "Multimodal Origins," might lead to more efficient and effective AI interactions.
Comments on the Pro version compared to Bard? In my opinion, it's another GPT-3.5 (and there are many at this level now).
Concerns? What about my data in a free Google product?
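If you want to form your own opinion on the Pro model, here's a minimal sketch using Google's google-generativeai Python package. The key handling and prompt are my assumptions for illustration; you'd get an API key from Google AI Studio:

```python
# Minimal sketch: calling Gemini Pro through Google's SDK
# (pip install google-generativeai). Assumption: the API key is
# stored in the GOOGLE_API_KEY environment variable.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Summarize the EU AI Act in two sentences.")
print(response.text)
```

Note that, unlike the Mistral sketch above, everything here runs on Google's servers - which is exactly where the data question comes in.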
OpenAI Drama
There's been so much written about Sam Altman's firing and rehiring that I don't want to add more, but it does raise a reflection.
How difficult is it for a company to invest in a market with such levels of uncertainty?
This "brain brawl" has upped the difficulty considerably, and when I talk with companies, it's hard to rationally justify trust in what's currently the world's most important AI company.
However, it's crucial not to just stand by: Experimenting and taking risks at this stage is the only way to be ready when the market stabilizes. Starting to work then will be too late…
As always, I invite you to reflect, comment, and spread ideas by sharing this post with people you think might be interested.
To stay up to date on my content:
Read Glimpse, my novel on AI (Hang in there - English edition coming soon!)
Or contact me here (especially if you think your organization might need a practical AI Workshop!)
See you next time!
Massimiliano Turazzini