If I told you that 80% of AI projects fail, how would you react? And if I added references to the posts, articles, and podcasts that have been predicting the bursting of the AI bubble in recent months, would that change your perspective?
But before we dive in, keep this in mind: most IT projects fail. Historical analyses show that in the 1990s only 16% of IT projects were successful, and that over the last 30 years the success rate has peaked at just 35%. So don’t let those numbers scare you. Instead, focus on the real question: how can you approach AI projects so as to minimize the chances of failure?
Who's to blame?
When an AI project fails, is it:
The AI itself (hallucinating, not performing as expected, or unfit for your needs)?
The organization (lacking the right skills, using messy data, or missing proper infrastructure)?
The user (spending too little time learning, misusing tools, or succumbing to what some call “algophobia”)?
Let’s dig deeper to uncover why AI projects fail and whether they are still worth pursuing. Spoiler: they are.
Setting expectations: Are we asking too much from AI?
Inspired by the Gartner Hype Cycle, we could all agree that we have reached the peak of inflated expectations. AI is everywhere, and we attribute miraculous, transformative powers to it; even Harry Potter’s magic pales in comparison. (Edit: Meanwhile, Gartner has confirmed this view.)
But this is a mere illusion.
AI, however advanced and powerful, remains fragile, imperfect, unpredictable, opaque, overly compliant, and extremely complex. Treating it as a magical, instant solution to every problem can only lead to frustration and disappointment, and to the failure of costly projects.
In reality, we are probably somewhere AFTER the peak of expectations, on the downward slope that leads to the trough of disillusionment.
This, in my experience with other cycles, is my favorite part: this is where real projects start, where AI is put to the test, and where we can finally work in peace. I don't know how long it will last, but it doesn’t scare me at all—in fact, I’ve been waiting for it!
How Much Time Do We Devote?
What I believe is important to consider—and here I connect with the concept of "Insufficient Time Assigned" from the first diagram—is that AI requires time to be understood.
To extend Ethan Mollick’s analogy of an AI intern: it’s impossible to assess AI if you haven’t spent at least a day and a half understanding how a model works. It's virtually inconceivable to have ideas and solutions without dedicating at least 15 minutes a day to trying it out, putting it into practice, and using prompts to interact naturally, testing what it can do in real-world situations.
Devoting little time to AI = seeing it as a magic wand = failure.
Poor Data Quality
Garbage In → Garbage Out… we all know what that means. You’ll find millions of results if you search for it on Google.
Your data isn’t good, it’s not clean, and it’s not ready for AI (which requires data in a specific format, especially in the case of Machine Learning or Deep Learning).
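To make “not ready for AI” a little more concrete, here is a minimal sketch of the kind of readiness checks you might run before handing a table to a Machine Learning pipeline. The file name and the checks are hypothetical examples, not an exhaustive validation suite, and the sketch assumes pandas is available.

```python
# Minimal, illustrative data-readiness check.
# "customers.csv" is a hypothetical file used only as an example.
import pandas as pd

df = pd.read_csv("customers.csv")

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    # Share of missing values per column, highest first.
    "missing_ratio": df.isna().mean().sort_values(ascending=False).round(3).to_dict(),
    # Columns pandas could not parse as numbers or dates often hide dirty data.
    "unparsed_text_columns": df.select_dtypes(include="object").columns.tolist(),
}

for key, value in report.items():
    print(f"{key}: {value}")
```

If even these trivial checks come back ugly, no model, however powerful, will save the project.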
Let’s flip the script to Gold In → Gold Out: if you feed in gold, you’ll get even more gold out.
Instead of starting with massive data pipeline projects to create data lakes the size of Lake Michigan, focus on the few golden elements you know you have.
At the beginning, work on Data Puddles, small pools of data filled with valuable information, or Data Poaches, as I like to call them: puddles you fill temporarily while you wait for a more structured process.
What Is a Data Poach?
Think of a Data Puddle: a targeted puddle of data containing the essential, most useful information. Now imagine this puddle not as a structured, complex collection, but as an ad-hoc, flexible resource waiting to be folded into a more organized and ambitious project. This is the idea behind the Data Poach.
Using "Poach" also suggests an unstable, tactical environment that’s not necessarily illegal but operates a bit outside the usual rules.
Ad-hoc Nature: No need to immediately build a full infrastructure to collect every bit of available data. Start with what you know is useful and put it in a safe place (a folder in the cloud, for example).
Focus on Quality and Utility: Data Poaches shouldn’t be huge. They are about gathering only the data strictly needed to solve a specific problem or explore a possibility. Therefore, rigorous validation processes aren't essential at this stage—it’s more of a sandbox for tactical, practical experiments.
Flexibility: Data Poaches don't require rigid structures. They are temporary "containers" of data, helpful for testing, iterating, and laying the groundwork for a more ambitious project. They are born and die quickly, serving only to prove a hypothesis.
This approach will let you experiment more and evaluate your project’s results with fewer constraints, while what you learn still becomes part of your knowledge base.
In fact, if we continue playing with English, you’ll end up with a fun Splash Zone, an area full of puddles to experiment with, which will gradually connect and form your Data Lake.
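To make the Data Poach idea concrete, here is a minimal sketch under purely illustrative assumptions: a plain folder holding a handful of files you already trust, plus a tiny manifest recording why the poach exists and when it should be retired. The folder name, file paths, and manifest fields are hypothetical, not a prescribed format.

```python
# A "Data Poach": a small, temporary container of known-good data for one hypothesis.
# Folder name, source files, and manifest fields are hypothetical examples.
import json
import shutil
from datetime import date, timedelta
from pathlib import Path

poach = Path("data_poach_churn_test")  # one poach per hypothesis
poach.mkdir(exist_ok=True)

# Copy only the few "golden" files you already trust into the poach.
golden_files = ["exports/active_customers.csv", "exports/support_tickets_2024.csv"]
for src in golden_files:
    shutil.copy(src, poach / Path(src).name)

# A tiny manifest: why the poach exists and when it should die.
manifest = {
    "hypothesis": "Support-ticket volume helps predict churn",
    "owner": "whoever is running the experiment",
    "created": date.today().isoformat(),
    "expires": (date.today() + timedelta(days=60)).isoformat(),
    "files": [p.name for p in poach.glob("*.csv")],
}
(poach / "manifest.json").write_text(json.dumps(manifest, indent=2))
```

The tooling is beside the point: a shared cloud folder with a short README would serve just as well. What matters is that the poach stays small, has one clear purpose, and has an expiry date.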
No Data (or Bad Data) → No AI → No Business
How Does Your Organization Respond to Your AI Project?
A poor organizational response and a lack of awareness and alignment among the people involved are cultural factors present in every company.
Internal conflicts between different perspectives—IT vs. other departments, management vs. end users—do not help.
With AI, these challenges can be amplified due to lack of expertise: AI is still an immature field that, until recently, was accessible to only a few experts.
But AI is also, quite likely, the biggest change-management process in the history of modern business, and it will suffer from all the problems that surface during organizational change.
Those who follow me know what I think: AI is a process, a journey. You can’t just adopt it without planning—including training, testing phases, and accepting failures.
Moreover, the first problem I often see is a lack of awareness of what AI, generative or otherwise, can actually do. Coupled with too little time spent using it, this produces the effect highlighted in an international report by Bain: when people start using AI, it often falls short of their initial expectations. And that "lack of understanding how to utilize tools" cannot be addressed through structured training alone. AI moves too fast, and the risk of learning methods that are already outdated is ever present.
To be clear: I’m not saying training isn't necessary, quite the opposite! But, going back to my first point, we can’t expect AI to read our minds if we don’t understand how it works.
Continuous learning and experimentation are required from every role. We can’t wait for an AI Head to be hired before we start using it. (Pardon the plug, but here’s my book, where I explain everything step-by-step).
Not convinced yet?
Here’s Article 4 of the EU AI Act, which has applied since February 2025: providers and deployers must ensure a sufficient level of AI literacy among the people who operate and use AI systems on their behalf. The deployers are you, the ones using AI.
But What Does "Failure" Mean?
In an IT project, failure occurs when a project is not completed, is delayed, or has costs exceeding the revenues, resulting in negative ROI.
However, looking for direct ROI from an AI project, especially at the beginning, is the surest way to fail.
If your goal with AI is to immediately measure its contribution to your profit margins... prepare for bitter surprises.
So what should we look for?
As I discuss in my book, the true return on an AI project is measured by the transformative value it can generate. Here are some examples:
Improved Operational Efficiency: Your AI won’t work miracles, but it can automate repetitive tasks and free up valuable resources. Think about the time saved by your employees and how they can focus on higher-value activities (I sketch a rough calculation right after this list).
Strategic Insights: AI analyzes mountains of data in seconds. It gives you new perspectives on your processes, customers, and markets. This is invaluable, but it’s not something you’ll see immediately in the quarterly financial statements.
Experimentation and Learning: Investing in AI also means learning how to interact with it. Experimenting with small projects allows you to build internal competencies, test technologies, and—most importantly—fail safely.
Future Flexibility: Adopting AI creates a technological and cultural foundation that prepares the company for future opportunities—often unpredictable ones—that you wouldn’t even consider without AI. It means knowing what to ask your partners when developing AI projects and understanding the potential limits.
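As a back-of-the-envelope illustration, here is a hedged sketch with entirely hypothetical figures: a pilot that looks like a failure when judged on direct revenue alone, yet produces a very real saving once you count the hours it frees up.

```python
# All figures are hypothetical and for illustration only.
cost = 50_000            # pilot budget
direct_revenue = 10_000  # new revenue directly attributable to the pilot

direct_roi = (direct_revenue - cost) / cost
print(f"Direct ROI: {direct_roi:.0%}")  # -80%: a "failure" on paper

# The same pilot, valued by the time it frees up instead of the revenue it books.
hours_saved_per_week = 40  # across the whole team
hourly_cost = 35
weeks_per_year = 48
value_of_time_saved = hours_saved_per_week * hourly_cost * weeks_per_year
print(f"Value of time saved per year: {value_of_time_saved:,}")  # 67,200
```

None of this shows up as a line item in the quarterly statements, which is exactly the point.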
So what...
Failure is always a possibility, and it will likely happen. That’s why it’s crucial to start with small projects that won’t cause too much pain if they fail.
Begin with projects that have high potential for success and that you can control, even if they involve manual or semi-manual processes.
Gradually, you’ll get better at it.
While executing these small projects, your team will be able to experiment with what AI can do and learn that different models are needed for different goals (one day I’ll put this crucial concept in writing). You’ll realize that spending a day with the AI intern, and then dedicating 15 minutes a day to it, can only benefit you and your organization.
AI needs to be hired, not installed, but only after going through a journey of awareness (AI-PLUG is the framework I work on), accepting small failures, and focusing on the potential that grows day by day.
By working on small projects that gradually increase in complexity, and by connecting the various Data Poaches over time, you’ll find yourself on the right track toward creating an ecosystem that, once mature, will generate continuous value. It will be a journey of constant learning and evolution.
It’s not just about introducing AI into your company. It’s about reinventing how you operate, make decisions, and envision the future. Every small experiment today is a seed for tomorrow’s innovation.
The real ROI of AI is the transformation itself: a journey that never ends, but one that will lead you to see possibilities where there were once only limits.
What do you think?