Every CEO Is Being Asked About AI. Most Are Getting the Wrong Advice.

Why the AI consultancy industry is producing expensive proofs of concept that never ship - and what actually works.

A survey of 504 CEOs published by Dataiku and The Harris Poll in March 2025 found that 74% admit they could lose their job within two years if they don't deliver measurable AI business gains. In the US, the figure is 79%.

That number alone explains the urgency. Boards are not asking whether AI is relevant. They are asking when it will deliver results. And CEOs are under more pressure to show those results than at any point in the last decade of technology investment.

The industry has responded in the obvious way: a flood of AI consultancies, AI vendors and AI strategies. Every firm now sells AI. The pitches are polished. The demos are impressive.

And almost none of it is working.

The Numbers Nobody Wants to Talk About

According to IDC research published in partnership with Lenovo in 2025, 88% of AI proofs of concept never make it to production. For every 33 AI pilots launched, only 4 graduate to a working system.

PwC's 29th Annual Global CEO Survey (January 2026, 4,454 CEOs across 95 countries) found that 56% of CEOs say their company has seen neither higher revenues nor lower costs from AI. Only 12% - one in eight - report seeing both.

Deloitte's global survey of 1,854 senior executives (October 2025) found that while 85% of organisations increased AI investment in the past 12 months, only 15% report significant measurable ROI. The typical AI payback is 2 to 4 years, against an expected 7 to 12 months for traditional technology investments.

S&P Global reported that 42% of companies abandoned the majority of their AI initiatives before reaching production - up from 17% the year before.

McKinsey's 2025 State of AI survey found that while 88% of organisations report regular AI use in at least one function, only 7% have fully scaled AI across their business.

These are not fringe findings from obscure research firms. These are the largest consulting, research and advisory organisations in the world - and they are all saying the same thing. The money is flowing. The results are not.

Why AI Projects Actually Fail

The AI vendor narrative goes like this: here is a powerful model, feed it your data, watch the magic happen. The demo works beautifully. The board gets excited. The budget gets approved.

Then the project hits reality. And the reasons it stalls are rarely the ones the AI industry talks about.

The wrong problem. A RAND Corporation study on AI project failure found that misaligned objectives are a primary root cause. Companies ask "what can AI do?" instead of "what business outcome do we need?" They build something technically impressive that does not solve a problem anyone actually has. Or the problem is too vague to be solvable - "use AI to improve customer experience" is not a brief an engineering team can act on.

The wrong people. BCG and MIT Sloan research found that only about 10% of companies achieve significant financial benefit from AI. The common factor in the other 90% is not technical failure - it is organisational failure. The data science team builds a model in isolation. Nobody involves the people who would actually use it. When the tool is ready, the business is not. Change management is treated as an afterthought, or not considered at all.

The wrong foundation. The data the AI needs lives in six different systems. The CRM says one thing, the billing system says another, the finance team's spreadsheet says a third. The proof of concept was built on clean, curated sample data. Production means working with real data quality issues, real system integrations and real workflows that have grown organically over years.

No path from demo to production. The gap between a working prototype and a reliable production system is enormous. Monitoring, retraining, version control, integration with existing workflows - most AI consultancies deliver the demo and leave you to figure out the rest.

Notice the pattern. Only one of these four is a purely technical problem. The other three are about understanding the business, the people and the systems before you start building. Most firms skip that work because it is slower and less exciting than showing a demo.

What a Working AI Implementation Requires

When you look at the organisations that successfully deploy AI - McKinsey's "high performers," the 7% that have scaled AI across their business - they share common traits. None of them are about having better algorithms.

A clear business problem. Not "implement AI" but "reduce customer onboarding time from 14 days to 3" or "automate the monthly board reporting that takes two people a week." Specific enough to measure, important enough to fund, and defined by the business - not by the technology team.

Cross-functional involvement from day one. The people who will use the tool are in the room when it is designed. Their workflows are understood before anyone writes code. Their objections are addressed before launch, not after.

Connected data and systems. Not a one-off data cleaning exercise, but reliable pipelines where data flows automatically from the systems that generate it to the systems that consume it. The AI tool talks to the systems your team already uses - not another isolated dashboard nobody checks.

Processes designed for the output. An AI system that generates insights nobody acts on is an expensive dashboard. The workflows that consume AI output need to be designed alongside the AI itself.

A team that can run it. If the consultancy that built it is the only team that understands it, you have not implemented AI. You have rented it. Documentation, training and handover are not optional extras - they are the difference between a working system and an expensive dependency.

What the Board Actually Wants

When a board asks about AI, they are not asking for a proof of concept. They are asking for business results: lower costs, higher revenue, faster decisions, competitive advantage. The AI is a means to an end.

The CEOs in the Dataiku survey are not afraid of AI itself. They are afraid of spending millions on AI initiatives that produce impressive presentations and no measurable business impact. They are afraid of being the CEO who bet big on AI and has nothing to show for it.

Seventy percent of the CEOs surveyed believe a peer CEO will be ousted by the end of 2025 over a failed AI strategy. This is not hypothetical anxiety. It is a concrete expectation that failure to deliver will have real consequences.

What Actually Works

The approach that works is the one the AI industry does not want to sell, because it is slower and less glamorous than a demo.

You start with the business problem - specific, measurable, defined by the people who will live with the result. You understand the systems, the data and the workflows that surround it. You involve the users from day one. You connect what needs to be connected. Then you build the AI - on a foundation that works, integrated into processes that exist, with a team that can run it without you.

When you do it in this order, something interesting happens. The AI is far more likely to work on the first attempt - because it was designed for the real business, not a clean-room demo.

The Questions to Ask Your AI Vendor

If you are evaluating an AI consultancy, AI vendor or AI strategy, ask these questions:

"How will you define the business problem before you start building?" If the answer is vague or skips straight to the technology, the project will solve the wrong problem.

"Who from our team will be involved and when?" If it is just the IT team, or nobody until the handover, adoption will fail.

"What happens between the demo and production?" If the answer does not include integrating with your existing systems, designing workflows around the output and training your team to run it independently - you are buying a proof of concept.

"What do we own when you leave?" If the answer is unclear, you are renting AI, not implementing it.

What This Means for You

If your board is asking about AI - and statistically, they are - the honest answer is not "we need an AI strategy." The honest answer is: we need to understand our business problem clearly, know whether our people, data and systems are ready, and fix what is not before we start building.

That is a harder conversation than buying a demo. But it is the conversation that separates the organisations that get real results from the ones that get an impressive presentation and nothing else.


Sources

  • Dataiku / Harris Poll, "Global AI Confessions Report: CEO Edition," March 2025. 504 CEOs surveyed.
  • IDC / Lenovo, CIO Playbook 2025. Reported via CIO.com, March 2025.
  • PwC, 29th Annual Global CEO Survey, January 2026. 4,454 CEOs across 95 countries.
  • Deloitte Global, "AI ROI: The Paradox of Rising Investment and Elusive Returns," October 2025. 1,854 executives surveyed.
  • S&P Global Market Intelligence, "Voice of the Enterprise: AI & Machine Learning, Use Cases 2025."
  • McKinsey & Company, "The State of AI," November 2025. 1,993 participants across 105 nations.

Is Your Business AI-Ready?

Ten questions. Two minutes. Find out whether your systems and data can support AI that actually delivers - or whether the foundation needs work first.

Take the Quick Assessment · Talk to Us Directly