There’s a pattern I keep seeing in organizations that have committed real budget to AI initiatives. The technology works. The models perform. The demos impress executives. And yet — six months later — nothing has actually changed about how the organization creates value.

The problem isn’t technical. It’s that nobody is doing the product work.

The capability trap

When a company decides to “do AI,” what usually happens is this: a team gets formed, a platform gets chosen, models get trained or fine-tuned, and proofs of concept get built. Each of these steps is genuinely difficult and genuinely impressive.

But capability is not product. A model that can classify documents with 94% accuracy is a capability. A system that routes incoming customer requests to the right team, reduces resolution time by 30%, and integrates with three existing workflows — that’s a product.

The distance between these two things is enormous, and it’s not a distance that engineers or data scientists are trained to close. It requires a fundamentally different kind of thinking: who is the user, what problem are we solving for them, what does success look like in their daily work, and how do we measure whether we’ve achieved it?

Why operating models make this worse

In most large organizations, the AI team sits in technology. Product management, if it exists at all for internal capabilities, reports through a different structure. The people who understand the business context — operations managers, domain experts, customer-facing teams — are in yet another silo.

This structural separation means that the product questions don’t get asked until it’s too late. By the time someone says “but how will the underwriting team actually use this?”, the technical architecture has already been set, the training data has already been curated, and the assumptions about user behavior have already been baked in.

The operating model itself prevents good product thinking from happening at the right time.

What actually works

The organizations I’ve seen succeed with AI share a common trait: they treat AI initiatives as product problems from day one. This means a few specific things.

First, they start with the user’s workflow, not the model’s capabilities. Before anyone trains anything, they map the current decision-making process in detail — who decides what, based on what information, with what confidence level.

Second, they staff cross-functionally from the start. Not “we’ll bring in business stakeholders for feedback sessions,” but genuine co-creation where domain experts, engineers, and someone with product judgment sit together daily.

Third, they define success in business terms, not model metrics. F1 scores are necessary but insufficient. The question that matters is: did the human make a better decision, faster, with more confidence?

The uncomfortable implication

If your AI strategy has a product problem, the solution isn’t to hire more product managers. It’s to redesign how technology and business decisions get made together — which is an operating model question.

And that’s where things get genuinely hard, because operating model change means challenging existing power structures, reporting lines, and deeply held assumptions about who owns what.

But that’s the actual work. Everything else is expensive theater.