Two people say “AI isn't impressive” and mean completely different things. The first tried ChatGPT in 2022, got told something confidently wrong, and moved on. The second is deep enough inside these systems that the marketing hype reads flat compared to what they are actually watching the tools do. The two statements are word-for-word identical. They describe opposite realities.
Then there is the middle, where most of us live: awed by the demo, occasionally stunned, not quite sure what to make of any of it. Arthur C. Clarke's line about sufficiently advanced technology being indistinguishable from magic applies more often than it should. Live meeting transcripts appear on the screen as people speak. That has not gotten old.
These are not three different opinions about the same technology. They are three different technologies being described with the same word.
The 2022 version is not the 2024 version is not the 2026 version, and the version someone uses for twenty minutes a week is not the version someone is building a product on daily. Every one of those people has a coherent, reasonable view. None of them are looking at the same thing.
Walk into most SMBs right now and here is the room.
The CEO tried Claude a few times, got impressed, put AI on the roadmap, and now wants to know why the rollout is stuck.
The marketing lead is running ChatGPT in a browser tab, getting useful drafts every day, and cannot understand why everyone else is overthinking this.
The IT lead read a post about hallucinations eighteen months ago, watched a demo fail last quarter, and considers the whole category premature.
The most senior operator is already pairing with an AI assistant on everything they do, knows exactly where it helps and where it breaks, and is quietly running circles around the rest of the team.
Those four people are in the same meeting talking about “AI.” None of them are looking at the same thing. The CEO is imagining a strategic layer. The marketing lead is describing a writing assistant. The IT lead is picturing an unreliable demo. The senior operator is using a pair programmer. Every AI decision the company tries to make runs straight into that wall, and almost no one names it, because naming it would mean admitting the conversation was never actually about the technology.
The usual explanation for stuck AI adoption is some version of “we need better training,” or “the tools aren't ready yet,” or “our data isn't clean enough.” Those explanations are real, but they miss the binding constraint. The people making decisions about AI have not built a shared reference point, so every decision is downstream of a hidden disagreement about what the thing even is.
That shows up as procurement cycles that stall because nobody can articulate what they are buying. Pilots that run for a quarter and produce no conclusion anyone agrees on. Training sessions where half the room feels patronized and the other half still does not know where to start Monday morning. Strategy documents written by whoever has the most recent demo fresh in their head.
The companies getting AI to land share one trait: everyone making the decisions has actually used the same AI workflows on the same real work, recently enough to be looking at the same picture. That shared picture is a better predictor of whether AI lands than the tools or the budget. One of our clients installed this pattern in two weeks, and her leadership team now has a shared vocabulary for what AI can and cannot do.
All-hands trainings, vendor demos, and consultant slide decks produce parallel exposure. Everyone in the room sees something. Almost no one sees the same thing.
A shared reference point is the same small set of AI workflows, running on the same real work, that the leadership team and the operator layer can both point at and describe the same way. Everyone has felt the same friction. Everyone has watched the same thing work. Everyone knows in their body where the edge is. When someone asks “can AI do this?” the answer stops being a debate about capabilities in the abstract and becomes a reference to something the team has already run.
That reference point resets the whole conversation. The CEO stops imagining. The IT lead stops predicting from an old demo. The marketing lead stops under-describing what they are already doing. The senior operator stops carrying the whole map alone. Everyone is looking at the same thing, which means the next decision can actually be made.
There is no shortcut to this. Reading, demos, and strategy decks leave everyone in the room with their own private version of what AI is. The only thing that produces a common version is implementation: real AI workflows running on real work, with the people who matter watching, using, and correcting them.
We have written before about why the workflow is the part that compounds, not the model. The reference-point problem is the same observation viewed from the other side. Workflow architecture compounds because it is the only thing that produces shared understanding inside an organization, and shared understanding is what strategy runs on.
Until a company has installed that, it is trying to make AI decisions off five different mental models of what AI is. The decisions will not land, because the decisions were never really about AI. They were about a disagreement nobody named.
The fastest way to unstick an AI strategy is to stop talking about AI strategy and run one small, real AI workflow inside the actual business, with the people who care watching it happen. That is where the debate ends and the real work starts.