Bad Survey Data or Pure Guesswork? A Better Solution to Both
Customer Intelligence in the Age of AI — Part 6 of 6
For the better part of three decades, the customer experience industry has operated on a straightforward premise: ask customers what they think, measure their responses, and use the results to improve. The methodology evolved — from satisfaction scores to NPS to journey mapping to real-time feedback — but the foundation remained constant. Survey the customer. Analyse the data. Act on the findings.
I helped build that foundation. I was part of the team that co-created the Net Promoter Score methodology, and I spent years building a company around the conviction that measuring customer loyalty was the key to driving business performance. NPS was never wrong in what it set out to do. At its best, it gave companies a common language for customer loyalty that crossed functional boundaries, made sentiment legible to finance and the board, and created accountability for the customer experience in organisations that had previously treated it as a soft concern. It changed how companies thought about their customers, and for a period, that was genuinely valuable.
But the premise it rests on — that surveying a fraction of your customers tells you something reliable about all of them — has eroded to the point where pretending otherwise is no longer intellectually honest. And the honest acknowledgement is this: the methodology was always limited by the data it depended on. It measured what customers were willing to say, at the moment they were asked, to the company asking them. It was never designed to see around corners, to capture the customers who said nothing, or to predict what would happen next. Those limitations were always there. For a time, the value of the insight justified living with them. That calculation has changed.
What broke
Survey response rates have been declining for twenty years. In B2B contexts, two to eight percent response rates are now typical. The customers who do respond skew toward the satisfied, the engaged, and the vocal. The silent majority — the ninety-two to ninety-eight percent who never respond — control the majority of revenue and represent the majority of risk. They're entirely absent from the data that informs strategy.
This isn't a fixable problem. Companies have tried shorter surveys, better timing, incentives, omnichannel distribution — the full repertoire of making a fundamentally broken thing slightly less broken. None of it has reversed the structural trend. As Part 1 of this series describes, survey response rates track a trajectory that points toward zero, not toward some comfortable new equilibrium. Customers have collectively decided that surveys aren't worth their time, and they're probably right — because decades of surveying have produced remarkably little visible change in how most companies actually operate. The feedback was collected. The dashboard was updated. The meeting was held. The next quarter began.
Meanwhile, the internal response to inadequate external data has been to substitute opinion. Health scores assembled from executive judgment. Account assessments based on the most recent interaction. Strategy driven by whoever has the strongest conviction in the room. Not because leaders are lazy or unintelligent, but because when the data is thin, human judgment fills the gap — and human judgment, however experienced, carries biases that are invisible to the person holding them. The combination of collapsing external data and unchecked internal opinion is how you end up with an industry that spends billions on customer intelligence and still can't reliably predict which customers will leave next quarter. If this were medicine, someone would have called for a review by now.
What's replacing it
What's emerging isn't a better survey. It's a fundamentally different approach to understanding customers — one that doesn't depend on the customer volunteering their opinion.
Every customer, even the ones who never respond to a survey, leaves a continuous trail of behavioural signals. How they use the product. How they engage with support. How their purchasing patterns evolve. How executive engagement deepens or thins. The frequency and tone of communications. The pace at which they adopt new capabilities. These signals aren't opinions. They're behaviour — and behaviour, it turns out, is a more reliable guide to what customers will do next than what they say when asked.
The first dimension of the shift is coverage. Survey-based intelligence covers the minority who respond; behavioural intelligence covers every customer, including and especially the ninety-four percent who never fill in a form. That isn't just a quantitative improvement. It changes whose experience shapes decisions — product roadmaps, resource allocation, intervention priorities — from the vocal and the engaged to the full customer base as it actually exists. The customers who have been running the company's revenue while remaining invisible to its measurement systems become visible for the first time.
The second dimension is direction. Survey data is inherently backward-looking — it tells you what a customer thought at the point they were asked. Behavioural signals, synthesised by models built specifically to find patterns that precede outcomes, are forward-looking. The question changes from "how did customers rate us last quarter?" to "which customers are on a trajectory toward churn in the next ninety days, and what is driving it?" That shift in tense — from past to future — is not a feature. It is the operating model change. Everything downstream of it is different.
The third dimension is specificity. A sentiment score tells you an account is at risk. Predictive intelligence tells you why the trajectory is deteriorating — which signals are driving it, how similar patterns have resolved in comparable accounts, and what intervention has the highest probability of changing the outcome. The difference between "this account is amber" and "this account's executive engagement has declined significantly over the past six weeks while usage in the core module has flattened, and accounts showing this pattern have churned at a high rate without a senior-level strategic conversation" is the difference between an alert and an instruction.
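To make the distinction between an alert and an instruction concrete, here is a minimal sketch of how behavioural signals might be combined into a churn probability with named drivers. Everything in it (the signal names, the weights, the two-driver cutoff) is illustrative: in a real system the weights would be learned from historical account outcomes, not set by hand.

```python
import math

# Illustrative only: weights like these would normally be learned from
# historical account outcomes. Signal names are hypothetical.
WEIGHTS = {
    "exec_engagement_trend": -1.8,   # declining engagement raises risk
    "core_usage_trend": -1.2,        # flattening usage raises risk
    "support_ticket_tone": -0.9,     # deteriorating tone raises risk
}
BIAS = -1.0

def churn_risk(signals: dict) -> tuple:
    """Return (probability, driving signals) for one account.

    Each signal is a trend in [-1, 1]: positive means improving,
    negative means deteriorating.
    """
    score = BIAS + sum(WEIGHTS[k] * v for k, v in signals.items())
    probability = 1 / (1 + math.exp(-score))
    # Rank signals by how much each pushes the score toward churn,
    # so the output names the drivers rather than just raising a flag.
    drivers = sorted(signals, key=lambda k: WEIGHTS[k] * signals[k],
                     reverse=True)[:2]
    return probability, drivers

account = {
    "exec_engagement_trend": -0.7,  # thinning over six weeks
    "core_usage_trend": -0.4,       # flattened
    "support_ticket_tone": 0.1,
}
p, why = churn_risk(account)
print(f"risk={p:.0%}, driven by {why}")
```

The output pairs a probability with the signals producing it, which is the shape of "this pattern, driven by these factors" rather than "this account is amber."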
This is Customer AI in its most practical form — not a generic application of artificial intelligence to business data, but a specific capability built for a specific problem: generating predictive, continuously updated intelligence about every customer in the portfolio, regardless of whether that customer has ever filled out a survey.
What changes for leaders
The cumulative effect of these three shifts — coverage, direction, specificity — is not a set of incremental improvements to the existing model. It's a different experience of running a customer operation.
In the current model, leaders are always catching up. The quarterly review surfaces what happened. The post-mortem explains why a customer left. The escalation arrives when the relationship has already deteriorated past the point of easy intervention. The organisation is perpetually behind the curve of its own customer base, and the response — more heroic saves, more reactive investment, more senior attention deployed to crises that were visible in the data months earlier — consumes resources that could have been applied at a fraction of the cost when the signal first appeared.
In a model built on predictive customer intelligence, the operating posture reverses. Leaders know, continuously, which accounts are on a trajectory toward risk and which are showing the conditions for expansion. Resource allocation decisions are made prospectively rather than reactively. The conversation in the quarterly review shifts from "what happened with this account?" to "we identified this trajectory three months ago and here's what we did about it." Cross-functional teams work from a shared picture of account health rather than reconciling five competing narratives. The CFO looks at the customer portfolio with the same analytical rigour applied to the financial one.
Perhaps most importantly, the relationship between the organisation and its customers changes. A company that understands its customers well enough to reach out proactively — before the customer has decided there's a problem — is a different kind of partner than one that scrambles to respond when the renewal conversation reveals a decision made months earlier. That difference is felt by the customer, and it changes their assessment of the relationship in ways that compound over the life of the account.
The shift, in other words, is not from one set of metrics to another. It's from a posture of reaction to a posture of understanding. And understanding, when it's genuinely predictive, is the only form of customer intelligence that makes prevention possible rather than merely aspirational.
What this demands
None of this happens automatically. The technology exists and is maturing rapidly, but technology alone doesn't change how an organisation operates.
Leaders who want to make this shift need to be honest about what their current systems actually provide. If the answer is retrospective data on a minority of customers, supplemented by internal opinion — and in most organisations, that is the honest answer — then the gap between what they have and what they need is large enough to warrant a genuine reassessment of infrastructure, not incremental improvement. Kahneman, in his book Noise, put it with characteristic precision: "You may believe that you are subtler, more insightful and more nuanced than the linear caricature of your thinking provided by a formula. But, in fact, you're mostly noisier." Organisations built around survey scores, health score dashboards, and quarterly reviews need to develop comfort with predictive metrics, probabilistic assessments, and leading indicators that sometimes contradict the backward-looking data — and the human intuition — they've relied on.
They need to invest in prevention rather than response. As Part 5 of this series argues, the entire incentive structure of most customer-facing organisations rewards firefighting. Shifting toward prevention — where the greatest successes are crises that never happen — requires deliberate changes in what gets measured, recognised, and rewarded. The finance team needs to value the counterfactual. Leadership needs to celebrate the quiet stabilisation as well as the dramatic rescue.
And they need to move. The companies that build predictive customer intelligence into their operating model now will compound the advantage over time — better data improves models, better models enable better actions, better actions generate better outcomes and richer data. The organisations that start early will be operating at a level of sophistication that late movers will find genuinely difficult to replicate. This is not a technology cycle where waiting for the next version is a reasonable strategy. It's a capability cycle where the learning curve itself is the asset.
The oldest warning in the world
Throughout this series, I've drawn on observations from a collection of people who, on the surface, have nothing in common: a quantum physicist, a baseball catcher, two Nobel economists, a behavioural psychologist, a Victorian fictional detective, a media theorist, a quality management pioneer, and Lewis Carroll. It's not a group you'd invite to the same dinner party, though the conversation would be extraordinary.
What's striking is that they all arrived at the same punchline.
Niels Bohr — or possibly Yogi Berra, or possibly a Danish proverb, which is itself a useful data point about the reliability of attribution — observed that "prediction is very difficult, especially about the future." The redundancy is the joke: a prediction is, by definition, about the future. But the reason the line survives is that it encodes something people keep needing to be reminded of.
Galbraith, who had spent a career watching forecasters systematically overstate their own competence, put it differently: "There are two kinds of forecasters: those who don't know, and those who don't know they don't know." The customer success equivalent of the second category is the account manager who gives a confident green rating to an account they haven't spoken to in six weeks — and then is genuinely surprised when the renewal conversation goes badly.
Kahneman demonstrated through controlled experiments that humans systematically overestimate positive outcomes, trust their gut over their data, and cannot recognise that an eighty percent chance of success and a twenty percent chance of failure are the same proposition. Coase warned that if you torture the data long enough, it will confess — and the confession will be false. Deming attacked lagging metrics as a management tool. Holmes warned against theorising before you have data. Carroll's White Queen found it pitiable that Alice could only remember things that had already happened. Mark Twain — attributing the line to Disraeli, probably incorrectly — ranked statistics alongside lies and damned lies, then added the quiet admission that figures beguiled him most when he had the arranging of them himself. A quote about the unreliability of facts that is itself an unreliable fact.
Every domain of human expertise that has ever grappled seriously with prediction and data has independently produced the same warning: humans are bad at handling uncertainty, worse at predicting, and most dangerous when they're confident about both. That convergence isn't a quirky observation. It's a finding. Jokes persist for the same reason proverbs do — they encode something people keep proving true.
The customer experience industry has spent three decades proving it true as well. The industry didn't fail to see the risk. It succeeded, through considerable structural effort, in ensuring that no single person ever had to. Confident storytelling was mistaken for actual prediction. Gut instinct was dressed up in dashboards. The most optimistic interpretation available was treated as the most likely one. And every serious thinker who ever worked with data professionally would have told you — did tell you — that this is what humans do.
The value of a system that actually predicts — rather than rationalises — isn't just operational. It's corrective of something deeply wired into us. That's a stronger claim than "here's a better metric." It's the thing every smart person across history kept warning you about, and here, finally, is what addresses it: a system built not to flatter human judgment but to augment it with something humans cannot do at scale — synthesise signals, model trajectories, and calculate probabilities without the optimism bias, the confirmation bias, the inside view, or the HiPPO (the highest-paid person's opinion) in the room.
Looking forward
In the myth of Orpheus and Eurydice, Orpheus is granted permission to lead his wife out of the underworld on a single condition: don't look back. He can't resist. He looks. She vanishes. The compulsion to verify the past — to check that what you had is still there — destroys the future.
There's something of Orpheus in organisations that keep returning to their survey data and their historical dashboards for reassurance, even as the ground shifts beneath them. The backward glance feels like prudence. It feels like diligence, like evidence-based management, like responsible stewardship of a business. It is actually the thing that costs you what you're trying to keep — because the customers whose decisions will determine next quarter's results have already made up their minds, and the dashboard you're consulting tells you about the quarter before last.
The tools to build a genuinely intelligence-driven customer operation exist today. The data, in most enterprises, is already being generated. The missing ingredient is not technology and it is not data. It is the decision to stop treating the customer base as something to be periodically surveyed and start treating it as something to be continuously understood — and to recognise that understanding, in this context, means looking forward rather than back.
That decision is harder than it sounds, because the backward glance is comfortable in a way that prediction is not. Prediction is explicit about its uncertainty. History feels like solid ground. But the ground only feels solid until the customer who seemed fine — who hadn't complained, who had a healthy score, whose account manager was confident — doesn't renew. At which point what felt like prudence reveals itself as the thing it always was: an expensive, systematic, and comfortably invisible way of not knowing what was coming.
The series began with the sound of silence — the ninety-four percent of customers who say nothing, whose decisions the current model cannot see. It ends here: with the tools to finally hear them, and with the question of whether organisations will choose to listen before those customers have already decided to leave.
I'm Richard Owen, founder and CEO of OCX Cognition. We build predictive customer analytics for companies that would prefer to know which customers are at risk before those customers decide to leave.
This is Part 6 of 6 — the final article in a series on customer intelligence in the age of AI. Previously: Part 5 — Prevention Economics. Start from the beginning: Part 1 — The Sound of Silence.