See What's Inside

Knowledge Base

The Sound of Silence

Customer Intelligence in the Age of AI — Part 1 of 6


Most companies believe they understand their customers. They run surveys, track NPS scores, review dashboards, hold quarterly business reviews. The infrastructure of understanding is elaborate and expensive. It is also, by any honest accounting, almost entirely misleading.

Here's the arithmetic. In a typical B2B enterprise, somewhere between five and thirty percent of customers respond to experience surveys, depending on channel and context. One analysis of actual customer survey data showed averages closer to six percent. The CX industry has, in other words, built a cathedral of insight on a foundation that involves not hearing from ninety-four percent of the congregation. Pew (no pun intended) Research documented public survey response rates dropping from thirty-six percent in 1997 to six percent by 2018 — an eighty-three percent collapse in two decades. The USDA's crop production surveys fell from the mid-eighties to forty-six percent, dropping below fifty percent for the first time between 2019 and 2024. I don't know about you, but I stopped filling in crop production surveys years ago. The Bureau of Labor Statistics — the government, armed with subpoena power and a printing press — saw its employment survey responses fall from around sixty percent to below forty-five percent, and even they can't reverse the trend. These aren't anomalies in one domain. This is a structural trend playing out across every category of survey, in every sector, in every geography. The trend line points in one direction, and it isn't up.

The iceberg was not a black swan. It was a white iceberg, in a well-documented shipping lane, in a month that ends in 'r'. The survey response decline has been the same kind of iceberg — visible for two decades, documented in every major research setting, and treated with the same institutional surprise every time another company discovers it applies to them.

And the pandemic made it worse — not as a temporary disruption, but as a structural break. Post-pandemic recovery of response rates has not materialised. What looked like an accelerant turned out to be a ratchet.

The underlying mechanism is a classic tragedy of the commons. As the marginal cost of sending a survey has fallen toward zero — no printing, no postage, no interviewers — the volume of surveys has grown exponentially. Every product team, every service department, every transaction, every app generates a feedback request. The cost to the sender is negligible. But from the customer's perspective, their time is not free, and it's being consumed as though it were. Customers still allocate a finite portion of their attention to responding, but that fixed budget is now spread across an ever-growing number of demands — not just surveys, but social media, review platforms, support chatbots, and a dozen other channels competing for the same minutes. When everyone treats a shared resource as free, the resource degrades. That's precisely what's happened to survey response rates, and it's why the trend line points toward zero rather than toward some comfortable new equilibrium where the industry can stop worrying.

The erosion of trust compounds the problem. Many customers — particularly in B2B — have learned through bitter repetition that their feedback rarely leads to anything they can see. The implicit contract that once made surveys work — you ask, I answer, something improves — has broken down. So they've stopped answering. You can hardly blame them. They held up their end for years.

The result is silence. Not the comfortable silence of contentment, but the silence of disengagement. And that silence is where most of your revenue lives.

There's a further problem that most companies don't even recognise they have, which is impressive given how foundational it is. The standard assumption — often made without anyone consciously articulating it — is that the customers who do respond to surveys are broadly representative of the ones who don't. That the NPS score generated by six or fifteen percent of your base is indicative of the whole. Research by Bain & Company and by OCX Cognition suggests otherwise. Non-respondents consistently show significantly lower NPS than respondents. Detractors, it turns out, don't fill in surveys — they just leave. Which means that survey-based NPS scores aren't merely incomplete; they're systematically biased toward a rosier picture than reality warrants, and are routinely presented as though they represent the health of the entire customer base. The executive team gathers around a number that is both precise and wrong — a combination that should worry more people than it does.

In consumer businesses with large, relatively homogeneous populations, you might argue that statistical sampling techniques can partially compensate. In heterogeneous B2B environments — where customer size, complexity, use case, and contract structure vary enormously — you can't even rely on constructing a sample that's representative of the whole. Yet NPS is typically presented in a way that implies extrapolation to the full base, often by people who don't realise they're relying on that assumption. The number on the slide looks precise. The methodology behind it is anything but.
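The scale of the distortion is easy to demonstrate with a toy calculation. The sketch below uses entirely invented numbers — a hypothetical 1,000-account base with a ten percent response rate; none of these figures come from the studies cited here — to show how a respondent-only NPS can diverge from the full-base score when non-respondents skew negative:

```python
# Hypothetical illustration of non-response bias in NPS.
# All numbers are invented for the example, not drawn from any cited study.

def nps(promoters: int, passives: int, detractors: int) -> float:
    """Net Promoter Score: percentage of promoters minus percentage of detractors."""
    total = promoters + passives + detractors
    return 100.0 * (promoters - detractors) / total

# A 1,000-account base with a 10% response rate.
# Respondents skew positive; the silent 90% skew negative.
respondents = {"promoters": 50, "passives": 30, "detractors": 20}     # 100 replies
silent      = {"promoters": 180, "passives": 360, "detractors": 360}  # 900 no-replies

survey_nps = nps(**respondents)
true_nps = nps(
    respondents["promoters"] + silent["promoters"],
    respondents["passives"] + silent["passives"],
    respondents["detractors"] + silent["detractors"],
)

print(f"Survey-reported NPS: {survey_nps:+.0f}")  # +30
print(f"Full-base NPS:       {true_nps:+.0f}")    # -15
```

Swap in your own response rates and mixes; the direction of the error is the point. If detractors disproportionately stay silent, the survey number is an upper bound, not an estimate.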


The noise problem

If silence were the only issue, at least the diagnosis would be clean: we don't know enough. But the reality inside most organisations is considerably worse than ignorance, because the void left by missing customer data gets filled — enthusiastically, confidently, and incorrectly — with opinion.

Every executive has a theory about what customers want. Sales leaders extrapolate from their most recent conversations. Product teams project their own assumptions about what drives value. Account managers form deep convictions based on whichever accounts happen to demand the most attention. Board members arrive with pattern-matched instincts from their last three companies. Everyone has a view. Almost no one has evidence.

None of this is malicious. Most of it is well-intentioned. And almost all of it is wrong. As Daniel Kahneman once observed: "Executives like to listen to their gut. And most of them like what they hear." The research on why they shouldn't is sobering, though not as sobering as the consistency with which it gets ignored.

The core problem is optimism bias — what neuroscientist Tali Sharot described as one of the most consistent, prevalent, and robust biases documented in psychology. Humans systematically overestimate the likelihood of positive outcomes and underestimate the likelihood of negative ones. This is hardwired, not situational. And in customer-facing roles, the incentive structure amplifies it rather than correcting for it. Research on analyst forecast accuracy found that optimism bias is specifically motivated by conflicts of interest arising from compensation — analysts receive rewards for issuing optimistic forecasts. The parallel to sales and customer success compensation tied to renewals and expansion is exact. We have built organisations where the people closest to the customer are financially incentivised to believe the customer is happy, and then we ask those same people to tell us how the customer is doing. The results are about as reliable as you'd expect.

Confirmation bias compounds the damage over time. Studies of forecasting accuracy found that each new piece of information gets filtered through an existing positive frame, inflating assessments further. In practice, this means the longer an account manager has a positive relationship with a customer, the more distorted their health assessment is likely to become. They're not lying. They're reasoning from the relationship rather than from the base rate of similar accounts — what Kahneman and Lovallo identified as the "inside view" problem, where individuals anchor predictions on plans and scenarios rather than on historical outcomes.

The organisational consequences are measurable. A study of nearly four hundred B2B enterprise organisations found that once human bias, errors, and manual process issues are removed, satisfaction with forecasting accuracy jumps to seventy-six percent — implying that human factors, not data quality or model design, are the primary drag on accuracy. The data isn't the problem. The people interpreting it are.

And then there are health scores — the industry's attempt to systematise customer assessment, which in practice has managed to systematise the bias instead. In many organisations, health scores are populated directly by customer-facing team members. The account manager or customer success manager enters their assessment of how the account is doing, the system wraps it in a colour-coded dashboard, and it gets presented to leadership as data. It isn't data. It's the same cognitively biased opinion that would have been offered in a meeting, except now it has the institutional authority of being in the system. The industry spent considerable time and money building software to automate guesswork, and then congratulated itself on the innovation.

The alternative approach — letting customer-facing teams select and calibrate the metrics that compose the health score — introduces a different set of errors. Teams naturally gravitate toward metrics they can influence and that reflect well on their performance. They calibrate thresholds that their accounts can meet. The resulting score may correlate nicely with the team's self-assessment but bear little relationship to what actually predicts customer behaviour. In both cases, the health score becomes a mirror of the organisation's assumptions rather than a window into the customer's reality. Ronald Coase, the Nobel economist, put it well: "If you torture the data long enough, it will confess." Health scores, as typically constructed, are a confession extracted under duress — the inputs are selected and calibrated until they produce a picture the organisation finds plausible. The result is errors laundered through a system and presented as fact.

John Kenneth Galbraith, with his characteristic economy, identified the deeper mechanism: "The conventional view serves to protect us from the painful job of thinking." The health score dashboard is, in this reading, the conventional view — not a tool for understanding customers but a tool for avoiding the discomfort of not understanding them.

And the bias runs in both directions. Optimism bias overstates account health to avoid difficult conversations and protect relationships. Sandbagging understates it to set low bars and beat them. Neither produces signal. Both produce noise. The one thing they have in common is that the customer's actual situation is beside the point.


When silence meets noise

The real damage happens where these two problems intersect. Silence creates the vacuum; noise fills it with false confidence.

Consider how this plays out in practice. A company loses a significant account. In the post-mortem, the customer success team identifies "lack of engagement" as the root cause. But the lack of engagement was itself invisible — the account hadn't responded to surveys, hadn't raised support tickets, and had quietly reduced usage over eight months. Meanwhile, internal stakeholders had been telling themselves the account was stable because no one was complaining. The account manager, anchored in the inside view of a formerly good relationship, had rated health as green. The absence of signal was interpreted as the presence of satisfaction, and the optimism bias of the people closest to the account went unchallenged because there was no data to challenge it with. The post-mortem produces recommendations. A new process is introduced. The same thing happens again the following quarter.

This is a pattern so common it barely registers as a failure. Companies have normalised the practice of making consequential decisions — where to invest, which accounts to prioritise, what to build, how to allocate resources — based on the thinnest sliver of actual customer intelligence, supplemented by the thickest layer of cognitively biased internal assumption.

You could think of it as corporate decision-making by séance. You're channelling the customer's voice through intermediaries who each have their own agenda, their own recency bias, their own neurologically hardwired tendency to see things as more positive than they are. The customer isn't in the room. A curated, optimistically distorted version of the customer is in the room. And nobody has thought to ask whether the medium might be making things up.


The economics of being wrong

This would matter less if the stakes were low. They're not.

When you misread your customer portfolio — and with single-digit response rates and structurally biased human assessment, you are almost certainly misreading it — the errors compound in specific, expensive ways. Retention investments go to accounts that were never at risk while genuinely deteriorating relationships get no attention. Expansion efforts target customers who aren't ready while overlooking accounts where the appetite for growth is real but unexpressed. Product roadmaps respond to the vocal minority while ignoring the needs of the revenue majority. The company optimises with great energy and discipline for a version of reality that doesn't exist.

The performance gap between human-judgment-based assessment and signal-based approaches is empirically large. Technology sector research shows machine-learning-based forecasting achieving eighty-eight percent accuracy compared to sixty-four percent with traditional human approaches. Forrester found that signal-based intelligence improves predictive accuracy by up to forty percent compared to methods relying on lagging indicators and subjective input. Yet Gartner concluded that over ninety percent of B2B enterprise sales organisations still rely primarily on intuition rather than advanced analytics. Ninety percent of organisations, using a method that is twenty-four percentage points worse, while the better alternative sits on the shelf. The market is not, it would appear, efficient.

That gap — between what's possible and what's practiced — represents an extraordinary amount of value left on the table. Or more precisely, an extraordinary amount of value walking out the door while the people responsible for keeping it are busy being wrong with confidence.


What would actually change this

The gap isn't effort. Most organisations try hard to understand their customers. The gap is between the intelligence they need and the intelligence their current systems can produce.

What's needed isn't more surveys with better response rates. That ship sailed years ago, and it isn't coming back into port. Nor is the answer bolting an AI engine onto survey data — which is the equivalent of fitting a glass touchscreen control panel to the front of a horse-drawn carriage. The panel looks impressive and the lights are excellent, but it doesn't change the fact that you're still holding a buggy whip. The underlying data is sparse, biased, and structurally deteriorating. Making the dashboard prettier doesn't make the inputs less wrong. And joining operational data to that already inaccurate and fragmented data set isn't a solution either — it's adding another horse to a conveyance that needs an engine.

What's needed is the ability to generate a complete, continuously updated view of every customer — including and especially the ones who never respond, never call, and never complain — by interpreting the signals they do leave behind. Usage patterns, engagement rhythms, commercial behaviour, support interactions, the cadence and tone of communications. Individually, each signal is partial. Synthesised intelligently, they tell you what no survey and no account manager's intuition can: what's likely to happen next.

This is where AI applied specifically to customer intelligence — what we think of as Customer AI — starts to matter. Not as a technology story, but as the solution to a very specific business problem: you cannot manage a customer portfolio you cannot see, and you cannot see it through the combination of collapsed surveys and cognitively biased human judgment that constitutes the current operating model. The industry has spent two decades polishing the instruments while the orchestra has left the building.

The shift is from silence and noise to signal. From a world where the vast majority of customers are invisible and the internal view of the rest is systematically distorted, to one where every customer has a continuously updated, predictively scored profile built from the behavioural evidence they actually produce.


Why the shift is harder than it sounds

The barriers to making this change are real, and they're mostly human rather than technical.

The first is an uncomfortable relationship with prediction itself. Predictive models make their confidence explicit — a system might tell you it's seventy-five or eighty percent confident that an account is at risk. For people unfamiliar with using probabilistic intelligence as a basis for business decisions, that number feels inadequate. Eighty percent confident sounds like twenty percent wrong. Kahneman captured this perfectly: "An investment said to have an 80% chance of success sounds far more attractive than one with a 20% chance of failure. The mind can't easily recognize that they are the same." The framing changes how people receive identical information.

The irony is that the same person who rejects an eighty percent accurate prediction will place enormous faith in their own estimate of account health — an estimate that research suggests is closer to sixty percent accurate and systematically biased in the direction of optimism. We don't reject our own judgment for being inaccurate because we don't experience it as a probability. We experience it as knowledge. A prediction model makes its uncertainty explicit; human judgment buries its uncertainty beneath confidence. The model is held to a standard that the people judging it have never once applied to themselves.

This is largely a familiarity problem. Most business leaders are not trained in probabilistic reasoning. They're trained in conviction. The culture of most organisations rewards decisive judgment, not calibrated uncertainty. So a system that says "there's a seventy percent chance this account churns in the next six months" feels less trustworthy than an account manager who says "I think we're fine" — even though the first statement is more honest, more useful, and more likely to be right. There is something deeply human about preferring a confident wrong answer to an honest uncertain one.

The second barrier is cultural, and it runs deeper. The biases described earlier in this piece — optimism, confirmation, the inside view — aren't surface-level habits that people shed when presented with better data. They're deeply encoded in how customer-facing professionals build their identity and self-worth. An account manager who has spent years cultivating relationships and trusting their instincts about accounts is being told, in effect, that their judgment is structurally unreliable. That is not a message people receive well, regardless of how diplomatically it's delivered. And to be fair, nobody goes into account management hoping to be replaced by a probability distribution.

Data that challenges existing beliefs tends to be resisted, reinterpreted, or dismissed — not because people are unintelligent but because they're human. A prediction that contradicts the account team's assessment doesn't feel like better intelligence. It feels like an accusation. And if leadership doesn't actively and visibly support the shift — championing the data even when it's uncomfortable, creating space for teams to act on predictions they didn't generate, redesigning incentives to reward prevention rather than heroics — the organisation will default to the old model. Not because the old model works, but because it's familiar and flattering.

Making this shift takes leadership in the genuine sense of the word: the willingness to push through resistance that is rooted not in logic but in fundamental human nature. The technology is the easy part. The hard part is getting an organisation to trust a system that tells them things they don't want to hear, and to act on that information before the evidence becomes so overwhelming that even the most optimistic account manager can't ignore it — by which point, of course, it's too late.

The costs of not making the shift don't show up as a line item. They show up as the renewal that quietly failed to close, the expansion you didn't pursue, and the competitor who saw your customer's trajectory before you did.


I'm Richard Owen, founder and CEO of OCX Cognition. We build predictive customer analytics for companies who'd prefer to know which customers are at risk before those customers have decided to leave.

This is the first article in a six-part series on customer intelligence in the age of AI. Next: Part 2 — The Cost of Looking Backwards.