The Tyranny of the Org Chart
Customer Intelligence in the Age of AI — Part 3 of 6
If you ask a CFO about a customer, they'll tell you about revenue, contract value, and payment history. Ask the head of sales and you'll hear about pipeline, expansion potential, and the last executive conversation. Ask customer success and you'll get a health score, an adoption summary, and a list of open issues. Ask the product team and you'll learn about feature usage, support tickets, and what the customer requested in the last roadmap session.
Each answer is accurate. None of them is the customer.
We had a Chief Customer Officer, a Chief Revenue Officer, a Chief Experience Officer, and a Chief Digital Officer. The customer was unaware of any of them.
This is the silo problem, and it's more pervasive than most leaders realise — not because the silos are hidden, but because the software industry has spent two decades building increasingly specialised tools that make the fragmentation feel natural. There's a platform for CRM, a platform for customer experience, a platform for customer success, a platform for revenue operations, a platform for support. Each is sophisticated, well-designed, and genuinely useful within its domain. And collectively, they ensure that no one in the organisation sees the customer whole. It's as though you'd asked five blind men to describe an elephant and then bought each of them a very expensive magnifying glass.
The software industry's contribution to the problem
This isn't an accident of poor planning. It's a consequence of how enterprise software markets evolve.
Vendors specialise because specialisation is how you build products, win deals, and create defensible market positions. Salesforce owns the pipeline. Gainsight owns customer success. Qualtrics owns experience measurement. Each platform captures a specific dimension of the customer relationship, optimises for a specific workflow, and serves a specific buyer inside the organisation. The vendor gets a defensible market. The buyer gets a solution to their particular problem. Everyone is happy, except possibly the customer, who wasn't consulted.
The result is entirely predictable. Every team sees the business through the window their software provides. Sales sees accounts through the lens of commercial activity. Customer success sees them through health scores and engagement metrics. CX teams see them through survey responses and sentiment data. Each view is partial by design — the software was built to serve a function, not to represent a customer. Billy Beane's insight in baseball wasn't really about statistics. It was that the scouts, the coaches, and the front office had each built a perfectly coherent picture of a player who didn't exist. The same dynamic plays out in customer management every day.
Leaders are then left to do the integration in their heads. Sherlock Holmes observed that "it is a capital mistake to theorize before one has data" because "insensibly one begins to twist facts to suit theories, instead of theories to suit facts." Each team arrives at a cross-functional review with its theory — shaped by the data its own system provides — and interprets whatever it hears through that frame. This works tolerably well for a handful of strategic accounts. It breaks down entirely at scale. Pierre Bourdieu observed that "every established order tends to make its own entirely arbitrary system seem entirely natural." He was describing social structures. He could equally have been describing the enterprise software stack.
Seeing through the lens of silos
The deeper problem isn't operational. It's cognitive.
When your systems are structured around internal functions, your leadership team inevitably thinks about the business in terms of those functions. Conversations orient around sales performance, support efficiency, product adoption, customer satisfaction — each as a separate domain with its own metrics, its own targets, its own quarterly review. Every VP is measured on their function's performance. The company's performance, it turns out, lives in the gaps between functions. Nobody is measured on the gaps.
The customer, meanwhile, experiences your company as a single entity. They don't distinguish between a sales interaction and a support interaction and a product experience. Their assessment of whether the relationship is working — and whether they'll renew, expand, or leave — is a holistic judgment that synthesises everything. The gap between how the customer experiences you and how you understand the customer is where value destruction happens.
The navigation officer saw ice. The captain saw schedule. The shipping company saw quarterly earnings. Between them, they had a complete picture of everything except the iceberg. The same structural failure plays out in customer management. Each function reports its own corner of reality accurately. The customer, meanwhile, has drawn a different conclusion from the composite picture that none of them can see.
A customer who has high product adoption but eroding executive engagement is at risk, even though the product team's metrics look healthy. A customer whose support tickets are resolved quickly but whose business outcomes aren't being achieved is dissatisfied, even though the support dashboard shows green. A customer who scores well on NPS but has quietly started evaluating alternatives is about to churn, even though the CX team considers them a promoter.
Sales promised it. Operations delivered something adjacent to it. Support apologised for it. Finance invoiced for the original it. The customer quietly left.
None of these contradictions are visible when you look at the business through functional lenses. Each silo reports its own reality. The composite reality — the one the customer is actually living — exists nowhere in the organisation. And because most sophisticated customer problems are inherently cross-functional — a churn risk that stems from a combination of product underperformance, a stalled executive relationship, and a competitor's recently improved offering — they are precisely the problems that the siloed model is structurally unable to surface or address. The most important customer problems are the ones that fall through the gaps, and the gaps are where nobody is looking.
The measurement that confirms the silo
The fragmentation doesn't stop at how teams view the customer. It runs straight through how most companies measure customers as well.
Transactional surveys — post-interaction questionnaires triggered after a support call, a sales meeting, an onboarding session — are among the most widely used tools in the CX industry. They're also among the most reliably misleading, for a reason that follows directly from everything above. A transactional survey captures a customer's experience of one narrow interaction, evaluated in isolation from everything else. The support call was resolved promptly, the onboarding session was well run, the sales conversation was professional. Each interaction, measured on its own terms, looks fine. The scores come back high. Someone builds a slide.
What the transactional survey cannot capture — by design, because it wasn't designed to — is how the customer feels about the relationship overall. Whether the product is actually delivering on what was promised. Whether they trust the company to understand their business. Whether the cumulative weight of small frictions across every touchpoint has quietly shifted their view of the partnership. These things don't appear in a post-support CSAT score. They live in the relationship, and the relationship exists at a level of abstraction that no individual transactional survey reaches.
The effect is that each function measures its own performance in its own controlled context and produces scores that reflect well on it. It's a bit like a restaurant that surveys guests exclusively about the bread basket, and then presents the results as evidence of an exceptional dining experience. The bread basket scores were excellent. The kitchen may have been on fire.
Relationship measures — periodic assessments of overall loyalty and sentiment that ask the customer to evaluate the partnership as a whole — exist precisely to avoid this problem. But they are survey-dependent too, and they inherit all the structural limitations discussed in Part 1 of this series. The better alternative is not to choose between transactional and relationship measures. It's to build customer intelligence that doesn't depend on the customer volunteering their sentiment in the first place.
The Customer 360 trap — and the vendor who will helpfully solve it
The instinct to fix the fragmentation is widespread and, at a high level, correct. Most organisations of any sophistication have at some point attempted to build a unified customer view — a Customer 360, in the vocabulary that's taken hold. Gartner estimates that by 2026, eighty percent of organisations pursuing that vision will abandon it — because it fails to deliver usable intelligence, relies on collection methods that are becoming obsolete, or runs into data governance constraints that make the aggregation difficult to sustain. Only fourteen percent of organisations have achieved a functioning Customer 360 view despite it being among the most commonly stated CX objectives.
The vendors, naturally, have a solution. Every major platform in the ecosystem has at some point positioned itself as the grand unifying layer — the one system that will finally bring the fragmented picture together. CRM vendors offer customer data platforms. CX vendors offer journey orchestration. Customer success vendors offer revenue intelligence. Each, sensing the integration problem their own specialisation helped create, arrives with an offer to solve it — using, as it happens, their existing product, extended with a new module or a rebranded capability.
The problem is that a vendor's design DNA reflects the function it was built to serve. A CRM vendor's architecture is optimised for managing pipeline. A customer success platform's logic is built around health scores and renewal workflows. These are deeply held structural commitments, not surface-level features, and they don't disappear because the vendor has added an "insights" tab or repackaged their product as a platform. I might think Tesla makes a remarkable car. I wouldn't necessarily want them building me a flamethrower. Though admittedly, on reflection, that example may not be as hypothetical as I intended.
The vendors who have actually won the data aggregation battle tend to be the ones who were honest about what they were building from the start. Snowflake, and platforms like it, succeeded precisely because they made no pretence of being a CRM or a customer success tool or a CX platform. They are infrastructure — purpose-built repositories designed to consolidate data from multiple systems in a form that makes it accessible for analysis. For the aggregation problem specifically, this is the right tool, because it wasn't designed for any one function and therefore doesn't import any one function's biases or constraints.
But a repository, however well designed, is still not intelligence. Snowflake can hold every signal your customers generate. It cannot tell you which of those signals predict churn, which precede expansion, and what the current trajectory of each account implies about what will happen next quarter. The data is present. The reasoning is not. What most Customer 360 initiatives build — whether on a vendor platform or a data warehouse — is a basis for analysis. The analysis itself remains a manual task performed by whoever is looking at the screen, with all the cognitive limitations that implies.
What intelligence actually looks like
The necessary step that most organisations skip is the refinement of the data into the signals that have proven linkage to customer behaviour and genuine predictive power. Not twenty metrics. Not thirty. The specific combination of indicators that, in practice, reliably precede the outcomes that matter — churn, expansion, changes in engagement — validated against historical outcomes rather than selected by committee consensus.
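To make the validation step concrete, here is a minimal sketch of what "validated against historical outcomes" can mean in practice: score each candidate indicator by how well it separates accounts that churned from accounts that stayed, and keep only those that clear a bar. Everything here is an illustrative assumption — the signal names, the tiny five-account history, the 0.70 AUC threshold — not a description of any vendor's actual method.

```python
def auc(scores_pos, scores_neg):
    """Probability that a churned account scores higher than a retained one
    (equivalent to the area under the ROC curve for a single signal)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

def validate_signals(history, outcomes, threshold=0.70):
    """Keep only the signals whose historical AUC against churn clears the bar."""
    validated = {}
    for signal, values in history.items():
        churned  = [v for v, churn in zip(values, outcomes) if churn]
        retained = [v for v, churn in zip(values, outcomes) if not churn]
        score = auc(churned, retained)
        if score >= threshold:
            validated[signal] = round(score, 2)
    return validated

# Hypothetical history: five accounts, True = churned within a year.
outcomes = [True, True, False, False, False]
history = {
    "exec_engagement_decline": [0.9, 0.7, 0.2, 0.1, 0.3],  # separates cleanly
    "support_ticket_volume":   [0.4, 0.2, 0.5, 0.3, 0.6],  # mostly noise
}
print(validate_signals(history, outcomes))
```

On this toy data the engagement signal survives and the ticket-volume signal is discarded — the point being that the cut is made by measured separation against real outcomes, not by committee consensus about which metrics feel important.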
This is what Customer AI is actually for, applied to this specific problem. Not another layer of data aggregation, and emphatically not another vendor claiming to unify the picture through the lens of their existing product. A reasoning engine that examines the signals a company already generates and does the analytical work that a human reviewing a dashboard cannot do at scale: determining which patterns are predictive, weighting them appropriately, and synthesising them into a continuously updated assessment of where each customer relationship stands and where it's heading.
The output isn't twenty metrics on a single screen. It's a single view — a coherent, predictively grounded assessment of each customer that is both complete and comprehensible. One that a CFO, a customer success manager, and a product leader can each look at from their own perspective and understand without having to reconcile five different systems or resolve five different narratives about the same account.
Why the shared view changes behaviour
That single, shared view matters beyond its analytical value, because of what it does to the way people work together.
When different functions are operating from different data — their own systems, their own metrics, their own version of account reality — cross-functional alignment requires debate. The sales team's understanding of a customer's health contradicts the customer success team's understanding, which contradicts the product team's understanding, and the quarterly review becomes a negotiation about whose version of reality to believe. Decision-makers in organisations with fragmented data report spending roughly thirty percent of their working week simply locating the right information — time spent not on the customer but on the internal argument about the customer.
When every function is working from the same predictively grounded customer view, the nature of those conversations shifts. The debate about what's happening is replaced by a conversation about what to do. Common goals become easier to identify because a shared assessment of risk and opportunity makes the stakes visible to everyone simultaneously. The account manager who sees a high churn probability, the product leader who sees low adoption in a key module, and the CS manager who sees declining executive engagement are all looking at the same underlying signal — and the conversation that follows is about coordinated response rather than competing interpretations.
The research on cross-functional collaboration and customer outcomes is directionally consistent even if the causal links are difficult to isolate precisely: customer problems that require cross-functional resolution — which is to say, most of the important ones — are significantly harder to resolve in fragmented organisations, and the customer experience penalty shows up in the metrics that matter. It would be surprising if it didn't. When the functions that collectively shape the customer's experience cannot agree on what the customer's experience actually is, that disagreement has consequences that the customer, though absent from the room, ends up paying for.
The unified customer view as competitive advantage
The opportunity, then, is not to build a better data warehouse, and it is not to buy the platform that promises to unify everything through the lens of its existing product. It's to build an intelligence layer that sits above the data infrastructure — whatever form that takes — and turns the signals multiple systems already generate into something each function can act on without having to perform their own interpretation first.
The organisations that get this right operate differently from those that don't. Not because they have more data, but because they've done the analytical work that turns data into intelligence — and because that intelligence is shared, it creates the common frame of reference that makes cross-functional action possible without requiring cross-functional argument first.
The customer's interest in being understood holistically has, for a long time, been nobody's product roadmap priority. The enterprise software industry was structured to prevent it, not out of malice but because specialisation is how software companies build their businesses — and because the vendors who saw the integration problem as an opportunity discovered, one after another, that a tool optimised for one function cannot become a genuinely neutral unifying layer simply by changing its positioning slide.
That is changing. The technology to do the reasoning now exists. The data, in most enterprises, is already being generated. What's typically missing is the decision to stop treating a unified customer view as a data integration problem — which vendors will helpfully offer to solve, in ways that happen to expand their footprint — and start treating it as an intelligence problem. Those two things are not the same, and the distinction is where most of the value lives.
I'm Richard Owen, founder and CEO of OCX Cognition. We build predictive customer analytics for companies who'd prefer to know which customers are at risk before those customers have already decided to leave.
This is Part 3 of a six-part series on customer intelligence in the age of AI. Previously: Part 2 — The Cost of Looking Backwards. Next: Part 4 — Iceberg Dead Ahead? Customer Portfolio Management Is Not a Game.