
How People Interact With Online Information — Insights From Dean Eckles


 

In a recent episode of the CX Iconoclast podcast, Richard Owen spoke with Dean Eckles — an MIT professor and former Meta researcher — about how people engage with misinformation, how social corrections influence behavior, and what these dynamics reveal about customer experience, survey reliability, and data-driven decision-making.

 

1. When Corrections Backfire

 

Dean’s 2021 Twitter field experiment revealed a counterintuitive pattern: when users are corrected for sharing false information, a subset responds by sharing more low-quality or hyperpartisan content.

At the same time, corrections do reduce spread overall — primarily among people who see the correction downstream, not the original poster.

For CX leaders, this mirrors the gap between stated attitudes and actual behavior. Surveys alone often fail to reveal underlying drivers of churn or loyalty.
As described in the Customer AI Masterclass, Lesson 1.3: CX Metrics, and How You Should Use Them, attitudinal data must be interpreted alongside operational and financial behavior, not treated as standalone truth.

 

2. Why Community Notes Work at Scale

 

Community Notes–style systems dramatically reduce misinformation spread — but mainly among users at the periphery of the cascade. These users don’t know the source well and are still persuadable; context shapes their decision to reshare.

This is a system-reliability pattern: improving outcomes by influencing the broad middle, not the deeply committed minority.
This aligns with the Customer AI Masterclass, Lesson 3.3: The Customer AI Data Architecture: One Schematic to Rule Them All, which emphasizes designing systems that shape many small decisions at scale.

 

3. Survey Bias and the Advantage of Telemetry

 

At Meta, Dean worked on correcting survey bias using extensive behavioral telemetry — something most organizations can’t replicate. Platforms can reweight surveys because they know how all users behave; most enterprises have fragmented, incomplete data.

For CX teams relying heavily on surveys, this is a structural limitation.
As explained in the Customer AI Masterclass, Lesson 3.2: Data Is an Asset (Possibly Better Than Money), competitive advantage depends on treating data as a directly managed asset rather than exhaust.

And Lesson 3.4: Data Types: Everything You Need, Neatly Categorized shows why balanced attitudinal, operational, profile, and financial data is required to meaningfully correct bias — something surveys alone cannot deliver.
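The reweighting idea Dean describes can be illustrated in miniature. The sketch below uses post-stratification, one simple form of survey reweighting: telemetry tells the platform the true share of each behavioral segment, so respondents from over-represented segments are down-weighted. All segment names and numbers here are hypothetical, invented purely for illustration.

```python
# Hypothetical survey respondents: (behavioral_segment, satisfaction_score).
# Heavy users answered the survey far more often than they occur in reality.
survey = [
    ("heavy_user", 8), ("heavy_user", 9), ("heavy_user", 7),
    ("light_user", 4),
]

# Naive mean over-represents heavy users (3 of 4 respondents).
naive_mean = sum(score for _, score in survey) / len(survey)

# From telemetry (hypothetical): heavy users are only 30% of the population.
population_share = {"heavy_user": 0.30, "light_user": 0.70}
sample_share = {
    seg: sum(1 for g, _ in survey if g == seg) / len(survey)
    for seg in population_share
}

# Weight each respondent by population_share / sample_share for their segment.
weights = [population_share[g] / sample_share[g] for g, _ in survey]
weighted_mean = sum(w * s for w, (_, s) in zip(weights, survey)) / sum(weights)

print(round(naive_mean, 2), round(weighted_mean, 2))
```

The naive average reads a healthy 7.0, but once the sample is matched to the real population mix, the estimate drops sharply — exactly the kind of bias correction that is only possible when you know how the whole population behaves.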

 

4. Predictive & Generative AI in Data-Poor Environments

 

Most companies lack the behavioral depth Meta enjoys. Their customer data is sparse, scattered across systems, and inconsistent.

Dean and Richard discuss how predictive and generative AI become essential in these environments — the only practical way to:

  • infer missing signals

  • approximate behaviors not directly measured

  • generate synthetic data

  • fill the VoC blind spots where surveys never reach

This reflects the Customer AI Masterclass, Lesson 2.3: Three Types of AI: The Three Amigos of the Customer AI Toolkit, where generative, predictive, and prescriptive models work as a system rather than standalone tools.

And Lesson 2.4: Mapping the Types to the Customer AI Problems shows how prediction becomes the backbone in data-poor enterprises, rebuilding the behavioral picture that doesn’t exist today.
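"Inferring missing signals" can be made concrete with a toy example. The sketch below uses nearest-neighbor imputation as one simple stand-in for the predictive models discussed: a customer with no observed engagement score borrows an estimate from the most similar customers who do have one. All customer records and feature names are hypothetical.

```python
import math

# Hypothetical customers: profile features plus, where available,
# an observed engagement score. Customer "d" has no telemetry.
customers = {
    "a": {"tenure": 24, "tickets": 1, "engagement": 0.9},
    "b": {"tenure": 3,  "tickets": 7, "engagement": 0.2},
    "c": {"tenure": 20, "tickets": 2, "engagement": 0.8},
    "d": {"tenure": 22, "tickets": 1, "engagement": None},  # missing signal
}

def distance(x, y):
    # Similarity over the profile features that ARE observed for everyone.
    return math.hypot(x["tenure"] - y["tenure"], x["tickets"] - y["tickets"])

def impute(target, k=2):
    # Average the engagement of the k most similar customers
    # that have an observed value.
    known = [c for c in customers.values() if c["engagement"] is not None]
    known.sort(key=lambda c: distance(c, target))
    return sum(c["engagement"] for c in known[:k]) / k

estimate = impute(customers["d"])
print(round(estimate, 2))
```

Customer "d" looks like the long-tenure, low-friction customers "a" and "c", so the model fills the gap with a high engagement estimate — a miniature version of rebuilding the behavioral picture from whatever profile data the enterprise does hold.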

 

5. When Data Clashes With Executive Intuition

 

A major theme: data often contradicts leaders’ pattern-recognition and intuition built over decades. Their skepticism is not always irrational — models are incomplete and data can be politically framed — but it creates friction when predictive models surface insights that feel counterintuitive.

This reflects a core organizational barrier in the Customer AI Masterclass, Lesson 1.1: Introduction: The Customer-Centric Organization (And the Challenge to Get There), where culture and governance shape whether data is actually used.

And Lesson 7.2: The Maturity Model outlines how organizations progress from “data as optional input” to “data as default operating logic,” reducing the influence of anecdote and hierarchy.

 

6. Human–AI Complementarity Is Still Rare

 

Dean references a meta-analysis showing that while AI assistance improves human decisions, true complementarity — where a human and AI together outperform either alone — is uncommon. Most of the time, people simply “catch up” to the model’s baseline performance.

The opportunity lies in designing workflows where humans contribute context and judgment while AI narrows uncertainty.

This parallels the Customer AI Masterclass, Lesson 5.4: Action Framework and Lesson 5.6: Customer AI with Prescription, where AI reduces the decision space and filters out the worst options, while humans apply contextual nuance.

 

7. AI’s Real Value: Narrowing the Decision Space

 

Richard and Dean converge on a pragmatic view: AI’s purpose is not perfection. It is to:

  • eliminate the worst available decisions

  • reveal hidden risk

  • highlight meaningful patterns

  • reduce variance in outcomes

This is the backbone of the Customer AI Masterclass, Lesson 5.6: Customer AI with Prescription, where AI-generated recommendations function as guardrails rather than mandates, improving decisions across frontline teams and executive leadership.
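In its simplest form, the "guardrails, not mandates" pattern described above reduces to filtering and ranking: the model scores candidate actions, eliminates the worst, and hands a short list back for human judgment. The sketch below illustrates this; the action names, scores, and risk threshold are all hypothetical.

```python
# Hypothetical next-best-action candidates with model-estimated
# expected value and downside risk (both on a 0-1 scale).
candidates = [
    {"action": "discount_20pct",  "expected_value": 0.40, "risk": 0.70},
    {"action": "proactive_call",  "expected_value": 0.55, "risk": 0.20},
    {"action": "do_nothing",      "expected_value": 0.10, "risk": 0.60},
    {"action": "service_upgrade", "expected_value": 0.50, "risk": 0.30},
]

def guardrail(options, max_risk=0.5):
    # Eliminate the worst options (risk above threshold), rank the rest
    # by expected value, and leave the final choice to a human.
    safe = [o for o in options if o["risk"] <= max_risk]
    return sorted(safe, key=lambda o: o["expected_value"], reverse=True)

shortlist = guardrail(candidates)
print([o["action"] for o in shortlist])
```

The model never dictates the answer; it removes the risky outliers and reduces variance by narrowing the decision space to a ranked shortlist.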