
The Illusion of Precision: Why We Trust Data Too Much


By J.M. Abrams, Chief Data Culturist — http://www.dataculturehivemind.com


We treat numbers like they’re infallible. If a metric shows up in a dashboard, a spreadsheet, or a slide, we accept it. No questions asked. The number becomes the truth. The graph becomes the proof. But is it? This trust in data isn’t always earned. And it can lead to bad decisions.


We Trust the Package, Not the Process


We’ve inherited a long-standing bias: if something is printed, it must be true. That same instinct has quietly carried over to data. When numbers are wrapped in tables, charts, and dashboards, they look serious. Precise. Authoritative. We assume they’ve been verified. We believe they mean something important.


But data is not neutral. It’s shaped by who collected it, how it was processed, and what was chosen to be shown—or left out.


Numbers in a Lab Coat


Data wears a lab coat. It looks objective. But the moment we visualize it, summarize it, or filter it, we’re adding interpretation.


This is what I call the illusion of precision: the idea that a number with two decimal places must be correct. A line graph must reflect the truth. A bar chart must tell the whole story.


We rarely pause to ask:

  • Who created this metric?

  • What does it include—or exclude?

  • Is this the correct data for the question we’re asking?


Four Biases That Distort Our Thinking


  1. Authority Bias

    We believe numbers because they look official, especially if they come from a tool or person we trust. This is known as authority bias, where people defer to perceived experts even when the content may be flawed or unverified (Barron’s, Wikipedia).


  2. Framing Effect

    How data is presented changes how we interpret it, even if the facts stay the same. The framing effect is well-documented in behavioral psychology. People make different decisions when identical data is phrased positively versus negatively (Wikipedia, Forbes).


  3. Cognitive Ease

    Our brains prefer things that are easy to process. So, we trust clean visuals and tidy dashboards more, even if the data is flawed. This cognitive fluency trick makes polished presentations feel more trustworthy than they are (arXiv preprint).


  4. Confirmation Bias

    We accept data that supports what we already believe and discount data that doesn’t. Confirmation bias leads us to cherry-pick metrics and ignore red flags. It’s been widely observed in both business decisions and scientific research (PMC).


The Spreadsheet Halo Effect


I’ve seen this too many times in organizations: a metric lands in a spreadsheet or presentation, and suddenly it has power. The formatting gives it legitimacy. But no one asks where it came from. And by the time someone does, it’s too late—the decision’s been made.


A Better Way to Think About Data


We need to flip the question. Not: Is the number there? But: Can I trust how it got there?

That simple shift leads us to a practice called data profiling—a way of examining the structure, quality, and patterns in data before we use it.


You don’t need to be a data engineer or analyst to profile data. You need to ask thoughtful, grounded questions that expose the shape, health, and behavior of the numbers you’re looking at.

Let’s start with three foundational questions. They belong in every meeting where data is used:


1. What’s the source of the data?


Is it coming from a trusted system? Was it entered manually? Was it copied, pasted, or altered somewhere along the way?


2. Was the data transformed, cleaned, or filtered, and how?


Every dashboard has filters. Every spreadsheet has formulas. Understand what's being included, excluded, or reshaped before drawing conclusions.


3. What’s missing or excluded?


Sometimes the most significant clue isn't what's in the data, but what's not. Are certain groups, periods, or metrics left out?

These three questions form the core of data literacy. But you can go even further. Below are five practical checks anyone can do, even without technical tools.


4. Check for Empty Fields


Are there rows or columns with a lot of missing data?


If 40% of rows have blanks in a key column like “diagnosis code” or “payment date,” that’s a red flag. It tells you the data might be incomplete or unreliable.
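This check takes only a few lines if your data happens to be in reach of a script. Here's a minimal sketch in plain Python, using toy rows and a hypothetical `diagnosis_code` column, treating both missing values and empty strings as blanks:

```python
# Toy data; "diagnosis_code" is a hypothetical column name.
rows = [
    {"patient_id": 1, "diagnosis_code": "E11.9"},
    {"patient_id": 2, "diagnosis_code": None},
    {"patient_id": 3, "diagnosis_code": ""},
    {"patient_id": 4, "diagnosis_code": "I10"},
    {"patient_id": 5, "diagnosis_code": "J45.909"},
]

def blank_rate(rows, column):
    """Fraction of rows where the column is missing or empty."""
    blanks = sum(1 for r in rows if not r.get(column))
    return blanks / len(rows)

rate = blank_rate(rows, "diagnosis_code")
print(f"{rate:.0%} of rows have a blank diagnosis code")  # 40% of rows have a blank diagnosis code
```

Here, 2 of 5 rows are blank, which is exactly the 40% red flag described above.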


5. Look at Value Ranges


Do the numbers make sense?


If a report shows a person aged 145, or invoices for $0.01 and $900,000 on the same chart, something might be wrong. Always scan for the highest, lowest, and average values to spot outliers.
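A range scan is equally simple. The sketch below uses toy ages; the 0–120 plausibility bounds are an assumption, not a universal rule:

```python
ages = [34, 51, 29, 145, 62]  # toy data; 145 is almost certainly an entry error

lowest, highest = min(ages), max(ages)
average = sum(ages) / len(ages)
# Assumed sanity range for a human age; adjust for your own domain.
outliers = [a for a in ages if not 0 <= a <= 120]

print(f"min={lowest}, max={highest}, avg={average:.1f}, outliers={outliers}")
```

One glance at the min, max, and outlier list tells you whether the column deserves a closer look before it lands in a slide.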


6. Ask About Consistency


Are names, labels, or categories standardized?


You might find “Alabama,” “AL,” and “Ala” all listed as separate entries in a state column. That’s a sign of poor data hygiene. Clean data usually groups things consistently.
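A simple frequency count exposes these variants quickly. In the sketch below, the state values and the alias map are illustrative; a real cleanup would cover every variant you actually find:

```python
from collections import Counter

states = ["Alabama", "AL", "Ala", "AL", "Texas", "alabama"]
raw_counts = Counter(states)  # "Alabama", "AL", and "Ala" appear as separate entries

# Illustrative alias map for standardizing the variants above.
ALIASES = {"alabama": "AL", "ala": "AL", "al": "AL", "texas": "TX"}
canonical = Counter(ALIASES.get(s.lower(), s) for s in states)

print(raw_counts)  # reveals the inconsistency
print(canonical)   # after standardizing
```

If the raw count shows five distinct labels but the canonical count shows two actual states, you've just measured the data hygiene problem instead of guessing at it.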


7. Validate Totals and Logic


Do subtotals and percentages add up?


If the report says 70% of patients are male, 40% are female, and 10% are other… we have a problem. Always check that numbers reconcile with one another.
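This reconciliation can be automated in a few lines. The sketch below uses the numbers from the example above; the rounding tolerance is an assumption:

```python
# Reported shares from the hypothetical patient report above.
shares = {"male": 0.70, "female": 0.40, "other": 0.10}

total = sum(shares.values())
reconciles = abs(total - 1.0) < 0.005  # small tolerance for rounding in reports

print(f"total = {total:.0%}, reconciles = {reconciles}")  # total = 120%, reconciles = False
```

A total of 120% fails the check immediately, before anyone builds a decision on top of it.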


8. Check the Time Dimension


Is the data current or outdated?


A report that uses data from two years ago—without a timestamp—can lead to completely wrong decisions. Make sure you know the time frame behind the data.
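Checking freshness is a single comparison once you know the timestamp. In this sketch, the dates and the one-year staleness threshold are assumptions; pick a threshold that fits how fast your data changes:

```python
from datetime import date

last_refresh = date(2023, 8, 15)  # hypothetical "last updated" stamp on the report
as_of = date(2025, 7, 1)          # the day you're reading it

age_days = (as_of - last_refresh).days
stale = age_days > 365            # assumed threshold: older than a year is stale

print(f"data is {age_days} days old; stale = {stale}")
```

If a report has no timestamp at all, that absence is itself the finding: you cannot run this check, so you cannot know what period the numbers describe.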


Bonus: Ask Where the Data Came From


Is this raw data, or has it been summarized? Did someone validate it, or is it straight from an unchecked source? Did anyone change the numbers in Excel after they were exported?

Knowing the lineage of the data—where it came from and how it moved—is one of the best ways to avoid surprises.


Data Profiling Is Just Common Sense


You don’t need a technical background to question data. You need curiosity and a willingness to slow down.


Profiling is like proofreading a document. You’re not rewriting it—you’re just checking if it makes sense.


In the age of automation and AI, taking even a few minutes to profile your data can save hours of cleanup, rework, or worse—bad decisions.


Final Thought


Data doesn’t speak for itself. It speaks for whoever collected, shaped, and displayed it.

If we want a stronger data culture—one that drives better decisions—we must get past the illusion. We must stop worshiping the number and start questioning the process behind it.

Because not all clean data is good data. And not all numbers tell the truth.


Written by Jose Abrams, creator of the Data Culture Hive Mind blog, where data meets human behavior. Visit the blog at http://www.dataculturehivemind.com.


Disclaimer: The opinions expressed on this blog are solely those of the author and do not reflect the views, positions, or opinions of my employer.
