The Question Is the Problem: 5 Traps That Sabotage Analysis Before It Starts
Published: June 2025
By Amy Humke, Ph.D.
Founder, Critical Influence
The most challenging part of data work comes after the data is pulled: analyzing, modeling, visualizing, and storytelling. But most analysis failures don't start at the keyboard. They begin in the first meeting, the first message, and the first request, when the question is flawed, vague, or quietly sabotaging the path to insight.
Here are five common traps that derail analysis, each with real-world examples and a path to reframing them for clarity and impact.
1. Undefined Aim: The Clarity Mirage
What it sounds like:
"Can you give us a breakdown of student engagement by region for Q2?"
Why it fails:
There's no defined decision or objective. The request masquerades as clarity but offers no direction. It sounds focused at first glance—it has a metric, a timeframe, and a segment. But ask one more question, "To make what decision?" and you find no destination. Just motion. No meaning. Without understanding what's at stake or what action might follow, analysts are left guessing what matters.
Example 1: The Enrollment Deep Dive
An executive requests five years of enrollment data by program and demographic. The dashboard is built, charts are polished, and trends are clear. But in the review, they say:
"Actually, I wanted to know which changes we made last year helped with retention."
The problem wasn’t the data. It was the question.
Example 2: The Social Media Engagement Ask
A marketing team asks for "engagement over time" on social media posts. The analyst pulls likes, shares, and comments by week. But the team had actually hoped to learn which types of posts led to sign-ups for a webinar series.
Engagement was a proxy—not the point.
Reframe it:
What decision are you trying to make?
2. Undefined Success: The Moving Target
What it sounds like:
"Can you see if the campaign increased engagement?"
Why it fails:
It sounds measurable—until you ask, "What counts as engagement?" Clicks? Shares? Time on page? Repeat visits?
And how much of an increase is meaningful? Without clear success criteria, you can’t tell whether the results are good, meaningful, or worth acting on.
Example 1: The A/B Test That Stalled
A product team runs a test. Version B improves the metric by 2.1%, but no threshold for action was set. Stakeholders are split—some say it’s a win, others call it noise. No decision gets made.
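One way to avoid this stall is to write the decision rule down before the test runs. The sketch below is a hypothetical illustration, not the team's actual analysis: it assumes a plain two-proportion z-test, made-up traffic and conversion numbers, and a pre-agreed minimum lift and significance level chosen with stakeholders ahead of time.

```python
# Minimal sketch: pre-register success criteria before an A/B test.
# All numbers (visitors, conversions, thresholds) are hypothetical.
from math import sqrt, erf

# Agree on these BEFORE the test, with stakeholders:
MIN_LIFT = 0.03   # smallest relative lift worth acting on (3%)
ALPHA = 0.05      # significance level for calling the result real

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates (z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results: Version B shows roughly a 2.1% relative lift.
conv_a, n_a = 4_800, 100_000
conv_b, n_b = 4_900, 100_000

p_a, p_b = conv_a / n_a, conv_b / n_b
relative_lift = (p_b - p_a) / p_a
p_value = two_proportion_p_value(conv_a, n_a, conv_b, n_b)

# Because the decision rule was set up front, the answer is unambiguous.
if p_value < ALPHA and relative_lift >= MIN_LIFT:
    print(f"Worth scaling: lift {relative_lift:.1%}, p = {p_value:.3f}")
else:
    print(f"Not worth scaling yet: lift {relative_lift:.1%}, p = {p_value:.3f}")
```

With the rule written down first, a 2.1% lift that clears neither bar is a clear "not yet" rather than a stalemate.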
Example 2: The Outreach Campaign
A university rolls out a web campaign to nudge applicants toward registration. Impressions spike, but registrations barely move.
Marketing celebrates visibility.
Operations questions the cost.
Leadership shrugs.
Without shared definitions, no one agrees on what the results mean.
Reframe it:
What result would make this worth scaling—and what would make it not worth pursuing?
3. Misaligned Measures: What's Easy Isn't What's Right
What it sounds like:
"Let's look at conversion rate; it’s already in the dashboard."
Why it fails:
The easiest metric to pull doesn’t always answer the real question. We default to what’s available instead of what’s relevant.
Example 1: The Email Campaign Illusion
Marketing evaluates a campaign using open rates. The numbers look good, but sign-ups are flat. It turns out the email content worked—users just dropped off on the landing page.
Example 2: The Financial Aid Follow-Up
A school evaluates aid reminders using "portal logins." But logging in doesn’t mean students completed verification. The right metrics—like FAFSA completion—weren’t even in the default dataset.
Reframe it:
What specific behavior or outcome shows this initiative is working?
4. Downstream Metrics: Too Far from the Action
What it sounds like:
"Let’s measure the impact on revenue; it’s what leadership cares about."
Why it fails:
Revenue is easy to quantify but too far downstream. When the initiative happens early in the process, too many other variables dilute the signal.
Example 1: The Outreach Campaign
An enrollment team launches an early application campaign. They use total tuition revenue as their metric—but revenue depends on advising, financial aid, transfer credit, persistence, and more.
By the time revenue rolls in, the signal is gone.
Example 2: The Advising Pilot
A university tests personalized advising for new students. They look at fiscal-year revenue impact. In reality, registration speed and credit loads increased immediately—just not revenue.
Reframe it:
What’s the earliest meaningful metric that could reflect the impact and still link to long-term goals?
5. Unspoken Context: The Missing Variable Problem
What it sounds like:
"Why is Region A underperforming?"
Why it fails:
The data reflects what’s captured—not everything that matters. Without context, analysis misattributes causes or misses key external factors.
Example 1: The Regional Mystery
Region A is down in enrollment. Dashboards suggest poor follow-up. Eventually, someone mentions stricter fraud filters in that region: the drop comes from screening out fraudulent applications, not from weaker outreach.
What looks like underperformance is actually success.
Example 2: The Summer Term Surprise
Applications drop unexpectedly. Panic sets in—until a recruiter reveals digital ads were paused for two weeks due to a budget glitch.
The analyst had no way of seeing that in the data.
Reframe it:
Has anything changed operationally or contextually that wouldn’t show up in the data?
How to Combat These Traps
Fixing these issues doesn’t require more complex models. It requires better conversations up front:
- Clarify the decision. Ask what the stakeholder is trying to decide or influence.
- Define success early. Agree on what “good enough” looks like before touching the data.
- Push for relevance. Don’t settle for the easiest metric—go after the one that shows real impact.
- Follow the chain. Choose metrics that are close enough to the change to show a signal.
- Ask about context. You can’t model what you don’t know exists.
These aren’t nice-to-haves. They’re the foundation.
Because no matter how clean the code or how elegant the chart, if the question is broken, the analysis will be too.
Fixing it starts before the data ever hits your screen.