The Missing Contract: How to Wire Every Metric to a Move

Published: September 2025
By Amy Humke, Ph.D.
Founder, Critical Influence


Dashboards don't fail because they show the wrong number. They fail because no one pre-committed to what would happen when the number changed.

I call that the pre-commit contract: If X happens, then Y action fires; owned by this person, by this date. It's the spine of actionable analytics and the fastest cure for the adoption gap I've seen in my career. This article turns the idea into a playbook, with field-tested practices and concrete examples from education, retail, sales, and healthcare.


What a "Pre-Commit Contract" Really Is (and why it works)

Different communities have rediscovered the same mechanism under different labels, but underneath them all is the same move: wire the decision before you look at the data. That eliminates ambiguity, short-circuits status debates, and makes bias-resistant action the default.


The Playbook: Best Practices for Pre-Commit Contracts

1. Name the trigger precisely

Ambiguity kills action. Define the metric, the cutoff, the lookback window, and any smoothing (e.g., "7-day average conversion drops ≥2 points vs. 4-week baseline"). Use control charts to see exactly when performance moves outside normal variation. If seasonality is a factor, build your baseline with at least two full seasonal cycles. For quarterly businesses, that means two years; for retail cycles tied to fall, holiday, spring, and summer, that means two of each. Guardrails only work when thresholds reflect reality, not noise.
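The example trigger above can be stated as code so there is nothing left to interpret. This is a minimal sketch assuming daily conversion rates in percentage points; the 7-day window, 4-week baseline, and 2-point cutoff come straight from the example:

```python
from statistics import mean

def trigger_fired(daily_conversion, drop_points=2.0):
    """Return True if the 7-day average conversion rate drops
    >= `drop_points` below the trailing 4-week baseline.

    `daily_conversion` is a list of daily rates (percentage points),
    oldest first, with at least 35 days of history
    (28-day baseline + 7-day smoothing window)."""
    if len(daily_conversion) < 35:
        return False  # not enough history to form a baseline
    recent = mean(daily_conversion[-7:])       # 7-day smoothed value
    baseline = mean(daily_conversion[-35:-7])  # prior 4-week baseline
    return (baseline - recent) >= drop_points
```

Because the window, baseline, and cutoff are parameters rather than prose, recalibrating the threshold later is a one-line change, not a renegotiation.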

2. Tie each trigger to a concrete move

"Investigate" on its own isn't a move; it's a stall. But investigation can be the first wired step if it's structured.

Structure it as a fixed set of diagnostic lenses, each with a wired move: supply → replenish, demand → markdown, conversion → escalate UX, economics → adjust pricing, data/process → governance fix. "Investigate" is then not an open loop; it's a contract to run the lenses and fire whichever move applies.
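The lens-to-move mapping above can be made explicit as a small lookup so an investigation always terminates in a wired move. The specific moves and owner roles here are illustrative:

```python
# Each diagnostic lens resolves to a pre-committed (move, owner) pair.
LENS_PLAYBOOK = {
    "supply":       ("replenish inventory",        "supply planner"),
    "demand":       ("apply markdown",             "category manager"),
    "conversion":   ("escalate to UX team",        "product owner"),
    "economics":    ("adjust pricing",             "pricing lead"),
    "data_process": ("open governance fix ticket", "data steward"),
}

def wired_move(lens):
    """Resolve a diagnosed lens to its (move, owner).

    Raises KeyError on an unknown lens, so an investigation can never
    quietly end in an open loop."""
    if lens not in LENS_PLAYBOOK:
        raise KeyError(f"No wired move for lens '{lens}'; add one before shipping.")
    return LENS_PLAYBOOK[lens]
```

The failure mode is the point: an unrecognized lens raises loudly instead of letting "investigate" become a stall.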

3. Assign an owner with authority

Every KPI needs a human or role explicitly tied to it, not "the team." When the light turns red, there should be no debate about who moves next. Ownership becomes stronger when documented in job descriptions, tied to performance reviews, and visible across the organization.

4. Instrument alerts where work happens

Pre-commit rules hidden in a binder or wiki don't work. Wire alerts to where people operate — Slack, Teams, CRM, EMR — so the trigger instantly reaches the owner. Dashboards should also show threshold status (R/Y/G) beside the metric so the move is obvious.
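One way to make the alert itself carry the contract is to put the owner and the wired move in the message body. A minimal sketch, assuming a Slack-style incoming webhook; the URL, status names, and message format are placeholders:

```python
import json
import urllib.request

def build_alert(metric, status, owner, move):
    """Build a chat message payload that carries the contract:
    the metric, its R/Y/G status, the named owner, and the
    pre-committed move."""
    return {
        "text": (f"[{status.upper()}] {metric} crossed its threshold. "
                 f"Owner: @{owner}. Wired move: {move}.")
    }

def post_alert(webhook_url, payload):
    """POST the payload to an incoming webhook (Slack-compatible JSON)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The receiver never has to look anything up: the trigger, the owner, and the next move arrive in one notification.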

5. Close the loop

Every trigger should leave a trail. Log the action taken, who owned it, when it fired, and the outcome. Review that log quarterly to recalibrate thresholds, swap ineffective playbooks, or retire noisy triggers. This is how decision systems mature: turning "we acted" into "we learned."
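The decision log above needs only a handful of fields to support the quarterly review. A minimal sketch using a flat CSV file; the field names are illustrative:

```python
import csv
from datetime import datetime, timezone

LOG_FIELDS = ["fired_at", "metric", "trigger", "owner", "action", "outcome"]

def log_trigger(path, metric, trigger, owner, action, outcome="pending"):
    """Append one trigger event to a CSV decision log.

    The quarterly review reads this file to recalibrate thresholds,
    swap ineffective playbooks, or retire noisy triggers."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:  # new file: write the header row once
            writer.writeheader()
        writer.writerow({
            "fired_at": datetime.now(timezone.utc).isoformat(),
            "metric": metric,
            "trigger": trigger,
            "owner": owner,
            "action": action,
            "outcome": outcome,
        })
```

Outcomes start as "pending" and get updated when the loop closes, which is exactly the gap between "we acted" and "we learned."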

6. Use tiers, not cliffs

Binary triggers overreact to small blips and underreact to big swings. Graduated thresholds (Yellow/Orange/Red) produce proportionate responses: a mild dip prompts a light correction, a severe one escalates to leadership. Guardrail bands keep actions consistent and reduce lurching between overconfidence and panic.
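Graduated thresholds are simple to encode. The cutoffs below are illustrative; in practice you would calibrate them from the control-chart variation described earlier:

```python
def alert_tier(drop_points):
    """Map a metric drop (in points vs. baseline) to a graduated tier
    instead of a binary cliff. Cutoffs are illustrative placeholders."""
    if drop_points >= 5.0:
        return "red"     # severe: escalate to leadership
    if drop_points >= 3.0:
        return "orange"  # significant: owner acts within a day
    if drop_points >= 1.5:
        return "yellow"  # mild: light correction, keep watching
    return "green"       # normal variation: no action
```

A 1.8-point dip now prompts a light correction, while a 6-point drop escalates, with no lurching between overconfidence and panic.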

7. Decide before you measure

Decision intelligence stresses designing the decision first, including who acts, how, and under what conditions, before designing the metric. That sequence avoids bias, wasted analysis, and "mirror dashboards" that reflect the past but don't guide the future.

8. Document and communicate the contract

The pre-commit shouldn't live in someone's head. Write it down in your playbooks, templates, and dashboards. Make the threshold, owner, and move visible to everyone who touches the metric. Transparency is what builds trust, speeds action, and avoids silent stalls.


Field Guide Example: Higher Education

Early Warning Systems that actually intervene

Colleges and universities often deploy Early Warning Systems to monitor leading indicators of student persistence, such as course performance, Learning Management System (LMS) engagement, and attendance in gateway courses. The key is that these signals are tied to pre-committed interventions, not left to chance.

Example:
- Trigger: Cumulative GPA < 2.0 after the first term or failure in two gateway courses.
- Move: Within five business days, an advisor initiates a structured success plan, schedules a mandatory advising meeting, connects the student to academic support services, and places them on a tailored intervention track.
- Owner: Assigned academic advisor, with oversight from the student success office.
- Loop: Intervention documented in the advising platform, with a required follow-up check at four and eight weeks to assess progress and adjust supports.

This is higher-ed pre-commit in action: the threshold is explicit, the owner is clear, and the move is logged — so the signal doesn't stall at "someone should check on them."
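The trigger and move from the example above can be encoded directly, which is what makes an Early Warning System fire rather than merely report. A minimal sketch; the plan fields mirror the example and the field names are illustrative:

```python
def ews_trigger(cumulative_gpa, gateway_failures):
    """Early Warning System trigger from the example: fires when
    cumulative GPA falls below 2.0 after the first term OR the
    student has failed two (or more) gateway courses."""
    return cumulative_gpa < 2.0 or gateway_failures >= 2

def intervention_plan(student_id):
    """The pre-committed move, owner, deadline, and follow-up loop."""
    return {
        "student": student_id,
        "move": "structured success plan + mandatory advising meeting",
        "owner": "assigned academic advisor",
        "deadline_business_days": 5,
        "follow_up_weeks": [4, 8],  # required progress checks
    }
```

Note the OR in the trigger: either signal alone is enough to fire, so a student passing gateway courses with a 1.9 GPA still gets the intervention.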


Design the Decision First

Decision Intelligence (DI) is the bridge between analytics and action: it starts with the decision (who will do what, under what conditions) and only then designs the metrics and models. In other words: no orphan metrics.

My rule of thumb is simple: If the metric doesn't have a named owner, a threshold, or a move, it doesn't belong on the dashboard.


How to Ship Pre-Commit, Not Just Dashboards (step-by-step)

1. Inventory every metric on the dashboard; strike any that has no plausible decision behind it.
2. For each survivor, define the trigger precisely: metric, cutoff, lookback window, smoothing.
3. Wire the trigger to a concrete move and name an owner with the authority to act.
4. Instrument tiered alerts where the owner already works (Slack, Teams, CRM, EMR).
5. Stand up the decision log and schedule a quarterly review to recalibrate thresholds.

Why this matters for governance, adoption, and ops

A pre-commit contract doesn't add bureaucracy; it adds speed. Once you wire metrics to moves, three bigger levers in your data practice fall into place:

- Governance: every metric carries a documented definition, threshold, and owner, so definition debates and audit scrambles shrink.
- Adoption: people use dashboards that tell them what to do next, not ones that only report what already happened.
- Operations: tiered alerts and a decision log turn firefighting into a repeatable, improvable process.


Templates you can steal

Use this 1-page canvas for each KPI/model output:

- Metric: definition, unit, and measurement window
- Trigger: exact threshold, baseline, and any smoothing
- Move: the concrete action that fires when the trigger trips
- Owner: the named person or role with authority to act
- Deadline: how quickly the move must fire
- Loop: where the action and outcome get logged, and when they're reviewed

Fill one in for every KPI on your dashboard. If you can't fill it in, the KPI probably doesn't belong.
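If you want the canvas machine-readable rather than a slide, it fits in a single record. A sketch using a dataclass; the field names mirror the canvas and the completeness check enforces the "fill it in or cut it" rule:

```python
from dataclasses import dataclass

@dataclass
class PreCommitContract:
    """One-page contract canvas as a record -- one per KPI."""
    metric: str    # definition, unit, and measurement window
    trigger: str   # exact threshold, baseline, and smoothing
    move: str      # the concrete action that fires
    owner: str     # named person or role with authority to act
    deadline: str  # how quickly the move must fire
    loop: str      # where the action and outcome get logged

    def is_complete(self):
        """True only when every field is filled in; an incomplete
        contract means the KPI doesn't belong on the dashboard."""
        return all([self.metric, self.trigger, self.move,
                    self.owner, self.deadline, self.loop])
```

Storing contracts this way also lets a build step validate the dashboard: any KPI without a complete contract fails the check before it ships.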


The final ask (leaders and doers)

Pick one dashboard this week. For its most important metric, write the contract: the precise trigger, the wired move, the named owner, and where the outcome gets logged. Then do the next one.

The difference between "we knew" and "we moved" is decided before the number changes.
