One Metric That Matters
The One Metric That Matters (OMTM) is the core concept of Alistair Croll and Benjamin Yoskovitz’s Lean Analytics framework. The framework argues that startups — and business units within larger companies — fail at analytics not because they lack data but because they track too much data and focus on the wrong signals. The OMTM principle cuts through this with a deceptively simple prescription: at any given stage of development, there is one metric that matters more than all others, and you should organize your attention and experimentation around it.
“Lean, analytical thinking is about asking the right questions, and focusing on the one key metric that will produce the change you’re after.”
What Makes a Good Metric
Before the OMTM can be identified, the metrics being considered must meet quality criteria. Croll and Yoskovitz specify four characteristics:
Comparative: Metrics are only meaningful in context. “2% conversion” tells you nothing. “Conversion increased from 1.2% to 2% week-over-week against a baseline of 1.8% for cohorts acquired through paid search” tells you something actionable.
Understandable: If people in the organization cannot explain the metric in plain language, it cannot change behavior. A metric that requires a data scientist to interpret cannot drive day-to-day decision-making.
A ratio or rate: Ratios are inherently comparative, easy to act on, and reveal relationships between variables. “Distance traveled is informational. But speed — distance per hour — is something you can act on.”
Behavior-changing: This is the most critical criterion:
“A good metric changes the way you behave. This is by far the most important criterion for a metric: what will you do differently based on changes in the metric?”
A metric that does not produce behavioral change is worthless regardless of how accurately it is tracked.
Vanity vs. Actionable Metrics
The framework’s most influential distinction:
Vanity metrics make you feel good but don’t guide action. Total registered users, cumulative page views, press mentions, app downloads — these numbers typically grow as long as the company grows and obscure more than they reveal. A startup with 100,000 registered users and a 0.5% monthly active rate has a serious problem that the total registration number conceals.
Actionable metrics connect to decisions. Conversion rate, daily active users as a percentage of monthly active users, churn by acquisition cohort, revenue per customer by channel — these metrics point to something specific you can change.
“If you have a piece of data on which you cannot act, it’s a vanity metric. If all it does is stroke your ego, it won’t help.”
“Whenever you look at a metric, ask yourself, ‘What will I do differently based on this information?’ If you can’t answer that question, you probably shouldn’t worry about the metric too much.”
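To make the distinction concrete, here is a minimal sketch with invented numbers (echoing the 100,000-user example above): the same raw counts yield a vanity number and an actionable ratio.

```python
# Invented numbers: a vanity dashboard would report the first figure and stop.
registered_users = 100_000       # vanity: grows as long as the company grows
monthly_active_users = 500
daily_active_users = 60

# Actionable metrics are ratios that point at a decision.
monthly_active_rate = monthly_active_users / registered_users  # the concealed problem
dau_mau = daily_active_users / monthly_active_users            # engagement ratio

print(f"Monthly active rate: {monthly_active_rate:.1%}")  # prints 0.5%
print(f"DAU/MAU: {dau_mau:.1%}")                          # prints 12.0%
```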
Leading vs. Lagging Indicators
Lagging metrics explain the past: last month’s revenue, annual churn rate, total customer count. They are accurate (they describe what happened) but not useful for changing outcomes.
Leading metrics predict the future: engagement trends in the first 30 days (predictive of long-term retention), session frequency in week one (predictive of monthly active user rates), feature adoption rates (predictive of upsell conversion).
“Leading metrics give you a predictive understanding of the future; lagging metrics explain the past. Leading metrics are better because you still have time to act on them — the horse hasn’t left the barn yet.”
The most powerful leading indicators are causal — where changes in the leading metric actually drive changes in the outcome metric, not merely correlate with them:
“Correlation is nice. But if you’ve found a leading indicator that causes a change later on, that’s a superpower, because it means you can change the future.”
The Five Stages Framework
The framework argues that the appropriate OMTM is stage-dependent — what matters at Stage 1 is different from what matters at Stage 4. Croll and Yoskovitz describe five stages:
Empathy Stage: The primary question is whether there is a real problem worth solving. The metric is not quantitative — it is the number and quality of customer conversations that validate genuine pain. “If you can’t find 15 people to talk to, well, imagine how hard it’s going to be to sell to them.”
Stickiness Stage: Does the solution work well enough that users return? The core metric is retention. “The fundamental KPI for stickiness is customer retention. Churn rates and usage frequency are other important metrics to track.”
“stickiness comes before virality, and virality comes before scale.”
Virality Stage: Are satisfied users spreading the product? The metric is the viral coefficient — the number of new customers each existing customer successfully converts. A viral coefficient above 1 means growth is self-sustaining: each user brings in more than one new user. Between roughly 0.75 and 1, virality meaningfully helps growth; below 0.75, virality contributes but will not drive growth on its own.
Revenue Stage: Can the business model sustain itself? The metrics are Customer Acquisition Cost (CAC) and Customer Lifetime Value (CLV). The core ratio: CLV/CAC should exceed 3:1 for a healthy SaaS business. A key rule: “spend less than a third of the money you expect to gain from a customer on acquiring that customer.”
Scale Stage: Can the model expand to new markets? The metrics shift to ecosystem health — channel efficiency, market saturation indicators, cohort quality across new segments.
The critical implication: a company that tries to optimize virality before achieving stickiness is building on sand. Users acquired through viral channels will churn at the same rate as users acquired through paid channels if the product isn’t sticky. The stage sequence is not optional — it reflects the actual logical dependencies of business development.
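The viral coefficient described above is typically computed as invitations sent per user multiplied by the conversion rate of those invitations. A sketch with illustrative numbers (the inputs are assumptions, not figures from the book):

```python
def viral_coefficient(invites_per_user: float, invite_conversion_rate: float) -> float:
    """New customers each existing customer successfully converts."""
    return invites_per_user * invite_conversion_rate

# Illustrative inputs: each user sends 4 invites, 20% of invites convert.
k = viral_coefficient(invites_per_user=4.0, invite_conversion_rate=0.2)

if k > 1:
    verdict = "self-sustaining growth"
elif k > 0.75:
    verdict = "virality meaningfully helps growth"
else:
    verdict = "virality contributes but will not drive growth alone"

print(f"k = {k:.2f}: {verdict}")  # k = 0.80
```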
Key Business Model Benchmarks
The framework provides specific benchmarks for common business models:
SaaS:
- 30% of registered users active at least monthly (good)
- 10% of monthly users active daily (good)
- 5% week-over-week growth in active users (healthy pre-revenue)
- 5% weekly revenue growth (healthy post-revenue)
- Churn: monthly churn above 5% is problematic; to calculate average customer lifetime: divide 100 by the monthly churn percentage (2.5% monthly churn = 40-month average lifetime)
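The lifetime arithmetic in the last bullet is simple enough to encode directly (a trivial sketch):

```python
def avg_lifetime_months(monthly_churn_pct: float) -> float:
    """Average customer lifetime in months: 100 / monthly churn percentage."""
    return 100 / monthly_churn_pct

print(avg_lifetime_months(2.5))  # 40.0 months
print(avg_lifetime_months(5.0))  # 20.0 months, at the problematic-churn threshold
```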
Customer Acquisition:
- CAC/CLV ratio: CAC should be under 33% of CLV
- For low-value SaaS products (CLV under $5K): acquisition cost should be 11–15% of CLV
- For high-value products (CLV over $50K): acquisition cost can rise to a larger share of CLV, though still bounded by the one-third rule
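A minimal check of the one-third rule, with hypothetical CLV and CAC figures:

```python
def cac_within_budget(clv: float, cac: float) -> bool:
    """The rule of thumb: spend less than a third of the money you
    expect to gain from a customer on acquiring that customer."""
    return cac < clv / 3

print(cac_within_budget(clv=9_000, cac=2_500))  # True  (CLV/CAC = 3.6:1)
print(cac_within_budget(clv=9_000, cac=4_000))  # False (CLV/CAC = 2.25:1)
```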
Email engagement:
- Well-run campaigns: 20–30% open rate, 5%+ click-through
- Segmented campaigns outperform unsegmented by ~15%
The Cohort Lens
A critical methodological contribution: aggregate metrics obscure what cohort analysis reveals. User counts can rise while per-user engagement falls, a sign the business is struggling, yet the aggregate numbers look fine as long as new signups offset churning users.
“Cohort analysis can be done for revenue, churn, viral word of mouth, support costs, or any other metric you care about.”
Cohort analysis tracks groups of users acquired in the same time period through their entire relationship with the product. Changes in behavior patterns across cohorts reveal whether product improvements are actually changing user experience or just changing who joins.
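The mechanics can be sketched with fabricated data: group users by signup month, then compute the fraction of each cohort still active N months after joining.

```python
from collections import defaultdict

# Fabricated data: each user has a signup cohort and the months they were active.
users = [
    {"cohort": "2024-01", "active": ["2024-01", "2024-02", "2024-03"]},
    {"cohort": "2024-01", "active": ["2024-01"]},
    {"cohort": "2024-02", "active": ["2024-02", "2024-03"]},
    {"cohort": "2024-02", "active": ["2024-02"]},
]
months = ["2024-01", "2024-02", "2024-03"]

def cohort_retention(users, months):
    """Fraction of each signup cohort active N months after joining."""
    table = defaultdict(dict)
    for user in users:
        start = months.index(user["cohort"])
        for offset, month in enumerate(months[start:]):
            cell = table[user["cohort"]].setdefault(offset, [0, 0])
            cell[0] += month in user["active"]  # active members this month
            cell[1] += 1                        # cohort size
    return {c: {o: a / n for o, (a, n) in row.items()} for c, row in table.items()}

for cohort, row in sorted(cohort_retention(users, months).items()):
    print(cohort, row)
# 2024-01 {0: 1.0, 1: 0.5, 2: 0.5}
# 2024-02 {0: 1.0, 1: 0.5}
```

Reading down a column (same month offset across cohorts) shows whether newer cohorts retain better than older ones, which is the signal aggregate numbers hide.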
The OMTM Discipline
The practical implementation: at any given stage, identify the single most important metric — the one that, if improved, would most advance the business toward the next stage — and focus experiments and resources on moving that metric.
This requires explicit discipline against two failure modes:
Premature optimization: Optimizing revenue metrics before achieving stickiness, or optimizing virality before achieving empathy validation. Each stage has prerequisite conditions.
Metric proliferation: Tracking dozens of metrics, each of which creates a competing organizational priority. The OMTM discipline doesn’t mean ignoring other metrics — it means recognizing that most metrics are diagnostic (monitored to detect problems) while one is directional (optimized to advance).
“Capture everything, but focus on what’s important.”
The Build-Measure-Learn Application
The OMTM integrates directly with the build-measure-learn loop: the metric to be measured in each experiment should be the OMTM for the current stage, and the learning should directly update the hypothesis about how to move that metric. This prevents the “science fair” problem — running experiments for their own sake rather than systematically reducing uncertainty about what matters most.
“No feature should be built without a corresponding metric on usage and engagement. These sub-metrics all bubble up to the OMTM; they’re pieces of data that, aggregated, tell a more complete story.”
OMTM vs. OKR Frameworks
The OMTM concept and OKR frameworks (okrs-objectives-and-key-results) serve related but distinct purposes. OKRs organize organizational goal-setting across multiple objectives simultaneously. OMTM argues for concentrating focus on a single metric at a given stage. The two frameworks can coexist: OKRs provide the quarterly goal structure, while OMTM provides the weekly operational focus — but organizations need to be deliberate about the relationship or the metric proliferation that OKRs can produce undermines the focus that OMTM requires.