Measurement and Uncertainty Reduction
Douglas Hubbard’s How to Measure Anything challenges one of the most persistent myths in management: that certain important things — management effectiveness, brand value, strategic flexibility, employee morale — are inherently unmeasurable. Hubbard argues that this belief is almost always wrong, that it causes organizations to make worse decisions than they need to, and that a systematic approach to measurement can deliver significant value across virtually any domain.
The Core Redefinition of Measurement
The foundational move is definitional:
“Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations.” — Douglas Hubbard, How to Measure Anything
This definition differs radically from the intuitive understanding of measurement (precision, certainty, laboratory conditions). Hubbard requires only that after a measurement, you know something you didn’t know before — that uncertainty is reduced, even if not eliminated.
“The concept of measurement as ‘uncertainty reduction’ and not necessarily the elimination of uncertainty is a central theme of this book.” — Douglas Hubbard, How to Measure Anything
This recasts the question. Instead of “Can we measure X precisely?” the question becomes “Can we reduce our uncertainty about X, and is that reduction worth the cost of obtaining it?”
The Clarification Chain
Most “unmeasurable” things, Hubbard argues, turn out to be measurable once properly defined:
“If it matters at all, it is detectable/observable. If it is detectable, it can be detected as an amount (or range of possible amounts). If it can be detected as a range of possible amounts, it can be measured.” — Douglas Hubbard, How to Measure Anything
The practical corollary: “Figure out what you mean and you are halfway to measuring it.” The apparent immeasurability of concepts like “management effectiveness” or “customer satisfaction” usually dissolves when forced to specify what observable consequences would differ if the concept were higher or lower.
The Value of Information
Hubbard introduces the Expected Value of Information (EVI) as the correct criterion for deciding whether to measure something:
“If the outcome of a decision in question is highly uncertain and has significant consequences, then measurements that reduce uncertainty about it have a high value.” — Douglas Hubbard, How to Measure Anything
EVI = reduction in expected opportunity loss (EOL)
EOL = chance of being wrong × cost of being wrong
The practical insight: measurements that feel trivial, such as reducing uncertainty by 10% on a $10M decision, can be extremely valuable and well worth a significant investment to obtain.
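The two formulas above can be put into code directly. A minimal sketch with hypothetical numbers (a 40% chance of being wrong at a $500,000 cost, and a measurement expected to cut that chance to 30%):

```python
def expected_opportunity_loss(p_wrong: float, cost_wrong: float) -> float:
    """EOL = chance of being wrong x cost of being wrong."""
    return p_wrong * cost_wrong

# Hypothetical decision: 40% chance of being wrong, $500,000 cost of being wrong.
eol_before = expected_opportunity_loss(0.40, 500_000)   # $200,000

# Suppose a measurement is expected to reduce the chance of being wrong to 30%.
eol_after = expected_opportunity_loss(0.30, 500_000)    # $150,000

# EVI = reduction in EOL. Any measurement costing less than this is worth buying.
evi = eol_before - eol_after                            # $50,000
print(f"EVI = ${evi:,.0f}")
```

The point of the sketch is the decision rule: compare `evi` to the cost of the measurement before collecting any data.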
This generates a counterintuitive principle Hubbard calls the Measurement Inversion:
“In a decision model with a large number of uncertain variables, the economic value of measuring a variable is usually inversely proportional to how much measurement attention it typically gets.” — Douglas Hubbard, How to Measure Anything
Organizations measure what they can conveniently measure (outputs, counts, easily observable quantities) rather than what most affects decision quality (uncertain variables with high stakes).
Four Useful Measurement Assumptions
Against the myth of immeasurability, Hubbard proposes four default assumptions:
“It’s been measured before. You have far more data than you think. You need far less data than you think. Useful, new observations are more accessible than you think.” — Douglas Hubbard, How to Measure Anything
The third is the most counterintuitive:
“When you know almost nothing, almost anything will tell you something.” — Douglas Hubbard, How to Measure Anything
“If you have a lot of uncertainty now, you don’t need much data to reduce uncertainty significantly.” — Douglas Hubbard, How to Measure Anything
This is the opposite of the naive assumption that hard-to-measure things require massive data sets. When current uncertainty is high, a small sample reduces it substantially. When current uncertainty is low, it takes a very large sample to move the needle further.
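A back-of-envelope way to see this (a sketch, not an example from the book): under a normal approximation, the width of a confidence interval for a mean shrinks with the square root of the sample size, so the first few observations buy far more uncertainty reduction than later ones.

```python
import math

# Width of a 90% normal-approximation confidence interval for a mean,
# expressed relative to the population standard deviation: 2 * 1.645 / sqrt(n).
def ci_width(n: int, z: float = 1.645) -> float:
    return 2 * z / math.sqrt(n)

for n in [5, 20, 100, 1000]:
    print(f"n = {n:4d}  relative CI width = {ci_width(n):.2f}")

# Going from n=5 to n=20 halves the remaining interval width;
# going from n=100 to n=1000 removes far less in absolute terms.
```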
The Rule of Five and Small Samples
Hubbard offers a striking statistical result:
“Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.” — Douglas Hubbard, How to Measure Anything
This means that 5 random observations tell you something statistically meaningful about a population. Combined with calibrated estimation, even very small samples can substantially reduce uncertainty.
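The 93.75% figure is simple probability: the median lies outside the sample's range only when all five draws fall on the same side of it, which has probability 2 × 0.5⁵ = 6.25%. A quick Monte Carlo check (a sketch using an arbitrary skewed population, since the rule is distribution-free):

```python
import random
import statistics

random.seed(42)

# Arbitrary skewed population; the Rule of Five holds for any distribution.
population = [random.lognormvariate(0, 1) for _ in range(100_000)]
true_median = statistics.median(population)

hits = 0
trials = 20_000
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= true_median <= max(sample):
        hits += 1

print(f"Empirical rate: {hits / trials:.4f}  (theory: 1 - 2 * 0.5**5 = 0.9375)")
```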
Fermi Decomposition
The Fermi approach — used by physicists to estimate orders of magnitude before detailed calculation — is a practical measurement technique:
Decompose an uncertain quantity into components that are individually easier to estimate. The decomposition itself often reduces uncertainty significantly — sometimes enough that no additional data collection is needed.
“Decomposition effect: The phenomenon that the decomposition itself sometimes turns out to provide such a sufficient reduction in uncertainty that further uncertainty reduction through new observations is not required.” — Douglas Hubbard, How to Measure Anything
“About 25% of the high-value measurements were addressed with decomposition alone.” — Douglas Hubbard, How to Measure Anything
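A classic Fermi decomposition, Enrico Fermi's piano-tuners estimate (not one of Hubbard's case studies; every input below is a rough assumption), shows how the technique works:

```python
# Estimating the number of piano tuners in Chicago from easier sub-estimates.
# All inputs are deliberately rough assumptions.
population        = 2_500_000   # people in Chicago (rough)
people_per_home   = 2.5
homes_with_piano  = 0.05        # roughly 1 household in 20
tunings_per_year  = 1           # per piano
tunings_per_day   = 4           # one tuner can service per workday
workdays_per_year = 250

pianos = population / people_per_home * homes_with_piano
demand = pianos * tunings_per_year              # tunings needed per year
supply = tunings_per_day * workdays_per_year    # tunings one tuner does per year

tuners = demand / supply
print(f"Estimated piano tuners: {tuners:.0f}")  # ~50
```

Each component (households, piano ownership rate, tuner capacity) is far easier to bound than the headline quantity, which is the whole point of the decomposition.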
Applied Information Economics: The Framework
Hubbard proposes a six-step framework he calls Applied Information Economics (AIE):
1. Define the decision
2. Determine what you know now
3. Compute the value of additional information (if zero, skip to step 5)
4. Measure where information value is high
5. Make a decision and act on it
6. Return to step 1 as actions create new decisions
“Measure what matters, make better decisions.” — Douglas Hubbard, How to Measure Anything
Connection to Business Financial Metrics
Greg Crabtree in Simple Numbers, Straight Talk, Big Profits! operates on the same principle applied specifically to business financial metrics: most entrepreneurs make decisions on inadequate or misleading financial data, and this is the primary source of their poor business outcomes.
“You know, it’s funny. We always make money on the spreadsheet, but at the end of the year it’s not in the bank account.” — Greg Crabtree, Simple Numbers
Crabtree’s solution — gross profit per labor dollar, market-based salaries, core capital targets — is a specific application of Hubbard’s principle: define the key decisions (can I afford this hire? is my business profitable?), identify the measurements that bear on those decisions (gross profit/labor ratio, pretax profit %, core capital), and measure those things rigorously rather than relying on intuition or incomplete metrics.
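As an illustration only (hypothetical figures, and a simplified reading of Crabtree's definitions), the core ratios might be computed like this:

```python
# Hypothetical small-business annual figures.
revenue        = 1_200_000
cogs           = 400_000   # cost of goods sold, excluding labor
labor_cost     = 500_000   # all wages, including a market-based owner salary
other_expenses = 200_000

gross_profit = revenue - cogs                               # 800,000
gp_per_labor_dollar = gross_profit / labor_cost             # 1.60
pretax_profit = gross_profit - labor_cost - other_expenses  # 100,000
pretax_margin = pretax_profit / revenue                     # ~8.3%

print(f"Gross profit per labor dollar: {gp_per_labor_dollar:.2f}")
print(f"Pretax profit margin: {pretax_margin:.1%}")
```

The ratios answer the decisions directly: `gp_per_labor_dollar` bears on "can I afford this hire?", and `pretax_margin` on "is my business actually profitable?".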
Levitt’s Parallel: Data and Causation
Levitt’s Freakonomics uses data to challenge conventional explanations — identifying causes of crime, wage gaps, and name effects. His methodology is implicitly Hubbard’s: identify a question with real stakes, find measurements that reduce uncertainty about the answer, and follow the evidence regardless of where it leads.
“Knowing what to measure and how to measure it makes a complicated world much less so.” — Levitt and Dubner, Freakonomics
“A correlation simply means that a relationship exists between two factors… but it tells you nothing about the direction of that relationship.” — Levitt and Dubner, Freakonomics
Levitt’s insistence on the correlation/causation distinction is a specific instance of Hubbard’s broader epistemological rigor: what exactly does this measurement tell us, and what does it leave uncertain?
Practical Implications
- Before claiming something is immeasurable, work through the clarification chain. What would be different if the thing being measured were higher or lower? Those differences are observable and can be measured.
- Compute the value of information before measuring. Not everything should be measured. Only measure when the cost of being wrong (in the decision that depends on the measurement) exceeds the cost of measuring.
- Start with small samples. When uncertainty is high, 5-10 observations dramatically reduce it. Don’t wait for massive data sets; iterate.
- Decompose before collecting data. Breaking a complex quantity into components often reveals that the uncertainty was lower than it appeared, or that the critical uncertainty lies in one specific component.
- Apply the Measurement Inversion. Look at your highest-stakes decisions and ask what they depend on that you’ve never measured. That is almost certainly where measurement attention is most underallocated.
Precision vs. Accuracy
A common error in measurement is confusing precision with accuracy. Hubbard distinguishes: precision is consistency (the measurement gives the same result each time), while accuracy is closeness to truth. A precise but inaccurate measurement (systematically biased) is worse than an imprecise but unbiased one. Optimize first for unbiasedness, then for precision.
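The distinction can be simulated with two hypothetical instruments (arbitrary numbers): one precise but biased, the other noisy but unbiased. Averaging more readings rescues the second but not the first.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 100.0

# Precise but biased: low noise, systematic +5 offset.
biased   = [TRUE_VALUE + 5 + random.gauss(0, 0.5) for _ in range(1_000)]
# Imprecise but unbiased: high noise, no offset.
unbiased = [TRUE_VALUE + random.gauss(0, 10) for _ in range(1_000)]

for name, readings in [("precise/biased", biased), ("imprecise/unbiased", unbiased)]:
    mean = statistics.fmean(readings)
    print(f"{name:20s} mean = {mean:7.2f}  error of mean = {mean - TRUE_VALUE:+.2f}")

# More readings shrink the unbiased instrument's error toward zero;
# the biased instrument converges to the wrong answer (about +5).
```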
Related Concepts
- incentives-and-information-asymmetry — Information asymmetry is one of the primary drivers of poor decisions; measurement reduces it
- business-financial-clarity — Crabtree’s financial metrics are a domain-specific application of Hubbard’s measurement principles