Douglas W. Hubbard
Douglas W. Hubbard is an American management consultant, speaker, and author who developed the Applied Information Economics (AIE) methodology — a systematic approach to quantifying the value of information and making better decisions under uncertainty. He is founder and president of Hubbard Decision Research.
How to Measure Anything: Finding the Value of Intangibles in Business (2007, third edition 2014) is his most influential book. It has been described as a paradigm-shifting work in management decision-making — one of the few business books that is grounded in formal probability theory and decision science while remaining accessible to non-mathematicians.
How to Measure Anything (2007/2014)
The Central Myth: Some Things Can’t Be Measured
Hubbard begins by identifying the target: the widespread organizational belief that important things — brand value, management effectiveness, employee morale, strategic flexibility, innovation capacity — cannot be quantified.
“Correct a costly myth that permeates many organizations today: that certain things can’t be measured.” — Douglas Hubbard, How to Measure Anything
He argues this belief is almost never true, and that it causes organizations to make worse decisions by remaining in higher uncertainty than necessary. The solution is a redefinition of measurement:
“Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations.” — Douglas Hubbard, How to Measure Anything
By this definition, measurement does not require precision or certainty. It requires only that uncertainty decreases. This shifts the question from “Can we measure X exactly?” to “Can we reduce our uncertainty about X, and is the reduction worth the cost?”
The Clarification Chain
The practical tool for making “immeasurables” measurable:
“Clarification Chain: If it matters at all, it is detectable/observable. If it is detectable, it can be detected as an amount (or range of possible amounts). If it can be detected as a range of possible amounts, it can be measured.” — Douglas Hubbard, How to Measure Anything
When managers claim something is unmeasurable, Hubbard asks them to specify: what would be different about the world if the quantity were higher or lower? Those differences are observable. The inability to measure turns out, on examination, to be an inability to define — and definition is half the work.
The Value of Information
The key analytical innovation: before measuring, compute whether the measurement is worth making.
“If the outcome of a decision in question is highly uncertain and has significant consequences, then measurements that reduce uncertainty about it have a high value.” — Douglas Hubbard, How to Measure Anything
Expected Value of Information (EVI) = Reduction in Expected Opportunity Loss
Where: EOL = chance of being wrong × cost of being wrong
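The EOL/EVI arithmetic above can be sketched in a few lines. The decision, probabilities, and dollar figures here are hypothetical illustrations, not from the book:

```python
# Sketch of Hubbard's EOL / EVI arithmetic with hypothetical numbers.

def expected_opportunity_loss(p_wrong, cost_wrong):
    """EOL = chance of being wrong x cost of being wrong."""
    return p_wrong * cost_wrong

# Before measuring: 40% chance the chosen option is wrong, $500k downside.
eol_before = expected_opportunity_loss(0.40, 500_000)

# Suppose a measurement would cut the chance of being wrong to 10%.
eol_after = expected_opportunity_loss(0.10, 500_000)

# Expected Value of Information = reduction in expected opportunity loss.
evi = eol_before - eol_after
print(evi)  # 150000.0
```

On these assumed numbers, the measurement is worth making as long as it costs less than $150k.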
This framework forces rigor about why you are measuring. Measurement without a defined decision it supports is meaningless.
“The only valid reason to say that a measurement shouldn’t be made is that the cost of the measurement exceeds its benefits.” — Douglas Hubbard, How to Measure Anything
The Measurement Inversion
One of Hubbard’s most practically important observations:
“In a decision model with a large number of uncertain variables, the economic value of measuring a variable is usually inversely proportional to how much measurement attention it typically gets.” — Douglas Hubbard, How to Measure Anything
Organizations measure what is easy to measure (revenue, headcount, ticket counts) rather than what most affects decision quality (the uncertain variables driving high-stakes choices). The things that get the most measurement attention are typically well-understood; the things most worth measuring are the ones where uncertainty is high and stakes are significant.
You Need Far Less Data Than You Think
“If you have a lot of uncertainty now, you don’t need much data to reduce uncertainty significantly.” — Douglas Hubbard, How to Measure Anything
The Rule of Five: “There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.”
This counterintuitive result means that even tiny samples tell you something statistically meaningful. The myth that “we don’t have enough data to measure” is usually wrong — you have more data than you think, and you need less than you think.
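The 93.75% figure follows from a two-line argument: each independent draw lands below the population median with probability 1/2, so the sample's [min, max] range misses the median only if all five draws fall on the same side. A quick check, plus an empirical simulation against a skewed distribution (my choice of Exp(1), whose true median is ln 2; the rule holds for any continuous distribution):

```python
import math
import random

# Analytic form: P(miss) = P(all 5 below) + P(all 5 above) = 2 * (1/2)^5
p_inside = 1 - 2 * 0.5 ** 5
print(p_inside)  # 0.9375

# Empirical check on a skewed population: Exp(1), true median = ln 2.
random.seed(0)
true_median = math.log(2)
trials = 100_000
hits = 0
for _ in range(trials):
    sample = [random.expovariate(1.0) for _ in range(5)]
    if min(sample) <= true_median <= max(sample):
        hits += 1
print(hits / trials)  # close to 0.9375
```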
“Four Useful Measurement Assumptions: It’s been measured before. You have far more data than you think. You need far less data than you think. Useful, new observations are more accessible than you think.” — Douglas Hubbard, How to Measure Anything
Fermi Decomposition
The technique of decomposing an uncertain quantity into components that are individually easier to estimate:
“Decomposition effect: The phenomenon that the decomposition itself sometimes turns out to provide such a sufficient reduction in uncertainty that further uncertainty reduction through new observations are not required.” — Douglas Hubbard, How to Measure Anything
About 25% of high-value measurement problems in Hubbard’s practice are resolved by decomposition alone. The act of breaking a complex uncertain quantity into components both clarifies thinking and often reveals that the individual components are much easier to estimate than the aggregate.
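A sketch of how a decomposition feeds a simple Monte Carlo estimate. The quantity, its three components, and all the ranges below are hypothetical, and sampling uniformly over each 90% range is a deliberate simplification (Hubbard's examples typically use normal or lognormal distributions):

```python
import random

random.seed(1)

def sample_range(lo, hi):
    # Simplification: treat a calibrated 90% interval as a uniform range.
    return random.uniform(lo, hi)

# Hypothetical decomposition: annual cost of unplanned downtime
#   = incidents per year x hours per incident x cost per hour
totals = []
for _ in range(10_000):
    incidents = sample_range(4, 12)
    hours = sample_range(1, 6)
    cost_per_hour = sample_range(5_000, 20_000)
    totals.append(incidents * hours * cost_per_hour)

totals.sort()
# The 5th and 95th percentiles of the simulated totals give a 90% interval
# for the aggregate, derived purely from the component estimates.
print(f"90% CI: {totals[500]:,.0f} to {totals[9500]:,.0f}")
```

Often the resulting interval is already narrow enough to decide with, which is exactly the decomposition effect described above.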
Calibrated Probability Assessments
Hubbard introduces the concept of calibration: training people to assess probabilities accurately. Most people are overconfident (their 90% confidence intervals contain the true answer only 50-60% of the time); a smaller number are underconfident. Calibration is a learnable skill, and training measurably improves it.
“Assessing uncertainty is a general skill that can be taught with a measurable improvement.” — Douglas Hubbard, How to Measure Anything
Calibrated 90% confidence interval: a range constructed so that it contains the true answer 90% of the time. Not "I'm 90% sure it's exactly 47" but "I'm 90% sure it's between 30 and 70."
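One way to check calibration is to score a batch of stated 90% intervals against known answers. The interval/answer triples below are made-up placeholders, not real calibration-test data:

```python
# Minimal calibration score: what fraction of a person's stated 90%
# intervals actually contain the true answer? A calibrated estimator
# should score about 0.9; overconfident estimators score much lower.

def calibration_score(estimates):
    hits = sum(1 for lo, hi, truth in estimates if lo <= truth <= hi)
    return hits / len(estimates)

# (lower bound, upper bound, true value) -- illustrative placeholders
answers = [
    (1800, 1900, 1869),  # hit
    (300, 400, 450),     # miss: interval too narrow (overconfidence)
    (10, 50, 31),        # hit
    (5, 15, 8),          # hit
    (100, 200, 250),     # miss
]
print(calibration_score(answers))  # 0.6
```

A score of 0.6 on 90% intervals is the typical overconfident pattern Hubbard describes.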
Applied Information Economics: The Six-Step Framework
1. Define the decision
2. Determine what you know now (calibrated prior)
3. Compute the value of additional information
4. Measure where information value is high
5. Make a decision and act
6. Return to step 1 as new decisions emerge
“Measure what matters, make better decisions.” — Douglas Hubbard, How to Measure Anything
The Risk Paradox
“If an organization uses quantitative risk analysis at all, it is usually for routine operational decisions. The largest, most risky decisions get the least amount of proper risk analysis.” — Douglas Hubbard, How to Measure Anything
Organizations apply rigorous analysis to small, well-understood decisions where it adds little value, while applying gut feel and narrative reasoning to large, consequential decisions where quantitative analysis would add enormous value. This is the Risk Paradox.
Bayesian Updating
Hubbard advocates Bayesian reasoning — formally updating prior beliefs with new evidence:
“When presented new information, we have no other option than to relate it to what we already know — there is no blank space in our minds within which new information can be stored so as not to ‘contaminate’ it with existing information.” — Clifford Konold, quoted by Hubbard
The Bayesian approach makes explicit what good reasoning requires: starting with a calibrated prior, gathering evidence, and updating the prior systematically.
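A minimal sketch of such an update, using a beta-binomial model (my choice of illustration; Hubbard discusses Bayesian updating more generally). A calibrated prior on a hypothetical defect rate is updated with new inspection data:

```python
# Prior belief about a process defect rate, encoded as Beta(2, 8):
# mean 20%, with wide uncertainty. All numbers are illustrative.
prior_a, prior_b = 2, 8

# New evidence: 1 defect found in 20 inspections.
defects, inspected = 1, 20

# Beta-binomial conjugate update: add successes and failures to the
# prior's pseudo-counts.
post_a = prior_a + defects
post_b = prior_b + (inspected - defects)

prior_mean = prior_a / (prior_a + prior_b)
post_mean = post_a / (post_a + post_b)
print(f"prior mean {prior_mean:.2f} -> posterior mean {post_mean:.2f}")
# prior mean 0.20 -> posterior mean 0.10
```

The evidence does not replace the prior; it pulls the estimate toward the observed rate in proportion to how much data arrived, which is the "no blank space" point in the quote above made mechanical.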
Practical Significance
Hubbard’s work is most valuable to:
- Executives making large capital allocation or strategic decisions under uncertainty
- Analysts tasked with quantifying “soft” variables like brand value or employee morale
- Operations teams designing measurement systems for complex processes
- Anyone trying to justify (or challenge) a measurement investment
The book’s greatest contribution is not any single technique but the paradigm shift: from “we can’t measure this” to “let’s define it, estimate the value of information, and find the most cost-effective way to reduce uncertainty.”
Mathematical Accessibility
While Hubbard presents his material accessibly, the underlying mathematics (probability theory, Bayesian inference, Monte Carlo simulation) is genuinely complex. The book is more accessible than a statistics textbook but more rigorous than most management books. Readers who engage with the full technical depth will get considerably more out of it than those who treat it as a conceptual overview.
Related Concepts
- measurement-and-uncertainty-reduction — The primary concept derived from Hubbard’s framework
- incentives-and-information-asymmetry — Information asymmetry is the target that good measurement corrects
- business-financial-clarity — Crabtree’s financial metrics are a domain-specific application of Hubbard’s general measurement principles