Believe it or not, despite decades of research, there is still no ironclad, Newton/Einstein-grade scientific proof that smoking causes cancer. Regardless of what you may think to the contrary, this is true. After all, there is not yet a general theory of cancer that can be empirically tested.
And yet, despite this lack of "smoking gun" scientific evidence, everyone "knows" that smoking causes cancer and we at least try to act accordingly. After all, there must be some reason for bans on smoking in public, taxes on tobacco, restrictions on cigarette sales to minors, and so forth.
In business as well, there are lots of decisions that must be made in the absence of definitive proof to guide us. In fact, requiring such proof before moving forward is known by another term: "paralysis by analysis."
But how do we strike a balance between not taking action due to a lack of certainty and knowing we have enough information to act confidently? One way is to look for robustness.
Simply put, robustness is the existence of multiple means of determining something, with those means at least partially independent of each other. (Thanks to the University of Chicago's William Wimsatt for this definition.)
In the smoking causes cancer example, numerous studies, with large samples, across different populations, have found a strong correlation between smoking and cancer. Further, many studies of lung cancer patients (from all over the world) find "smoking gunk" in the lungs of people who are smokers. So Wimsatt's criteria are met: Lots of evidence that is more independent than not.
What does robustness look like in a business setting? Some guidelines:
- Several analyses that use different data sources, analysis methods, and more are available and reviewed.
- When those analyses are reviewed, common threads appear: findings that are invariant across the different studies, sometimes even identical.
- In examining what is invariant across the analyses, the conditions that must exist for those common threads are understood. Where there are differences, those too are understood and explained.
Years ago, for example, when I was in the paperboard business, I recommended a large price increase for one of the company's important profit makers - something called "Mist White," my company's trade name for mottled white linerboard. This was the paperboard commonly seen on the outside of cases of paper used then in Xerox machines and now in laser printers. Up until that point, management had limited price increases for fear of "killing the golden goose." The concern was that higher profits would lead our competitors to add capacity.
At the time, there were just three other paper mills in the world that made mottled white linerboard. I looked at public data on how their mills were operated and found that in all cases, these mills normally made regular linerboard, the stuff used in everyday brown corrugated boxes. To make mottled white, they needed to buy bleached pulp on the open market.
For a number of reasons, we didn't.
For my company, bleached pulp was essentially free. That led to a simple formula, based on linerboard and bleached pulp prices, that showed the minimum price the competition needed to charge to justify making mottled white.
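To make that kind of calculation concrete, here is a minimal sketch of such a price-floor formula. The structure follows the logic described above, but the function name, parameters, and every number are hypothetical illustrations, not the figures from the actual analysis.

```python
def competitor_price_floor(linerboard_price, bleached_pulp_price,
                           pulp_tons_per_ton=0.25, conversion_cost=20.0):
    """Hypothetical sketch: the minimum mottled white price (per ton) at which
    a competitor that normally runs regular linerboard breaks even.

    The competitor gives up the linerboard price on every ton it diverts,
    must buy bleached pulp on the open market, and pays some extra
    conversion cost. All parameters are illustrative assumptions.
    """
    return (linerboard_price                           # opportunity cost of the base sheet
            + pulp_tons_per_ton * bleached_pulp_price  # purchased pulp per ton of product
            + conversion_cost)                         # extra cost to run the white grade

# Example with made-up prices (dollars per ton)
floor = competitor_price_floor(linerboard_price=400.0, bleached_pulp_price=600.0)
print(f"Estimated competitor price floor: ${floor:.0f}/ton")
```

Priced below a floor like that, mottled white is not worth a competitor's capacity; the point of the checks that follow was to confirm that such a simple model actually held.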
Next, I did several analyses of historical pricing over different time periods and found that the pricing model predicted competitor pricing in all cases, including when different data sources were used.
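The historical check can be pictured the same way: compute the model's floor for each period and compare it with what competitors actually charged. Again, every figure below is invented purely to show the shape of the test, not the data that was used.

```python
# Hypothetical backtest of the price-floor model against made-up history.
# Each row: (period, linerboard $/ton, bleached pulp $/ton, observed competitor price $/ton)
history = [
    ("Period 1", 380.0, 550.0, 545.0),
    ("Period 2", 400.0, 600.0, 580.0),
    ("Period 3", 420.0, 580.0, 590.0),
]

for period, linerboard, pulp, observed in history:
    predicted_floor = linerboard + 0.25 * pulp + 20.0  # same illustrative formula as above
    status = "holds" if observed >= predicted_floor else "violated"
    print(f"{period}: floor ${predicted_floor:.0f}, observed ${observed:.0f} -> model {status}")
```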
Next, my company tried raising prices. We monitored orders, competitor prices and so forth. Volume held and my pricing model became company policy.
Did we have definitive, scientific proof? No.
If we required that, we'd still be waiting. What we did have, however, was robustness: The entire universe of competitors was studied; the model was tested against historical data from different sources, over different time periods; and the price increase was tried in a "live" test before becoming policy.
A few tips:
- Lots of confirming data from the same data set does not make a finding robust. I've written before about the limits of small sample sizes and overly restrictive selection criteria in the data.
- Consider collecting data serially, in order to make adjustments along the way and to limit cost. For example, when I worked for Kraft Foods, we developed Miracle Whip in a squeeze bottle. First, market research suggested customers wanted the product. Then, competitors came out with it and customers bought the product. Next, R&D developed a bottle based on "squeeze" engineering and tested the formulas, etc., on paper. Then, prototype squeeze bottles were made by a vendor and tested in the lab. When we got to live trials with employees (and after fixing a persistent problem with people dropping the bottle), we felt we had gathered enough robust data and were ready to launch.
- Keep the size of the decision in mind. If the stakes are small and/or the decision can be undone, don't get hung up on too much robustness.
Real business decisions are often not straightforward or simple. Perfect data never exists, time is short, and resources are limited.
And while "scientific proof" may never be available, looking for robustness can lead to better decisions, fewer mistakes and less time wasted on dead-end paths.