Is your conflicting evidence "signal" or "noise"?
Time to Act
September 2016 Vol. 5 No. 9
Charlie Goodrich
Hello,

Every business relies on models. But when you come across information that conflicts with your model, how do you decide whether it is a "signal" or just "noise"?

Knowing the difference - the subject of this month's newsletter - is an important and often underappreciated business skill.

I appreciate your comments. Simply reply to this email to send them to me.
Regards,

Charlie
Charlie Goodrich
Founder and Principal
Goodrich & Associates
In this issue...
Which is Flawed? Your Model, Or Anecdotal Evidence to the Contrary?

When does anecdotal evidence that conflicts with your model mean your model is wrong?

That was the essence of a recent discussion between two University of Chicago professors: Eugene Fama, who won the Nobel Prize for his work showing stock markets are efficient, and Richard Thaler, who says they are not efficient, or at least not as efficient as Fama claims.

(I am in the Fama camp, as most of Thaler's examples of market inefficiency arise when the conditions required for efficient markets temporarily don't hold. More on that later in today's newsletter.)

All of us run into the same question frequently, but usually in a different context. We use models explicitly to make financial predictions (for example) and implicitly to predict how individuals and groups will behave, whether they are customers, competitors or just people we encounter in daily life.

And that's the challenge. As I explained in my newsletter, "Listening to the Noise," when your "sampling" turns up information that conflicts with your model, how do you decide whether it is a "signal" or just "noise"?

More pragmatically, can conflicting anecdotal evidence, by itself, mean your model is wrong - and if so, when? Furthermore, when is it worth the time and effort to investigate?

Before investigating contrary anecdotal evidence, first consider these important questions:
  • Does the preponderance of evidence support the model? (i.e., Is what you are seeing just an outlier?) Remember, models are approximations of how things in the world work; they are not exact. A few aberrations do not disprove your model. If there are mountains of confirming data, move on.
  • Is the conflicting anecdote significant by itself? In the world of highly measured and controlled data, statistics can tell us whether an observed variance is statistically significant - there is an empirical definition of what is noise and what is a signal.

    However, most models used in business are not like that. Experience-based judgment must be used because there are no true statistical measures. Some business prediction models, such as financial forecasts, should, however, have an "after the result" (think month or quarter end) variance analysis, and the size of that variance can be put into perspective (see the short sketch following this list).
  • Are you using the model for something other than what it was intended for? A 13-week cash flow, for example, is not an appropriate model for projecting multi-year financials. Similarly, stereotypes of how individuals act don't aggregate well into predictions of large-group behavior.

    Perhaps you used the model you always use, without much thought. The mind trap of anchoring can make it easy to miss the signs that your model is the wrong one for its intended purpose. Or, maybe the model was appropriate at one point in time, but its purpose has changed and it now needs to be more robust or of a different type.

    In all cases, make sure you are applying the right tool to the right situation.
  • Is the model too simple? Did you take shortcuts because of time constraints and the demands of other priorities? In my article, "When Projecting Financial Statements, Don't Plug Cash To Make The Balance Sheet Balance," I provide an example of a financial modeling approach that won't forecast cash flow well.

    One sign of a model that is too simple is that the contrary anecdotes are of a similar nature. For example, when I was in the rent-a-car business, I noticed variances in interest expense for the car fleet that were all over the map. Interest expense was consistently different from what we expected, so I knew something was wrong. This was a signal, not just noise. We dug in and became much more detailed about the mix of cars and how that changed (which drives capital cost), and we synced more tightly with fleet management on exactly when and which cars would enter and exit the fleet.
  • Are there conditions under which the model doesn't apply? Most of Thaler's anecdotes that conflict with Fama's efficient market thesis occur when there is limited to no liquidity. But anyone with a thorough understanding of efficient market theory should know the model won't work during these times. (The flow of securities trades can no longer be considered continuous, so the mathematical assumptions underlying the model no longer apply.)

    As another example, underlying most sales incentive plans (all incentive plans, really) is an implicit model of human behavior: the perceived opportunity for greater compensation elicits ever-greater effort. But changes in the industry or economy - positive or negative - can throw off this model, even though such changes are not typically part of a compensation plan. When the industry is on an uptick, for example, causing sales to take off, salespeople who max out their current-year compensation will push deals out to the next period. Obviously, this has an impact on revenue forecasts, which may suffer the unintended effects of material changes in conditions.
  • Does the anecdote suggest - or do we already know - that the real world is a bit different? Some of Thaler's conflicting anecdotes come from assuming a constant risk premium (this was common thinking some time ago). Now we know, according to Fama, that the risk premium varies substantially over time. So Thaler didn't find conflicting anecdotes; he just confirmed the model was too simple, something that Fama already knew.

    When coming across conflicting anecdotes, check that your model is still current and shouldn't be updated with better, newer logic. Ask around. Note that once you dig in yourself to improve the logic, you have already decided that what you are seeing is a signal.
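
Before moving on, here is a minimal sketch, in Python, of the "after the result" variance analysis mentioned above. It assumes you keep a monthly record of forecast-versus-actual variances for a single line item; the figures and the rough |t| > 2 rule of thumb are illustrative assumptions, not a prescription:

    # Is a run of forecast variances "signal" or "noise"?
    # Assumes a monthly record of variances (actual minus forecast)
    # for one line item, e.g., fleet interest expense, in $000s.
    import math
    import statistics

    variances = [42.0, 35.5, 51.2, 38.9, 47.3, 44.1, 40.8, 49.6]  # illustrative

    avg = statistics.mean(variances)
    std_err = statistics.stdev(variances) / math.sqrt(len(variances))
    t_stat = avg / std_err  # standard errors between the average variance and zero

    # Rough rule of thumb: |t| > 2 suggests a consistent bias (a signal),
    # not random scatter around zero (noise).
    if abs(t_stat) > 2:
        print(f"t = {t_stat:.1f}: consistent bias - time to dig into the model")
    else:
        print(f"t = {t_stat:.1f}: looks like normal noise - move on")

A check like this captures the rent-a-car situation above: a variance that sits consistently on one side of zero is a signal, however noisy the individual months look.
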
All that said, when a conflicting anecdote passes the above tests, it might be time to investigate. Keep these things in mind when you do:
  • Make sure the conflicting anecdotes are material or important. By material, I mean a large variance. Importance is for you to decide, but if the consequences of a bad or poor model are small, the anecdotes may simply not be worth investigating.
  • When possible, quantify and track predictions. Doing so gives perspective on the meaningfulness of any conflicting data variances. Obviously, this will only work for quantitative models such as financial models.
  • Build an inventory. Rather than assessing each conflicting anecdote as you see it, build an inventory for periodic review (see the sketch following this list). If you see a common theme, an investigation is probably needed.
  • With new models, expect and plan for time needed to get the kinks out; there may well be logic or data problems. For example, it was many years after Fama's efficient market hypothesis became mainstream before researchers learned that the risk premium varies a lot over time.
  • Expect to spend more time investigating conflicting anecdotes for non-quantitative models, such as how people behave. In these cases, there is often no basis to assess materiality. Also, the fit of such models to reality is typically much looser than for quantitative models.
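
To make the inventory idea concrete, here is a minimal sketch in Python. The field names, sample entries, and the three-occurrence threshold are illustrative assumptions only:

    # A simple inventory of conflicting anecdotes for periodic review.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Anecdote:
        period: str  # e.g., "2016-09"
        model: str   # which model the observation conflicts with
        theme: str   # short label for what looked wrong
        note: str

    inventory: list[Anecdote] = []

    def log_anecdote(period: str, model: str, theme: str, note: str) -> None:
        """Record a conflicting observation instead of reacting to each one-off."""
        inventory.append(Anecdote(period, model, theme, note))

    def periodic_review(threshold: int = 3) -> list[str]:
        """Return themes that recur often enough to warrant investigation."""
        counts = Counter(a.theme for a in inventory)
        return [theme for theme, n in counts.items() if n >= threshold]

    log_anecdote("2016-07", "cash forecast", "interest expense high", "fleet mix?")
    log_anecdote("2016-08", "cash forecast", "interest expense high", "again")
    log_anecdote("2016-09", "cash forecast", "interest expense high", "third month")
    print(periodic_review())  # ['interest expense high']

The point is not the code; it is the discipline of logging each anecdote and reviewing the log on a schedule, so recurring themes surface as signals.
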
Overall, it's important to consider and decide whether conflicting anecdotal evidence truly is a sign of a flaw in your model or simply an aberration. Knowing when we are looking at "signal" rather than just "noise" is a key and often underappreciated business skill.

Heard on the Street

For more depth on the question of security market efficiency, watch (or read) the conversation between Eugene Fama and Richard Thaler, here, in the Chicago Booth Review.

About Us

Goodrich & Associates is a management consulting firm. We specialize in helping our business clients solve urgent liquidity problems. Our Founder and Principal, Charlie Goodrich, holds an MBA in Finance from the University of Chicago and a Bachelor's Degree in Economics from the University of Virginia, and has over 30 years' experience in this area.


To ensure that you continue to receive emails from us, please add
charlie@goodrich-associates.com to your address book today.

Goodrich & Associates respects your privacy.
We do not sell, rent, or share your information with anybody.

Copyright © 2016 Goodrich & Associates LLC. All rights reserved.

For more on Goodrich & Associates and the services we offer, click here.

Newsletter developed by Blue Penguin Development