Nate Silver Didn’t Blow It (and Two Important Lessons for Us)

Recently, I saw an article in one of my LinkedIn groups (Marketing Science) that really struck a nerve with me: “Why Nate Silver Blew It” posted by Jeff Greenfield. I almost never comment on any posts, although my early New Year’s resolution is to share my limited wit and wisdom more. Here’s my comment:

As a regular reader of fivethirtyeight.com, I will defend Nate Silver (not that he needs me to defend him). He and others at 538 consistently defended their higher (than other outlets') estimates for Trump, and they were consistently open about their methodology and the uncertainty of the results.

That’s really all you can ask of him or of any analyst. The problem is that too many let the single-point estimate of the results or related sound bites do their thinking for them instead of reading the whole story.

BTW – great title and article – it got me riled up enough to post!

There are two lessons to be learned from this “miss”:

• Models are neither designed nor able to offer “truth” or perfect predictions of the future

• The algorithms we use are only as good as the human knowledge that goes into them

The Truth

In 1978, the statistician George Box wrote: “All models are wrong, but some are useful.”

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an "ideal" gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.

For such a model there is no need to ask the question "Is the model true?". If "truth" is to be the "whole truth" the answer must be "No". The only question of interest is "Is the model illuminating and useful?".
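Box’s gas-law example can be made concrete with a few lines of code. The sketch below compares the ideal gas prediction P = RT/V against the van der Waals equation, a more refined (but still imperfect) model, for one mole of CO2; the constants are the standard van der Waals values from reference tables, and the specific numbers here are illustrative assumptions, not anything from Box’s paper.

```python
# Box's point in miniature: the ideal gas law is "wrong" for any real gas,
# yet often a useful approximation. Constants for CO2 are the standard
# van der Waals values (an assumption pulled from reference tables).

R = 0.083145          # gas constant, L·bar/(mol·K)
A_CO2 = 3.640         # van der Waals 'a' for CO2, L²·bar/mol²
B_CO2 = 0.04267       # van der Waals 'b' for CO2, L/mol

def pressure_ideal(v, t):
    """Ideal gas law: P = RT / V (per mole)."""
    return R * t / v

def pressure_vdw(v, t, a=A_CO2, b=B_CO2):
    """Van der Waals: P = RT/(V - b) - a/V² — better, still not 'true'."""
    return R * t / (v - b) - a / v**2

if __name__ == "__main__":
    v, t = 1.0, 300.0  # 1 L/mol at 300 K
    p_i, p_v = pressure_ideal(v, t), pressure_vdw(v, t)
    print(f"ideal: {p_i:.2f} bar, van der Waals: {p_v:.2f} bar")
    # The ideal model is off by roughly 10% here — wrong, but still useful.
```

Under these conditions the ideal gas law overstates the pressure by about ten percent: a model that is demonstrably false, and still good enough for a great many engineering purposes. That is exactly the “illuminating and useful” standard Box asks us to apply.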

In other words, models should be used only as descriptions of reality, not as a substitute for it.

Any analysis should come with a complete understanding of the assumptions built into it and of the limitations of its predictions.

Human Inputs

All models and their interpretations need some form of human intervention. Humans must make sure all relevant inputs are included, or at least acknowledge what might be missing and what that may mean for the results. Humans must choose the right form of model. Humans must interpret the output. Humans must make the leap to insight. Humans must use it for something.

Brett Matheson