Saturday, February 4, 2012

The problem with expert judgment

It's a Saturday afternoon, and G and I are sitting drinking espresso and tea. 

I finished reading Daniel Kahneman's Thinking, Fast and Slow earlier this week. Kahneman is an Israeli-American psychologist and winner of the Nobel Prize in Economics. G and I saw him speak at CUNY a month or two ago. In fact, there is now a video of that encounter with David Brooks, and it is well worth watching for his funny, humane, and insightful remarks.

It is a long, very smart book that sums up much of a life's work on people's mistakes in judgment, so I am just going to pick out a few things. And here's the shock:

Expert judgment, um, sucks

He has a long series of chapters on when you can trust expert judgment. 

Pretty much never when it comes to picking stocks, he says, although that is no surprise after three decades of academic debate over various versions of efficient market theory. And very rarely when it comes to economic or long-term political forecasts, he says, especially by "hedgehogs" who try to relate everything to one big model. That also is familiar to me from Philip Tetlock's book Expert Political Judgment: How Good Is It? How Can We Know? 

But what really struck me was a discussion of a long tradition of research originating in psychology and medicine in the 1950s. Paul Meehl compared the performance of doctors in clinical diagnosis against simple algorithms based on five or six factors. 

The simple algorithms beat expert judgement almost every time.

Not surprisingly, Meehl’s book provoked shock and disbelief among clinical psychologists, and the controversy it started has engendered a stream of research that is still flowing today, more than fifty years after its publication. The number of studies reporting comparisons of clinical and statistical predictions has increased to roughly two hundred, but the score in the contest between algorithms and humans has not changed. About 60% of the studies have shown significantly better accuracy for the algorithms. The other comparisons scored a draw in accuracy, but a tie is tantamount to a win for the statistical rules, which are normally much less expensive to use than expert judgment. No exception has been convincingly documented. 

Clearly this is a problem for much of the professional classes, so perhaps it is not surprising that the result is not better known.


Why are experts inferior to algorithms? One reason, which Meehl suspected, is that experts try to be clever, think outside the box, and consider complex combinations of features in making their predictions. Complexity may work in the odd case, but more often than not it reduces validity. Simple combinations of features are better. Several studies have shown that human decision makers are inferior to a prediction formula even when they are given the score suggested by the formula! They feel that they can overrule the formula because they have additional information about the case, but they are wrong more often than not. 
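To make the idea concrete, here is a sketch (not from the book) of the kind of simple formula Meehl and his successors studied: a handful of cues combined with equal weights, in the "improper linear model" style later championed by Robyn Dawes. The cue names and values below are hypothetical illustrations, not real diagnostic factors.

```python
def simple_risk_score(cues):
    """Unit-weighted average of standardized cues; higher means higher risk.

    No clever interactions, no overrides -- just add up the factors,
    which is roughly what beat expert judgment in Meehl's comparisons.
    """
    return sum(cues.values()) / len(cues)

# Hypothetical patient: each cue already standardized to roughly [-1, 1].
patient = {
    "test_result": 0.8,
    "age_factor": 0.4,
    "prior_history": 1.0,
    "symptom_severity": 0.6,
    "family_history": -0.2,
}

print(round(simple_risk_score(patient), 2))  # prints 0.52
```

The point of the research is precisely that a mechanical rule this crude, applied consistently, tends to match or beat the expert who "knows more" about the individual case.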

There is one silver lining in this black cloud for professionals, though. When there are stable regularities, intuition may still be very useful. Kahneman discusses a long exchange with another researcher, Gary Klein. Klein studies the intuitive abilities of fire chiefs and ER professionals, people who may actually have valid and substantive insight. 

The issue may turn on pattern recognition (but only if there really are patterns):

The model of intuitive decision making as pattern recognition develops ideas presented some time ago by Herbert Simon, perhaps the only scholar who is recognized and admired as a hero and founding figure by all the competing clans and tribes in the study of decision making. I quoted Herbert Simon’s definition of intuition in the introduction, but it will make more sense when I repeat it now: “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.”

So when can you trust expert judgment, or rely on intuition?

Remember this rule: intuition cannot be trusted in the absence of stable regularities in the environment.  ... If the environment is sufficiently regular and if the judge has had a chance to learn its regularities, the associative machinery will recognize situations and generate quick and accurate predictions and decisions. You can trust someone’s intuitions if these conditions are met. 


