Statistician and philosopher Nassim Nicholas Taleb critiques “behavioral” economics and finance by examining the differences between “binary forecasts” and “real-world payoffs” in a recent paper for the International Journal of Forecasting.
Much of the argument will be familiar to those who have some acquaintance with Taleb’s work as it has developed over the last 20 years. He begins with a hypothetical conversation between a trader and a manager. The manager asks the trader whether he believes the market is headed up or down. The trader says “up.” The manager discovers, though, that the trader has a short position. Ah, contradiction!
Well, no. As Taleb explains on the trader’s behalf, it may be perfectly rational to short an asset if you believe it is most likely headed up. The up/down question is binary. In the real world, though, the payoffs are asymmetrical. There may be a 60% chance that an asset’s value will increase (somewhat) over the coming period. But it may also be the case that, if the 40% possibility of a downward move does eventuate, the downward move will be a very large one. In that situation, a short position may be rational. More dramatically, a low-priced out-of-the-money option can be very attractive even if it has only a one in a thousand chance of paying off.
Payoffs, in a word, are non-linear.
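Taleb’s trader example can be made concrete with a small expected-value calculation. The probabilities and move sizes below are illustrative numbers of my own, not figures from the paper:

```python
# Hypothetical numbers for the trader example: the asset is "most likely up"
# (60% chance of a small gain), yet shorting it has positive expected value,
# because the rarer down move is much larger than the likely up move.
p_up, up_move = 0.60, 0.01       # 60% chance the asset gains 1%
p_down, down_move = 0.40, -0.10  # 40% chance it loses 10%

long_ev = p_up * up_move + p_down * down_move
short_ev = -long_ev  # a short position earns the negative of the asset's move

print(f"P(up) = {p_up:.0%}, but long EV = {long_ev:+.2%}")
print(f"short EV = {short_ev:+.2%}")
```

Answering “up” to the binary question and shorting the asset are perfectly consistent here: the direction is probably up, but the expected payoff of being long is negative.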
This, as noted, will not be new to many readers of Taleb’s work. It ties in with his calling-card view that the tails of probability curves are a good deal fatter than those of the Gaussian bell curve that has become our “common sense.” The fatness of tails, in contrast with our expectation of thin tails, is why we are constantly, in Taleb’s phrase, “fooled by randomness.”
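The gap between thin and fat tails can be sketched with a quick Monte Carlo comparison. A Student-t distribution with few degrees of freedom is a standard stand-in for a fat-tailed world; the sample sizes and threshold below are arbitrary choices for illustration:

```python
import math
import random

random.seed(42)
N, nu, threshold = 200_000, 3, 4.0  # samples, t degrees of freedom, "4-sigma"

def student_t(nu):
    # t-variate: standard normal divided by sqrt(chi-square / nu);
    # gammavariate(nu/2, 2) draws a chi-square with nu degrees of freedom
    z = random.gauss(0.0, 1.0)
    v = random.gammavariate(nu / 2.0, 2.0)
    return z / math.sqrt(v / nu)

normal_hits = sum(abs(random.gauss(0, 1)) > threshold for _ in range(N))
t_hits = sum(abs(student_t(nu)) > threshold for _ in range(N))

print(f"Gaussian  P(|X| > 4): ~{normal_hits / N:.5f}")
print(f"Student-t P(|X| > 4): ~{t_hits / N:.5f}")
```

Under the Gaussian, a 4-sigma move is a once-in-a-blue-moon event; under the fat-tailed distribution it happens orders of magnitude more often. Treating the second world as if it were the first is how one gets “fooled by randomness.”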
Taking on Behavioral Economics/Finance
In the new article, Taleb is taking on a feature of contemporary “behavioral economics” expounded for example by Kahneman and Tversky in their 1979 paper on “prospect theory,” and by many others working through the financial implications of their theories since. The behavioral contention is that people tend to err by imagining that the tails are fatter than they actually are. The difference is stark. The behavioral school thinks we imagine overly fat tails; Taleb contends that we are biased in the other direction: we tend to believe in Gaussian thin tails. If either view is right, then the smart way to bet is against the bias, picking up the money that the victims of the bias are leaving on the table. So, which view is right?
Taleb says that Kahneman and Tversky are guilty of what he calls the “ludic fallacy.” The behaviorists derive their views through psychological experiments where participants play games carefully observed by the grad students, who report the game-playing behavior to a professor who writes it up as a scholarly article. Generalizing from games to the real world is the problem.
“The psychological results might be robust,” Taleb says, “in the sense that they replicate when repeated … but all the claims outside these conditions and extensions to real risks will be an exceedingly dubious generalization….”
An even more important point may be this: you can overestimate the likelihood of an event while underestimating the payoff from betting on it. There may be two different “tails” involved, a probability tail and a payoff tail. You can think the possibility of a crash is larger than it really is while also underestimating the gain that would arise from betting on that crash.
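The “two tails” point can be put in numbers. In this illustrative sketch (all figures are hypothetical, not from the paper), a forecaster overestimates the probability of a crash yet still undervalues the bet on it, because the size of the payoff is underestimated by even more:

```python
# Hypothetical crash bet, e.g. a cheap far out-of-the-money put.
cost = 1.0                                 # price paid for the bet
believed_p, believed_payoff = 0.05, 10.0   # belief: 5% chance of a 10x payoff
true_p, true_payoff = 0.02, 100.0          # reality: 2% chance of a 100x payoff

believed_ev = believed_p * believed_payoff - cost
true_ev = true_p * true_payoff - cost

print(f"believed EV = {believed_ev:+.2f}")  # negative: bet looks like a loser
print(f"true EV     = {true_ev:+.2f}")      # positive: bet is actually a winner
```

The forecaster here thinks the crash is more than twice as likely as it really is, and still walks away from a profitable bet: the error in the payoff tail dominates the error in the probability tail.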
Morgan Stanley and Machine Learning
Underestimating the nature and extent of a loss can be disastrous, even if you’re right about the binary question of an asset’s direction. As Taleb observes, “In 2007 the Wall Street firm Morgan Stanley decided to ‘hedge’ against a real estate ‘collapse,’ before the market in real estate started declining. The problem is that they didn’t realize that ‘collapse’ could take many values.”
It didn’t work out well for them. Morgan Stanley’s hedges would have worked against a modest decline, but the decline that actually arrived was far larger, and the firm lost $10 billion.
Taleb also offers some good news for risk managers: machine learning is on the right track. There are “various machine learning functions that produce exhaustive non-linearities,” that is, functions flexible enough to accommodate fat tails. They do this through “cross-entropy.”
Cross-entropy is a concept from information theory: a measure of the difference between two probability distributions. It can be employed (as Taleb explains in a footnote) to “gauge the difference between the distribution of a variable and the forecast,” which captures the non-linearity of a payoff function.
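A minimal sketch of the idea: cross-entropy H(p, q) = -Σ p(i) log q(i) measures the expected “surprise” when outcomes actually follow distribution p but the forecast was q. The three-outcome distributions below are my own illustration, not numbers from Taleb’s paper:

```python
import math

def cross_entropy(p, q):
    # H(p, q) = -sum_i p_i * log(q_i); terms with p_i == 0 contribute nothing
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

p_true = [0.02, 0.38, 0.60]    # actual frequencies: crash, small loss, gain
q_thin = [0.001, 0.399, 0.60]  # thin-tailed forecast: nearly ignores the crash
q_fat  = [0.05, 0.35, 0.60]    # fat-tailed forecast: overweights the crash

# The thin-tailed forecast is penalized heavily for assigning almost no
# probability to the rare event: -log(q) blows up as q -> 0.
print(f"H(p, thin-tailed forecast) = {cross_entropy(p_true, q_thin):.4f}")
print(f"H(p, fat-tailed forecast)  = {cross_entropy(p_true, q_fat):.4f}")
```

This is why a loss function of this shape pushes a model to take tails seriously: assigning near-zero probability to a rare event that does occur costs far more than modestly overweighting it, which is exactly the asymmetry Taleb wants risk takers to respect.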