It often takes time for new ideas to percolate out of academia into the “real world,” but denizens of the Ivory Tower can remain oblivious to this lag. A stark example is the rapid fall from grace of the Normal Distribution. A huge swath of statistics and finance has historically been based on assumptions of normal distributions, for three main reasons: (1) the math is tractable, (2) the solutions are unique, and (3) the central limit theorem predicts that, under certain assumptions, the normal distribution is the one to expect.

However, anyone familiar with the work of Nassim Nicholas Taleb (who called the Normal Distribution an “Intellectual Fraud”) and Emanuel Derman will know that financial models with assumptions of normality built in can fail spectacularly – as they did during the financial crisis – because they dramatically underestimate the probability of “tail events.” Under a normal distribution, such extreme events are incredibly (exponentially) unlikely, but, of course, they do happen in real life. Often this is because of contagions that spread throughout the system, or some other previously unknown phenomenon that pops up.

Lately, even Jamie Dimon, the head of JPMorgan Chase, took a backhanded swipe at the models by saying that recent fluctuations in the US Treasury market were unprecedented and only “an event that is supposed to happen only once in every 3 billion years or so.” One host on the Slate Money Podcast (jump to 36 minutes) chided him for making such a hackneyed point: of course everyone knows that normal distributions don’t work here! Except, as another host was quick to point out, the entire edifice of standard “Value at Risk” models, which are still the industry standard for deciding how much risk is too much, is built on assumptions of normality. So don’t be so fast to assume that the news has spread.
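To see why normal models produce claims like “once in 3 billion years,” here is a minimal sketch of how a k-standard-deviation daily move translates into an expected waiting time under the normality assumption. The function name `upper_tail_prob` and the figure of 252 trading days per year are illustrative assumptions, not anything from the article:

```python
from math import erfc, sqrt

def upper_tail_prob(k):
    """P(Z > k) for a standard normal variable, computed via the
    complementary error function: P(Z > k) = 0.5 * erfc(k / sqrt(2))."""
    return 0.5 * erfc(k / sqrt(2))

# Assumption: one observation per trading day, 252 trading days per year.
TRADING_DAYS_PER_YEAR = 252

for k in (3, 5, 7):
    p = upper_tail_prob(k)
    # Expected waiting time (in years) until a move this large, if returns
    # really were normal and independent day to day.
    years = 1 / (p * TRADING_DAYS_PER_YEAR)
    print(f"{k}-sigma daily move: P = {p:.2e}, expected once every {years:,.0f} years")
```

A roughly 7-sigma daily move is the kind of event a normal model rates at once per billions of years, which is exactly the sort of number Dimon was quoting; heavy-tailed real markets produce such moves far more often.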