# Month: November 2012

## Antifragile

I’m not usually one to consume media on the first day it becomes available, but I’ve been anticipating Nov. 27 for quite a while now. Today, Nassim Nicholas Taleb’s new book, Antifragile, came out, and I’ve been listening on Audible. Taleb’s previous books, Fooled by Randomness and The Black Swan, set the stage for this latest work, in which he continues his strident critique of those who claim to be able to predict (or control) the future. Instead, he argues, we should accept random systemic shocks as a given and learn how to build systems that benefit, rather than suffer, from volatility. As a former trader, Taleb often invokes the property of convexity: holding a financial instrument like an option allows one to benefit from the upside of an investment if it succeeds while being protected from the downside if it fails. He uses the “tinkering” and experimenting done by living organisms in the process of evolution as the archetypal example. In order to survive, species must adapt by “trying out” various random genotypes. There is a large benefit to hitting on an adaptive trait — the ability to reproduce, so your offspring will likewise be endowed — while the unlucky others simply die out.
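
The convexity idea is easy to see in numbers. Here is a minimal sketch (the strike price and premium are my own illustrative values, not anything from the book) comparing the asymmetric payoff of a call option to the symmetric payoff of just owning the asset:

```python
# Illustrative values, not from the book.
K = 100.0        # strike price (hypothetical)
premium = 5.0    # cost of buying the option (hypothetical)

def call_payoff(price):
    """Net payoff of the call: upside is open-ended, downside is capped at the premium."""
    return max(price - K, 0.0) - premium

def stock_payoff(price, bought_at=100.0):
    """Payoff of holding the asset itself: fully symmetric exposure."""
    return price - bought_at

for price in (50, 90, 100, 110, 150):
    print(price, call_payoff(price), stock_payoff(price))
```

However badly the investment goes, the option holder never loses more than the premium — that bounded downside with unbounded upside is the convexity Taleb is talking about.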

## Getting to St. Petersburg

I really liked this blog post on the St. Petersburg Paradox. It highlights something that is often overlooked when we think about randomness: the fact that a system may not be *ergodic*, in the sense that its “average” behavior may differ depending on whether you take an average over space or an average over time. We usually think these are equivalent. For example, if you were a producer of a movie and you needed an establishing shot of an open ocean, you could take a helicopter and hover over one patch of water with your camera fixed (a time average). If you were making an indie film and your budget didn’t allow for a helicopter, you could get a large photograph of the ocean and pan your camera over it (a space average). Your audience would have a hard time distinguishing these two cases. That means the process is *ergodic*. I first heard the word when I was introduced to Hillel Furstenberg (one of the only people I have spoken to on multiple occasions who has his own Wikipedia page). But non-ergodic randomness exists, and it can create seeming paradoxes if we are not ready for it.
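
A quick sketch of what a non-ergodic gamble looks like (the multipliers here are my own example, not taken from the post): each round your wealth is multiplied by 1.5 on heads or 0.6 on tails. Averaged across many players, the bet looks profitable; followed through time by one player, it is ruinous.

```python
import math

# Hypothetical multiplicative gamble: 50/50 chance of x1.5 or x0.6 each round.
up, down = 1.5, 0.6

# Ensemble ("space") average: the expected multiplier per round across many players.
ensemble_growth = 0.5 * up + 0.5 * down      # 1.05 -> looks like a 5% edge per round

# Time average: a single player's typical long-run growth rate per round is the
# geometric mean of the multipliers, not the arithmetic mean.
time_growth = math.sqrt(up * down)           # sqrt(0.9) ≈ 0.949 -> wealth decays

print(ensemble_growth, time_growth)
```

The two averages disagree, so the process is non-ergodic: the “average player” gets rich while almost every individual player goes broke.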

## We are all Bayesians Now

Nate Silver, head mathemagician of the 538 blog, has won a great deal of attention for his perfect 51-for-51 (including DC) state-by-state predictions in the most recent presidential election. This has led to a spike in sales for his book, which covers a wide range of topics related to the business of forecasting and the standards we should require of statistical evidence before we take predictions seriously. One of the issues covered in the book, but generally overlooked, is a subtle but important shift occurring in science: the reduced emphasis on statistical significance, and the rise to prominence of Bayesian analysis. The webcomic xkcd nails it in two comics:

This second one came out recently:

The point is that tests for rare events (like the sun exploding) can easily get swamped by false positives. This is a famous problem in statistics called the base-rate fallacy, and it has ensnared many people. For example, a medical test that correctly indicates the presence of a disease 99% of the time (sensitivity) and also correctly returns a negative result for 99% of healthy people (specificity) will NOT be as definitive as most assume if the disease is rare enough. Let’s suppose the base rate of the disease (the “prior probability”) is one in a thousand. If 100,000 people are tested (100 of whom are actually sick, and 99,900 healthy), 99 will be correctly labeled as sick, but so will 999 healthy people (99,900 × 0.01) who receive false positives. Therefore, the chance that someone is really sick, given that he or she tested positive, is still only about 9% [= # of true positives / # of all positives]. In modern life, we test many propositions that have only a small chance of being correct: pseudo-science, novel scientific theories, new particles created by a collider. The solution is to use Bayesian reasoning, which takes into account the prior probability that a proposition is true. Extraordinary claims require extraordinary evidence, precisely because the chance is so low, to start with, that all the established principles a new claim would overturn are actually wrong.
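
The disease-test arithmetic above is just Bayes’ theorem applied to counts, and it fits in a few lines:

```python
# The numbers from the example above, done explicitly.
population = 100_000
base_rate = 1 / 1000          # prior probability of having the disease
sensitivity = 0.99            # P(test positive | sick)
specificity = 0.99            # P(test negative | healthy)

sick = population * base_rate                  # 100 people
healthy = population - sick                    # 99,900 people

true_positives = sick * sensitivity            # 99
false_positives = healthy * (1 - specificity)  # 999

# P(sick | positive) = true positives / all positives
posterior = true_positives / (true_positives + false_positives)
print(posterior)   # ≈ 0.09, i.e. about 9%
```

Despite the impressive-sounding 99% accuracy, a positive result only raises the probability of disease from 0.1% to about 9% — the prior dominates.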

The established method, however, which achieved almost dogmatic stature in science until recently, is the test of “statistical significance,” in which an arbitrary threshold (usually 5%) is chosen, and a result is declared significant — the null hypothesis “rejected” — when the probability that data “at least as extreme” as what the experiment obtained could have arisen purely by chance is less than that threshold.

As Silver points out in the book, among the reasons “statistical significance” gained such a strong hold (so much so that scientific results are far more likely to be submitted for publication if they could have been obtained by chance 4.9% of the time rather than 5.1%) is that it doesn’t require an estimate of the prior probability. This was seen as somehow more objective, especially when there is no obvious baseline rate to use. However, as the xkcd comics show, this way of thinking can lead to absurd results. Better to use Bayesian methods, as Nate shows.
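
To see how a 5% threshold goes wrong when priors are ignored, here is a back-of-the-envelope sketch (the proportion of true hypotheses and the statistical power are my own assumed numbers, not Silver’s):

```python
# Assumed numbers for illustration only.
hypotheses = 1000
prior_true = 0.10     # suppose only 10% of tested ideas are actually true
alpha = 0.05          # significance threshold
power = 0.80          # assumed chance a real effect reaches significance

true_effects = hypotheses * prior_true       # 100 real effects
null_effects = hypotheses - true_effects     # 900 duds

true_hits = true_effects * power             # 80 genuine discoveries
false_hits = null_effects * alpha            # 45 false positives at p < 0.05

false_discovery_rate = false_hits / (true_hits + false_hits)
print(false_discovery_rate)   # ≈ 0.36
```

Even with every test run honestly at p < 0.05, over a third of the “significant” findings in this scenario are false — exactly the jelly-bean problem the comics lampoon.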

## Special Cases

The purpose of physics is to describe as much of nature as possible in as few equations as possible. Scientists dream about reducing all phenomena to a single master equation, preferably one that fits on the back of an index card. (This is not as crazy as it sounds: Maxwell’s equations, which tell you everything you need to know about electricity and magnetism, can be condensed into a single tensor expression.) For this reason, it bothers me a little when some laws, which are really just special cases of other, more general laws, are taught as separate concepts for no compelling reason (besides historical accident). Usually, these specific laws are named for whoever discovered them first (or at least discovered them later but ended up with the credit anyway). For example, Pascal’s law for fluids says that the pressure in an enclosed, incompressible fluid of uniform density is the same at all points at the same depth (and increases linearly as the depth increases). As we have seen, Blaise Pascal was especially talented at getting things named for him. Along with his triangle, he also has a wager and a theorem. But in the case of fluids, Pascal’s law is nothing more than a special case of Bernoulli’s principle, obtained by requiring the velocity of the fluid to be zero (the hydrostatic case). Even worse is Torricelli’s law, which you get by just setting the outside pressures equal.

What is interesting about both Pascal’s and Torricelli’s discoveries is that fluids can be used to transmit energy. In the case of Pascal, pressure can be transmitted. This is the basis for hydraulic lifts:
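
The force multiplication in a hydraulic lift follows directly from Pascal’s law: the pressure is the same at both pistons, so the output force scales with the area ratio. A minimal sketch, with made-up piston sizes:

```python
# Hypothetical example values for the two pistons.
F_in = 100.0     # newtons applied to the small piston
A_in = 0.01      # m^2, small piston area
A_out = 0.5      # m^2, large piston area

# Pascal's law: pressure is transmitted undiminished through the fluid.
pressure = F_in / A_in           # same at both pistons (same depth)
F_out = pressure * A_out         # force delivered by the large piston

print(F_out)   # 5000 N: a 100 N push supports a 5000 N load
```

The trade-off, by conservation of energy, is distance: the large piston rises only 1/50th as far as the small one is pushed down.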

For Torricelli, the velocity of the water coming out of the spigot is equal to the velocity the water would have if you let it fall from the level of the surface down to the spigot. Indeed, even though they are not the same water molecules, the end result is the same (the water level drops and water flies out a certain distance below), so conservation of energy requires that the velocities be equal.
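
That equivalence gives the familiar formula v = √(2gh). A quick check, using an example height of my own choosing:

```python
import math

g = 9.81   # m/s^2, gravitational acceleration
h = 2.0    # m, height from the water surface down to the spigot (example value)

# Torricelli's law: efflux speed from the spigot.
v_spigot = math.sqrt(2 * g * h)

# Free fall through the same height: from m*g*h = (1/2)*m*v^2, v = sqrt(2*g*h).
v_freefall = math.sqrt(2 * g * h)

print(round(v_spigot, 2))   # ≈ 6.26 m/s, identical to the free-fall speed
```

Both speeds come from the same energy bookkeeping — potential energy lost by the dropping surface equals kinetic energy of the jet — which is why the formulas coincide.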