Few things are more exciting to me than an intellectual throw-down between minds I respect on topics of fundamental importance. Recently, Nate Silver, who makes his living building prediction models, came under fire from Nassim Nicholas Taleb, who makes his living breaking them down. At issue was the foundational epistemological question of how much we really know, or rather, how much confidence we may place in what we think we know. Taleb – who is famous for his books Fooled by Randomness, The Black Swan, and Antifragile – lives by the watchword “incerto,” the idea that we should be much more circumspect about what we think we understand. He popularized the concept of a “Black Swan”: a momentous event that could not have been anticipated from any previous observations. Consistent with this, Taleb has successfully implemented trading strategies that benefit from extreme, unexpected events whose likelihood the market has undervalued. He has also long inveighed against the widespread misuse of Gaussian distributions, especially those implicated in the 2008 financial crisis. Nate Silver’s models are much more sophisticated than naive Gaussian fits, so I was interested when Taleb came after them:
1/@FiveThirtyEight : 55% “probability” for Trump then 20% 10 d later, not realizing their “probability” is too stochastic to be probability.
— NassimNicholasTaleb (@nntaleb) August 6, 2016
The basic idea is that 538’s win probabilities for each candidate are too volatile to be right. Instead, Taleb suggested a model based on his specialty: pricing options.
The result is a much more parsimonious model, in which the chances are stuck at 50/50 until just before election day, at which point they jump to near certainty:
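Taleb’s option-pricing intuition can be illustrated with a toy model – my own sketch, not his actual math. Treat the candidate’s polling lead as a driftless random walk and price the “binary option” that the lead is still positive on election day. When volatility is high relative to the time remaining, the implied probability hugs 50/50, and it only approaches certainty as election day arrives. The function name and parameter values below are illustrative assumptions:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def win_probability(lead, days_left, daily_vol):
    """P(the lead is still positive on election day), treating the
    lead as a driftless random walk with the given daily volatility."""
    if days_left == 0:
        return 1.0 if lead > 0 else 0.0
    return norm_cdf(lead / (daily_vol * sqrt(days_left)))

# A 2-point lead with 90 volatile days to go barely moves the
# forecast off a coin flip...
p_early = win_probability(lead=2.0, days_left=90, daily_vol=1.0)
# ...but the same lead the day before the vote is nearly decisive.
p_late = win_probability(lead=2.0, days_left=1, daily_vol=1.0)
print(p_early, p_late)
```

Under these assumed numbers, the early forecast stays near 58% despite the lead, while the eve-of-election forecast is close to 98% – the qualitative shape Taleb argues a well-behaved probability should have.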
Silver’s podcast riposte was swift and harsh: when properly modeled – particularly by correctly accounting for the correlations between states – polling data are actually very good predictors of the eventual election result. Especially after the conventions, the probability for a candidate to win can be forecast with at least some confidence.
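To see why inter-state correlations matter so much, here is a hypothetical Monte Carlo – a toy of my own, not 538’s methodology. Three identical swing states each give the favorite a 2-point expected lead, and the polling error is split between a shared national component and independent state-level noise. The more the errors are shared, the more all-or-nothing the outcome, and the lower the favorite’s chance of winning at least two of the three:

```python
import random

def simulate(n_sims, rho_shared, n_states=3, mean_lead=2.0, total_vol=3.0):
    """Toy election: the favorite must win >= 2 of 3 identical swing
    states.  rho_shared is the fraction of error variance that is a
    shared national swing (0 = independent states, 1 = fully correlated)."""
    shared_sd = (rho_shared * total_vol**2) ** 0.5
    indep_sd = ((1 - rho_shared) * total_vol**2) ** 0.5
    wins = 0
    for _ in range(n_sims):
        national = random.gauss(0, shared_sd)  # error common to all states
        states_won = sum(
            mean_lead + national + random.gauss(0, indep_sd) > 0
            for _ in range(n_states)
        )
        wins += states_won >= 2
    return wins / n_sims

random.seed(0)
p_indep = simulate(100_000, rho_shared=0.0)  # independent state errors
p_corr = simulate(100_000, rho_shared=1.0)   # one shared national error
print(p_indep, p_corr)
```

With these assumed numbers, independent errors give the favorite roughly an 84% chance, while a fully shared error drops it to about 75% – ignoring correlations makes a forecast overconfident, which is exactly why Silver’s models go to the trouble of estimating them.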
This debate reminded me of Sean Carroll’s new book, The Big Picture. In it, he makes the astounding claim that the “Core Theory” of physics, which includes quantum field theory and relativity, can explain every experiment ever performed on Earth.
The reason for such incredible predictive power is that quantum field theory itself provides a recipe for including progressively smaller correction terms, which converge to a very good degree of accuracy. Carroll says that physics is “simple” compared with other fields, like biology or economics, in that reductionism works fantastically well, and every particle, and interaction between them, can be understood separately from the rest of the Universe. This is why the dimensionless magnetic moment of the electron is known to an uncertainty of a few parts per trillion.
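To make that convergence concrete, here is a quick computation of the electron’s anomalous magnetic moment, a_e = (g − 2)/2, as a power series in α/π. The coefficients below are the published QED values, truncated (the first is Schwinger’s exact α/2π result; the higher ones come from multi-loop calculations), and I am ignoring the tiny fifth-order, hadronic, and electroweak pieces:

```python
from math import pi

alpha = 1 / 137.035999  # fine-structure constant (CODATA value, truncated)
x = alpha / pi          # the QED expansion parameter

# Perturbative coefficients of a_e = (g-2)/2 = sum_n C_n * (alpha/pi)^n,
# truncated from the multi-loop QED literature.
coeffs = [0.5, -0.328478965, 1.181241456, -1.9113]

partial_sums = []
total = 0.0
for n, c in enumerate(coeffs, start=1):
    total += c * x ** n
    partial_sums.append(total)
    print(f"order {n}: a_e ≈ {total:.12f}")
```

Because the expansion parameter α/π ≈ 0.0023 is so small, each successive term is hundreds of times smaller than the last, and just four orders reproduce the measured a_e ≈ 0.00115965218 to better than a part per billion.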
This is, of course, not to say that we have (or will ever have) the functional omniscience of Laplace’s Demon, who could turn perfect knowledge of the laws of nature and the configuration of all particles at one moment into a perfect prediction of their positions at every other time. The wild undulations of chaos theory, and the inherent uncertainty in quantum mechanics, preclude this. So perhaps the best approach is the one Carroll himself takes in “The Big Picture” – a Bayesian framework in which we are aware of the credences we assign to various propositions and work to keep them up to date as new information becomes available.
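Carroll’s Bayesian program is easy to state in code. Here is a minimal sketch of a single credence update via Bayes’ rule, with hypothetical numbers of my own choosing:

```python
def update_credence(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: posterior = prior * P(E|H) / P(E)."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Hypothetical example: start 50/50 on a candidate winning, then see
# a poll that is twice as likely if they are headed for a win (0.6)
# as if they are headed for a loss (0.3).
credence = 0.5
credence = update_credence(credence, 0.6, 0.3)
print(round(credence, 3))  # 0.667
```

The poll does not settle the question; it just nudges the credence from 1:1 odds to 2:1, and the next piece of evidence nudges it again – which is all a well-calibrated forecast, Silver’s or anyone else’s, can honestly claim to do.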