In recognition of this 50th blog post, I wanted to zoom out and think about one of the big-picture themes of this blog. To wit: “how knowable is the universe?” For a physicist, this usually boils down to grappling with the related question, “how far can we push reductionism?” Modern science has enjoyed tremendous successes using as a unifying rallying cry the conviction that complicated phenomena can be explained via simpler fundamental rules. Physicists have become so accustomed to working this way that we generally expect toy models and back-of-the-envelope estimates we think up on the spot to be at least approximately right. As noted in Physics Today, scientists in other disciplines, especially biologists, are right to murmur about “spherical cow” models. They know that life has emergent properties (like being alive) and strongly resists our best efforts to reduce it to a list of equations. The development of quantum mechanics, and later of chaos and complexity theory, shows that even in the realm of physics there are hard limits on how much we can know. Some pro-science proponents look at technology and say things like “Airplanes fly, don’t they? How wrong can we be?” While true to some extent, this does not ensure that our textbook explanation of flight is not horribly wrong. Seemingly accurate explanations might only be “brain puns,” a term coined by Ian Stewart in The Collapse of Chaos, a book in which he also examines the power and limits of reductionism.

It may turn out that the real difference between phenomena we think we understand very well (Newton’s laws or quantum field theory) and things we suspect may be too complicated to EVER make reliable predictions about (the weather a month from now, psychohistory) is that in the first category, errors and imprecision in our knowledge become less and less important, and our best guesses can converge toward the right answer. In the second category, however, small changes in initial conditions (or in our knowledge of the laws of nature) lead to hugely different outcomes.

The duality between the “convergent” and the “divergent” can be seen in this picture of smoke, making the transition from laminar to turbulent flow.

Striking Twice

As a life-long Cincinnati Reds fan, I was very pleased that Homer Bailey pitched a no-hitter last night. He was also responsible for the last one, on Sep. 28, 2012, against the Pirates (see the list here). Much has been made of the fact that this is the second no-hitter of his career, and even he said:

“Every dog has his day twice, I guess”

Apparently, throwing two no-hit games is much less likely to be a fluke, or simply good luck, than throwing just one, which could well be chance. In fact, only 30 pitchers have more than one to their name. To record a no-hitter, the pitcher must record 27 outs without conceding a hit. Opposing players can still reach base via a walk or an error.

See video of all 27 outs

Notice how Homer gives up only a single walk, which actually helped preserve the no-hitter.

In this case, since batters are hitting .241 against Bailey, the chance that each at-bat (which excludes walks) ends with an out (or error) is p = 1 − 0.241 = 0.759. Since the game had only one walk and no errors, let’s simplify the situation to a series of Bernoulli trials, in which success means retiring a batter, which occurs 75.9% of the time, and failure means conceding a hit. How likely is it that 27 outs will be recorded without a single hit? The probability is just (0.759)^27 ≈ 0.000584, or about 1 in 1700.
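The Bernoulli-trial estimate above is a two-line calculation; here is a minimal sketch, using the .241 batting average from the text:

```python
# Probability of recording 27 consecutive outs, modeled as independent
# Bernoulli trials with per-batter out probability p = 1 - 0.241.
p_out = 1 - 0.241          # opponents hit .241 against Bailey
p_no_hitter = p_out ** 27  # 27 outs in a row, no hits allowed

print(f"P(no-hitter) = {p_no_hitter:.6f}")     # ~0.000584
print(f"about 1 in {round(1 / p_no_hitter)}")  # ~1 in 1712
```

Of course, real at-bats are not independent or identically distributed, so this is a back-of-the-envelope number, not a rigorous one.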

Since there are 2430 major league games every season, and either starting pitcher could record a no-no, we would expect at least one to be thrown every year. Thus, although it is an amazing accomplishment, there is still the sense that “luck” played a large role. This is certainly true, since a ball hit in just a slightly different manner, or a great defensive play, is all that separates a no-hitter from a more prosaic game.
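A rough sketch of that season-level estimate, assuming (generously) that every start offers a full 27-out opportunity at the same per-batter rate used above:

```python
# Crude expected number of no-hitters per season. Assumes every starter
# faces a full 27 at-bats at a .241 opposing average, which overstates
# the real opportunity count, so treat this as an upper-end estimate.
p_no_hitter = (1 - 0.241) ** 27   # per-game probability, ~0.000584
games_per_season = 2430
opportunities = 2 * games_per_season  # both starters get a chance

expected = opportunities * p_no_hitter
print(f"expected no-hitters per season ~ {expected:.2f}")  # ~2.8
```

The real rate is lower, since few starters pitch deep enough into games to have a chance, but the estimate supports the claim that at least one per year is unsurprising.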

So how likely is a no-hitter? Not very. According to Bill James, only one pitcher in history was “expected” to throw more than one in his career, given his rate of allowing hits and total starts: Nolan Ryan.



Sometimes, one wonders why Google offers so many amazing services for free. There is a saying in Silicon Valley: “If you don’t know what the product is, then YOU are the product.” In this case, Google collects millions upon millions of text queries, each one providing a little bit more data about what people are interested in. According to the book In the Plex, the short-lived free telephone directory service GOOG-411 allowed Google to collect voice samples to create the voice-recognition engine in the Android smartphone operating system. By running the Universe’s most popular search engine, Google collects samples of queries that allow it to build better and better algorithms for predicting what you are looking for, even before you finish typing. The most direct examples are Google Autocomplete and Spell-Check, but the Android keyboard can use your typing history, along with its database of what letters come next in queries, to make very good guesses about what you are trying to say.
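Stripped of all the engineering, the core idea is simple: count what people have typed before, and rank completions of the current prefix by frequency. Here is a toy sketch (the query log and function name are made up for illustration, nothing like Google’s actual system):

```python
from collections import Counter

# Hypothetical log of past queries; in reality this would be billions of entries.
query_log = ["weather today", "weather tomorrow", "weather today",
             "web comics", "no-hitter odds"]

counts = Counter(query_log)

def autocomplete(prefix, k=3):
    """Return up to k past queries starting with prefix, most frequent first."""
    matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
    return [q for q, _ in sorted(matches, key=lambda t: -t[1])[:k]]

print(autocomplete("wea"))  # ['weather today', 'weather tomorrow']
```

Real systems add personalization, language models, and clever data structures (tries instead of a flat scan), but the statistical heart, predicting from accumulated history, is the same.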

A pioneer in this field, Claude Shannon, is probably the most important computer scientist (almost) no one has heard of. In addition to his work on artificial intelligence, Shannon came up with the definition of information.

Claude Shannon’s robotic mouse Theseus. The original maze is on display at the MIT Museum in Cambridge.

The essential idea is that not all signals carry the same amount of information. Redundancies in a message, the parts that could be excised without loss of meaning, do not convey any extra information. In a sense, information is the same as the surprise the receiver experiences by reading it. For example, if you see me flip a fair coin and ask me which way it landed, by saying “heads” I am conveying one bit of information, the answer to a single “yes/no” or “1/0” binary question. Now imagine that you ask me who won the Stanley Cup this year.
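Shannon made “surprise” precise with his entropy formula, H = −Σ p·log₂(p), the average number of bits needed per message. A minimal sketch of the fair-coin example:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: the average surprise of a message
    drawn from the given probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # fair coin: exactly 1.0 bit
print(entropy([0.9, 0.1]))  # lopsided coin: ~0.47 bits, less surprising
```

A heavily favored Stanley Cup winner works the same way as the lopsided coin: the more predictable the answer, the fewer bits of information the message actually carries.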
