Biological Warfare

I just finished reading Cooked by Michael Pollan. As usual, the author uses his engaging prose to make yet another food-related topic riveting. In addition to discussing the chemistry of cooking, the book covers fermentation, which harnesses the natural metabolic processes of yeasts or bacteria to make food more nutritious and less susceptible to spoilage. In fact, it could be said that the manufacture of alcohol or cheese “sets rot against rot”: domesticated microorganisms deliberately poison their environment to make it less hospitable to competitors, and we simply co-opt the process and enjoy the resulting food. Indeed, the distinction between brewed (beer) and distilled (liquor) alcoholic beverages comes down to the fact that alcohol-producing yeasts poison themselves beyond a certain concentration, so raising the alcohol content any further requires extra steps.

The key point, however, is that we did not create the enzymes for converting starches into ethanol… we stole them from nature. Similarly, antibiotics aren’t so much “invented” as “discovered,” since cutthroat Darwinian evolution has randomly hit on solutions far beyond what we could formulate on our own.

Join the Queue

In queuing theory (yes, there is such a thing), there is a strange result. If the rate at which a company can satisfy the needs of customers is exactly equal to the rate at which customers randomly walk up to the counter, then the wait time will diverge to infinity (not that any one person waits an infinite time, but that the queue gets longer and longer without bound). The simple explanation is that any time the counter sits idle with no customers is wasted forever, while there is no corresponding way for busy periods to catch up. Recently, I was reminded of this principle when I walked up to a counter that was idle (no line), and by the time I was finished – probably less than two minutes later – there was a line of two dozen people waiting to be served.
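This balanced-versus-unbalanced behavior is easy to see in a toy simulation. The sketch below (my own illustration, not from any queueing textbook) uses a discrete-time, single-server queue: each tick, a customer arrives with some probability, and one waiting customer is served with some probability. When the service rate is even slightly higher than the arrival rate, the line stays short; when the two rates match exactly, the average line length just keeps growing the longer you run it.

```python
import random

def average_queue_length(p_arrive, p_serve, steps=200_000, seed=42):
    """Discrete-time sketch of a single-server queue: each tick a
    customer arrives with probability p_arrive and, if anyone is
    waiting, one customer finishes with probability p_serve."""
    rng = random.Random(seed)
    queue = 0
    total = 0
    for _ in range(steps):
        if rng.random() < p_arrive:
            queue += 1
        if queue > 0 and rng.random() < p_serve:
            queue -= 1
        total += queue
    return total / steps

# Service slightly faster than arrivals: the line stays short.
print(average_queue_length(0.4, 0.5))
# Rates exactly balanced: the average length grows without bound
# as you simulate longer -- the idle ticks are wasted forever.
print(average_queue_length(0.5, 0.5))
```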

Systemic Problems

One of the topics we are emphasizing in physics lab this week is the importance of understanding the different kinds of uncertainty. There seems to be an almost magical power in taking the average of many measurements, condensing all that uncertain data into one best guess: statistics tells us that the most probable “true” value is the mean of your measurements. In addition, you can quantify your uncertainty by computing the standard error – basically, the standard deviation divided by the square root of the number of measurements – and again, statistics provides a confidence interval of the mean plus or minus the standard error. Even though your measurements will necessarily be imprecise, no matter how careful you are or how expensive your equipment is, the more measurements you make, the smaller that interval gets.

However, there are a couple of caveats. First, the standard error depends on the square root of the number of measurements, so to improve the precision by a factor of 2, you have to quadruple the number of measurements. Second, and more critically, averaging only beats down random errors. You cannot defeat a systematic bias this way, no matter how many measurements you make; such flaws in the experimental setup are immune to the magic of statistics. In fact, this is a case where you hope for randomness – that is, you want the errors to be uncorrelated, rather than consistently biased one way or the other.
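The square-root scaling is worth seeing in numbers. In this sketch (the measured quantity, noise level, and seeds are all made up for illustration), each quadrupling of the sample size roughly halves the standard error of the mean:

```python
import random
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation
    divided by the square root of the number of measurements."""
    return statistics.stdev(samples) / len(samples) ** 0.5

rng = random.Random(0)
true_value = 9.81  # pretend we are measuring g with noisy apparatus

for n in (25, 100, 400):  # each step quadruples the sample size
    samples = [rng.gauss(true_value, 0.5) for _ in range(n)]
    print(n, round(statistics.mean(samples), 3),
          round(standard_error(samples), 4))
# The standard error shrinks like 0.5 / sqrt(n): quadrupling the
# number of measurements roughly halves the uncertainty.
```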

Sometimes we express this difference between standard error (the probabilistic error that remains because you cannot make an infinite number of measurements) and systematic error (an inherent bias in your results) by treating “precision” and “accuracy” as not quite synonymous. Accuracy means the absence of a systematic bias, so the values you measure cluster around the true value. Precision, in contrast, means how close together your individual measurements are, which gives you smaller standard errors.
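The distinction can be made concrete with a simulation of a miscalibrated instrument (the true value, noise level, and offset below are invented for the example). Averaging ten thousand measurements drives the random scatter nearly to zero, yet the fixed offset survives untouched:

```python
import random
import statistics

def measure(n, true_value=10.0, noise=0.5, bias=0.0, seed=0):
    """Simulate n measurements with Gaussian noise and an optional
    fixed offset (a systematic bias, e.g. a miscalibrated scale)."""
    rng = random.Random(seed)
    return [true_value + bias + rng.gauss(0, noise) for _ in range(n)]

unbiased = measure(10_000)            # precise AND accurate
biased = measure(10_000, bias=0.3)    # precise but inaccurate

print(round(statistics.mean(unbiased), 3))  # close to the true 10.0
print(round(statistics.mean(biased), 3))    # stuck near 10.3, no matter how large n gets
```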

Evolving Gears

The big news in science this week is a set of tiny mechanical gears, found in the nymph form of the plant-hopping insect Issus. They help the insect achieve incredibly quick jumps (accelerations up to about 400g) with synchronization faster than even the speed of nerve impulses would allow. The gears, which are lost when the insect undergoes its final molt to become an adult, are shaped somewhat differently from the gears used in modern machines. I was reminded of a topic in evolutionary biology that considers the limits of what is “evolvable” in practice. We do not find “irreducible complexity” (by definition) in nature, but not all possible beneficial features are accessible to evolution. A classic example is the lack of “wheels” in living organisms, which has received quite a bit of thought (like here, here, and here). Before this discovery, I probably would have lumped mechanical gears into the same category, either as too hard to evolve at all, or not worth the trouble. But apparently this was not the case!

Return to St. Petersburg

I wrote before about a surprising resolution to the famous St. Petersburg Paradox using the fact that the game is non-ergodic. The term ergodic refers to systems that explore their “phase space” well enough that an average over all of phase space equals an average over time. Simply put, in the St. Petersburg wager the expected value, which is the gain averaged over playing the game many times, is not representative of how a person would actually experience the game, since no one would (or could) play it enough times. That is, you have a really tiny chance of winning an enormous amount of money, and these effects balance just well enough that the expected-value sum diverges to infinity. Recently, I came across another site that uses the same approach to resolve the paradox, but extends the reasoning in an attempt to answer a more general question. The wager described in the St. Petersburg paradox is just one example of a gamble with a positive expected value (in fact, an infinite expected value) that we nonetheless “know” we should stay away from. This contradicts the basic premise of rational self-interest, which would seem to encourage taking any wager in which you expect a positive average return. The behavioral heuristic of risk aversion developed in humans for, among other reasons, the simple fact that if we lose enough times in a row we will be out of the game, with no opportunity to try to win it back (“gambler’s ruin”). We might then define a “risky” game as one in which there is a strong probability of being wiped out by one bad round or a brief streak of your luck running cold. The author of the blog post points out that no matter how favorable the odds, if you play a risky game enough times, you will lose your shirt.
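The gap between the ensemble average and an individual’s experience shows up immediately if you simulate the game. In the standard formulation (pot starts at $2 and doubles on every tail, paid out at the first head), the expected value is infinite, yet in the sketch below the typical (median) payout over a hundred thousand rounds is just a couple of dollars, while the sample mean stays small and unstable:

```python
import random
import statistics

def play(rng):
    """One round of St. Petersburg: the pot starts at $2 and doubles
    for every tail; you are paid the pot at the first head."""
    pot = 2
    while rng.random() < 0.5:  # tail: keep flipping
        pot *= 2
    return pot

rng = random.Random(123)
payouts = [play(rng) for _ in range(100_000)]

# Median payout: what a typical player actually experiences.
print(statistics.median(payouts))
# Sample mean: far below "infinity", and it never settles down --
# it creeps upward with rare huge payouts as you add more rounds.
print(statistics.mean(payouts))
```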
Even a well-thought-out strategy for betting on games in your favor, like the Kelly Criterion – which basically says that you should wager a fraction of your bankroll equal to your “edge” (how favorable the game is) divided by the odds – still exposes the gambler to punishing losing streaks. That is, you have to balance your desire to win quickly when at an advantage against the reality that sometimes “the race is not always to the swift.” Interestingly, the criterion was developed by J. L. Kelly, Jr. at Bell Labs as an application of information theory; it was Edward Thorp who later famously applied it to card counting in blackjack, tilting the odds slightly better than 50:50 in the player’s favor.
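A minimal sketch of the “edge over odds” rule, with made-up parameters (a 55% chance of winning an even-money bet): the Kelly fraction works out to 10% of the bankroll per round. The simulation is my own illustration, not Kelly’s derivation, but it shows the trade-off the criterion is balancing – wagering a fixed fraction means a loss can never zero you out, while over-betting the same favorable game destroys the long-run growth rate.

```python
import random

def kelly_fraction(p, b=1.0):
    """Kelly stake for a bet paying b-to-1 with win probability p:
    f* = (b*p - (1 - p)) / b, i.e. the player's edge over the odds."""
    return (b * p - (1 - p)) / b

def simulate(fraction, p=0.55, rounds=1000, seed=7):
    """Grow a $1 bankroll, wagering a fixed fraction every round."""
    rng = random.Random(seed)
    bankroll = 1.0
    for _ in range(rounds):
        stake = fraction * bankroll
        bankroll += stake if rng.random() < p else -stake
    return bankroll

f = kelly_fraction(0.55)   # 0.10 for a 55% even-money game
print(simulate(f))         # full Kelly: fractional stakes never hit zero
print(simulate(2.5 * f))   # over-betting the same edge typically fares far worse
```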

No whammies!

I’ve been thinking about the way casinos structure most wagers in order to get around this risk aversion. Usually, your bet is a small fraction of your net worth (e.g., penny slots or a one-dollar lotto ticket) with a small chance to win a big jackpot. Rarely do you see a bet in Vegas like a 99% chance to win one dollar against a 1% chance to lose 110 dollars, and not just because of the house edge: even a fair game with so much downside would trigger the risk-aversion klaxons in patrons’ minds. However, if you are so inclined, you can structure your betting to create the equivalent wager. Using a martingale system, you can win $1 with very high probability, with a small chance (depending on how much money you start with) of losing it all. All you have to do is play a game for even money, starting with a $1 wager. If you win, stop and enjoy your one-unit increase in wealth. If you lose, play again with the stakes raised just enough that a win puts you up $1 overall. While it might seem like easy money, if you keep repeating this system, eventually you will hit a losing streak big enough to wipe you out.

Nassim Nicholas Taleb describes stock traders who made essentially the same wager using highly leveraged positions. He calls it “picking up pennies in front of a train,” in the sense that each round yields a little gain, but eventually a “tail event” arrives to blow everything up, with a loss that more than overwhelms all of the previous gains. Many traders were lulled into a false sense of security, their small and steady gains carrying a very real, but unseen, risk.
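The martingale structure described above is simple enough to simulate. In this sketch (the $127 bankroll and the fair even-money game are my own choices for illustration), a bankroll of $127 covers stakes of 1 + 2 + 4 + … + 64, so only seven straight losses cause ruin – about a 1-in-128 event per attempt. Wins vastly outnumber busts, but each bust wipes out the whole bankroll, which is exactly the “penny in front of a train” trade:

```python
import random

def martingale_round(bankroll, rng, p_win=0.5):
    """Keep doubling an even-money bet until one win nets +$1,
    or until the bankroll can no longer cover the next stake."""
    stake = 1
    start = bankroll
    while stake <= bankroll:
        if rng.random() < p_win:
            return start + 1   # the small, near-certain win
        bankroll -= stake
        stake *= 2             # double up to recoup losses plus $1
    return bankroll            # busted mid-streak

rng = random.Random(2024)
bankroll = 127  # covers stakes up to $64: seven straight losses mean ruin
wins = busts = 0
for _ in range(10_000):
    if martingale_round(bankroll, rng) > bankroll:
        wins += 1
    else:
        busts += 1
print(wins, busts)  # wins dominate, but each bust costs the full $127
```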