It has been known for centuries that metal objects, such as swords, can be strengthened by a process of heating and then cooling called annealing. Much later, the reason was found: Heat gives the impurities and crystal defects inside the material enough kinetic energy to rearrange. This is somewhat like banging the side of a TV to make it start working again.

Annealing is a solution to a more general problem. A material can get stuck in a “local minimum,” a higher-energy (that is, less stable) state than the “global minimum”; it cannot reach the true lowest state because escaping its trap would require passing through even less favorable states. The solution is to increase the temperature, which kicks the system out of its rut. The irony is, of course, that the right solution is found by adding “randomness” in the form of increased stochastic motion.

Simulated annealing is an extremely powerful computational technique for solving a similar problem. One embodiment, called the Metropolis algorithm, tries to find the lowest-energy state of a system by successively searching adjacent configurations. Here, the “temperature” governs the chance that a move is accepted even though it leads to a higher-energy state. This allows the program to escape from local minima and perform a more effective search.
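To make this concrete, here is a minimal sketch of Metropolis-style simulated annealing on a bumpy one-dimensional landscape. The energy function, step size, starting temperature, and cooling schedule are all illustrative choices of mine, not anything prescribed by the algorithm itself:

```python
import math
import random

def simulated_annealing(energy, x0, temp=10.0, cooling=0.95, steps=2000):
    """Metropolis-style simulated annealing on a 1-D energy landscape.

    At each step a nearby candidate configuration is proposed. Downhill
    moves are always accepted; uphill moves are accepted with probability
    exp(-delta_E / T), which shrinks as the temperature T cools.
    """
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = temp
    for _ in range(steps):
        candidate = x + random.uniform(-1.0, 1.0)  # adjacent configuration
        delta = energy(candidate) - e
        # Accept downhill moves outright; accept uphill moves with
        # Boltzmann probability, so early (hot) phases can escape ruts.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, e = candidate, energy(candidate)
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling  # gradually lower the "temperature"
    return best_x, best_e

# A bumpy function: a parabola with many sinusoidal local minima.
bumpy = lambda x: x * x + 3.0 * math.sin(5.0 * x)

random.seed(42)
x, e = simulated_annealing(bumpy, x0=8.0)
```

A pure greedy search started at `x0=8.0` would tend to halt in the first sinusoidal dip it falls into; the temperature term is what lets the walk keep tunneling toward the bottom of the parabola.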

A recent Freakonomics podcast touches on this problem. People often feel stuck in unpleasant situations but are not motivated enough to make a change, either because of a status quo bias, a fear of sunk costs, or loss aversion for what they would have to give up (or a combination thereof). Examples abound, with people in jobs, cities, or relationships that are bad but not terrible enough to leave. In these cases, adding some randomness may help break the deadlock.

“Buridan’s Ass” is a famous logic puzzle in philosophy in which it is rational to make a “non-rational” decision at random rather than face the paralysis of two equally good choices.

It seems to me that the natural tendency to set up decision hierarchies so that a single executive (be it a Commanding Officer, CEO, President, or football coach) is given the final word makes sense not so much because a single person is better at making decisions than a group. Rather, there is value in having one definitive answer, even if it turns out to be sub-optimal. That is, the danger of getting stuck vacillating between competing plans is avoided, even if the very best path is not taken.


In a recent issue, a venerable magazine asked a provocative question: Has the pace of innovation begun to stagnate? This might seem paradoxical in the age of hot internet IPOs and instant communication. In fact, productivity (that is, output per worker) was up during the most recent recession, as businesses did more with fewer employees, which might be attributable to the use of computers.

This reminds me of the proposition that the last real invention was the telegraph. The first practical electric communication device neatly separates the era of human history into a time before, when all messages had to have a physical embodiment (a cuneiform tablet, a piece of paper, or someone’s memory) transported to the recipient’s location, and a time after, when information moved at (almost) the speed of light. I’m eliding the various semaphore systems, which had many practical limitations, including susceptibility to sabotage. In any case, the telegraph revolutionized how we traded, fought wars, and found spouses, and it might be argued that later advances in electric communication (telephones, radio, the internet) are all comparatively minor improvements. In fact, since computers run on a binary system, any program or app or webpage could, in theory, be transmitted by telegraph, assuming you were willing to wait a few days to see a retweeted picture of someone’s lunch. Indeed, there is another irony in the cyclic nature of technology: We might feel so much more advanced than our great-grandparents wielding fancy smart phones, although we basically use them to send texts and tweets that are just glorified telegrams.

As if to prove my point, NASA has just beamed an image of the Mona Lisa to the moon (actually, to the Lunar Reconnaissance Orbiter). By representing the grey-scale pixel values as laser beam pulses, the entire image could be reconstructed. Some have pointed out that this makes it a very long-distance 300-baud modem.

At the Consumer Electronics Show this month in Las Vegas, some complained that, while impressive, the new technologies presented seemed to be mere incremental improvements rather than exciting breakthroughs. I think this drastically underestimates the importance of gradual advances (see previous post). Exciting as new concepts are, the value comes when products are perfected over time. Kinks are worked out, designs are improved, and, perhaps most importantly, we find out which features are really important. The iPod was not the first .mp3 player; it was the first good .mp3 player that people wanted to have.


A Gene For Everything

On a recent episode of the SGU, Dr. Novella expressed the opinion that technological advances often experience a lag of about a decade between the “hype” and the actual results. That is to say, we only make breakthroughs once we have already given up. As someone who came of age during the Human Genome Project, I can distinctly remember the excitement about finding “genes for everything”: single, specific genes we could put into a 1-to-1 correspondence with human traits. And not just physical traits either, like hair color (boring!)… no, all traits, even behaviors like having a clean desk at work or voting Republican. We would then be able to modify these genes at will, curing all sorts of inherited diseases. Then came the disappointment as we found out that not only is the human genome more like a recipe than a blueprint, with many genes interacting in complicated ways with the environment to create variations, but we also ran into some major obstacles when testing gene therapy. However, there have been some amazing benefits to gene manipulation, like insulin from bacteria with recombinant DNA. That is, we can make E. coli bacteria into drug factories, just by slipping in the right genes. [It’s not just biomimetic, it’s biokleptic!] But now, after having despaired of finding the “gene for” a behavior, scientists have been able to trace the genetic origin of tunnel-building traits in a certain kind of mouse. Even so, heredity explained only 30 or 40% of the behavior. Still, this is an exciting piece of evidence to throw on the gigantic pile that is the nature-nurture debate. (I always thought the answer to that particular argument was short: “it’s both, interacting in complicated, non-linear ways,” but that still leaves a lot of space for filling in the details.)

Bad Data

One of my favorite podcasts is EconTalk, hosted by Stanford economics professor Russ Roberts. In the latest episode, he interviews researcher Morten Jerven, who wrote a book about the difficulties of obtaining good economic data for countries in Sub-Saharan Africa. They are both of the opinion that the large uncertainty in the data makes a great many complicated regression analyses virtually meaningless. I think this is a problem in science in general. We might know that some collected data is suspect, for whatever reason, be it experimental error, inherent limitations of the method, or simple variability in results. However, there is a strong psychological pull to set aside these caveats and just go with the numbers, perhaps reasoning that, while not perfect, they are “better than nothing” or “the best we have to go on.” We tend to give too much credence to data even when we know it is faulty. This is a special case of the well-documented mental bias called anchoring. To combat this, we should first think of the computer aphorism GIGO (“garbage in, garbage out”). Compounding the problem is the special credence lent to anything with equations in it, regardless of either the quality of the mathematical model OR the data put into it.

Memetics of Misquotations

Captain Kirk never said “Beam Me Up, Scotty.” Sherlock Holmes never said “Elementary, my dear Watson.” Rick Blaine (as portrayed by Humphrey Bogart) never said “Play it again, Sam.” At least, none of these exact formulations were used. The book “Made to Stick” notes that in these cases the misquotation sounds plausible because it combines fragments that these characters (memorably) say all the time into a single, distilled sentence. In a sense, they are “better than real,” since they pack a large punch into a more pithy package. It is also interesting that, in each of these examples, there is a supporting character (Mr. Scott, Dr. Watson, or Sam the pianist) being addressed, and always at the end of the purported quote. In The Selfish Gene, biologist Richard Dawkins coined the term “meme” to refer to a word, idea, or other piece of culture that might be thought of as reproducing via a sort of natural selection. Fitness here would be defined as how effective rival versions of memes are at persisting in, and jumping between, human brains. For example, Nice Guys Finish Seventh tells the story of how the ubiquitous sentiment “Nice guys finish last” is a reworked version of a not-very-sticky quote by a baseball manager talking about a rival team:

Leo Durocher is best remembered for saying, “Nice guys finish last.” He never said it. What the Brooklyn Dodgers’ manager did say, before a 1946 game with the New York Giants, was: “The nice guys are all over there. In seventh place.”

The same techniques used to figure out the most probable evolutionary (phylogenetic) tree of various species have been used to study texts ranging from “The Canterbury Tales” to Nigerian spam emails. In these cases, copying errors, or willful “improvements,” act as the mutations.