Entropic Force

I’m very interested in the concept of an “entropic force” – that is, when the tendency for a system to move towards a state of greater disorder (higher entropy) manifests itself in a form that seems, for all practical purposes, to be a new force of nature, even if there is no “real” force acting. An important example (maybe) in all living organisms is osmosis, which, according to a new article by Eric Kramer, is probably misunderstood by almost everyone:

(1) “The first misconception is that osmosis is limited to liquids,” he says. “But it works just fine for gases, too.”
(2) “Another misconception is that osmosis requires an attractive force,” he says. “It doesn’t. When water fills the bag of sugar, it’s not because the sugar is pulling the water in. That’s not part of the explanation.”
(3) “A misconception is that osmosis always happens down a concentration gradient,” he says. “When you dissolve something in water, the water doesn’t necessarily get more diluted. Depending on the substance, it can get more concentrated.”
(4) “Another misconception is that you don’t need to invoke a force to explain why the water flows into the bag. It’s thought that, like diffusion, it’s a spontaneous process,” he says. “But, in fact, there is a force. It’s complicated how it happens, but it turns out that the membrane – or the bag, in the familiar lab demonstration – exerts a force that pushes the water in.”

A better way to understand osmosis is to think about how the system can minimize its free energy. Although the system appears to be in a high-energy state while it supports an unbalanced column of water of height h on the left side (see figure), allowing water to flow from the hypotonic to the hypertonic side of the membrane reduces the free energy, because the entropy of mixing between the water and the solute increases. This explains why there need not be any “attractive force” between the water and the solute, and why the flow can even reverse direction, depending on the details of the thermodynamic situation. The process continues until it is balanced by the ordinary gravitational potential energy of the excess water column. The author claims that there IS in fact a force acting… it’s the force of the membrane keeping the solute on the left side. This force then gets transferred from the solute molecules to the water. This is in contrast to diffusion, in which no (directed) force acts at all… just the random motion of the molecules creates the appearance of a force that disperses them.
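To put a rough number on that balance (a back-of-the-envelope sketch of my own, assuming an ideal dilute solution so the van ’t Hoff relation Π = cRT holds), the column stops rising once the hydrostatic pressure ρgh of the excess water matches the osmotic pressure:

```python
# Estimate the equilibrium height of the excess water column, assuming an
# ideal dilute solution (van 't Hoff law) and a membrane permeable only to water.
R = 8.314      # gas constant, J/(mol K)
T = 298.0      # temperature, K
c = 100.0      # solute concentration, mol/m^3 (0.1 mol/L)
rho = 1000.0   # density of water, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2

osmotic_pressure = c * R * T           # Pi = cRT, in pascals
h = osmotic_pressure / (rho * g)       # equilibrium: rho*g*h = Pi

print(f"osmotic pressure ~ {osmotic_pressure / 1000:.0f} kPa")
print(f"equilibrium column height ~ {h:.1f} m")
```

Even a modest 0.1 mol/L solution can hold up a column of water roughly 25 meters tall, which gives a sense of how strong this “entropic force” can be.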

Ask my Agents

Since the first incarnation was released to the world in 1989, aspiring city planners have used SimCity as the canvas on which to paint their wildest urban design dreams. The latest installment has received strong reviews (at least, when operable) with many features not previously available. I have a special interest in a major change to the fundamental mechanism by which the denizens of SimCity are represented by the game. Heretofore, all statistics were aggregate – that is, the simulation computed average crime and average unemployment in a given region based on the present conditions, but no individual Sims were present to actually go to work or commit crimes. This has changed in the new version. Each Sim is an individual agent (à la The Sims) with responses to stimuli. As noted in this Penny Arcade webcomic, you can peek in on the activities of your individual charges:

[embedded Penny Arcade comic]

I think it is hard to overstate the importance of this change. One of the major critiques of Keynesian economics articulated by Austrians like Russ Roberts is the emphasis on lumped variables, like total GDP or aggregate demand, which do not distinguish what made up those values. The classic line is, you can’t go to the store and buy a box of aggregate supply. In a similar way, the behavior of individual people, even with identical preferences, will depend on their particular circumstances.

Jensen’s inequality, put simply, says that the average of a function equals the function evaluated at the average only if the function is linear. The books The Flaw of Averages and Antifragile both make this point very distinctly. A statistician, as the joke goes, can drown in a river that is, on average, only 2 feet deep if it has even one section 12 feet deep. Similarly, one hour in freezing weather does not cancel out an hour in the blazing desert, even if the “average” temperature is a perfect 72 °F.
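A quick numerical illustration of Jensen’s inequality (my own toy example, with an arbitrary distribution and toy functions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=100_000)  # values spread around an average of 2

def f(v):
    return v ** 2          # a convex (nonlinear) function

print(f"f(average of x) = {f(x.mean()):.2f}")   # ~4.0
print(f"average of f(x) = {f(x).mean():.2f}")   # ~6.25: the spread (variance) sneaks in

def g(v):
    return 3 * v + 1       # a linear function: here the two quantities agree

print(f"g(average of x) = {g(x.mean()):.2f}")   # ~7.0
print(f"average of g(x) = {g(x).mean():.2f}")   # ~7.0
```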

Agent-based modelling is coming into vogue now that we have the increased computing power to handle the problem. One of my favorite examples comes from the study of mortgage refinancing during the heyday of collateralized debt obligations, which consisted of bundles of bonds representing the right to collect the mortgage payments as they came in from various homeowners. Falling interest rates led to a surge in refinancing – which benefited the homeowners but was not good for the holders of the mortgages, since it meant early repayment of a debt that had been contracted at a higher interest rate. It was noted, however, that these refinancings mostly occurred early on. This is hard to explain if one thinks about the “average” propensity for members of the group to make the effort to refinance when interest rates made this the favorable course of action. In reality, each homeowner has his or her own threshold for refinancing, based on attention paid, tolerance for the hassle involved, and particular financial situation. (Since collecting information is itself costly, sometimes rational inattention really is justified.) Whatever the reason, there would always be some people who would never refinance no matter how low interest rates fell, and others who would do so at the first opportunity. The early refinancers would therefore self-select out of the pool, leading to a “burn out” in which the holdouts were likely to stick around to the end. The end result is that using a single, time-constant rate of exit (as in an exponential decay) to model the homeowners leaving the pool leads to a systematic undervaluing of the CDO. An agent-based simulation, as sketched below, makes much more sense.
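Here is a toy version of that comparison (entirely my own sketch, with made-up numbers, assuming rates have already fallen enough that refinancing is favorable for everyone): each homeowner gets a personal propensity to get around to refinancing, and the resulting pool is compared against a single constant exit rate.

```python
import numpy as np

rng = np.random.default_rng(42)
n_homeowners = 100_000
months = 120

# Heterogeneous monthly propensities to refinance: a few eager homeowners,
# many who will essentially never bother. (Made-up distribution.)
propensity = rng.beta(0.5, 5.0, size=n_homeowners) * 0.3

in_pool = np.ones(n_homeowners, dtype=bool)
agent_remaining = []
for _ in range(months):
    refinance = in_pool & (rng.random(n_homeowners) < propensity)
    in_pool &= ~refinance
    agent_remaining.append(int(in_pool.sum()))

# "Average homeowner" model: one constant monthly exit rate, equal to the
# initial average propensity, applied to the whole pool.
constant_rate = propensity.mean()
constant_remaining = n_homeowners * (1 - constant_rate) ** np.arange(1, months + 1)

# The agent pool's prepayment rate starts near the average but falls month
# after month as the eager refinancers self-select out ("burn out"), so the
# constant-rate model badly underestimates how many mortgages survive.
for m in (11, 35, 59, 119):
    print(f"month {m + 1:3d}: agents remaining = {agent_remaining[m]:7d}   "
          f"constant-rate model = {constant_remaining[m]:9.0f}")
```

In this toy setup the holdouts with near-zero propensity keep making payments for years after the constant-rate model says the pool should be nearly empty, which is exactly the undervaluation described above.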


StumbledUpon

I remember as a child the surprise I felt when I learned that most medical drugs were discovered by accident. The most famous example, of course, is penicillin, not just because it was so momentous for human health, but also because it was totally unexpected. In a good example of fortune favoring the prepared mind, we give credit to Alexander Fleming for not simply throwing away his spoiled bacterial cultures, and instead investigating further the ability of a fungus to secrete molecules we now call antibiotics. In a larger sense, though, the actual “discoverer” was an ancestor of the fungus living millions of years ago that, in an immense stroke of luck, hit the evolutionary jackpot via a random jumble of DNA encoding the instructions to make penicillin, which allowed it to beat back its bacterial foes. The random tinkering of evolution is the real innovator, with Fleming’s accidental discovery coming much later.

In an interesting twist, we have now learned so much about how the world works that some now look askance at this kind of serendipitous discovery, even though it remains vital to finding new drugs even now. Sort of like Thomas Edison, who found thousands of materials that DON’T work as lightbulb filaments, current pharmaceutical companies are running as many molecules as they can lay their hands on through “high-throughput screening” for anticancer properties. A long blog post from Scientific American discusses the prejudice some physicists show towards this mindset, and, to be truthful, chemistry in general, since it is based on:

“a diverse mix of skills that range from highly rigorous analysis to statistical extrapolation, gut feeling and intuition, and of course, a healthy dose of good luck”

That is to say, reductionism has worked so well for physicists that the rules of thumb employed daily by chemists look shockingly incomplete. Every empirical relationship is calling out for an explanation based on first principles, we say. For example, Newton looked at Kepler’s Laws, which were really just relationships noted by poring over decades of astronomical observations, and explained them all by introducing a law of Universal Gravitation. But just as adding a third body, say, a moon, to the system of the sun and the Earth vastly increases the complexity, chemistry is similarly a “many-body problem” that resists simple equations, even though the principles of physics remain at the bottom of it.


The joke is that, as in some hunter-gatherer societies, the “many” in “many-body physics” means more than two.
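To make the Kepler example concrete (a standard textbook check, not anything from the blog post I’m quoting): for a circular orbit, setting Newton’s gravitational force equal to the centripetal force gives Kepler’s third law, T² = 4π²a³/GM, and plugging in numbers for the Earth recovers the length of the year.

```python
import math

# Kepler's third law from Newtonian gravity, for a circular orbit:
#   G*M*m / a**2 = m * (2*pi/T)**2 * a   =>   T = 2*pi*sqrt(a**3 / (G*M))
G = 6.674e-11       # gravitational constant, m^3/(kg s^2)
M_sun = 1.989e30    # mass of the Sun, kg
a_earth = 1.496e11  # mean Earth-Sun distance, m

T = 2 * math.pi * math.sqrt(a_earth ** 3 / (G * M_sun))
print(f"predicted orbital period ~ {T / 86400:.1f} days")   # ~365 days
```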

Some progress has been made in computer simulations – the so-called “in silico” experiments – at explaining the properties of molecules, but it is important to remain cognizant that for some complex systems, we should count ourselves lucky if we can discern even empirical relationships. In the wake of a financial crisis that was brought on in large part because too much trust was placed in complicated models, we should remember the example Nassim Taleb gives in Antifragile of the very successful Green Lumber trader who made large sums of money on the commodity over many years before it was realized that he was buying and selling recently cut trees, not wood painted green.

It has been theorized that Medieval Vikings used naturally birefringent calcite crystals called Sunstones to navigate at sea. According to these historians, the Vikings did not need to know anything about the physics of light polarization; through trial and error, they discovered that it was possible to locate the position of the sun on cloudy days using these seemingly magic stones.

In a piece for Slate, Samuel Arbesman wonders if, with the help of computers, we will discover that the Universe contains relationships that will forever be beyond human ability to comprehend on any level beyond simply finding that they exist. That is, we will never have our elusive “grand theory” of everything. I am not so pessimistic as to think such a state of affairs is inevitable, but I do think that we need to make sure we strike the right balance between “thinking” and “tinkering.”

On one level, it is true that chemistry is just “applied physics,” though by that logic one could argue the same for biology, psychology, and history; I’m sure chaos theory would be invoked long before the conversation got that far.

Let me close with another snip from the Scientific American blog:

“…most of theoretical physics in the twentieth century consisted of rigorously solving equations and getting answers that agreed with experiment to an unprecedented degree. The tremendous success that physics enjoyed in predicting phenomena spread over 24 orders of magnitude made physicists fall in love with precise measurement and calculation. The goal of many physicists was, and still is, to find three laws that account for at least 99% of the universe. But the situation in drug discovery is more akin to the situation in finance described by the physicist-turned-financial modeler Emanuel Derman; we drug hunters would consider ourselves lucky to find 99 laws that describe 3% of the drug discovery universe.”

Self

One of the ironies of our modern lifestyle is that some of our most worrisome ailments are autoimmune diseases, like lupus, multiple sclerosis, or Crohn’s disease. In these cases, as with allergies, we would rather have a less robust immune response. It is thought that even heart disease is really a problem of out-of-control inflammation. Contrast this with the vast majority of human history (and developing countries today), in which infectious diseases are the primary concern, and a more active immune system was a boon rather than a liability. In addition, some of our most promising medical treatments, like implantable sensors or even organ transplants, are greatly hindered by natural processes like biofouling and host-vs-graft responses. The ability to selectively turn off the immune response, without generally decreasing its effectiveness against actual pathogens, would be very valuable. A recent paper shows a possible method that uses the body’s own “self-recognition” protein. This is a very promising approach, since the immune system must have an innate mechanism to distinguish “self” from “invader,” and it appears that these researchers have been able to crack that system.