Schrodinger’s Cheshire Cat

Is Schrodinger’s Cat old hat to you? Made peace with the idea that particles can be in two places at once? These ideas are so well established that Microsoft is already advertising quantum computing, which is based on bits that can each be 1 and 0 simultaneously (although don’t go looking for qubits at Best Buy for the time being). Well, the Universe has plenty more weirdness where that came from. Get ready for the “Cheshire Cat” effect – the separation in space between objects and their properties. The name comes from Alice in Wonderland, in which the smile of the Cheshire Cat remained even after the cat itself was gone. Imagine being able to throw a baseball so that it took a different trajectory than its “spin.” That’s like opening a whole new box of crazy.

But that’s what Nature allows, under the right conditions. In new research, exciting and not-completely-understood concepts like “post-selection” and “weak measurement” have been used to watch neutrons fly down one path while their magnetic moment travels along a different one.

[Figure: the “Cheshire Cat” – this is actually a figure from the paper]

Quick digression: Quantum mechanics seems strange to us because the more we learn about the rules of physics that apply on small scales, the more we realize how vastly they differ from the rules our brains are accustomed to, living at the human scale. By now, some quantum weirdness has already percolated into the cultural consciousness, the most famous example being the Schrodinger’s Cat thought experiment. The hypothetical situation illustrates the cognitive tension that pops up when we try to extend the idea of quantum superposition – which has been experimentally confirmed at the atomic level countless times – to the macroscopic world of people and cats.

 

Since The Big Bang Theory is an extremely popular sitcom (with some of the cast holding out, “Friends”-style, for $1 million per episode each), quite a few people have been introduced to the concept by this episode.

Dr. Stanley Cohen, who was Einstein’s driver during the 1940s, said that the reason Einstein never came around to the idea of quantum entanglement – which he derided as “spooky action at a distance” – was that Einstein was such an intuitive thinker that he had a hard time accepting such a counter-intuitive notion as true. But since the facts of quantum weirdness are by now so well established, people mostly spend their time arguing about how to make sense of what happens, not about the reality of what happens. Many go with the Copenhagen interpretation, which basically says “Don’t ask if the cat is alive or dead before you open the box – that’s not a meaningful question,” while others are proponents of the many-worlds interpretation, the idea that all possibilities happen somewhere in the multiverse. The problem is that if there is no practical difference between interpretations, there is no way to decide between them experimentally. What is exciting about these new results is that the implications of weak measurements and post-selection are not totally clear yet. This means that physicists can go back to what they love doing – testing theories against empirical evidence and winnowing out the bad ones – instead of philosophizing about the right way to “frame” undisputed phenomena.

The idea of weak measurement is to circumvent the Heisenberg uncertainty principle, which places fundamental limits on how precisely one can know pairs of values (like position and momentum) at once. Every measurement disturbs the system being measured, but by making many weak measurements that don’t disturb the system significantly, you can learn more about what is going on than would normally be allowed by Heisenberg. Some say that weak measurements don’t “collapse the wave-function” like a regular measurement would, although this assumes that we understand what wave-function collapse is, which we still don’t. Using weak measurements, scientists were able to follow the “Cheshire Cat” and its “smile” separately. Part of implementing weak measurements is the concept of “post-selection.” This is also a relatively recently appreciated part of quantum mechanics, and arises from the time-symmetry between choosing the initial quantum state and the final state you want to measure. This starts to erode even our concept of causality, including our insistence that causes must occur earlier in time than their effects. In actuality, there are situations in which this does not appear to be true! So even as our understanding of the Universe continues to improve, mind-blowing discoveries keep coming.
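
For readers who want a bit of formalism: in the standard treatment of weak measurements (due to Aharonov, Albert, and Vaidman), the “weak value” of an observable depends on both the pre-selected initial state and the post-selected final state. The equation below is that textbook definition written with generic symbols, not anything specific to the neutron experiment:

\[
A_w \;=\; \frac{\langle \phi_{\text{post}} \mid \hat{A} \mid \psi_{\text{pre}} \rangle}{\langle \phi_{\text{post}} \mid \psi_{\text{pre}} \rangle}
\]

When the post-selected state is nearly orthogonal to the pre-selected one, the denominator becomes small, and the weak value can land far outside the range of ordinary eigenvalues – which is part of what makes effects like a separated “cat” and “smile” possible.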

Seeding an idea

Here is a great video of some neat parlor tricks possible with supercooled water, a strange state in which pure water remains liquid after being cooled below zero Centigrade… until something triggers the sudden nucleation of solid ice:

There is a sinister side to nucleation, too. But first, let’s explain what is going on in the video. Water cooled below the freezing point is in a meta-stable state, in the sense that it could minimize its free energy by crystallizing, but is so pure that the system is “stuck” until something gets the freezing started. Of course, once it gets going, the process is self-sustaining and won’t stop until the entire thing is frozen. In the video, either a seed crystal (regular ice) or just a sharp shock is enough to start the supercooled water freezing. Usually, some dust or other impurity is enough to nucleate freezing, so it takes a lot of preparation to successfully pull off the tricks shown.

This reminded me of the novel Cat’s Cradle, in which Kurt Vonnegut does a fantastic job of explaining how the (fictional, luckily for us) super-stable allotrope of water called ice-nine could be really dangerous.

“…Now think about cannonballs on a courthouse lawn or about oranges in a crate again,” he suggested. And he helped me to see that the pattern of the bottom layers of cannonballs or of oranges determined how each subsequent layer would stack and lock. “The bottom layer is the seed of how every cannonball or every orange that comes after is going to behave, even to an infinite number of cannonballs or oranges. Now suppose,” chortled Dr. Breed, enjoying himself, “that there were many possible ways in which water could crystallize, could freeze. Suppose that the sort of ice we skate upon and put into highballs – what we might call ice-one – is only one of several types of ice. Suppose water always froze as ice-one on Earth because it had never had a seed to teach it how to form ice-two, ice-three, ice-four…? And suppose,” he rapped on his desk with his old hand again, “that there were one form, which we will call ice-nine – a crystal as hard as this desk – with a melting point of, let us say, one-hundred degrees Fahrenheit, or, better still, a melting point of one-hundred-and-thirty degrees.”
 

The problem, of course, is that the liquid we call “room-temperature water” is supercooled with respect to ice-nine, so it would only take a single seed crystal for ice-nine to take over the system.

In a less apocalyptic application, supersaturated solutions, like sucrose dissolved in water, can be used for more delicious purposes, including making rock candy.

The trick to getting large, delicious crystals of sugar instead of a sticky glob of a mess is to start with a solution that is supersaturated. This is easily done by starting with hot water and dumping in a lot of sugar. As the water cools, the solution’s ability to keep the sugar dissolved decreases, until the sugar starts to “precipitate.” By dropping in a seed crystal to get things started, sugar molecules keep adding themselves to the existing crystal, making it bigger and bigger until it is good enough for your next party.

In the book Stuff Matters, it is explained that getting chocolate to have the remarkable and tempting properties that it does – staying solid at room temperature, looking smooth and shiny when you pull it from its wrapper, giving off a satisfying “snap” when you break it, and most importantly, melting in your mouth to deploy a payload of sugar and cocoa solids – requires a great deal of engineering to make sure that the chocolate crystallizes correctly. This requires proper nucleation of the right crystal structure.


Mathematically, nucleation happens because the total free energy of a crystal depends on its size. There is a balance between surface tension, which discourages the formation and growth of crystals, and the bonds that can be made between solute molecules, which encourage more precipitation. Very small crystals are unstable because surface tension dominates – they have a lot of surface area and not much volume’s worth of bonds to compensate. In contrast, large crystals have many bonds and a lower surface-area-to-volume ratio. Once a crystal passes that threshold, new molecules will keep adding themselves to it, making it bigger and even more attractive to other free-floating molecules, so a nucleated crystal will grow like crazy. Here, we have a great example of a “bi-stable” system – free-floating molecules vs. a crystal – separated by a critical crystal size.
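
The textbook version of this argument (classical nucleation theory) makes the balance explicit for a spherical cluster of radius r: the surface term costs free energy, while the bulk term releases it. The symbols below are generic – γ is the surface tension and Δg the free-energy gain per unit volume of the new phase – rather than values for any particular substance:

\[
\Delta G(r) \;=\; 4\pi r^2 \gamma \;-\; \frac{4}{3}\pi r^3 \,\Delta g ,
\qquad
r^{*} \;=\; \frac{2\gamma}{\Delta g}
\]

Clusters smaller than the critical radius r* lower their free energy by shrinking, while clusters larger than r* lower it by growing – which is exactly the critical crystal size separating the two states of the bi-stable system.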

The potentially scary part is that, even without a seed crystal, there is a chance that a random cluster of molecules will defy the odds and grow large enough to nucleate precipitation throughout the entire system.

 

This inherent uncertainty – that a dynamical, nonlinear system can spontaneously and permanently switch states – has serious consequences. In some cases, a misfolded protein, called a “prion”, is the actual contagious agent of disease. CJD (the human version of mad-cow disease) can start with just one rogue protein that serves as a template to recruit healthy proteins into becoming prions themselves. So even though it is not a living pathogen, a single prion can multiply like a virus. Based on current knowledge, other protein-aggregation diseases, including Alzheimer’s and amyloidosis, might also be thought of as “nucleation diseases,” in the sense that a single amyloid plaque might form from normal proteins based on nothing but random fluctuations, with an expected waiting time that depends on the concentration of susceptible proteins. So understanding the physics behind nucleation can help us better control these processes, but it cannot remove all of the randomness.
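
To make the “waiting time” picture concrete, here is a minimal Python sketch. It treats spontaneous nucleation as a memoryless (Poisson) process whose rate is simply assumed to grow with the concentration of susceptible proteins – the functional form and the numbers are made up for illustration, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_waiting_time(concentration, base_rate=1e-4, exponent=2.0, n_trials=100_000):
    """Average time until the first spontaneous nucleation event.

    Assumes (purely for illustration) that the nucleation rate scales as
    base_rate * concentration**exponent, so individual waiting times are
    exponentially distributed.
    """
    rate = base_rate * concentration ** exponent
    waits = rng.exponential(scale=1.0 / rate, size=n_trials)
    return waits.mean()

for c in (0.5, 1.0, 2.0, 4.0):
    print(f"concentration {c}: mean wait ~ {mean_waiting_time(c):.0f} (arbitrary time units)")
```

Under this assumed quadratic scaling, doubling the concentration cuts the expected wait by a factor of four – small changes in the amount of susceptible protein can translate into dramatically different expected onset times, even though any individual event remains random.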

Does the best team win?

Today is the World Cup final. Soccer is probably the sport in which the underdog has the best chance to upset a better team, as many sports commentators in the US ruefully observed after the American team came so very close to beating a much more skilled Belgian side.

In general, we can model the probability that the “better team” wins using some very simple assumptions (and neglecting ties):

(1) If the teams are equally good, there is a 50/50 chance for each to win

(2) As the difference in quality gets bigger, the probability of the better team winning approaches 1

That’s it! It’s hard to argue with either axiom. So the probability that the better team wins, as a function of the difference in quality, almost certainly looks like a logistic curve:
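
For concreteness, here is a minimal sketch of one such logistic curve, using the conventional chess-rating scale (the 400-point constant is just that convention, nothing fundamental):

```python
def win_probability(rating_diff):
    """Probability that the higher-rated side wins, under a logistic model.

    rating_diff is (better team's rating - worse team's rating): a difference
    of 0 gives 0.5, and the probability approaches 1 as the gap grows.
    """
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

for diff in (0, 100, 200, 400):
    print(diff, round(win_probability(diff), 3))   # 0.5, 0.64, 0.76, 0.909
```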

 

The Elo chess ranking system is based on this reasoning. But which logistic curve should we use? One that is almost a straight line – meaning that there is a more or less linear correspondence between the quality of teams and the chance of winning – or something more like a step function, in which the better team, even if only slightly better, will virtually always win? Here, different sports may give different answers. In a very interesting paper, the author argues that the only real difference between sports is how often scoring events happen. It turns out that, to a pretty good approximation, scoring events can be modeled as a Poisson process, which means that they occur independently of each other at a fixed average rate (so forget about momentum in sports!), with better teams scoring at a faster rate.

 

So waiting for a goal to be scored in soccer is a lot like waiting for an unstable radioactive nucleus to decay. You know on average about how long you have to wait, but have no idea when this particular one will go off. (Also, having waited a while does not make it more likely to happen with this kind of distribution.) If it is true that scoring in sports is a Poisson process, then the law of large numbers tells us that sports with high scoring rates, like basketball, favor the better team, since lucky deviations have a chance to even themselves out. Contrast this with soccer, in which one mistake, or one bad penalty given, can easily swing a match. One solution, of course, is to have a multigame series – as in hockey, baseball, or basketball – which amplifies the advantage of the better team. It is interesting to note that in the NBA, one team, the Miami Heat, has made it to the finals four years in a row [although that streak is in serious jeopardy] and that this year’s finals were a rematch of last year’s. In fact, very long dynasties have occurred in basketball, like the Celtics’ eight consecutive championships from 1959 to 1966. Basketball favors the better team for two reasons – very high scoring and long playoff series. In contrast, baseball is much more subject to the whims of chance, with shorter playoffs in the early rounds and low scoring. And soccer, with ultra-low scoring and a single-elimination knockout round, is the best for underdogs.
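
A quick simulation makes both points – scoring rate and series length – concrete. The sketch below assumes each team’s number of scoring events per game is an independent Poisson draw, with the better team’s rate 20% higher; the rates themselves are made-up stand-ins for different sports, not calibrated to real data:

```python
from math import comb

import numpy as np

rng = np.random.default_rng(1)

def better_team_single_game(mean_events, advantage=1.2, n_games=200_000):
    """Estimate the better team's win probability in one game, ignoring ties.

    Each team's scoring events per game are an independent Poisson draw;
    the better team's rate is `advantage` times the worse team's.
    """
    better = rng.poisson(mean_events * advantage, n_games)
    worse = rng.poisson(mean_events, n_games)
    decided = better != worse
    return float((better[decided] > worse[decided]).mean())

def best_of_seven(p):
    """Probability of winning at least 4 of 7 independent games, each won with probability p."""
    return sum(comb(7, k) * p**k * (1 - p) ** (7 - k) for k in range(4, 8))

for label, rate in (("soccer-like", 1.3), ("hockey-like", 3.0), ("basketball-like", 50.0)):
    p = better_team_single_game(rate)
    print(f"{label:>16}: single game {p:.0%}, best-of-seven {best_of_seven(p):.0%}")
```

Under these toy numbers, the better team’s edge in a single low-scoring game is modest, but it grows both as the scoring rate rises and as the series lengthens – exactly the pattern described above.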

Evolution at Work

There were a couple of great stories in the news recently that illustrate how evolution works in practice. The first is a study of the evolution of electrocytes in electric eels.

The genome of the electric eel has been sequenced for the first time. The results of a new study indicate that, despite millions of years of evolution, independent lineages of fish developed electric organs in a similar way. Worldwide, there are hundreds of species of electric fish, in six broad lineages.

Here are some aspects highlighted by the findings:

*Evolution is not as “random” as you might think

Based on the DNA sequence data, it appears that electrocytes evolved independently at least six times. This convergent evolution shows that there are “attractors” in the space of all possible creatures, and that the development of species, while based on random mutations, is not completely random.

*Evolution works with what is available

In these species, muscle cells, which already use ion channels to create electric signals to control contractions, were re-purposed into specialized electrocytes. Instead of starting from scratch, evolution tinkers with what already works and finds new ways to use it.

*Evolution knows more physics than you

All organisms need to follow the laws of physics, but evolution finds ways to take advantage of those rules. In electronic devices, stacking batteries in series causes their voltages to add up. In electric eels, stacks of electrocytes can reach 600 volts.
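
As a rough back-of-the-envelope check (the per-cell number here is an order-of-magnitude assumption, not a figure from the article): each electrocyte generates a membrane potential on the order of 0.15 volts, so reaching 600 volts is just a matter of wiring a few thousand of them in series.

```python
per_cell_volts = 0.15   # rough order-of-magnitude output per electrocyte (assumption)
target_volts = 600.0    # discharge voltage quoted for electric eels
print(f"~{target_volts / per_cell_volts:.0f} electrocytes in series")  # ~4000
```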

*Gradual improvement is possible

Although not discussed at length in this article, scientists think that electrocytes originally evolved to help the fish navigate in murky water. This would not require large voltages, but it would still be advantageous compared with fish lacking the ability. Later, the voltages could be increased to incapacitate prey.

 

The other article sheds light on the ability of people in Tibet to live at such high altitudes:


“Past research has concluded that a particular gene helps people live in the thin air of the Tibetan plateau. Now scientists report that the Tibetan version of that gene is found in DNA from Denisovans, a poorly understood human relative more closely related to Neanderthals than modern people.” 

Most people have a variant of the gene that responds to the thin air at high altitudes by increasing the production of red blood cells. This overproduction causes health problems due to “thick blood.” In contrast, Tibetans have a version of the gene that does not trigger the production of so many red blood cells.

 

*There are many ways for genes to get around

It is clear that the ancestors of modern humans mated with Denisovans (and Neanderthals) before presumably driving them to extinction. The boundary between species is not as stark as we usually think.

*Gene variants can be rare, until they are not

In all likelihood, this gene variant conferred little to no advantage on Denisovans. In fact, for people not living in the mountains, it was probably slightly detrimental. Therefore, it was probably rare in humans until some of them decided to start living in the Himalayan region. At that point, being able to withstand the effects of high altitude was a huge bonus, and the gene variant spread widely. Again, evolution works with what is already lying around.