In a small bit of irony, the word “quantum” means an indivisible bundle of some quantity that can only take on discrete values, yet quantum mechanics sometimes predicts that certain variables are definitely not quantized. The most famous example is Schrödinger’s cat, whose “aliveness” is not restricted to yes or no, but allowed to take on any amplitude in between. Recently, progress has been made in our effort to peek inside the box: just make sure you don’t look too closely. That is, the quantum weirdness that would be disrupted by a normal measurement – via the collapse of the wavefunction – survives if only a weak measurement is made. So there can even be degrees of “making a measurement,” as opposed to a sharp distinction between quantum and “classical” situations.
The blurring of quantum and “normal” is further evident in quantum biology. The remarkable efficiency of photosynthetic plants requires that the proteins that collect the energy from photons in sunlight employ quantum effects. However, it is not always immediately obvious when an effect is truly quantum in nature and not explainable using classical models. For example, quantum beating is observed in photosynthesis, but may have a non-QM origin. The smoking-gun signature of quantumness is a negative probability – when the wavefunctions of two particles undergo destructive interference, so the presence of one actually suppresses the appearance of the other (as in the two-slit experiment, in which opening the second slit leads to fewer counts in certain spots). In a new paper, the degree of quantum weirdness is actually measured by observing how often particle occurrences fall below their expected probabilities. Destructive interference is useful in funneling energetic electrons: since the total probability of finding them somewhere is equal to 1, cancelling the possibility of turning up in the wrong places ensures they stay on track. This is similar to how light is controlled in antiglare coatings using wave interference.
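The counterintuitive two-slit arithmetic – more open paths, fewer counts – comes from adding amplitudes before squaring. Here is a minimal sketch (the path lengths and wavelength are made-up numbers chosen so the two paths differ by half a wavelength):

```python
import math

# Each slit contributes a complex amplitude at the detector; the
# probability of a count is the squared magnitude of the SUM of amplitudes.
def slit_amplitude(path_length, wavelength=1.0):
    """Unit-magnitude amplitude whose phase is set by the path length."""
    phase = 2 * math.pi * path_length / wavelength
    return complex(math.cos(phase), math.sin(phase))

# One slit open: probability of a count at this detector position
a1 = slit_amplitude(10.0)
p_one_slit = abs(a1) ** 2

# Both slits open, paths differing by half a wavelength:
# the amplitudes cancel, and opening a second slit REDUCES the counts.
a2 = slit_amplitude(10.5)
p_two_slits = abs(a1 + a2) ** 2

print(f"one slit: {p_one_slit:.3f}, two slits: {p_two_slits:.3f}")
```

Classically, adding a path could only add probability; the cancellation is the “negative probability” fingerprint the paper looks for.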
The boundary between simplicity and chaos is much narrower than we’d usually like to believe. Teaching first-year physics provides many opportunities to discuss basic situations that are completely “solved,” in the sense that simple, deterministic laws allow us, at least in idealized cases, to perfectly and unequivocally describe all of the dynamics. For example, a simple pendulum, or a planet orbiting its sun. But change the situation just a tiny bit, and all bets are off. For example, the simple pendulum with completely defined motion becomes chaotic when a few repelling magnets are added:
Or a single positive charge in the field of three negative charges:
What makes this so confounding is that there is no element of “chance” in the strict sense. Everything is still bound by the same deterministic laws. But a tiny change in the initial conditions is enough to create huge, unpredictable changes in the outcome. According to Laplace, this is the only kind of chance there is. That is, if someone rolling a pair of dice could know all of the initial conditions perfectly and calculate the equations of motion rapidly in his head, he would always know what number would turn up on each throw. But since even the tiniest effects are enough to tip the dice from one “final equilibrium” to another, this is impossible in practice.
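You can watch this sensitivity happen in a few lines of code. The sketch below uses the logistic map – a standard chaotic toy model, not the magnetic pendulum itself – and follows two trajectories whose starting points differ by one part in ten billion:

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x), fully chaotic at r = 4.
def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)   # a perturbation far below any measurable precision

# The gap between the two deterministic trajectories grows exponentially
# until it is as large as the system itself.
gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gap[0]:.1e}, largest gap over 50 steps: {max(gap):.3f}")
```

Both runs obey the exact same deterministic rule; only the tenth decimal place of the start differs, yet after a few dozen steps the trajectories bear no resemblance to each other.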
This fractal represents the application of Newton’s method for finding the solutions to simple algebraic equations. If more than one solution exists, the method will settle on one of them, depending on where you start looking. You can color the basins of attraction according to the solution you will eventually land on from each starting point. Amazingly, the boundary between regions is infinitely complex, meaning you can zoom in as much as you want and still find all four solutions sitting adjacent to each other at every interface.
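The recipe behind such a picture is short. A sketch, assuming the equation is z⁴ = 1 (a common choice with four roots; the actual equation behind any given fractal image may differ):

```python
# Which root of z^4 = 1 does Newton's method land on from a given start?
# Coloring the complex plane by the answer produces the basin fractal.
def newton_root(z, steps=50):
    roots = [1, -1, 1j, -1j]
    for _ in range(steps):
        if z == 0:                         # derivative vanishes; step undefined
            return None
        z = z - (z**4 - 1) / (4 * z**3)    # Newton step for f(z) = z^4 - 1
    # Report whichever of the four roots the iteration ended up nearest.
    return min(roots, key=lambda r: abs(z - r))

# Different starting points settle on different roots:
print(newton_root(2 + 0j))   # 1
print(newton_root(-2 + 0j))  # -1
print(newton_root(2j))       # 1j
```

Sampling a fine grid of starting points through `newton_root` and assigning one color per root is all it takes; zooming in near a basin boundary keeps revealing all four colors interleaved.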
Maybe this explains all of the “soft sciences,” like psychology and economics. Once the system becomes large enough, simple rules give rise to behavior so complex that we cannot rely on formulas as in physics.
Science is facts; just as houses are made of stones, so is science made of facts; but a pile of stones is not a house and a collection of facts is not necessarily science. – Henri Poincaré
The test of all knowledge is experiment. Experiment is the sole judge of scientific “truth”. – Richard Feynman
All models are wrong, but some are useful.
– George E. P. Box
One of the most controversial claims of sports statistics is the hot hand “fallacy.” Like the gambler’s fallacy, apparent hot or cold streaks are perfectly explainable by random chance and our overactive pattern-detection sense. New research seems to show that the scoring of teams in college football, pro football, hockey, and basketball follows a Poisson distribution. Among the fundamental assumptions of this distribution is that there is some expected number of occurrences per time period, but that each occurrence is independent of any other. That is, scoring a goal now doesn’t make another any more or less likely in the next 5 minutes.
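That independence assumption is easy to check in simulation. A minimal sketch (the scoring rate, window sizes, and one-goal-per-minute simplification are made-up modeling choices, not taken from the research):

```python
import random

random.seed(42)

# Model a game as 60 one-minute slots; each minute a goal occurs with a
# small fixed probability, independent of everything that came before.
def simulate_game(rate_per_game=5.5, minutes=60):
    p = rate_per_game / minutes
    return [1 if random.random() < p else 0 for _ in range(minutes)]

games = [simulate_game() for _ in range(20_000)]

# Independence check: expected goals in the NEXT 5 minutes, conditioned
# on whether a goal was scored in the minute just finished.
after_goal, after_quiet = [], []
for g in games:
    for t in range(1, 55):
        window = sum(g[t:t + 5])
        (after_goal if g[t - 1] else after_quiet).append(window)

print(f"next 5 min after a goal:  {sum(after_goal) / len(after_goal):.3f}")
print(f"next 5 min after no goal: {sum(after_quiet) / len(after_quiet):.3f}")
```

Under the Poisson assumptions the two averages come out the same – scoring just now neither heats up nor cools down the next five minutes – which is exactly the baseline the hot-hand claim has to beat.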
However, there is a difference depending on the lead. The article says that “While hockey and football teams tend to extend their leads, pro basketball squads play worse when they’re ahead.” So getting an early lead is very important in hockey and football (or drinking games with Janx spirits), since there is an “unstable equilibrium” that exhibits the Matthew effect. Teams that are behind have to take more chances (higher-variance strategies) that are more likely to backfire. Specifically, football teams that are trailing need to pass more, and hockey teams send more defensemen into attacking mode. Conversely, hockey teams leading in the third period usually take many fewer shots on goal. Basketball seems to be an interesting exception to this rule and instead has a “restoring force” that makes it more likely for the trailing team to close the gap.