At a somewhat deeper stratum, Kelvin’s blunder probably stemmed from a well-recognized psychological trait: The more committed we are to a certain opinion, the less likely we are to relinquish it, even if confronted with massive contradictory evidence. (Does the phrase “weapons of mass destruction” ring a bell?)
The theory of cognitive dissonance, originally developed by psychologist Leon Festinger, deals precisely with those feelings of discomfort that people experience when presented with information that is inconsistent with their beliefs. Multiple studies show that to relieve cognitive dissonance, in many cases, instead of acknowledging an error in judgment, people tend to reformulate their views in a new way that justifies their old opinions.
The messianic stream within the Jewish Hasidic movement known as Chabad provided an excellent, if esoteric, example of this process of reorientation. The belief that the Chabad leader Rabbi Menachem Mendel Schneerson was the Jewish Messiah picked up momentum during the decade preceding the rabbi’s death in 1994. After the rabbi suffered a stroke in 1992, many faithful followers in the Chabad movement were convinced that he would not die but would “emerge” as the Messiah. Faced with the shock of his eventual death, however, dozens of these followers changed their attitudes and argued (even during the funeral) that his death was, in fact, a required part of the process of his returning as the Messiah.
An experiment conducted in 1955 by psychologist Jack Brehm, then at the University of Minnesota, demonstrated a different manifestation of cognitive dissonance. In that study, 225 female sophomore students (the classic subjects of experiments in psychology) were first asked to rate eight manufactured articles as to their desirability on a scale of 1.0 (“not at all desirable”) to 8.0 (“extremely desirable”). In the second stage, the students were allowed to choose as a take-home gift one of two articles presented to them from the original eight. A second round of rating all eight items then followed. The study showed that in the second round, the students tended to increase their ratings for the article they had chosen and to lower them for the rejected item. These and other similar findings support the idea that our minds attempt to reduce the dissonance between the cognition “I chose item number three” and the cognition “But item number seven also has some attractive features.” Put differently, things seem better after we choose them, a conclusion corroborated further by neuroimaging studies that show enhanced activity in the caudate nucleus, a region of the brain implicated in “feeling good.”
Kelvin’s case appears to fit the cognitive dissonance theory like a glove. After having repeated the arguments about the age of the Earth for more than three decades, Kelvin was not likely to change his opinion just because someone suggested the possibility of convection. Note that Perry was not able to prove that convection was taking place, nor was he even able to show that convection was probable. By the time radioactivity appeared on the scene another decade later, Kelvin was probably even less inclined to publish a concession of defeat. Instead, he preferred to engage in an elaborate scheme of experiments and explanations intended to demonstrate that his old estimates still held true.
Why is it so difficult to let go of opinions, even in the face of contradictory evidence that any independent observer would regard as convincing? The answer can perhaps be found in the way the reward circuitry of the brain operates.
As early as the 1950s, researchers James Olds and Peter Milner of McGill University identified pleasure centers in the brains of rats. Rats were found to press the lever that activated the electrodes placed at these pleasure-inducing locations more than six thousand times per hour! The potency of this pleasure-producing stimulation was illustrated dramatically in the mid-1960s, when experiments showed that when forced to choose between obtaining food and water or the rewarding pleasure stimulation, rats suffered self-imposed starvation.
Neuroscientists of the past two decades have developed sophisticated imaging techniques that allow them to see in detail which parts of the human brain light up in response to pleasing tastes, music, sex, or winning at gambling. The most commonly used techniques are positron-emission tomography (PET) scans, in which radioactive tracers are injected and then followed in the brain, and functional MRI (fMRI), which monitors the flow of blood to active neurons.
Studies showed that an important part of the reward circuitry is a collection of nerve cells that originate near the base of the brain (in an area known as the ventral tegmental area, or VTA) and connect to the nucleus accumbens—an area beneath the frontal cortex. The VTA neurons communicate with the nucleus accumbens neurons by dispatching a particular chemical neurotransmitter called dopamine. Other brain areas supply the emotional content, relate the experience to memories, and trigger the appropriate responses. The hippocampus, for instance, effectively “takes notes,” while the amygdala “grades” the pleasure involved.
So how does all of this relate to intellectual endeavors? To embark on, and persist in, some relatively long-term thought process, the brain needs at least some promise of pleasure along the way. Whether it is the Nobel Prize, the envy of neighbors, a salary raise, or the mere satisfaction of completing a Sudoku puzzle labeled “evil,” the nucleus accumbens of our brain needs some dose of reward to keep going. However, if the brain derives frequent rewards over an extended period of time, then just as in the case of those self-starving rats, or of people who are addicted to drugs, the neural pathways connecting the mental activity to the feeling of accomplishment gradually adapt. Drug addicts need more of the drug to get the same effect. For intellectual activities, this may result in an enhanced need to be right all the time and, concomitantly, in increasing difficulty admitting errors.
Neuroscientist and author Robert Burton suggested specifically that the insistence upon being right might have physiological similarities to other addictions. If true, then Kelvin would no doubt match the profile of an addict to the sensation of being certain. Almost a half century of what he surely regarded as victorious battles with the geologists would have strengthened his convictions to the point where those neural links could not be dissolved. Irrespective, however, of whether the sensation of being certain is addictive or not, fMRI studies have shown that what is known as motivated reasoning—when the brain converges on judgments that maximize positive affect states associated with attaining motives—is not associated with neural activity linked to cold reasoning tasks. In other words, motivated reasoning is regulated by emotions, not by dispassionate analysis, and its goal is to minimize threats to the self. It is not inconceivable that late in life, Kelvin’s “emotional mind” occasionally swamped his “rational mind.”
You may recall that earlier I referred to Kelvin’s calculation of the age of the Sun. I do not consider his estimate to be a blunder. How is that possible? After all, his estimate of less than one hundred million years was wrong by as much as his value for the age of the Earth.
Fusion
In an article on the age of the Earth written in 1893, three years before the discovery of radioactivity, the American geologist Clarence King wrote, “The concordance of results between the ages of the sun and earth certainly strengthens the physical case and throws the burden of proof upon those who hold to the vaguely vast age derived from sedimentary geology.” King’s point was well taken. As long as the age of the Sun was estimated to be only a few tens of millions of years, any age estimates based on sedimentation would have been constrained, since for sedimentation to occur, the Earth had to be warmed by the Sun.
Recall that Kelvin’s calculation of the age of the Sun relied entirely on the release of gravitational energy in the form of heat as the Sun contracts. This idea—that gravitational energy could be the source of the Sun’s power—originated with the Scottish physicist John James Waterston as early as 1845. Ignored initially, the hypothesis was revived by Hermann von Helmholtz in 1854, and then enthusiastically endorsed and popularized by Kelvin. With the discovery of radioactivity, many assumed that the radioactive release of heat would turn out to be the real source of the Sun’s power. This, however, proved to be incorrect. Even under the wild assumption that the Sun is composed largely of uranium and its radioactive decay products, the power generated would not have matched the observed solar luminosity (barring chain reactions, which were unknown in Kelvin’s time). Kelvin’s estimate of the age of the Sun had served to strengthen his objection to revising his calculation of the age of the Earth—as long as the problem of the age of the Sun persisted, the discrepancy with the geological guesstimates could not be resolved fully. The answer to the question of the Sun’s age came only a few decades later.
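For readers who want to see where numbers like Kelvin’s come from, here is a rough, order-of-magnitude sketch of the gravitational-contraction argument. The numerical values are modern measurements of the Sun’s mass, radius, and luminosity (not figures from the text), and coefficients of order unity that depend on the Sun’s internal structure are ignored:

```latex
% Kelvin-Helmholtz (gravitational-contraction) timescale, order of magnitude:
% available gravitational energy ~ G*M^2/R, divided by the luminosity L
% at which the Sun radiates it away (modern values, SI units)
\[
t_{\mathrm{KH}} \sim \frac{G M_\odot^{2}}{R_\odot L_\odot}
   \approx \frac{(6.7\times 10^{-11})\,(2\times 10^{30})^{2}}
                {(7\times 10^{8})\,(3.8\times 10^{26})}\;\mathrm{s}
   \approx 10^{15}\;\mathrm{s}
   \approx 30 \text{ million years.}
\]
```

As long as gravitational contraction was the only conceivable power source, this arithmetic simply left no room for the billions of years that the geologists, and Darwin, required.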
In August 1920, astrophysicist Arthur Eddington suggested that the fusion of hydrogen nuclei to form helium might provide the energy source of the Sun. Building on this concept, physicists Hans Bethe and Carl Friedrich von Weizsäcker analyzed a variety of nuclear reactions to explore the viability of this hypothesis. Finally, in the 1940s, astrophysicist Fred Hoyle (whose groundbreaking work we shall investigate in chapter 8) proposed that fusion reactions in stellar cores could synthesize the nuclei between carbon and iron. As I noted in the previous chapter, Kelvin was therefore right when he declared in 1862: “As for the future, we may say, with equal certainty, that inhabitants of the earth can not continue to enjoy the light and heat [of the Sun] essential to their life for many million years longer unless sources now unknown to us are prepared in the great storehouse of creation [emphasis added].” The solution to the problem of the age of the Sun required no less than the combined genius of Einstein, who showed that mass could be converted into energy, and the leading astrophysicists of the twentieth century, who identified the nuclear fusion reactions that could lead to such a conversion.
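A similarly rough estimate shows why fusion changes the answer. Fusing four hydrogen nuclei into one helium nucleus converts about 0.7 percent of their rest mass into energy via E = mc². Assuming, as is conventional, that only about a tenth of the Sun’s mass (its core) ever fuses—an assumption supplied here, not stated in the text—the Sun can shine at its present luminosity for roughly ten billion years:

```latex
% Nuclear (fusion) lifetime of the Sun, order of magnitude:
% mass defect of hydrogen-to-helium fusion ~ 0.7% of the rest mass,
% with only ~10% of the Sun's mass (the core) assumed to fuse
\[
t_{\mathrm{nuc}} \approx \frac{0.007 \times 0.1\, M_\odot c^{2}}{L_\odot}
   \approx \frac{0.007 \times 0.1 \times (2\times 10^{30})\,(3\times 10^{8})^{2}}
                {3.8\times 10^{26}}\;\mathrm{s}
   \approx 3\times 10^{17}\;\mathrm{s}
   \approx 10 \text{ billion years.}
\]
```

That is comfortably longer than the 4.5 billion years the geologists, and later the radiometric daters, demanded.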
In spite of the fact that Kelvin’s calculation of the age of the Earth was a blunder, I continue to regard it as absolutely brilliant. Kelvin had completely transformed geochronology from vague speculation into an actual science, based on the laws of physics. His pioneering work opened a vital dialogue between geologists and physicists—an exchange that continued until the discrepancy was resolved. At the same time, Kelvin’s parallel work on the age of the Sun pointed clearly to the need to identify new sources of energy.
Charles Darwin himself was very aware of the importance of eliminating the obstacle to his theory presented by Kelvin’s calculations. In his final revision of The Origin, Darwin wrote:

With respect to the lapse of time not having been sufficient since our planet was consolidated for the assumed amount of organic change, and this objection, as urged by Sir William Thomson [Kelvin], is probably one of the gravest as yet advanced, I can only say, firstly, that we do not know at what rate species change as measured by years, and secondly, that many philosophers are not as yet willing to admit that we know enough of the constitution of the universe and of the interior of our globe to speculate with safety on its past duration.
Darwin did not live to see how Perry’s idea of a convective Earth, the discovery of radioactivity, and the understanding of nuclear fusion reactions in stellar interiors swept away all of Kelvin’s age limits. The fact remains, however, that it was Kelvin’s calculation—erroneous though it was—that identified the problem that had to be solved.
From our perspective as humans, one of the key benefits of the Earth having enjoyed 4.5 billion years of energy from the Sun has been the emergence of complex life on Earth. But the building blocks of all life-forms are cells, and by the 1880s, scientists using ever-improving optics to examine the internal structure of cells coined the term “chromosome” for the stringy bodies found in the cell’s nucleus. Soon thereafter, Mendel’s work on genes (“factors,” as he called them) was rediscovered, and pioneering work by Thomas Hunt Morgan and his students at Columbia University allowed for mapping out the positions of genes along chromosomes. In 1944, a particular molecule located on chromosomes—DNA—started to gain attention. Before long, biologists realized that all cells receive their instructions not from proteins but from two nucleic acid molecules, DNA and RNA. Biologists identified the DNA molecules as the bosses of all the frenzied activity in cells and as the molecules that know how to make identical copies of themselves. RNA (ribonucleic acid) molecules were shown to be in charge of transmitting the instructions issued by DNA molecules to the rest of the cell. Together, these molecules contain all the information needed to make an apple tree, a snake, a woman, or a man function. The discoveries of the molecular structures of proteins and of DNA are two of the most fascinating stories in the search for the origin and workings of life. Yet these discoveries, too, involved major blunders.
INTERPRETER OF LIFE