What a smart brain looks like
If you could peer inside your baby’s brain, would there be clues to her future intellectual greatness? What does intelligence look like in the twists and folds of the brain’s convoluted architecture? One obvious, if ghoulish, way to answer these questions is to look at the brains of smart people after they have died and seek clues to intelligence in their neural architecture. Scientists have done this with a variety of famous brains, from the mathematical German Carl Gauss to the not-so-mathematical Russian Vladimir Lenin. They’ve studied Albert Einstein’s brain, too, with surprising results.
 
Just your average genius
Einstein died in New Jersey in 1955. His autopsy was performed by Thomas Stoltz Harvey, who must go down as the most possessive pathologist in history. He excised the famous physicist’s brain and photographed it from many angles. Then he chopped the brain into tiny blocks. Then he got into trouble. Harvey apparently did not secure permission from Einstein or his family to pixellate the physicist’s famous brain. Princeton Hospital administrators demanded that Harvey surrender Einstein’s brain. Harvey refused, forfeited his job, fled to Kansas, and held the preserved samples for more than 20 years.
They were not rediscovered until 1978, when journalist Steven Levy tracked down Harvey. Einstein’s cerebral bits were still available, floating in large mason jars filled with alcohol. Levy persuaded Harvey to give them up. Other scientists began to study them in detail for clues that would reveal Einstein’s genius.
What did they discover? The most surprising finding was that there was nothing surprising. Einstein had a fairly average brain. The organ had a standard internal architecture, with a few structural anomalies. The regions responsible for visuospatial cognition and math processing were a bit larger (15 percent fatter than average). He was also missing some sections that less agile brains possess, and he carried a few more glial cells than most people do (glial cells help give the brain its structure and assist with information processing). None of these results are very instructive, unfortunately. Most brains possess structural abnormalities, some regions more shrunken than others, some more swollen. Because of this individuality, it is currently impossible to demonstrate that certain physical differences in brain structure lead to genius. Einstein’s brain certainly was smart, but not one of its dice-sized pieces would definitively tell us why.
What about looking at live, functioning brains? These days you don’t have to wait until someone is dead to determine structure-function relationships. You can use noninvasive imaging technologies to look in on the brain while it is performing some task. Can we detect smartness in people by observing an organ caught in the act of being itself? The answer, once again, is no. Or at least not yet. When you examine living geniuses solving some tough problem, you do not find reassuring similarities. You find disconcerting individualities. Problem solving and sensory processing do not look the same in any two brains. This has led to great confusion and contradictory findings. Some studies purport to show that “smart” people have more efficient brains (they use less energy to solve tough problems), but other researchers have found exactly the opposite. Gray matter is thicker in some smart people, white matter thicker in others. Scientists have found 14 separate regions responsible for various aspects of human intelligence, sprinkled throughout the brain like cognitive pixie dust. These magical regions are nestled into an idea called P-FIT, short for Parieto-Frontal Integration Theory. When P-FIT regions are examined as people think deep thoughts, researchers again find frustrating results: Different people use varying combinations of these regions to solve complex problems. These combinations probably explain the wide variety of intellectual abilities we can observe in people. Overarching patterns are few.
We have even less information about a baby’s intelligence. It is very difficult to do noninvasive imaging experiments with the diaper-and-pull-up crowd. To do a functional MRI, for example, the head needs to stay perfectly still for long stretches of time. Good luck trying to do that with a wiggly 6-month-old! Even if you could, given our current understanding, brain architecture cannot successfully predict whether or not your child is going to be smart.
 
In search of a ‘smart gene’
How about at the level of DNA? Have researchers uncovered a “smart gene”? A lot of people are looking. Variants of one famous gene, called COMT (catechol-O-methyltransferase, since you asked), appear to be associated with higher short-term-memory scores in some people, though not in others. Another gene, cathepsin D, was also linked to high intelligence. So was a variant of a dopamine receptor gene, from a family of genes usually involved in feeling pleasure. The problem with most of these findings is that they have been difficult to replicate. Even when they have been successfully confirmed, the presence of the variant usually accounted for a boost of only 3 or 4 IQ points. To date, no intelligence gene has been isolated. Given the complexity of intelligence, I highly doubt there is one.
 
Bingo: A baby IQ test
If cells and genes aren’t any help, what about behaviors? Here, researchers have struck gold. We now have in hand a series of tests for infants that can predict their IQs as adults. In one test, preverbal infants are allowed to feel an object hidden from their view (it’s in a box). If the infants can then correctly identify the object by sight—called cross-modal transfer—they will score higher on later IQ tests than infants who can’t. In another test, measuring something researchers call visual recognition memory, infants are set in front of a checkerboard square. This is an oversimplification, but the longer they stare, the higher their IQ is likely to be. Sound unlikely? These measurements, taken between 2 and 8 months of age, correctly predicted IQ scores at age 18!
What does that really mean? For one thing, it means that when these children reach school age, they will do well on an IQ test.
The intelligence of IQ
IQ matters a lot to some people, such as the admissions officers of elite private kindergartens and elementary schools. They often demand that children take intelligence tests; the WISC-IV, short for Wechsler Intelligence Scale for Children, Fourth Edition, is common. Many schools accept only those kids who score in the ridiculously high 97th percentile. These $500 tests are sometimes administered to 6-year-olds, or even younger kids, serving as an entrance exam to kindergarten! Here are two typical questions on IQ tests:
1. Which one of the five is least like the other four? Cow, tiger, snake, bear, dog.
Did you say snake? Congratulations. The testers who designed the question agree with you (all the other animals have legs; all the others are mammals).
2. Take 1,000 and add 40 to it. Now add another 1,000. Now add 30. And another 1,000. Now add 20. Now add another 1,000. Now add 10. What is the total?
Did you say 5,000? If so, you’re in good company. Research shows that 98 percent of people who tackle this question get that answer. But it is wrong. The correct answer is 4,100.
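Tallied step by step, the running total goes 1,000; then 1,040; 2,040; 2,070; 3,070; 3,090; 4,090; and finally 4,100. The round thousands lure you into carrying that last small addition up to 5,000.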
IQ tests are filled with questions like these. If you get them right, does that mean you are smart? Maybe. But maybe not. Some researchers believe IQ tests measure nothing more than your ability to take IQ tests. The fact is, researchers don’t agree on what an IQ test measures. Given the range of intellectual abilities that exist, it is probably smart to reject a one-number-fits-all notion as the final word on your baby’s brain power. Armed with a little history on these inventories, you can decide for yourself.
 
The birth of the IQ test
Many sharp folks have investigated the definition of human intelligence, often in an attempt to figure out their own unique gifts. One of the first was Francis Galton (1822-1911), half cousin to Charles Darwin. Possessed of enormous and fashionable pork-chop sideburns but otherwise balding, Sir Francis was stern, brilliant, and probably a little crazy. He came from a famous line of pacifist Quakers whose family business was, oddly enough, the manufacture of guns. Galton was a prodigy, reading and quoting Shakespeare by the time he was 6, speaking both Greek and Latin at an early age. He seemed to be interested in everything, as an adult making contributions to meteorology, psychology, photography, and even criminal justice (he advocated for the scientific analysis of fingerprints to identify criminals). Along the way, he invented the statistical concepts of standard deviation and linear regression, and he used them to study human behavior.
One of his chief fascinations concerned the engines that power human intellect—especially inheritance. Galton was the first to realize that intelligence both had heritable characteristics and was powerfully influenced by environment. He’s the one who coined the phrase “nature versus nurture”. Because of these insights, Galton is probably the man most responsible for inspiring scientists to consider the definable roots of human intelligence. But as researchers began to investigate the matter systematically, they developed a curious compulsion to describe human intelligence with a single number. Tests were used—and are still used today—to yield such numbers. The first one is our oft-mentioned IQ test, short for intelligence quotient.
IQ tests were originally designed by a group of French psychologists, among them Alfred Binet, innocently attempting to identify mentally challenged children who needed help in school. The group devised 30 tasks that ranged from touching one’s nose to drawing patterns from memory. The design of these tests had very little empirical support in the real world, and Binet consistently warned against interpreting these tests literally. He felt presciently that intelligence was quite plastic and that his tests had real margins of error. But German psychologist William Stern began using the tests to measure children’s intelligence, quantifying the scores with the term “IQ.” The score was the ratio of a child’s mental age to his or her chronological age, multiplied by 100. So, a 10-year-old who could solve problems normally solved only by 15-year-olds had an IQ of 150: (15/10) x 100. The tests became very popular in Europe, then floated across the Atlantic.
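Stern’s ratio is simple enough to sketch in a couple of lines of code (a minimal illustration only, not anything the early testers actually ran; the function name and the sample ages are invented here):

    def ratio_iq(mental_age, chronological_age):
        # Stern's ratio IQ: mental age divided by chronological age, times 100.
        return (mental_age / chronological_age) * 100

    print(ratio_iq(15, 10))  # a 10-year-old performing like a 15-year-old: 150.0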
In 1916, Stanford professor Lewis Terman removed some of the questions and added new ones—also without many empirical reasons to do so. The configuration has been christened the Stanford-Binet test ever since. Eventually, the ratio was changed to a number distributed along a bell curve, setting the average to 100. A second test, developed in 1923 by British Army officer-turned-psychologist Charles Spearman, measured what he called “general cognition,” now simply referred to as “g.” Spearman observed that people who scored above average on one subcategory of pencil-and-paper tests tended to do well on the rest of them. This test measures the tendency of performance on a large number of cognitive tasks to be intercorrelated.
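To make “intercorrelated” concrete, here is a small sketch along the same lines (the subtest names and all of the scores are invented purely for illustration): it computes the pairwise correlations among three hypothetical subtests, and the uniformly positive pattern across pairs is the tendency Spearman summarized as “g.”

    import numpy as np

    # Invented scores for five test-takers on three subtests.
    vocabulary = np.array([12, 15, 9, 14, 11])
    arithmetic = np.array([10, 16, 8, 13, 12])
    puzzles    = np.array([11, 14, 7, 15, 10])

    # Pairwise Pearson correlations among the subtests; in this toy data
    # every pair comes out positive, the "positive manifold" behind g.
    print(np.corrcoef(np.vstack([vocabulary, arithmetic, puzzles])))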
Battles have been raging for decades about what these test scores mean and how they should be used. That’s a good thing, because intelligence measures are more plastic than many people realize.
 
Gaining and losing a pound of IQ
I remember the first time I saw the actress Kirstie Alley on screen, playing a smart, sexy character in a Star Trek movie. A former cheerleader, Kirstie went on to star in a number of television shows, including her Emmy-winning role on the legendary sitcom Cheers. But she may be better known for her issues with weight. In 2005, Kirstie reportedly weighed more than 200 pounds, mostly because of poor eating habits. She became a spokesperson for a weight-loss program and at one point starred in a television show about an overweight actress attempting to get work in Hollywood. She eventually lost 75 pounds. Since then, however, her weight has continued to fluctuate.
What does this unstable number have to do with our discussion of intelligence? Like Kirstie’s dress size, IQ is malleable. IQ has been shown to vary over one’s life span, and it is surprisingly vulnerable to environmental influences. It can change if one is stressed, old, or living in a different culture from the testing majority. A child’s IQ is influenced by his or her family, too. Growing up in the same household tends to increase IQ similarities between siblings, for example. Poor people tend to have significantly lower IQs than rich people. And if you are below a certain income level, economic factors will have a much greater influence on your child’s IQ than if your child is middle class. A child born in poverty but adopted into a middle-class family will on average gain 12 to 18 points in IQ.
There are people who don’t want to believe IQ is so malleable. They think numbers like IQ and “g” are permanent, like a date of birth instead of a dress size. The media often cast our intellectual prowess in such permanent terms, and our own experience seems to agree. Some people are born smart, like Theodore Roosevelt, and some people are not. The assumption is reassuringly simplistic. But intelligence isn’t simple, nor is our ability to measure it.
 
Smarter with the years
One damning piece of evidence is the fact that somehow IQs have been increasing for decades. From 1947 to 2002, the collective IQ of American kids went up 18 points. James Flynn, a crusty, wild-haired old philosopher from New Zealand, discovered this phenomenon (a controversial finding cheerfully christened the “Flynn Effect”). He set up the following thought experiment. He took the average American IQ of 100, then ran the numbers backward from the year 2009 at the observed rate. He found that the average IQ of Americans in 1900 would have been between 50 and 70. That range is typical of people with Down syndrome, a classification termed “mild mental retardation.” Yet most of our citizens at the turn of the century did not have Down syndrome. So is there something wrong with the people, or something wrong with the metric? Clearly, the notion of IQ permanence needs some retooling.
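A rough back-of-the-envelope version of Flynn’s extrapolation might look like this (a sketch only; the per-year rate is simply the 18-point gain over 55 years carried backward, which may not be the exact rate Flynn used):

    # Back out a 1900 average from today's 100, at the observed rate of gain.
    rate_per_year = 18 / (2002 - 1947)         # about 0.33 IQ points per year
    estimate_1900 = 100 - rate_per_year * (2009 - 1900)
    print(round(estimate_1900))                # roughly 64, inside the 50-70 range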
