With so much money supporting digital research, and so many students going online hourly, the faith in young people’s digital intelligence reaches well into the academic sphere, and the academic press has made the students’ digital mentality into a topic of the day. Here’s a paragraph from “The Net Generation in the Classroom,” a 2005 story in the Chronicle of Higher Education that explores the prospect of aligning college teaching with the entering classes’ e-mindset.
 
 
Born between roughly 1980 and 1994, the Millennials have already been pegged and defined by academics, trend spotters, and futurists: They are smart but impatient. They expect results immediately. They carry an arsenal of electronic devices—the more portable the better. Raised amid a barrage of information, they are able to juggle a conversation on Instant Messenger, a Web-surfing session, and an iTunes playlist while reading Twelfth Night for homework.
 
 
The author, Scott Carlson, adds a skeptical note at the end—“Whether or not they are absorbing the fine points of the play is a matter of debate”—but the main voice in the article, librarian and higher-ed consultant Richard T. Sweeney, has no doubts. Millennials “want to learn,” he observes, “but they want to learn only what they have to learn, and they want to learn it in a different style that is best for them.” The approach grants remarkable discretion to the kids, assuming that they know what “is best,” and that their preferences mark a distinct “learning style.” Another voice in the article, educator and futurist Marc Prensky, likewise insists that young people have turned learning itself into a whole different process. In 2004, Prensky stated in a paper entitled “The Death of Command and Control” that “The unprecedented changes in technology . . . have led to new patterns of thinking, especially in young people” (emphasis in original). Here in the Chronicle piece, in casual tones that belie the grave transformation at work, he wonders, “The things that have traditionally been done—you know, reflection and thinking and all that stuff—are in some ways too slow for the future. . . . Is there a way to do those things faster?”
 
 
The fervor spills freely in public discourse as well. When Steven Johnson published Everything Bad Is Good for You: How Today’s Popular Culture Is Actually Making Us Smarter in early 2005, it sparked across-the-country newspaper reviews, radio spots, and television appearances, including one on The Daily Show. The appeal was overt, and as Johnson observes on his Web site, “The title says it all.” As a specimen of contrary mischievousness, an anti-anti-pop culture polemic, the book offered a crisp and witty argument for the good of the mouse and the console, and also for the helpful tutelage of twenty-first-century sitcoms and next-version video games. Everyone with a stake in the rise of screen diversions, or who treasured the virtual universe, seized upon the thesis, and Johnson unveiled it with élan. To turn a commonsense notion on its head—“TV is good, not bad”—was a winning tactic, and Johnson added an image of himself as defender of a pop culture world disdained by widespread powers as vulgar, violent, and infantile. While Joe Lieberman railed against the “amoral pop-culture threat to public values,” as Time magazine put it in 2000 (see Poniewozik), and Hillary Clinton fretted “about the content [of video games] and what that’s doing to my child’s emotional psychological development,” as she stated at the Kaiser Foundation’s release of the report Generation M, Everything Bad Is Good for You turned it all around. In the last 30 years, Johnson insisted, popular culture has become “more complex and intellectually challenging . . . demanding more cognitive engagement . . . making our minds sharper.” Often the content of popular culture is coarse and inane, he conceded, but the formal elements—rules, plotlines, feedback, levels, interactivity—have grown more sophisticated, making today’s games and reality shows into “a kind of cognitive workout” that hones mental gifts. Screen diversions provide something more lasting and effectual than “healthy messages.” They inculcate “intellectual or cognitive virtues,” aptitudes for spatialization, pattern recognition, and problem solving, virtues that reflect twenty-first-century demands better, in fact, than do traditional knowledge and reading skills.
 
 
Once again, the thinking element prevails, and screens are praised for the way they shape the consciousness of users, not for the ideas and values they pass along to them. The case for cognitive benefits begins with a fundamental feature of games: “far more than books or movies or music,” Johnson asserts, “games force you to make decisions.” True, books involve judgment, but they don’t allow readers to determine the next chapter or decide a character’s fate. Game realities, by contrast, let them steer the car, invest money, and grab weapons. The content is juvenile, yes, but “because learning how to think is ultimately about learning how to make the right decisions,” game activity evokes a collateral learning that carries over to users’ real lives. As Malcolm Gladwell noted in his fawning review in The New Yorker, “When you play a video game, the value is in how it makes you think.” The images may be flashy and jumbled, the action silly and inconsequential, but when you play the game, Johnson explains, “It’s not about tolerating or aestheticizing chaos; it’s about finding order and meaning in the world, and making decisions that help create that order.”
 
 
The advantages continue with the progressive complexity of television shows. Hill Street Blues introduced multiple plotlines and character tensions, Johnson recalls. The Simpsons and Seinfeld abound with allusions and double entendres, forcing viewers to work harder to “fill in” the context in which they make sense. The pace of their delivery makes the old comedies, Andy Griffith et al., seem like slow motion. Reality shows are crass, but they enact “elaborately staged group psychology experiments.” Johnson alludes to The Apprentice and says that, compared with The Price Is Right, it’s “an intellectual masterpiece.”
 
 
All of them signal an evolution in programming from the linear plots and dull patter of Good Times and Starsky and Hutch, and the more clever and modish shows activate the minds of those who watch them. When young viewers catch reruns of shows from the sixties, they chuckle at the low-tech action and fidget at the under-stimulating imagery and camera work. Pop culture hasn’t plunged downward into puerile deviancy and artificial violence and general stupidity, Johnson concludes. It has fostered “a race to the top,” and the moralists and doomsayers and elitists with their sky-is-falling pronouncements should cease. Audiences are smarter.
 
 
The claim is bold, and it ultimately rests not upon the structural elements of screen materials, but upon the cognitive outcomes for those who consume them. If we stick to the latter, several objections to Johnson’s breezy applause stand out. For instance, buried in the depths of the Kaiser report Generation M is a startling finding about differing media use and student achievement. It shows that leisure reading of any kind correlates more closely with a student’s grades than use of any other medium does. While eight- to 18-year-olds with high and low grades differed by only one minute in TV time (186 to 187 minutes), they differed in reading time by 17 minutes, 46 to 29—a huge discrepancy in relative terms (a 36 percent drop in leisure reading for kids with low grades), one that suggests that TV doesn’t have nearly the intellectual consequences that reading does. Furthermore, on Johnson’s “multiple plotlines are more sophisticated” criterion, dramas that fail include not only Dragnet and Kung Fu but also Oedipus (Sophocles), Medea (Seneca), and Phèdre (Racine). The complexity he approves lies wholly on the surface—plotlines and verbal play—while other complexities (moral, psychological, and philosophical) go unremarked. Finally, while Johnson neatly divides form from content, the act of decision-making isn’t so distinct from the things decided upon. The content of screen substance—at its worst, juvenile loves and lusts, blood and guts, distortions of historical fact, petty clashes of reality contestants—is more important than he thinks.
 
 
In May 2007, another study appeared showing long-term outcomes of television viewing, one that relied on years of observation by trained researchers. Published as “Extensive Television Viewing and the Development of Attention and Learning Difficulties During Adolescence” in Archives of Pediatrics & Adolescent Medicine, the research tracked children in 678 families in upstate New York at ages 14, 16, and 22 years. The article abstract summarizes:
 
 
Frequent television viewing during adolescence was associated with elevated risk for subsequent attention and learning difficulties after family characteristics and prior cognitive difficulties were controlled. Youths who watched 1 or more hours of television per day at mean age 14 years were at elevated risk for poor homework completion, negative attitudes toward school, poor grades, and long-term academic failure. Youths who watched 3 or more hours of television per day were the most likely to experience these outcomes. In addition, youths who watched 3 or more hours of television per day were at elevated risk for subsequent attention problems and were the least likely to receive postsecondary education. (Johnson et al.)
 
 
Contrast that bleak assessment derived over several years with the evidence-lite enthusiasm of Johnson and other pop culture fans.
 
 
But while Johnson cites few of the knowledge/skill/habit surveys we’ve seen so far, he does point to one significant intellectual trend: rising average intelligence. In the general fund of mental talent, something remarkable has happened. IQ scores have risen markedly over the century, about three points per decade since before World War II, doing so, Johnson observes, at the same time that popular culture has expanded and evolved. It’s the so-called Flynn Effect, named after New Zealand political scientist James Flynn, who noted the rise (which is masked by the fact that the test is re-normed every few years). In the early 1980s, Flynn surveyed studies in which subjects took IQ tests dating from different years and disclosed the rising pattern. For instance, one experiment recorded a group scoring 103.8 on a 1978 test and leaping to 111.3 on a 1953 version of the same test. Successive tests aimed to keep the average at 100 points, but to do so they had to become harder.
 
 
The question is, Why has intelligence jumped? Cognitive psychologists explain the gain variously, noting improved nutrition, better early schooling, smaller families, and wider acquaintance with tests themselves. With the largest increases occurring in the area of “spatial reasoning,” however, some researchers attribute them to the escalating cognitive demands of an increasingly visual environment. Johnson agrees, and gives screen diversions most of the credit. With the proliferation of screens, people have grown up in a more visually challenging habitat. Brains have developed with more complex spatial stimuli than before, and teens experienced with screen technology now handle those IQ questions showing a chart of numbers with one vacant space—fill in the space with the correct number or shape—more adroitly than teens without that experience.
 
 
IQ tests are controversial, of course, and questions of cultural bias, test-taking situations, and the fuzzy definition of intelligence have made them easy targets. Furthermore, research indicates that the bulk of the Flynn Effect has taken place in the lower percentiles, where gains have raised the average but left the upper tiers untouched. Geniuses yesterday are just as smart as geniuses today. Still, the overall advances in IQ scores signify too sweeping and consistent a change to downplay, and the idea that more visual stimuli in children’s lives should yield higher visual aptitudes makes intuitive sense. With IQ scores complementing it, and so many voices in academia, foundations, popular journalism, and the entertainment industries pushing the point, the connection of screen time to new learning styles, visual intelligence, media literacy, and other cognitive progressions seems sure. It is reasonable to think that media-wise youths have minds more attuned to visual information, that multitasking hours grant them a more mobile consciousness.
 
 
Maybe that’s the case, but if so, then Johnson and other votaries of the screen have something else to explain. It’s the question we ended with in the previous chapter. Why haven’t knowledge and skill levels followed the same path? If cognitive talents rise correspondingly with the proliferation of screens and the sophistication of shows and games, why hasn’t a generation of historically informed, civically active, verbally able, and mathematically talented young adults come forth and proven the cultural pessimists and aged curmudgeons wrong?
 
 
Surprisingly, the IQ issue suggests an answer, and it works against those who invoke it to promote the benefits of the screen. For, while incontestable in its data, the Flynn Effect has an extraordinary generational implication. Assuming that scores have indeed risen three points per decade, psychologist Ulric Neisser laid out the repercussions in a 1997 article in American Scientist:
