The Most Human Computer award in 2009 goes to David Levy—the same David Levy whose politically obsessed “Catherine” took the prize in 1997. Levy’s an intriguing guy: he was one of the big early figures in the computer chess scene of the 1980s, and was one of the organizers of the Marion Tinsley–Chinook checkers matches that preceded the Kasparov–Deep Blue showdown in the ’90s. He’s also the author of the recent nonfiction book Love and Sex with Robots, to give you an idea of the other sorts of things that are on his mind when he’s not competing for the Loebner Prize.
Levy stands up, to applause, accepts the award from Philip Jackson and Hugh Loebner, and makes a short speech about the importance of AI to a bright future, and the importance of the Loebner Prize to AI. I know what’s next on the agenda, and my stomach knots despite itself in the second of interstitial silence before Philip takes back the microphone. I’m certain that Doug’s gotten it; he and the Canadian judge were talking NHL from the third sentence in their conversation.
Ridiculous Canadians and their ice hockey, I’m thinking. Then I’m thinking how ridiculous it is that I’m even allowing myself to get this worked up about some silly award—granted, I flew all the way out here to compete for it. Then I’m thinking how ridiculous it is to fly five thousand miles just to have an hour’s worth of instant messaging conversation. Then I’m thinking how maybe it’ll be great to be the runner-up; I can obsessively scrutinize the transcripts in the book if I want and seem like an underdog, not a gloater. I can figure out what went wrong. I can come back next year, in Los Angeles, with the home-field cultural advantage, and finally show—
“And the results here show also the identification of the human that the judges rated ‘most human,’” Philip announces, “which as you can see was ‘Confederate 1,’ which was Brian Christian.”
And he hands me the Most Human Human award.
I didn’t know what to feel about it, exactly. It seemed strange to treat it as meaningless or trivial: I had, after all, prepared quite seriously, and that preparation had, I thought, paid off. And I found myself surprisingly invested in the outcome—how I did individually, yes, but also how the four of us did together. Clearly there was something to it all.
On the other hand, I felt equal discomfort regarding my new prize as significant—a true measure of me as a person—a thought that brought with it feelings of both pride (“Why, I am an excellent specimen, and it’s kind of you to say so!”) and guilt: if I do treat this award as “meaning something,” how do I act around these three people, my only friends for the next few days of the conference, people judged to be less human than myself? What kind of dynamic would that create? (Answer: mostly they just teased me.)
Ultimately, I let that particular question drop: Doug, Dave, and Olga were my comrades far more than they were my foes, and together we’d avenged the mistakes of 2008 in dramatic fashion. 2008’s confederates had given up a total of five votes to the computers, and almost allowed one to hit Turing’s 30 percent mark, making history. But between us, we hadn’t permitted a single vote to go the machines’ way. 2008 was a nail-biter; 2009 was a rout.
At first this felt disappointing, anticlimactic. There were any number of explanations: there were fewer rounds in ’09, so there were simply fewer opportunities for deceptions. The strongest program from ’08 was Elbot, the handiwork of a company called Artificial Solutions, one of many new businesses leveraging chatbot technology to “allow our clients to offer better customer service at lower cost.” After Elbot’s victory at the Loebner Prize competition and the publicity that followed, the company decided to prioritize the Elbot software’s more commercial applications, and so it wouldn’t be coming to the ’09 contest as returning champion. In some ways it would have been more dramatic to have a closer fight.
In another sense, though, the results were quite dramatic indeed. We think of science as an unhaltable, indefatigable advance: the idea that the Macs and PCs for sale next year would be slower, clunkier, heavier, and more expensive than this year’s models is laughable. Even in fields where computers were being matched up to a human standard, such as chess, their advance seemed utterly linear—inevitable, even. Maybe that’s because humans were already about as good at these things as they ever were and will ever be. Whereas in conversation it seems we are so complacent so much of the time, so smug, and with so much room for improvement—
In an article about the Turing test, Loebner Prize co-founder Robert Epstein writes, “One thing is certain: whereas the confederates in the competition will never get any smarter, the computers will.” I agree with the latter, and couldn’t disagree more strongly with the former.
Garry Kasparov says, “Athletes often talk about finding motivation in the desire to meet their own challenges and play their own best game, without worrying about their opponents. Though there is some truth to this, I find it a little disingenuous. While everyone has a unique way to get motivated and stay that way, all athletes thrive on competition, and that means beating someone else, not just setting a personal best … We all work harder, run faster, when we know someone is right on our heels … I too would have been unable to reach my potential without a nemesis like Karpov breathing down my neck and pushing me every step of the way.”
Some people imagine the future of computing as a kind of heaven. Rallying behind an idea called the “Singularity,” people like Ray Kurzweil (in The Singularity Is Near) and his cohort of believers envision a moment when we make machines smarter than ourselves, who make machines smarter than themselves, and so on, and the whole thing accelerates exponentially toward a massive ultra-intelligence that we can barely fathom. This time will become, in their view, a kind of techno-rapture, where humans can upload their consciousnesses onto the Internet and get assumed, if not bodily, then at least mentally, into an eternal, imperishable afterlife in the world of electricity.
Others imagine the future of computing as a kind of hell. Machines black out the sun, level our cities, seal us in hyperbaric chambers, and siphon our body heat forever.
Somehow, even during my Sunday school days, hell always seemed a little bit unbelievable to me, over the top, and heaven, strangely boring. And both far too static. Reincarnation seemed preferable to either. To me the real, in-flux, changeable and changing world seemed far more interesting, not to mention fun. I’m no futurist, but I suppose, if anything, I prefer to think of the long-term future of AI as neither heaven nor hell but a kind of purgatory: the place where the flawed, good-hearted go to be purified—and tested—and to come out better on the other side.
As for the final verdict on the Turing test itself, in 2010, 2011, and thereafter—
If, or when, a computer wins the gold (solid gold, remember) Loebner Prize medal, the Loebner Prize will be discontinued forever. When Garry Kasparov defeated Deep Blue, rather convincingly, in their first encounter in ’96, he and IBM readily agreed to return the next year for a rematch. When Deep Blue beat Kasparov (rather less convincingly, I might add) in ’97, Kasparov proposed another rematch for ’98, but IBM would have none of it. They immediately unplugged Deep Blue, dismantled it, and boxed up the logs they’d promised to make public.¹
Do you get the unsettling image, as I do, of the heavyweight challenger who, himself, rings the round-ending bell?
The implication seems to be that—because technological evolution seems to occur so much faster than biological evolution, years to millennia—once Homo sapiens is overtaken, it won’t be able to catch up. Simply put, the Turing test, once passed, is passed forever. Frankly, I don’t buy it.
IBM’s odd anxiousness to basically get out of Dodge after the ’97 match suggests a kind of insecurity on their part that I think is very much to the point. The fact is, the human race got to rule the earth—okay, technically, bacteria rule the earth, if you look at biomass, and population, and habitat diversity, but we’ll humor ourselves—the fact is, the human race got to where it is by being the most adaptive, flexible, innovative, and quick-learning species on the planet. We’re not going to take defeat lying down.
No, I think that, while certainly the first year that computers pass the Turing test will be a historic, epochal one, it does not mark the end of the story. No, I think, indeed, that the next year’s Turing test will truly be the one to watch—the one where we humans, knocked to the proverbial canvas, must pull ourselves up; the one where we learn how to be better friends, artists, teachers, parents, lovers; the one where we come back. More human than ever. I want to be there for that.
And if not defeat, but further rout upon rout? I turn a last time to Kasparov. “Success is the enemy of future success,” he says. “One of the most dangerous enemies you can face is complacency. I’ve seen—both in myself and my competitors—how satisfaction can lead to a lack of vigilance, then to mistakes and missed opportunities … Winning can convince you everything is fine even if you are on the brink of disaster … In the real world, the moment you believe you are entitled to something is exactly when you are ripe to lose it to someone who is fighting harder.”
If there’s one thing I think the human race has been guilty of for a long time—since antiquity at least—it’s a kind of complacency, a kind of entitlement. This is why, for instance, I find it oddly invigorating to catch a cold, come down from my high horse of believing myself a member of evolution’s crowning achievement, and get whupped for a couple days by a single-celled organism.
A loss, and the reality check to follow, might do us a world of good.
Maybe the Most Human Human award isn’t one that breeds complacency. An “anti-method” doesn’t scale, so it can’t be “phoned in.” And a philosophy of site-specificity means that every new conversation, with every person, in every situation, is a new opportunity to succeed in a unique way—or to fail. Site-specificity doesn’t provide the kinds of laurels one can rest on. It doesn’t matter whom you’ve talked to in the past, how much or how little that dialogue sparkled, what kudos or criticism, if any at all, you got for it.
I walk out of the Brighton Centre, to the bracing sea air for a minute, and into a small, locally owned shoe store looking for a gift to bring back home to my girlfriend; the shopkeeper notices my accent; I tell her I’m from Seattle; she is a grunge fan; I comment on the music playing in the store; she says it’s Florence + the Machine; I tell her I like it and that she would probably like Feist …
I walk into a tea and scone store called the Mock Turtle and order the British equivalent of coffee and a donut, except it comes with thirteen pieces of silverware and nine pieces of crockery; I am so in England, I think; an old man, probably in his eighties, is shakily eating a pastry the likes of which I’ve never seen; I ask him what it is; “coffee meringue,” he says and remarks on my accent; an hour later he is telling me about World War II, the exponentially increasing racial diversity of Britain, that House of Cards is a pretty accurate depiction of British politics, minus the murders, but that really I should watch Spooks; do you get Spooks on cable, he is asking me …
I meet my old boss for dinner; and after a couple years of being his research assistant and occasional co-author, and after a brief thought of becoming one of his Ph.D. students, after a year of our paths not really crossing, we negotiate whether our formerly collegial and hierarchical relationship, now that its context is removed, simply dries up or flourishes into a domain-general friendship; we are ordering appetizers and saying something about Wikipedia, something about Thomas Bayes, something about vegetarian dining …
Laurels are of no use. If you de-anonymized yourself in the past, great. But that was that. And now, you begin again.
1. These logs would, three years later, be put on the IBM website, albeit in incomplete form and with so little fanfare that Kasparov himself wouldn’t find out about them until 2005.
The image-processing world, it turns out, has a close analogue to the Turing test, called “the Cornell box,” which is a small model of a room with one red wall and one green wall (the others are white) and two blocks sitting inside it. Developed by Cornell University graphics researchers in 1984, the box has evolved and become more sophisticated over time, as researchers attempt additional effects (reflection, refraction, and so on). The basic idea is that the researchers set up this room in real life, photograph it, and put the photographs online; graphics teams, naturally, try to get their virtual Cornell box renderings to look as much as possible like the real thing.
Of course this raises some great questions. The first is that graphics teams don’t use the Cornell box as a competitive standard, and there’s an assumption of good faith on their part when they show off their renderings. Obviously, one could simply scan the real photograph and have software output the image, pixel for pixel. As with the Turing test, a static demo won’t do. One needs some degree of “interaction” between the judges and the software—in this case, something like moving some of the internal boxes around, or changing the colors, or making one of the boxes reflective, and so on.
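To see why a static demo proves nothing, here is a minimal sketch (mine, not any official Cornell box tool) of the naive pixel-for-pixel scoring that good faith currently stands in for: the root-mean-square difference between a submitted rendering and the reference photograph. It assumes Python with NumPy and Pillow, and the file names are hypothetical placeholders.

```python
# A minimal sketch (not an official Cornell box tool) of naive
# pixel-for-pixel scoring: root-mean-square error between a rendering
# and the reference photograph. File names below are hypothetical.
import numpy as np
from PIL import Image

def rmse(rendering_path: str, photo_path: str) -> float:
    """Root-mean-square pixel error; 0.0 means a pixel-perfect match."""
    render = np.asarray(Image.open(rendering_path).convert("RGB"), dtype=float)
    photo = np.asarray(Image.open(photo_path).convert("RGB"), dtype=float)
    if render.shape != photo.shape:
        raise ValueError("images must have the same dimensions")
    return float(np.sqrt(np.mean((render - photo) ** 2)))

print(rmse("cornell_box_render.png", "cornell_box_photo.png"))
# The loophole: "rendering" a copy of the photograph itself scores
# a perfect 0.0, which is why a static comparison proves nothing.
print(rmse("cornell_box_photo.png", "cornell_box_photo.png"))  # 0.0
```

Hence the interaction requirement: move a block or recolor a wall, and the copied photograph can no longer keep up, while a genuine renderer simply re-renders the changed scene.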
The second is that if this particular room is meant to stand in for all of visual reality—the way a Turing test is meant to stand in for all of language use—then we might ask certain questions about the room. What kind of light is trickiest? What types of surfaces are the hardest to virtualize? How, that is, do we get the real Cornell box to be a good confederate, the Most Room-Like Room?