Author: Michael Lewis
To a financial determinist like Bud Selig, the wonder must have been that they hadn’t simply given up. Of course, no one in pro sports ever admits to quitting. But it was perfectly possible to abandon all hope of winning and at the same time show up every day for work to collect a paycheck. Professional sports had a word for this: “rebuilding.” That’s what half a dozen big league teams did more or less all the time. The Kansas City Royals had been rebuilding for the past four or five years. Bud Selig’s Brewers had been taking a dive for at least a decade. The A’s didn’t do this, for the simple reason that they actually believed they were going to keep on winning—perhaps not so many games as they had in 2001, but enough to get themselves back to the play-offs.
Before the 2002 season, Paul DePodesta had reduced the coming six months to a math problem. He judged how many wins it would take to make the play-offs: 95. He then calculated how many more runs the Oakland A’s would need to score than they allowed to win 95 games: 135. (The idea that there was a stable relationship between season run totals and season wins was another Jamesean discovery.) Then, using the A’s players’ past performance as a guide, he made reasoned arguments about how many runs they would actually score and allow. If they didn’t suffer an abnormally large number of injuries, he said, the team would score between 800 and 820 runs and give up between 650 and 670 runs.
From that he predicted the team would win between 93 and 97 games and probably wind up in the play-offs. “There aren’t a lot of teams that win ninety-five games and don’t make it to the play-offs,” he said. “If we win ninety-five games and don’t make the play-offs, we’re fine with that.”
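The text never spells out which formula DePodesta ran, but the standard Jamesean tool for converting run totals into wins is the "Pythagorean expectation." Here is a minimal sketch of that calculation, assuming the classic exponent of 2 (analysts often prefer values nearer 1.83), fed with the run ranges quoted above:

```python
def pythagorean_wins(runs_scored: float, runs_allowed: float,
                     games: int = 162, exponent: float = 2.0) -> float:
    """Bill James's Pythagorean expectation: estimated wins from run totals."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return games * rs / (rs + ra)

# DePodesta's projected ranges: 800-820 runs scored, 650-670 allowed.
for rs, ra in [(800, 670), (810, 660), (820, 650)]:
    print(f"score {rs}, allow {ra}: ~{pythagorean_wins(rs, ra):.0f} wins")
# A differential of roughly +135 runs puts the estimate in the mid-90s,
# close to the 93-97 win projection quoted above. (DePodesta's exact
# model is not given in the text; this is the textbook version.)
```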
The 2001 Oakland A’s had won 102 regular season games. The 2002 Oakland A’s entered the season without the three players widely regarded by the market as among their best, and the expected result was a net loss of seven wins. How could that be? The only way to understand the math was to look a bit more closely at what, exactly, the team lost, or believed they lost, when other, richer teams hired away each of the three stars.
The first, and easiest, player to understand was their old flame-throwing closer, Jason Isringhausen. When Billy Beane had traded for him in the middle of the 1999 season, Isringhausen was pitching in the minor leagues with the New York Mets. To get him (and a more expensive pitcher named Greg McMichael, and the money to pay McMichael’s salary), all Billy Beane had given up was his own established closer, Billy Taylor. Taylor, who ceased to be an effective pitcher more or less immediately upon joining the Mets, Billy Beane had himself plucked from the minor leagues for a few thousand dollars a few years earlier.
The central insight that led him both to turn minor league nobodies into successful big league closers and to refuse to pay them the many millions a year they demanded once they became free agents was that it was more efficient to create a closer than to buy one. Established closers were systematically overpriced, in large part because of the statistic by which closers were judged in the marketplace: “saves.” The very word made the guy who achieved them sound vitally important. But the situation typically described by the save—the bases empty in the ninth inning with the team leading—was clearly far less critical than a lot of other situations pitchers faced. The closer’s statistic did not have the power of language; it was just a number. You could take a slightly above average pitcher and drop him into the closer’s role, let him accumulate some gaudy number of saves, and then sell him off. You could, in essence, buy a stock, pump it up with false publicity, and sell it off for much more than you’d paid for it. Billy Beane had already done it twice, and assumed he could do so over and over again.
Jason Isringhausen’s departure wasn’t a loss to the Oakland A’s but a happy consequence of a money machine known as “Selling the Closer.” In return for losing Isringhausen to the St. Louis Cardinals, the A’s had received two new assets: the Cardinals’ first-round draft pick, along with a first-round compensation pick. The former they’d used to draft Benjamin Fritz, a pitcher they judged to have a brighter and cheaper future than Isringhausen; the latter, to acquire Jeremy Brown.
The Blue Ribbon Commission had asked the wrong question. The question wasn’t whether a baseball team could keep its stars even after they had finished with their six years of indentured servitude and became free agents. The question was: how did a baseball team find stars in the first place, and could it find new ones to replace the old ones it lost? How fungible were baseball players? The short answer was: a lot more fungible than the people who ran baseball teams believed.
Finding pitchers who could become successful closers wasn’t all that difficult. To fill the hole at the back of his bullpen Billy had traded to the Toronto Blue Jays a minor league third baseman, Eric Hinske, for Billy Koch, another crude fireballer. He knew that Hinske was very good—he’d wind up being voted 2002 Rookie of the Year in the American League—but the Oakland A’s already had an even better third baseman, Eric Chavez. Plus, Billy knew that, barring some disaster, Koch, too, would gain a lot of value as an asset. Koch would get his saves and be perceived by other teams to be a much more critical piece of a successful team than he actually was, whereupon the A’s would trade him for something cheaper, younger, and possibly even better.
The loss of Johnny Damon, the A’s former center fielder, presented a different sort of problem. When Damon signed with Boston, the A’s took the Red Sox’s first-round pick (used to select Nick Swisher) plus a compensation pick. But Damon left two glaring holes: one on defense in center field, the other on offense in the leadoff spot. Of the two, the offensive hole was the easier to understand, and to dismiss. When fans watched Damon, they saw the sort of thrilling leadoff hitter that a team simply had to have if it wanted to be competitive. When the A’s front office watched Damon, they saw something else: an imperfect understanding of where runs come from.
Paul DePodesta had been hired by Billy Beane before the 1999 season, but well before that he had studied the question of why teams win. Not long after he’d graduated from Harvard, in the mid-nineties, he’d plugged the statistics of every baseball team from the twentieth century into an equation and tested which of them correlated most closely with winning percentage. He’d found only two, both offensive statistics, inextricably linked to baseball success: on-base percentage and slugging percentage. Everything else was far less important.
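DePodesta's actual study used every twentieth-century team season; the sketch below merely illustrates the shape of the test, ranking offensive statistics by their correlation with winning percentage. The rows are placeholder numbers invented for the illustration, not real season data:

```python
import numpy as np

# Placeholder team-seasons (NOT real data), purely to show the method:
# each row is [batting avg, on-base pct, slugging pct, win pct].
seasons = np.array([
    [0.260, 0.320, 0.400, 0.450],
    [0.270, 0.345, 0.430, 0.540],
    [0.255, 0.335, 0.415, 0.510],
    [0.280, 0.330, 0.445, 0.500],
    [0.265, 0.355, 0.425, 0.580],
    [0.250, 0.310, 0.390, 0.420],
])
stats = ["batting avg", "on-base pct", "slugging pct"]
wins = seasons[:, -1]

# Pearson correlation of each stat with winning percentage,
# ranked from strongest to weakest: the shape of DePodesta's test.
ranked = sorted(((s, seasons[:, i]) for i, s in enumerate(stats)),
                key=lambda t: -abs(np.corrcoef(t[1], wins)[0, 1]))
for name, col in ranked:
    print(f"{name:>13}: r = {np.corrcoef(col, wins)[0, 1]:+.2f}")
```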
Not long after he arrived in Oakland, Paul asked himself a question: what was the relative importance of on-base and slugging percentage? His answer began with a thought experiment: if a team had an on-base percentage of 1.000 (referred to as “a thousand”)—that is, every hitter got on base—how many runs would it score?
An infinite number of runs, since the team would never make an out. If a team had a slugging percentage of 1.000 (meaning it gained a total base for every at bat), how many runs would it score? That depended on how it was achieved, but it would typically be a lot less than an infinite number. A team might send four hitters to the plate in an inning, for instance. The first man hits a home run, the next three make outs. Four at bats have produced four total bases and thus a slugging percentage of 1.000, and yet have scored only one run in the inning.
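The asymmetry shows up even in a toy simulation. Assume the crudest possible offense, in which every plate appearance is either a walk or an out (a deliberately unrealistic simplification): expected runs per inning explode as on-base percentage approaches 1.000, because the third out never comes.

```python
import random

def runs_in_walk_only_inning(obp: float, rng: random.Random) -> int:
    """One inning where each batter either walks (prob. obp) or makes an out.
    The bases load one walk at a time, so every walk past the third forces
    in a run; three outs end the inning."""
    walks = outs = 0
    while outs < 3:
        if rng.random() < obp:
            walks += 1
        else:
            outs += 1
    return max(0, walks - 3)

rng = random.Random(42)
for obp in (0.300, 0.500, 0.900, 0.990):
    avg = sum(runs_in_walk_only_inning(obp, rng) for _ in range(20_000)) / 20_000
    print(f"OBP {obp:.3f}: ~{avg:.1f} runs per inning")
# At OBP = 1.000 the loop would never terminate: an infinite number of runs.
```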
Baseball fans and announcers were just then getting around to the Jamesean obsession with on-base and slugging percentages. The game, slowly, was turning its attention to the new statistic, OPS (on base plus slugging). OPS was the simple addition of on-base and slugging percentages. Crude as it was, it was a much better indicator than any other offensive statistic of the number of runs a team would score. Simply adding the two statistics together, however, implied they were of equal importance. If the goal was to raise a team’s OPS, an extra percentage point of on-base was as good as an extra percentage point of slugging.
Before his thought experiment Paul had felt uneasy with this crude assumption; now he saw that the assumption was absurd. An extra point of on-base percentage was clearly more valuable than an extra point of slugging percentage—but by how much? He proceeded to tinker with his own version of Bill James’s “Runs Created” formula. When he was finished, he had a model for predicting run production that was more accurate than any he knew of. In his model an extra point of on-base percentage was worth three times an extra point of slugging percentage.
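The text doesn't give DePodesta's formula, but the practical consequence of his 3-to-1 finding is easy to illustrate: two hitters with identical OPS can look very different once on-base percentage counts triple. A hypothetical sketch (the two player lines are invented for illustration):

```python
def ops(obp: float, slg: float) -> float:
    """Conventional OPS: equal weight to on-base and slugging."""
    return obp + slg

def weighted_score(obp: float, slg: float, obp_weight: float = 3.0) -> float:
    """Illustrative index in the spirit of DePodesta's finding: an extra
    point of OBP treated as worth three extra points of SLG."""
    return obp_weight * obp + slg

# Two hypothetical hitters with identical OPS (.850):
hitters = [("patient hitter", 0.400, 0.450),   # high OBP, modest power
           ("slugger",        0.300, 0.550)]   # low OBP, big power
for label, obp, slg in hitters:
    print(f"{label:>15}: OPS {ops(obp, slg):.3f}, "
          f"3x-OBP index {weighted_score(obp, slg):.3f}")
# OPS calls them equal; the 3:1 weighting ranks the patient hitter well ahead.
```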
Paul’s argument was radical even by sabermetric standards. Bill James and others had stressed the importance of on-base percentage, but even they didn’t think it was worth three times as much as slugging. Most offensive models assumed that an extra point of on-base percentage was worth, at most, one and a half times an extra point of slugging percentage. In major league baseball itself, where on-base percentage was not nearly so highly valued as it was by sabermetricians, Paul’s argument was practically heresy.
Paul walked across the hall from his office and laid out his argument to Billy Beane, who thought it was the best argument he had heard in a long time. Heresy was good: heresy meant opportunity. A player’s ability to get on base—especially when he got on base in unspectacular ways—tended to be dramatically underpriced in relation to other abilities. Never mind fielding skills and foot speed. The ability to get on base—to avoid making outs—was underpriced compared to the ability to hit with power. The one attribute most critical to the success of a baseball team was an attribute they could afford to buy. At that moment, what had been a far more than ordinary interest in a player’s ability to get on base became, for the Oakland A’s front office, an obsession.
To most of baseball Johnny Damon, on offense, was an extraordinarily valuable leadoff hitter with a gift for stealing bases. To Billy Beane and Paul DePodesta, Damon was a delightful human being, a pleasure to have around, but an easily replaceable offensive player. His on-base percentage in 2001 had been .324, or roughly 10 points below the league average. True, he stole some bases, but stealing bases involved taking a risk the Oakland front office did not trust even Johnny Damon to take. The math of the matter changed with the situation, but, broadly speaking, an attempted steal had to succeed about 70 percent of the time before it contributed positively to run totals.
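That 70 percent figure falls out of run-expectancy arithmetic: a successful steal of second gains a little expected offense, while a caught stealing costs a lot. A sketch using illustrative values of the kind found in published run-expectancy tables (the exact numbers shift with the run environment):

```python
# Approximate expected runs for the rest of the inning (illustrative values;
# real tables are computed from play-by-play data for a given era):
RE_RUNNER_ON_1ST_0_OUT = 0.86
RE_RUNNER_ON_2ND_0_OUT = 1.10
RE_BASES_EMPTY_1_OUT   = 0.27

gain_if_safe = RE_RUNNER_ON_2ND_0_OUT - RE_RUNNER_ON_1ST_0_OUT   # +0.24 runs
loss_if_out  = RE_RUNNER_ON_1ST_0_OUT - RE_BASES_EMPTY_1_OUT     # -0.59 runs

# Break-even success rate p solves: p * gain = (1 - p) * loss
break_even = loss_if_out / (gain_if_safe + loss_if_out)
print(f"break-even steal rate: {break_even:.0%}")   # ~71%
```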
The offense Damon had provided the 2001 Oakland A’s was fairly easy to replace; Damon’s defense was not. The question was how to measure what the Oakland A’s lost when Terrence Long, and not Johnny Damon, played center field. The short answer was that they couldn’t, not precisely. But they could get closer than most to an accurate answer—or thought that they could. Something had happened since Bill James first complained about the meaninglessness of fielding statistics. That something was new information, and a new way of thinking about an old problem. Oddly, the impulse to do this thinking had arisen on Wall Street.
In the early 1980s, the U.S. financial markets underwent an astonishing transformation. A combination of computing power and intellectual progress led to the creation of whole new markets in financial futures and options. Options and futures were really just fragments of stocks and bonds, but the fragments soon became so arcane and inexplicable that Wall Street created a single word to describe them all: “derivatives.” In one big way these new securities differed from traditional stocks and bonds: they had a certain, precisely quantifiable, value. It was impossible for anyone to say what a simple stock or bond should be worth. Their value was a matter of financial opinion; they were worth whatever the market said they were worth. But fragments of a stock or bond, when you glued them back together, must be worth exactly what the stock or bond was worth. If they were worth more or less than the original article, the market was said to be “inefficient,” and a trader could make a fortune trading the fragments against the original.
For the better part of a decade there were huge, virtually risk-less profits to be made by people who figured this out. The sort of people who quickly grasped the math of the matter were not typical traders. They were highly trained mathematicians and statisticians and scientists who had abandoned whatever they were doing at Harvard or Stanford or MIT to make a killing on Wall Street. The fantastic sums of money hauled in by the sophisticated traders transformed the culture on Wall Street, and made quantitative analysis, as opposed to gut feel, the respectable way to go about making bets in the market. The chief economic consequence of the creation of derivative securities was to price risk more accurately, and distribute it more efficiently, than ever before in the long, risk-obsessed history of financial man. The chief social consequence was to hammer into the minds of a generation of extremely ambitious people a new connection between “inefficiency” and “opportunity,” and to reinforce an older one, between “brains” and “money.”
Ken Mauriello and Jack Armbruster had been part of that generation. Ken analyzed the value of derivative securities, and Jack traded them, for one of the more profitable Chicago trading firms. Their firm priced financial risk as finely as it had ever been priced. “In the late 1980s Kenny started looking at taking the same approach to major league baseball players,” said Armbruster. “Looking at the places where the stats don’t tell the whole truth—or even lie about the situation.” Mauriello and Armbruster’s goal was to value the events that occurred on a baseball field more accurately than they ever had been valued. In 1994, they stopped analyzing derivatives and formed a company to analyze baseball players, called AVM Systems.
Ken Mauriello had seen a connection between the new complex financial markets and baseball: “the inefficiency caused by sloppy data.” As Bill James had shown, baseball data conflated luck and skill, and simply ignored a lot of what happened during a baseball game. With two outs and a runner on second base a pitcher makes a great pitch: the batter hits a bloop into left field that would have been caught had the left fielder not been Albert Belle. The shrewd runner at second base, knowing that Albert Belle is slow not just to the ball but also to the plate, beats the throw home. In the record books the batter was credited with having succeeded, the pitcher with having failed, and the left fielder and the runner with having been present on the scene. This was a grotesque failure of justice. The pitcher and runner deserved to have their accounts credited, the batter and left fielder to have theirs debited (the former should have popped out; the latter somehow had avoided committing an “error” and at the same time put runs on the board for the other team).
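The text doesn't show AVM's actual machinery, but the standard way to keep such accounts is to value every event by how much it moves expected runs: the change in run expectancy across the event, plus any runs that actually score, gets credited or debited to the players responsible. A minimal sketch of that bookkeeping, with a toy two-state table (illustrative numbers; a real table covers all twenty-four base-out states):

```python
# Toy run-expectancy values for the two states in the bloop-single story:
RUN_EXPECTANCY = {
    ("runner on 2nd", 2): 0.33,   # before the pitch: runner on 2nd, 2 outs
    ("runner on 1st", 2): 0.23,   # after: batter on 1st, 2 outs, run in
}

def event_value(state_before, state_after, runs_scored: int) -> float:
    """Runs added by one event: change in expectancy plus runs that scored."""
    return (RUN_EXPECTANCY[state_after]
            - RUN_EXPECTANCY[state_before]
            + runs_scored)

# The bloop single that scores the shrewd runner from second:
delta = event_value(("runner on 2nd", 2), ("runner on 1st", 2), runs_scored=1)
print(f"runs added by the play: {delta:+.2f}")
# A system like AVM's would then split this credit (and blame) among batter,
# pitcher, fielder, and runner according to what each should have done,
# rather than handing all of it to the batter, as the record books do.
```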