Naked Economics
Author: Charles Wheelan

If I were to poll one hundred economists, nearly every one of them would tell me that significantly improving primary and secondary education in this country would lead to large economic gains. But the same group would be divided over whether or not we should spend more money on public education. Why? Because they would disagree sharply over whether pouring more money into the existing system would improve student outcomes.

Some government activity shrinks the size of the pie but still may be socially desirable.
Transferring money from the rich to the poor is technically “inefficient” in the sense that sending a check for $1 to a poor family may cost the economy $1.25 when the deadweight costs of taxation are taken into account. The relatively high taxation necessary to support a strong social safety net falls most heavily on those with productive assets, including human capital, making countries like France a good place to be a child born into a poor family and a bad place to be an Internet entrepreneur (which in turn makes it a bad place to be a high-tech worker). Overall, policies that guarantee some pie for everybody will slow the growth of the pie itself. Per capita income in the United States is higher than per capita income in France; the United States also has a higher proportion of children living in poverty.

Having said all that, reasonable people can disagree over the appropriate level of social spending. First, they may have different preferences about how much wealth they are willing to trade off for more equality. The United States is a richer but more unequal place than most of Europe. Second, the notion of a simple trade-off between wealth and equality oversimplifies the dilemma of helping the most disadvantaged. Economists who care deeply about the poorest Americans may disagree over whether the poor would be helped more by expensive government programs, such as universal health care, or by lower taxes that would encourage economic growth and put more low-income Americans to work at higher wages.


Last, some government involvement in the economy is purely destructive.
Heavy-handed government can be like a millstone around the neck of a market economy. Good intentions can lead to government programs and regulations whose benefits are grossly outweighed by their costs. Bad intentions can lead to all kinds of laws that serve special interests or corrupt politicians. This is especially true in the developing world, where much good could be done just by getting government out of areas of the economy where it does not belong. As Jerry Jordan, former president and CEO of the Federal Reserve Bank of Cleveland, has noted, “What separates the economic ‘haves’ from the ‘have-nots’ is whether the role of an economy’s institutions—particularly its public institutions—is to facilitate production or to confiscate it.”[17]

In short, government is like a surgeon’s scalpel: It is an intrusive tool that can be used for good or for ill. Wielded carefully and judiciously, it will facilitate the body’s remarkable ability to heal itself. In the wrong hands, or wielded overzealously with even the best of intentions, it can cause great harm.

CHAPTER 5

Economics of Information: McDonald’s didn’t create a better hamburger

When Bill Clinton ran for president in 1992, he floated the idea of Hope Scholarships. The Clinton plan (based on an earlier experiment at Yale) was seemingly elegant: Students could borrow money for college and then repay the loans after graduation with a percentage of their annual income rather than the usual fixed payments of principal plus interest. Graduates who went on to become investment bankers would owe more in student loans than graduates who counseled disadvantaged teens in poor neighborhoods, which was exactly the point. The plan was designed to address the concern that students graduating with large debts are forced to do well rather than do good. After all, it is hard to become a teacher or a social worker after graduating with $75,000 in student loans.

In theory, the program would finance itself. Administrators could determine the average postgraduation salary for eligible students and then calculate the percentage of income they would have to pay in order for the program to recoup its costs—say 1.5 percent of annual income for fifteen years. Students who became brain surgeons would pay back more than average; students who fought tropical diseases in Togo would pay less. On average, the high and low earners would cancel each other out and the program would break even.
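The break-even arithmetic can be sketched in a few lines. The loan size and average salary below are illustrative assumptions chosen so that the numbers work out; only the 1.5 percent figure comes from the text.

```python
# Break-even arithmetic for an income-contingent loan, ignoring
# interest and discounting. Dollar figures are illustrative assumptions.

def breakeven_rate(principal, avg_salary, years):
    """Share of income the average graduate must pay each year
    for the program to recoup its cost."""
    return principal / (avg_salary * years)

# With an assumed $13,500 loan and a $60,000 average postgraduation
# salary paid over 15 years, the program needs 1.5 percent of income,
# the figure used in the text's example.
rate = breakeven_rate(13_500, 60_000, 15)
print(rate)  # 0.015
```

The rate is set so that the average graduate exactly repays the principal; high and low earners are supposed to cancel out around that average.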

There was just one problem: The Hope Scholarships had no hope of working, at least not without a large, ongoing government subsidy. The problem was a crucial asymmetry of information: Students know more about their future career plans than loan administrators do. College students never know their future plans with certainty, but most have a good idea whether their postgraduation income will be more or less than average—which is enough to determine if a Hope Scholarship would be more or less expensive than a conventional loan. Aspiring Wall Street barons would avoid the program because it’s a bad deal for them. Who wants to pay back 1.5 percent of $5 million every year for fifteen years when a conventional loan would be much cheaper? Meanwhile, the world’s future kindergarten teachers and Peace Corps volunteers would opt in.

The result is called adverse selection; future graduates sort themselves in or out of the program based on private information about their career plans. In the end, the program attracts predominantly low earners. The repayment calculations, based on the average postgraduation salary, no longer apply and the program cannot recover its costs. One may assume that Mr. Clinton ignored what his advisers almost certainly told him about the Yale experiment: It was quietly canceled after five years, both because repayments fell short of projections and because the administrative costs were prohibitive.
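The self-sorting described above can be made concrete with a toy model. The salaries, loan size, and two-type setup are all assumptions for illustration; the mechanism is the one in the text.

```python
# Adverse selection in an income-contingent loan, with assumed numbers.
# Two types of graduates know their own likely earnings; the program
# sets its repayment rate using the population average.

principal, years = 13_500, 15
high_salary, low_salary = 100_000, 40_000   # assumed future salaries

avg_salary = (high_salary + low_salary) / 2     # 70,000 on average
rate = principal / (avg_salary * years)         # break-even rate on the average

# Each type compares income-contingent repayment to a conventional
# loan of the same principal (interest and discounting ignored):
high_pays = high_salary * rate * years   # more than the principal: opts out
low_pays = low_salary * rate * years     # less than the principal: opts in

# Only low earners enroll, so the program recovers low_pays per
# student and runs a shortfall on every loan it makes.
shortfall = principal - low_pays
```

Because enrollment is voluntary and private information drives the choice, the average the program priced against never shows up.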

What we don’t know can hurt us. Economists study how we acquire information, what we do with it, and how we make decisions when all we get to see is a book’s cover. Indeed, the Swedish Academy of Sciences recognized this point in 2001 by awarding the Nobel Prize in Economics to George Akerlof, Michael Spence, and Joseph Stiglitz for their seminal work on the economics of information. Their work explores the problems that arise when rational people are forced to make decisions based on incomplete information, or when one party to a transaction knows more than another. Their insights are relevant to some of our most pressing social issues, from genetic screening to discrimination in the workplace.

Consider a small law firm interviewing two job candidates, one male and one female. Both candidates are recent Harvard Law School graduates and are eminently qualified for the position. If the “best” candidate for the job is the one who will earn the most money for the firm, which seems a reasonable assumption, then I will argue that the rational choice is to hire the man. The interviewer has no specific information on the family plans of the candidates at hand (and is forbidden by law from asking about them), but can make a reasonable inference based on what everyone knows about America at the beginning of the twenty-first century: Women still bear the bulk of child-rearing responsibilities. Demographics suggest that both candidates are likely to start families in the near future. Yet only the female candidate will take paid maternity leave. More important, she may not return to work after having the child, which leaves the firm with the cost of finding, hiring, and training another lawyer.

Is any of this certain? No. The male candidate may have dreams of staying home with his five children; the female candidate may have decided years ago that she has no interest in having children. But these are not the most likely scenarios. The female candidate is punished because the firm has no information on her specific circumstances but good data on broad social trends. Is this fair? No. (And it’s not legal either.) Yet the firm’s logic makes sense. In other words, it is rational to discriminate in this case, which turns the whole idea of discrimination on its head. Discrimination is usually irrational. As Nobel laureate Gary Becker pointed out in The Economics of Discrimination, employers with a “taste for discrimination” sacrifice profits because they pass over minorities in favor of less qualified whites.[1] A patient who refuses to see an eminent black doctor because of his skin color is a fool. A law firm that minimizes employee turnover by playing the statistical averages may offend our sensibilities and violate federal law—but it is not foolish.

When we approach this situation as an information problem, there are several crucial insights. First, firms are not the only villains. When professional women choose to have a child, take paid maternity leave, and then quit their companies, they impose a cost, arguably unfair, on their firms. More important, they impose a cost on other women. Firms that feel they have been “burned” by employees who take maternity leave and then quit are more likely to discriminate against young women in the hiring process (particularly those who are already pregnant) and less likely to offer generous maternity benefits. The good news is that there is a quick and easy solution: a generous but refundable maternity package. Keep it if you come back to work, return it if you don’t. That simple policy change gives us nearly everything we want. Firms no longer have to be concerned about paying benefits to women who will not return to work. Indeed, it becomes possible to offer more generous benefits without providing an incentive for workers to take the money and run. Women, in turn, do not face the same level of discrimination in the hiring process.

Statistical discrimination, or so-called “rational discrimination,” takes place when an individual makes an inference that is defensible based on broad statistical patterns but (1) is likely to be wrong in the specific case at hand; and (2) has a discriminatory effect on some group. Suppose an employer has no racial prejudice but does have an aversion to hiring workers with a criminal background. That’s certainly a reasonable preference, for all kinds of reasons. If this employer has to make a hiring decision without access to applicants’ criminal backgrounds (either because he doesn’t have the time or resources to gather such information, or perhaps because he is forbidden by law from asking), then it’s entirely plausible that he will discriminate against black male applicants, who are far more likely to have served time in prison (28 percent) than white male applicants (4 percent).
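The inference problem described above can be sketched directly. The group rates are the figures cited in the text (share of male applicants who have served prison time); the decision rule itself is an illustrative assumption, not anyone's actual hiring policy.

```python
# Statistical discrimination as an inference problem.
# Group rates are the figures cited in the text; the rule is illustrative.

P_RECORD = {"black": 0.28, "white": 0.04}

def estimated_record_risk(group, background_check=False, has_record=None):
    """Employer's estimate that an applicant has a criminal record.
    Without a background check, only the group rate is available;
    with one, group membership carries no additional information."""
    if background_check:
        return 1.0 if has_record else 0.0
    return P_RECORD[group]

# Without a check, a black applicant with no record is assessed at the
# group rate of 0.28; with a check, at 0.0.
print(estimated_record_risk("black"))
print(estimated_record_risk("black", background_check=True, has_record=False))
```

Once the individual's record is observable, the group statistic drops out of the calculation entirely, which is the logic behind the study's finding discussed next.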

Of course, all this employer cares about is whether or not the person standing in front of him has a criminal record. If he can acquire that information with certainty, then the broader social patterns don’t matter. In theory, we would expect access to criminal background checks to reduce discrimination against black men without criminal records. In fact, that is what the data show us. A group of economists compared hiring decisions at firms that conduct criminal background checks with hiring decisions at firms that don’t. They concluded, “We find that employers who check criminal backgrounds are more likely to hire African-American workers, especially men. This effect is stronger among those employers who report an aversion to hiring those with criminal records than among those who do not.”[2]

With race, more information is usually better. The corresponding implication is that less information can be worse. The United States has a huge ex-offender population. (America has a high incarceration rate, and most people who go to prison eventually get out; the median sentence is less than two years.) Policies that seek to help ex-offenders by suppressing information on their criminal backgrounds may be bad for a much wider population. The authors of the study cited above warned that their results “suggest that curtailing access to criminal history records may actually harm more people than it helps and aggravate racial differences in labor market outcomes.”


This chapter is not about discrimination. It is about information, which lies at the heart of many discrimination-related problems. Information matters, particularly when we don’t have all that we need. Markets tend to favor the party that knows more. (Have you ever bought a used car?) But if the imbalance, or asymmetry of information, becomes too large, then markets can break down entirely. This was the fundamental insight of 2001 Nobel laureate George Akerlof, an economist at the University of California, Berkeley. His paper entitled “The Market for Lemons” used the used-car market to make its central point. Any individual selling a used car knows more about its quality than someone looking to buy it. This creates an adverse selection problem, just as it did with the Hope Scholarships. Car owners who are happy with their vehicles are less likely to sell them. Thus, used-car buyers anticipate hidden problems and demand a discount. But once there is a discount built into the market, owners of high-quality cars become even less likely to sell them—which guarantees the market will be full of lemons. In theory, the market for high-quality used cars will not work, much to the detriment of anyone who may want to buy or sell such a car. (In practice, such markets often do work for reasons explained by the gentlemen with whom Mr. Akerlof shared his Nobel prize; more on that in a moment.)
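Akerlof's unraveling argument can be sketched as a toy simulation. The specific numbers (qualities uniform between zero and a ceiling, buyers valuing a car at 1.5 times its quality) are illustrative assumptions; the mechanism is the one in the text.

```python
# Minimal "market for lemons" unraveling sketch, with assumed numbers.
# Sellers know their car's quality q and sell only if the price beats q;
# buyers value a car at 1.5 * q but observe only the average quality
# of cars still on the market.

def lemons_market(max_quality=1.0, rounds=20):
    """Qualities are uniform on (0, ceiling). Each round, buyers offer
    1.5 times the expected quality of cars still for sale; sellers
    whose q exceeds the offer withdraw, lowering the ceiling."""
    ceiling = max_quality
    for _ in range(rounds):
        offer = 1.5 * (ceiling / 2)      # buyers pay 1.5 * E[q]
        ceiling = min(ceiling, offer)    # the best cars exit the market
    return ceiling

# Because the offer is always 0.75 * ceiling, the best remaining cars
# keep leaving, and the market unravels toward nothing but lemons.
print(lemons_market())
```

Each round of discounting drives out the best cars still for sale, which justifies a deeper discount, and so on until only lemons remain.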

“The Market for Lemons” is characteristic of the kinds of ideas recognized by the Nobel committee. It is, in the words of the Royal Swedish Academy of Sciences, “a simple but profound and universal idea, with numerous implications and widespread applications.” Health care, for example, is plagued with information problems. Consumers of health care—the patients—almost always have less information about their care than their doctors do. Indeed, even after we see a doctor, we may not know whether we were treated properly. This asymmetry of information is at the heart of our health care woes.

Under any “fee for service” system, doctors charge a fee for each procedure they perform. Patients do not pay for these extra tests and procedures; their insurance companies (or the federal government, in the case of older Americans who are eligible for Medicare) do. At the same time, medical technology continues to present all kinds of new medical options, many of which are fabulously expensive. This combination is at the heart of rapidly rising medical costs: Doctors have an incentive to perform expensive medical procedures and patients have no reason to disagree. If you walk into your doctor’s office with a headache and the doctor suggests a CAT scan, you would almost certainly agree “just to be sure.” Neither you nor your doctor is acting unethically. When cost is not a factor, it makes perfect sense to rule out brain cancer even when the only symptom is a headache the morning after the holiday office party. Your doctor might also reasonably fear that if she doesn’t order a CAT scan, you might sue for big bucks later if something turns out to be wrong with your head.

Medical innovation is terrific in some cases and wasteful in others. Consider the current range of treatments for prostate cancer, a cancer that afflicts many older men. One treatment option is “watchful waiting,” which involves doing nothing unless and until tests show that the cancer is getting worse. This is a reasonable course of action because prostate cancer is so slow-growing that most men die of something else before the prostate cancer becomes a serious problem. Another treatment option is proton radiation therapy, which involves shooting atomic particles at the cancer using a proton accelerator that is roughly the size of a football field. Doing nothing essentially costs nothing (more or less); shooting protons from an accelerator costs somewhere in the range of $100,000.

The cost difference is not surprising; the shocking thing is that proton therapy has not been proven any more effective than watchful waiting. An analysis by the RAND Corporation concluded, “No therapy has been shown superior to another.”[3]

Health maintenance organizations were designed to control costs by changing the incentives. Under many HMO plans, general practitioners are paid a fixed fee per patient per year, regardless of what services they provide. Doctors may be restricted in the kinds of tests and services they can prescribe and may even be paid a bonus if they refrain from sending their patients to see specialists. That changes things. Now when you walk into the doctor’s office (still at a disadvantage in terms of information about your own health) and say, “I’m dizzy, my head hurts, and I’m bleeding out my ear,” the doctor consults the HMO treatment guidelines and tells you to take two aspirin. As exaggerated as that example may be, the basic point is valid: The person who knows most about your medical condition may have an economic incentive to deny you care. Complaints about too much spending are replaced by complaints about too little spending. Every HMO customer has a horror story about wrangling with bureaucrats over acceptable expenses. In the most extreme (and anecdotal) stories, patients are denied lifesaving treatments by HMO bean counters.

Some doctors are willing to do battle with the insurance companies on behalf of their patients. Others simply break the rules by disguising treatments that are not covered by insurance as treatments that are. (Patients aren’t the only ones suffering from an asymmetry of information.) Politicians have jumped into the fray, too, demanding things like disclosure of the incentives paid to doctors by insurance companies and even a patient’s bill of rights.
