Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy

Author: Cathy O'Neil

It should come as little surprise that many institutions in our society, from big companies to the government, are on the hunt for people who are trustworthy and reliable. In the chapter on getting a job, we saw them sorting through résumés and red-lighting candidates whose psychological tests pointed to undesirable personal attributes. Another all-too-common approach is to consider the applicant’s credit score. If people pay their bills on time and avoid debt, employers ask, wouldn’t that signal trustworthiness and dependability? It’s not exactly the same thing, they know. But wouldn’t there be a significant overlap?

That’s how credit reports have expanded far beyond their original turf. Creditworthiness has become an all-too-easy stand-in for other virtues. Conversely, bad credit has grown to signal a host of sins and shortcomings that have nothing to do with paying bills. As we’ll see, all sorts of companies turn credit reports into their own versions of credit scores and use them as proxies. This practice is both toxic and ubiquitous.

For certain applications, such a proxy might appear harmless. Some online dating services, for example, match people on the basis of credit scores. One of them, CreditScoreDating, proclaims that “good credit scores are sexy.” We can debate the wisdom of linking financial behavior to love. But at least the customers of CreditScoreDating know what they’re getting into and why. It’s up to them.

But if you’re looking for a job, there’s an excellent chance that a missed credit card payment or late fees on student loans could be working against you. According to a survey by the Society for Human Resource Management, nearly half of America’s employers screen potential hires by looking at their credit reports. Some of them check the credit status of current employees as well, especially when they’re up for a promotion.

Before companies carry out these checks, they must first ask for permission. But that’s usually little more than a formality; at many companies, those refusing to surrender their credit data won’t even be considered for jobs. And if their credit record is poor, there’s a good chance they’ll be passed over. A 2012 survey on credit card debt in low- and middle-income families made this point all too clear. One in ten participants reported hearing from employers that blemished credit histories had sunk their chances, and it’s anybody’s guess how many were disqualified by their credit reports but left in the dark. While the law stipulates that employers must alert job seekers when credit issues disqualify them, it’s hardly a stretch to believe that some of them simply tell candidates that they weren’t a good fit or that others were more qualified.

The practice of using credit scores in hiring and promotions creates a dangerous poverty cycle. After all, if you can’t get a job because of your credit record, that record will likely get worse, making it even harder to land work. It’s not unlike the problem young people face when they look for their first job—and are disqualified for lack of experience. Or the plight of the longtime unemployed, who find that few will hire them because they’ve been without a job for too long. It’s a spiraling and defeating feedback loop for the unlucky people caught up in it.
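
To see how quickly this loop can compound, consider a toy simulation. The sketch below, in Python, is purely illustrative: the pass rates, the chance of a missed payment while unemployed, and the score penalty are all invented numbers, not figures from any study. It shows only the shape of the dynamic the paragraph describes, in which each rejection raises the odds of the next one.

```python
# Toy simulation of the credit-screening feedback loop described above.
# Every number here is an invented assumption, chosen to show the shape
# of the dynamic, not to match any real screening system.
import random

random.seed(42)

def hire_probability(score):
    """Hypothetical employer screen: low credit scores rarely pass."""
    if score < 600:
        return 0.05
    if score < 700:
        return 0.25
    return 0.50

def apply_for_jobs(start_score, rounds=10):
    """Simulate repeated applications; unemployment erodes the score."""
    score = start_score
    for round_num in range(1, rounds + 1):
        if random.random() < hire_probability(score):
            return round_num, score
        # Still unemployed: a missed payment becomes more likely,
        # dragging the score down before the next application.
        if random.random() < 0.3:          # assumed chance of a missed payment
            score = max(300, score - 40)   # assumed score penalty
    return None, score

for start in (750, 650, 580):
    hired_round, final = apply_for_jobs(start)
    if hired_round:
        print(f"start={start}: hired in round {hired_round} (score {final})")
    else:
        print(f"start={start}: never hired; score fell to {final}")
```

Under assumptions like these, applicants who start below the cutoff tend to stay there: rejections beget missed payments, which beget further rejections.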

Employers, naturally, have little sympathy for this argument. Good credit, they argue, is an attribute of a responsible person, the kind they want to hire. But framing debt as a moral issue is a mistake. Plenty of hardworking and trustworthy people lose jobs every day as companies fail, cut costs, or move jobs offshore. These numbers climb during recessions. And many of the newly unemployed find themselves without health insurance. At that point, all it takes is an accident or an illness for them to miss a payment on a loan. Even with the Affordable Care Act, which reduced the ranks of the uninsured, medical expenses remain the single biggest cause of bankruptcies in America.

People with savings, of course, can keep their credit intact during tough times. Those living from paycheck to paycheck are far more vulnerable. Consequently, a sterling credit rating is not just a proxy for responsibility and smart decisions. It is also a proxy for wealth. And wealth is highly correlated with race.

Consider this. As of 2015, white households held on average roughly ten times as much money and property as black and Hispanic households. And while only 15 percent of white households had zero or negative net worth, more than a third of black and Hispanic households found themselves with no cushion. This wealth gap increases with age. By their sixties, whites are eleven times richer than African Americans. Given these numbers, it is not hard to argue that the poverty trap created by employer credit checks affects society unequally and along racial lines. As I write this, ten states have passed legislation to outlaw the use of credit scores in hiring. In banning them, the New York City government declared that using credit checks “disproportionately affects low-income applicants and applicants of color.” Still, the practice remains legal in forty states.

This is not to say that personnel departments across America are intentionally building a poverty trap, much less a racist one. They no doubt believe that credit reports hold relevant facts that help them make important decisions. After all, “The more data, the better” is the guiding principle of the Information Age. Yet in the name of fairness, some of this data should remain uncrunched.

Imagine for a moment that you’re a recent graduate of Stanford University’s law school and are interviewing for a job at a prestigious law firm in San Francisco. The senior partner looks at his computer-generated file and breaks into a laugh. “It says here that you’ve been arrested for running a meth lab in Rhode Island!” He shakes his head. Yours is a common name, and computers sure make silly mistakes. The interview proceeds.

At the high end of the economy, human beings tend to make the important decisions, while relying on computers as useful tools. But in the mainstream and, especially, in the lower echelons of the economy, much of the work, as we’ve seen, is automated. When mistakes appear in a dossier—and they often do—even the best-designed algorithms will make the wrong decision. As data hounds have long said: garbage in, garbage out.
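
The “garbage in, garbage out” point is mechanical, and a few lines of Python make it concrete. The hard-fail rule and the records below are hypothetical, but the failure mode is general: once an erroneous field enters the dossier, nothing downstream questions it.

```python
# Hypothetical automated screen: hard-fails any applicant whose dossier
# lists an arrest. The rule and records are invented for illustration;
# the point is that the screen trusts its input completely.

def automated_screen(dossier):
    if dossier.get("arrests"):
        return "reject"
    return "advance"

clean = {"name": "J. Doe", "arrests": []}
# Same person, but a record-matching error has attached someone
# else's arrest to the file.
garbled = {"name": "J. Doe", "arrests": ["operating a meth lab"]}

print(automated_screen(clean))    # advance
print(automated_screen(garbled))  # reject: garbage in, garbage out
```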

A person at the receiving end of this automated process can suffer the consequences for years. Computer-generated terrorism no-fly lists, for example, are famously rife with errors. An innocent person whose name resembles that of a suspected terrorist faces a hellish ordeal every time he has to get on a plane. (Wealthy travelers, by contrast, are often able to pay to acquire “trusted traveler” status, which permits them to waltz through security. In effect, they’re spending money to shield themselves from a WMD.)

Mistakes like this pop up everywhere. The Federal Trade Commission reported in 2013 that 5 percent of consumers—or an estimated ten million people—had an error on one of their credit reports serious enough to result in higher borrowing costs. That’s troublesome, but at least credit reports exist in the regulated side of the data economy. Consumers can (and should) request to see them once a year and amend potentially costly errors.

Still, the unregulated side of the data economy is even more hazardous. Scores of companies, from giants like Acxiom Corp. to a host of fly-by-night operations, buy information from retailers, advertisers, smartphone app makers, and companies that run sweepstakes or operate social networks in order to assemble a cornucopia of facts on every consumer in the country. They might note, for example, whether a consumer has diabetes, lives in a house with a smoker, drives an SUV, or owns a pair of collies (who may live on in the dossier long after their earthly departure). These companies also scrape all kinds of publicly available government data, including voting and arrest records and housing sales. All of this goes into a consumer profile, which they sell.

Some data brokers, no doubt, are more dependable than others. But any operation that attempts to profile hundreds of millions of people from thousands of different sources is going to get a lot of the facts wrong. Take the case of a Philadelphian named Helen Stokes. She wanted to move into a local senior living center but kept getting rejected because of arrests on her background record. It was true that she had been arrested twice during altercations with her former husband. But she had not been convicted and had managed to have the records expunged from government databases. Yet the arrest records remained in files assembled by a company called RealPage, Inc., which provides background checks on tenants.

For RealPage and other companies like it, creating and selling reports brings in revenue. People like Helen Stokes are not customers. They’re the product. Responding to their complaints takes time and costs money. After all, while Stokes might say that the arrests have been expunged, verifying that fact eats up time and money. An expensive human being might have to spend a few minutes on the Internet or even—heaven forbid—make a phone call or two. Little surprise, then, that Stokes didn’t get her record cleared until she sued. And even after RealPage responded, how many other data brokers might still be selling files with the same poisonous misinformation? It’s anybody’s guess.

Some data brokers do offer consumers access to their data. But these reports are heavily curated. They include the facts but not always the conclusions data brokers’ algorithms have drawn from them. Someone who takes the trouble to see her file at one of the many brokerages, for example, might see the home mortgage, a Verizon bill, and a $459 repair on the garage door. But she won’t see that she’s in a bucket of people designated as “Rural and Barely Making It,” or perhaps “Retiring on Empty.” Fortunately for the data brokers, few of us get a chance to see these details. If we did (and the FTC is pushing for more accountability), the brokers would likely find themselves besieged by consumer complaints—millions of them. It could very well disrupt their business model. For now, consumers learn about their faulty files only when word slips out, often by chance.

An Arkansas resident named Catherine Taylor, for example, missed out on a job at the local Red Cross several years ago. Those things happen. But Taylor’s rejection letter arrived with a valuable nugget of information. Her background report included a criminal charge for the intent to manufacture and sell methamphetamines. This wasn’t the kind of candidate the Red Cross was looking to hire.

Taylor looked into it and discovered that the criminal charges belonged to another Catherine Taylor, who happened to be born on the same day. She later found that at least ten other companies were tarring her with inaccurate reports—one of them connected to her application for federal housing assistance, which had been denied. Was the housing rejection due to a mistaken identity?

In an automatic process, it no doubt could have been. But a human being intervened. When applying for federal housing assistance, Taylor and her husband met with an employee of the housing authority to complete a background check. This employee, Wanda Taylor—no relation—was using information provided by Tenant Tracker, the data broker. It was riddled with errors and blended identities. It linked Taylor, for example, with the possible alias of Chantel Taylor, a convicted felon who happened to be born on the same day. It also connected her to the other Catherine Taylor she had heard about, who had been convicted in Illinois of theft, forgery, and possession of a controlled substance.
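
It’s worth pausing on how a file like that gets built. The sketch below is a guess at the kind of naive record linkage involved, not a description of Tenant Tracker’s actual code, and the birth date shown is a placeholder: match records on last name and birth date alone, and three different Taylors collapse into one dossier.

```python
# Naive record linkage: merging records on (last name, birth date) alone.
# The records and the birth date are placeholders; this is an assumed
# failure mode, not any vendor's actual matching logic.
from collections import defaultdict

records = [
    {"name": "Catherine Taylor", "dob": "1966-10-04",
     "note": "housing applicant"},
    {"name": "Catherine Taylor", "dob": "1966-10-04",
     "note": "theft and forgery convictions, Illinois"},
    {"name": "Chantel Taylor", "dob": "1966-10-04",
     "note": "convicted felon"},
]

def merge_key(record):
    """The blocking key: last name plus birth date, nothing else."""
    last_name = record["name"].split()[-1].lower()
    return (last_name, record["dob"])

dossiers = defaultdict(list)
for rec in records:
    dossiers[merge_key(rec)].append(rec)

for key, merged in dossiers.items():
    # All three distinct people land in a single dossier, because
    # the key cannot tell the Taylors apart.
    print(key, "->", [r["note"] for r in merged])
```

A key that included even one more field (a middle name, say, or part of a Social Security number) would split these records apart; that is roughly the check Wanda Taylor performed by hand.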

The dossier, in short, was a toxic mess. But Wanda Taylor had experience with such things. She began to dig through it. She promptly drew a line through the possible alias, Chantel, which seemed improbable to her. She read in the file that the Illinois thief had a tattoo on her ankle with the name Troy. After checking Catherine Taylor’s ankle, she drew a line through that felon’s name as well. By the end of the meeting, one conscientious human being had cleared up the confusion generated by web-crawling data-gathering programs. The housing authority knew which Catherine Taylor it was dealing with.

The question we’re left with is this: How many Wanda Taylors are out there clearing up false identities and other errors in our data? The answer: not nearly enough. Humans in the data economy are outliers and throwbacks. The systems are built to run automatically as much as possible. That’s the efficient way; that’s where the profits are. Errors are inevitable, as in any statistical program, but the quickest way to reduce them is to fine-tune the algorithms running the machines. Humans on the ground only gum up the works.

This trend toward automation is leaping ahead as computers make sense of more and more of our written language, in some cases processing thousands of written documents in a second. But they still misunderstand all sorts of things. IBM’s Jeopardy!-playing supercomputer Watson, for all its brilliance, was flummoxed by language or context about 10 percent of the time. It was heard saying that a butterfly’s diet was “Kosher,” and it once confused Oliver Twist, the Charles Dickens character, with the 1980s techno-pop band the Pet Shop Boys.

Such errors are sure to pile up in our consumer profiles, confusing and misdirecting the algorithms that manage more and more of our lives. These errors, which result from automated data collection, poison predictive models, fueling WMDs. And this collection will only grow. Computers are already busy expanding beyond the written word. They’re harvesting spoken language and images and using them to capture more information about everything in the universe—including us. These new technologies will mine new troves for our profiles, while expanding the risk for errors.
