Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
by Cathy O'Neil

A data-savvy insurer will note that cars traveling along that route in the wee hours have an increased risk of accidents. There are more than a few drunks on the road. And to be fair, our barista is adding a bit of risk by taking the shortcut and sharing the road with the people spilling out of the bars. One of them might hit her. But as far as the insurance company’s geo-tracker is concerned, not only is she mingling with drunks, she may be one.

In this way, even the models that track our personal behavior gain many of their insights, and assess risk, by comparing us to others. This time, instead of bucketing people who speak Arabic or Urdu, live in the same zip codes, or earn similar salaries, they assemble groups of us who act in similar ways. The prediction is that those who act alike will take on similar levels of risk. If you haven’t noticed, this is birds of a feather all over again, with many of the same injustices.

When I talk to most people about black boxes in cars, it’s not the analysis they object to as much as the surveillance itself. People insist to me that they won’t give in to monitors. They don’t want to be tracked or have their information sold to advertisers or handed over to the National Security Agency. Some of these people might succeed in resisting this surveillance. But privacy, increasingly, will come at a cost.

In these early days, the auto insurers’ tracking systems are opt-in. Only those willing to be tracked have to turn on their black boxes. They get rewarded with a discount of between 5 and 50 percent and the promise of more down the road. (And the rest of us subsidize those discounts with higher rates.) But as insurers gain more information, they’ll be able to create more powerful predictions. That’s the nature of the data economy. Those who squeeze out the most intelligence from this information, turning it into profits, will come out on top. They’ll predict group risk with greater accuracy (though individuals will always confound them). And the more they benefit from the data, the harder they’ll push for more of it.

At some point, the trackers will likely become the norm. And consumers who want to handle insurance the old-fashioned way, withholding all but the essential from their insurers, will have to pay a premium, and probably a steep one. In the world of WMDs, privacy is increasingly a luxury that only the wealthy can afford.

At the same time, surveillance will change the very nature of insurance. Insurance is an industry that traditionally draws on the majority of the community to respond to the needs of an unfortunate minority. In the villages we lived in centuries ago, families, religious groups, and neighbors helped look after each other when fire, accident, or illness struck. In the market economy, we outsource this care to insurance companies, which keep a portion of the money for themselves and call it profit.

As insurance companies learn more about us, they’ll be able to pinpoint those who appear to be the riskiest customers and then either drive their rates to the stratosphere or, where legal, deny them coverage. This is a far cry from insurance’s original purpose, which is to help society balance its risk. In a targeted world, we no longer pay the average. Instead, we’re saddled with anticipated costs. Instead of smoothing out life’s bumps, insurance companies will demand payment for those bumps in advance. This undermines the point of insurance, and the hits will fall especially hard on those who can least afford them.
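To make the shift concrete, here is a minimal sketch in Python of the two pricing regimes; the dollar figures and group size are invented for illustration, not drawn from the book.

```python
# Illustrative contrast between pooled and individually targeted
# premiums. All numbers here are made up for the example.

predicted_annual_costs = [500, 800, 1_200, 2_000, 9_500]  # per person

# Traditional pooling: everyone pays the average, so the group
# absorbs the bad luck of its riskiest member.
pooled_premium = sum(predicted_annual_costs) / len(predicted_annual_costs)
print(f"Pooled premium, same for all: ${pooled_premium:,.0f}")  # $2,800

# Targeted pricing: each person is billed their own predicted cost,
# paying in advance for bumps that may never come.
for person, cost in enumerate(predicted_annual_costs, start=1):
    print(f"Person {person} pays ${cost:,.0f}")
```

Under pooling, the highest-risk member pays $2,800; under targeting, $9,500. That averaging is exactly what the prediction machinery removes.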

As insurance companies scrutinize the patterns of our lives and our bodies, they will sort us into new types of tribes. But these won’t be based on traditional metrics, such as age, gender, net worth, or zip code. Instead, they’ll be behavioral tribes, generated almost entirely by machines.

For a look at how such sorting will proliferate, consider a New York City data company called Sense Networks. A decade ago, researchers at Sense began to analyze cell phone data showing where people went. This data, provided by phone companies in Europe and America, was anonymous: just dots moving on maps. (Of course, it wouldn’t have taken much sleuthing to associate one of those dots with the address it returned to every night of the week. But Sense was not about individuals; it was about tribes.)

The team fed this mobile data on New York cell phone users to its machine-learning system but provided scant additional guidance. They didn’t instruct the program to isolate suburbanites or millennials or to create different buckets of shoppers. The software would find similarities on its own. Many of them would be daft—people who spend more than 50 percent of their days on streets starting with the letter J, or those who take most of their lunch breaks outside. But if the system explored millions of these data points, patterns would start to emerge, and with them correlations, presumably including many that humans would never consider.
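Sense never published its internals, but what this passage describes is classic unsupervised clustering. Here is a minimal sketch, assuming synthetic location features and off-the-shelf k-means from scikit-learn; every feature choice and parameter below is my assumption, for illustration only.

```python
# Hypothetical sketch of machine-made "tribes": unsupervised
# clustering of phones by movement features, with no labels supplied.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)
n_phones = 1_000

# Invented per-phone features: home latitude/longitude (near NYC),
# typical hour of activity, and distance traveled per day (km).
features = np.column_stack([
    40.7 + 0.1 * rng.standard_normal(n_phones),
    -74.0 + 0.1 * rng.standard_normal(n_phones),
    rng.uniform(0, 24, n_phones),
    rng.exponential(5.0, n_phones),
])

# Standardize so no single feature dominates the distance metric,
# then let k-means group phones whose behavior looks alike. Nobody
# tells it what a "suburbanite" or a "shopper" is.
X = StandardScaler().fit_transform(features)
tribes = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(tribes))  # size of each machine-made tribe
```

What the clusters mean is exactly the question the Sense team faced: the algorithm returns group labels, not explanations.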

As the days passed and Sense’s computer digested its massive trove of data, the dots started to take on different colors. Some turned toward red, others toward yellow, blue, and green. The tribes were emerging.

What did these tribes represent? Only the machine knew, and it wasn’t talking. “We wouldn’t necessarily recognize what these people have in common,” said Sense’s cofounder and former CEO Greg Skibiski. “They don’t fit into the traditional buckets that we’d come up with.” As the tribes took on their colors, the Sense team could track their movements through New York. By day, certain neighborhoods would be dominated by blue, then turn red in the evening, with a sprinkling of yellows. One tribe, recalled Skibiski, seemed to frequent a certain spot late at night. Was it a dance club? A crack house? When the Sense team looked up the address, they saw it was a hospital. People in that tribe appeared to be getting hurt or sick more often. Or maybe they were doctors, nurses, and emergency medical workers.

Sense was sold in 2014 to YP, a mobile advertising company spun off from AT&T. So for the time being, its sorting will be used to target different tribes for ads. But you can imagine how machine-learning systems fed by different streams of behavioral data will soon be placing us not just into one tribe but into hundreds of them, even thousands. Certain tribes will respond to similar ads. Others may resemble each other politically or land in jail more frequently. Some might love fast food.

My point is that oceans of behavioral data, in coming years, will feed straight into artificial intelligence systems. And these will remain, to human eyes, black boxes. Throughout this process, we will rarely learn about the tribes we “belong” to or why we belong there. In the era of machine intelligence, most of the variables will remain a mystery. Many of those tribes will mutate hour by hour, even minute by minute, as the systems shuttle people from one group to another. After all, the same person acts very differently at 8 a.m. and 8 p.m.

These automatic programs will increasingly determine how we are treated by the other machines, the ones that choose the ads we see, set prices for us, line us up for a dermatologist appointment, or map our routes. They will be highly efficient, seemingly arbitrary, and utterly unaccountable. No one will understand their logic or be able to explain it.

If we don’t wrest back a measure of control, these future WMDs will feel mysterious and powerful. They’ll have their way with us, and we’ll barely know it’s happening.

In 1943, at the height of World War II, when the American armies and industries needed every troop or worker they could find, the Internal Revenue Service tweaked the tax code, granting tax-free status to employer-based health insurance. This didn’t seem to be a big deal, certainly nothing to rival the headlines about the German surrender at Stalingrad or the Allied landings in Sicily. At the time, only about 9 percent of American workers received private health coverage as a job benefit. But with the new tax-free status, businesses set about attracting scarce workers by offering health insurance. Within ten years, 65 percent of Americans would come under their employers’ systems. Companies already exerted great control over our finances. But in that one decade, they gained a measure of control—whether they wanted it or not—over our bodies.

Seventy-five years later, health care costs have metastasized and now consume $3 trillion per year. Nearly one dollar of every five we earn feeds the vast health care industry.

Employers, which have long nickel-and-dimed workers to lower their costs, now have a new tactic to combat this growth. They call it “wellness.” It involves growing surveillance, including lots of data pouring in from the Internet of Things—the Fitbits, Apple Watches, and other sensors that relay updates on how our bodies are functioning.

The idea, as we’ve seen so many times, springs from good intentions. In fact, it is encouraged by the government. The Affordable Care Act, or Obamacare, invites companies to engage workers in wellness programs, and even to “incentivize” health. By law, employers can now offer rewards and assess penalties reaching as high as 50 percent of the cost of coverage. Now, according to a study by the Rand Corporation, more than half of all organizations employing fifty people or more have wellness programs up and running, and more are joining the trend every week.

There’s plenty of justification for wellness programs. If they work—and, as we’ll see, that’s a big “if”—the biggest beneficiary is the worker and his or her family. Yet if wellness programs help workers avoid heart disease or diabetes, employers gain as well. The fewer emergency room trips made by a company’s employees, the less risky the entire pool of workers looks to the insurance company, which in turn brings premiums down. So if we can just look past the intrusions, wellness may appear to be win-win.

Trouble is, the intrusions cannot be ignored or wished away. Nor can the coercion. Take the case of Aaron Abrams. He’s a math professor at Washington and Lee University in Virginia. He is covered by Anthem Insurance, which administers a wellness program. To comply with the program, he must accrue 3,250 “HealthPoints.” He gets one point for each “daily log-in” and 1,000 points each for an annual doctor’s visit and an on-campus health screening. He also gets points for filling out a “Health Survey” in which he assigns himself monthly goals, earning more points if he achieves them. If he chooses not to participate in the program, Abrams must pay an extra $50 per month toward his premium.
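The arithmetic of that scheme is worth writing down. A short sketch using only the point values the passage names; the 365-day year and the function name are my assumptions.

```python
# HealthPoints accounting as described: 1 point per daily log-in,
# 1,000 points each for an annual doctor's visit and an on-campus
# screening, 3,250 points required to comply.
TARGET = 3_250

def health_points(daily_logins: int, doctor_visit: bool, screening: bool) -> int:
    """Hypothetical tally built from the point values quoted in the text."""
    return daily_logins + 1_000 * doctor_visit + 1_000 * screening

# Best case without survey goals: a log-in every single day of the
# year (365 is my assumption) plus both 1,000-point events.
best = health_points(daily_logins=365, doctor_visit=True, screening=True)
print(best, "of", TARGET, "points")   # 2365 of 3250 points
print("shortfall:", TARGET - best)    # 885

# Opting out instead costs $50 a month toward the premium.
print("annual opt-out cost: $", 50 * 12)  # $600
```

In other words, even perfect daily compliance plus both annual events leaves him 885 points short; the monthly survey goals are not optional garnish but load-bearing.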

Abrams was hired to teach math. And now, like millions of other Americans, part of his job is to follow a host of health dictates and to share that data not only with his employer but also with the third-party company that administers the program. He resents it, and he foresees the day when the college will be able to extend its surveillance. “It is beyond creepy,” he says, “to think of anyone reconstructing my daily movements based on my own ‘self-tracking’ of my walking.”

My fear goes a step further. Once companies amass troves of data on employees’ health, what will stop them from developing health scores and wielding them to sift through job candidates? Much of the proxy data collected, whether step counts or sleeping patterns, is not protected by law, so it would theoretically be perfectly legal. And it would make sense: as we’ve seen, employers routinely reject applicants on the basis of credit scores and personality tests. Health scores represent a natural—and frightening—next step.

Already, companies are establishing ambitious health standards for workers and penalizing them if they come up short. Michelin, the tire company, sets goals for its employees on metrics ranging from blood pressure to glucose, cholesterol, triglycerides, and waist size. Those who don’t reach the targets in three categories have to pay an extra $1,000 a year toward their health insurance. The national drugstore chain CVS announced in 2013 that it would require employees to report their levels of body fat, blood sugar, blood pressure, and cholesterol—or pay $600 a year.

The CVS move prompted this angry response from Alissa Fleck, a columnist at Bitch Media: “Attention everyone, everywhere. If you’ve been struggling for years to get in shape, whatever that means to you, you can just quit whatever it is you’re doing right now because CVS has got it all figured out. It turns out whatever silliness you were attempting, you just didn’t have the proper incentive. Except, as it happens, this regimen already exists and it’s called humiliation and fat-shaming. Have someone tell you you’re overweight, or pay a major fine.”
