Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy

Cathy O'Neil

At the center of the weight issue is a discredited statistic, the body mass index. This is based on a formula devised two centuries ago by a Belgian mathematician, Lambert Adolphe Jacques Quetelet, who knew next to nothing about health or the human body. He simply wanted an easy formula to gauge obesity in a large population. He based it on what he called the “average man.”

“That’s a useful concept,” writes Keith Devlin, the mathematician and science author. “But if you try to apply it to any one person, you come up with the absurdity of a person with 2.4 children. Averages measure entire populations and often don’t apply to individuals.” Devlin adds that the BMI, with numerical scores, gives “mathematical snake oil” the air of scientific authority.

The BMI is a person’s weight in kilograms divided by the square of their height in meters. It’s a crude numerical proxy for physical fitness. It’s more likely to conclude that women are overweight. (After all, we’re not “average” men.) What’s more, because muscle is denser than fat, chiseled athletes often have sky-high BMIs. In the alternate BMI universe, LeBron James qualifies as overweight. When economic “sticks and carrots” are tied to BMI, large groups of workers are penalized for the kind of body they have. This comes down especially hard on black women, who often have high BMIs.
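
To see the formula in action, here is a minimal Python sketch of Quetelet's calculation; the height and weight used for LeBron James are approximate public listings, included only for illustration.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Quetelet's index: weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2

# Approximate listed figures for LeBron James: about 2.06 m tall, 113 kg.
score = bmi(weight_kg=113, height_m=2.06)
print(f"BMI: {score:.1f}")  # ~26.6, above the standard 25.0 "overweight" cutoff
```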

But isn’t it a good thing, wellness advocates will ask, to help people deal with their weight and other health issues? The key question is whether this help is an offer or a command. If companies set up free and voluntary wellness programs, few would have reason to object. (And workers who opt in to such programs do, in fact, register gains, though they might well have done so without them.) But tying a flawed statistic like BMI to compensation, and compelling workers to mold their bodies to the corporation’s ideal, infringes on freedom. It gives companies an excuse to punish people they don’t like to look at—and to remove money from their pockets at the same time.

All of this is done in the name of health. Meanwhile, the $6 billion wellness industry trumpets its successes loudly—and often without offering evidence. “Here are the facts,” writes Joshua Love, president of Kinema Fitness, a corporate wellness company. “Healthier people work harder, are happier, help others and are more efficient. Unhealthy workers are generally sluggish, overtired and unhappy, as the work is a symptom of their way of life.”

Naturally, Love didn’t offer a citation for these broad assertions. And yet even if they were true, there’s scant evidence that mandatory wellness programs actually make workers healthier. A research report from the California Health Benefits Review Program concludes that corporate wellness programs fail to lower the average blood pressure, blood sugar, or cholesterol of those who participate in them. Even when people succeed in losing weight on one of these programs, they tend to gain it back. (The one area where wellness programs do show positive results is in quitting smoking.)

It also turns out that wellness programs, despite well-publicized individual successes, often don’t lead to lower health care spending. A 2013 study headed by Jill Horwitz, a law professor at UCLA, rips away the movement’s economic underpinning. Randomized studies, according to the report, “raise doubts” that smokers and obese workers chalk up higher medical bills than others. While it is true that they are more likely to suffer from health problems, these tend to come later in life, when they’re off the corporate health plan and on Medicare. In fact, the greatest savings from wellness programs come from the penalties assessed on the workers. In other words, like scheduling algorithms, they provide corporations with yet another tool to raid their employees’ paychecks.

Despite my problems with wellness programs, they don’t (yet) rank as full WMDs. They’re certainly widespread, they intrude on the lives of millions of employees, and they inflict economic pain. But they are not opaque, and, except for the specious BMI score, they’re not based on mathematical algorithms. They’re a simple case of wage theft, one wrapped up in flowery health rhetoric.

Employers are already overdosing on our data. They’re busy using it, as we’ve seen, to score us as potential employees and as workers. They’re trying to map our thoughts and our friendships and predict our productivity. Since they’re already deeply involved in insurance, with workforce health care a major expense, it’s only natural that they would extend surveillance on a large scale to workers’ health. And if companies cooked up their own health and productivity models, this could grow into a full-fledged WMD.

 

As you know by now, I am outraged by all sorts of WMDs. So let’s imagine that I decide to launch a campaign for tougher regulations on them, and I post a petition on my Facebook page. Which of my friends will see it on their news feed?

I have no idea. As soon as I hit send, that petition belongs to Facebook, and the social network’s algorithm makes a judgment about how to best use it. It calculates the odds that it will appeal to each of my friends. Some of them, it knows, often sign petitions, and perhaps share them with their own networks. Others tend to scroll right past. At the same time, a number of my friends pay more attention to me and tend to click the articles I post. The Facebook algorithm takes all of this into account as it decides who will see my petition. For many of my friends, it will be buried so low on their news feed that they’ll never see it.
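
Facebook has never published this ranking code, so the following is only a hypothetical sketch of the behavior described above: score each post for each friend by predicted engagement, sort, and surface only the top of the list. Every name and probability here is invented.

```python
# Hypothetical engagement-ranked feed; all names and numbers are
# invented to illustrate the idea, not taken from Facebook.
candidate_posts = [
    # (post, predicted probability this particular friend engages)
    ("my WMD petition", 0.03),
    ("a cat video", 0.40),
    ("a graduation photo", 0.25),
]

FEED_SLOTS = 2  # only the top-ranked items are actually seen

ranked = sorted(candidate_posts, key=lambda post: post[1], reverse=True)
for post, p_engage in ranked[:FEED_SLOTS]:
    print(f"shown: {post} (p={p_engage})")
# The low-scoring petition falls below the cutoff and is never seen.
```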

This is what happens when the immensely powerful network we share with 1.5 billion users is also a publicly traded corporation. While Facebook may feel like a modern town square, the company determines, according to its own interests, what we see and learn on its social network. As I write this, about two-thirds of American adults have a profile on Facebook. They spend thirty-nine minutes a day on the site, only four minutes less than they dedicate to face-to-face socializing. Nearly half of them, according to a Pew Research Center report, count on Facebook to deliver at least some of their news, which leads to the question: By tweaking its algorithm and molding the news we see, can Facebook game the political system?

The company’s own researchers have been looking into this. During the 2010 and 2012 elections, Facebook conducted experiments to hone a tool they called the “voter megaphone.” The idea was to encourage people to spread word that they had voted. This seemed reasonable enough. By sprinkling people’s news feeds with “I voted” updates, Facebook was encouraging Americans—more than sixty-one million of them—to carry out their civic duty and make their voices heard. What’s more, by posting about people’s voting behavior, the site was stoking peer pressure to vote. Studies have shown that the quiet satisfaction of carrying out a civic duty is less likely to move people than the possible judgment of friends and neighbors.

At the same time, Facebook researchers were studying how different types of updates influenced people’s voting behavior. No researcher had ever worked in a human laboratory of this scale. Within hours, Facebook could harvest information from tens of millions of people, or more, measuring the impact that their words and shared links had on each other. And it could use that knowledge to influence people’s actions, which in this case happened to be voting.

That’s a significant amount of power. And Facebook is not the only company to wield it. Other publicly held corporations, including Google, Apple, Microsoft, Amazon, and cell phone providers like Verizon and AT&T, have vast information on much of humanity—and the means to steer us in any way they choose.

Usually, as we’ve seen, they’re focused on making money. However, their profits are tightly linked to government policies. The government regulates them, or chooses not to, approves or blocks their mergers and acquisitions, and sets their tax policies (often turning a blind eye to the billions parked in offshore tax havens). This is why tech companies, like the rest of corporate America, inundate Washington with lobbyists and quietly pour hundreds of millions of dollars in contributions into the political system. Now they’re gaining the wherewithal to fine-tune our political behavior—and with it the shape of American government—just by tweaking their algorithms.

The Facebook campaign started out with a constructive and seemingly innocent goal: to encourage people to vote. And it succeeded. After comparing voting records, researchers estimated that their campaign had increased turnout by 340,000 people. That’s a big enough crowd to swing entire states, and even national elections. George W. Bush, after all, won in 2000 by a margin of 537 votes in Florida. The activity of a single Facebook algorithm on Election Day, it’s clear, could not only change the balance of Congress but also decide the presidency.

Facebook’s potency comes not only from its reach but also from its ability to use its own customers to influence their friends. The vast majority of the sixty-one million people in the experiment received a message on their news feed encouraging them to vote. The message included a display of photos: six of the user’s Facebook friends, randomly selected, who had clicked the “I Voted” button. The researchers also studied two control groups, each numbering around six hundred thousand. One group saw the “I Voted” campaign, but without the pictures of friends. The other received nothing at all.

By sprinkling its messages through the network, Facebook was studying the impact of friends’ behavior on our own. Would people encourage their friends to vote, and would this affect their behavior? According to the researchers’ calculations, seeing that friends were participating made all the difference. People paid much more attention when the “I Voted” updates came from friends, and they were more likely to share those updates. About 20 percent of the people who saw that their friends had voted also clicked on the “I Voted” button. Among those who didn’t get the button from friends, only 18 percent did. We can’t be sure that all the people who clicked the button actually voted, or that those who didn’t click it stayed home. Still, with sixty-one million potential voters on the network, a possible difference of two points can be huge.
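
A rough back-of-the-envelope calculation (mine, not the researchers') shows why a two-point gap at this scale matters; note that the published turnout estimate was the more conservative 340,000, since clicking "I Voted" is not the same as casting a ballot.

```python
# Illustrative arithmetic only: the scale of a two-point gap across the
# experiment's roughly sixty-one million users.
users = 61_000_000
clicked_with_friend_photos = 0.20     # share who clicked "I Voted"
clicked_without_friend_photos = 0.18  # share who clicked without the photos

extra_clicks = users * (clicked_with_friend_photos - clicked_without_friend_photos)
print(f"{extra_clicks:,.0f} additional clicks")  # 1,220,000
```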

Two years later, Facebook went a step further. For three months leading up to the election between President Obama and Mitt Romney, a researcher at the company, Solomon Messing, altered the news feed algorithm for about two million people, all of them politically engaged. These people got a higher proportion of hard news, as opposed to the usual cat videos, graduation announcements, or photos from Disney World. If their friends shared a news story, it showed up high on their feed.

Messing wanted to see if getting more news from friends changed people’s political behavior. Following the election, Messing sent out surveys. The self-reported results indicated that the voter participation in this group inched up from 64 to 67 percent.
“When your friends deliver the newspaper,” said Lada Adamic, a computational social scientist at Facebook, “interesting things happen.” Of course, it wasn’t really the friends delivering the newspaper, but Facebook itself. You might argue that newspapers have exerted similar power for eons. Editors pick the front-page news and decide how to characterize it. They choose whether to feature bombed Palestinians or mourning Israelis, a policeman rescuing a baby or battering a protester. These choices can no doubt influence both public opinion and elections. The same goes for television news. But when the New York Times or CNN covers a story, everyone sees it. Their editorial decision is clear, on the record. It is not opaque. And people later debate (often on Facebook) whether that decision was the right one.

Facebook is more like the Wizard of Oz: we do not see the human beings involved. When we visit the site, we scroll through updates from our friends. The machine appears to be only a neutral go-between. Many people still believe it is. In 2013, when a University of Illinois researcher named Karrie Karahalios carried out a survey on Facebook’s algorithm, she found that 62 percent of the people were unaware that the company tinkered with the news feed. They believed that the system instantly shared everything they posted with all of their friends.

The potential for Facebook to hold sway over our politics extends beyond its placement of news and its Get Out the Vote campaigns. In 2012, researchers experimented on 680,000 Facebook users to see if the updates in their news feeds could affect their mood. It was already clear from laboratory experiments that moods are contagious. Being around a grump is likely to turn you into one, if only briefly. But would such contagions spread online?

Using linguistic software, Facebook sorted positive (stoked!) and negative (bummed!) updates. They then reduced the volume of downbeat postings in half of the news feeds, while reducing the cheerful quotient in the others. When they studied the users’ subsequent posting behavior, they found evidence that the doctored news feeds had indeed altered their moods. Those who had seen fewer cheerful updates produced more negative posts. A similar pattern emerged on the positive side.
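
The published study relied on word-count software in the LIWC tradition; the toy sketch below shows that style of word-list tagging, with an invented lexicon standing in for the real one.

```python
# Toy word-list sentiment tagging in the spirit of word-count software;
# this tiny lexicon is invented for illustration.
POSITIVE = {"stoked", "happy", "great", "love"}
NEGATIVE = {"bummed", "sad", "awful", "hate"}

def label(update: str) -> str:
    words = set(update.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(label("so stoked for the weekend"))  # positive
print(label("bummed about the rain"))      # negative
```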

Their conclusion: “Emotional states can be transferred to others…, leading people to experience the same emotions without their awareness.” In other words, Facebook’s algorithms can affect how millions of people feel, and those people won’t know that it’s happening. What would occur if they played with people’s emotions on Election Day?
