
What Paul and Dredze did have was a ridiculous amount of data: a stockpile of 2 billion tweets that they built over a period of a year and a half. “Two billion over a year is a small sample, a lot of data, but a small sample,” Paul explains.

Once they compiled their 2-billion-tweet corpus, they soon found they had another problem. It was too big: not too large for a program but too large for them to work with. If the program was going to learn, they would have to design lessons for it; more specifically, they would have to establish a set of rules that the program could use to separate health-related tweets from non-health-related ones. To write those rules, they needed to whittle the corpus down significantly.

The next step was to figure out which illness-related words would be the most fruitful. “Words like the ‘flu' are strong. But names of really specific drugs return so few tweets they're not worth including,” Paul says. They looked at which of the 2 billion tweets were related to thirty thousand sickness-indicating keywords drawn from WebMD and WrongDiagnosis.com. This filtering and classifying left them with a pile of just 11 million tweets. These contained such words as “flu” and “sick,” along with various other terms that were related to illness but often used for other purposes (e.g., “Web design class gives me a huge headache every time”).
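
For the technically inclined, the filtering step can be sketched in a few lines of code. This is a minimal illustration, not the researchers' actual pipeline; the keyword list and sample tweets are invented stand-ins for the thirty thousand terms described above.

```python
# A minimal sketch of keyword filtering: keep only tweets that mention
# at least one illness-related term. The keywords and tweets below are
# invented stand-ins for the real thirty-thousand-term list.
ILLNESS_KEYWORDS = {"flu", "sick", "fever", "cough", "headache"}

def mentions_illness(tweet: str) -> bool:
    """Return True if any illness keyword appears in the tweet."""
    words = set(tweet.lower().split())
    return not ILLNESS_KEYWORDS.isdisjoint(words)

tweets = [
    "Web design class gives me a huge headache every time",
    "home with the flu, send soup",
    "great game last night!",
]

# Note that the first tweet survives the filter even though it isn't
# really about health, which is exactly why hand labeling comes next.
candidates = [t for t in tweets if mentions_illness(t)]
print(candidates)
```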

These remaining 11 million tweets had to be classified, or annotated, by hand, a task that was monumental enough to be beyond the reach of a pair of linguists. But the emergence of crowd-sourcing platforms has reduced this sort of large-scale, highly redundant task to a chore that can be smashed up and instantaneously divided among thousands of people around the world through Amazon's Mechanical Turk service. The cost, Paul recalls, was close to $100. Each of the 11 million tweets was labeled three times to guard against human error. If two of the Mechanical Turk labelers believed a somewhat ambiguous tweet was about health, the chances of the tweet's not being related to health were small.
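
The two-out-of-three rule is simple majority voting, and it is easy to express in code. A toy version, with hypothetical labels:

```python
from collections import Counter

def majority_label(labels):
    """Return the label that at least two of the three annotators chose."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None  # None: no agreement

# Three Mechanical Turk workers label one ambiguous tweet.
print(majority_label(["health", "health", "not_health"]))  # -> health
```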

The point of this exercise was not to relieve the program of the burden of having to learn for itself but to create a body of correct answers that the program could check itself against. In machine learning, this is called a training set. It's what the program uses to look up the answers to see if it's right or wrong.
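
Readers who want to see what that looks like in practice can picture something like the following sketch, which trains a toy classifier on a tiny hand-labeled training set using the scikit-learn library. The data is invented, and Paul and Dredze's actual model was far more sophisticated:

```python
# A toy "training set" of hand-labeled tweets, and a simple classifier
# that learns from it. Invented data; not the researchers' actual model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_tweets = [
    "home with the flu today",
    "this fever will not break",
    "web design class gives me a headache",  # figurative, not health
    "that concert was sick",                 # slang, not health
]
train_labels = ["health", "health", "not_health", "not_health"]

# Turn each tweet into word counts, then fit the model: these labeled
# examples are the "answers" the program checks itself against.
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_tweets)
model = MultinomialNB().fit(X_train, train_labels)

# Classify a new, unlabeled tweet.
X_new = vectorizer.transform(["I think I caught the flu from my roommate"])
print(model.predict(X_new))  # -> ['health']
```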

Machine learning has been used in practice since the 1950s, but only recently have we had enough material to train a computer model on virtually anything. This methodological breakthrough is possible only because of the Internet, where spontaneous data creation by users has taken the place of costly and laborious surveys.

What Paul and Dredze's program does is show health and flu trends in something closer to real time, telemetrically, rather than in future time. But remember, the future is a matter of perception, and perception on such matters as a flu outbreak is shaped by reporting. Predicting what official CDC results will reveal two weeks before those results become public is an example of a particular future becoming more exposed where it had once been cloaked.

But let's return to our scenario in which Josh was told the identity of the person to whom he was going to give the flu. That scenario also included finely granular, actionable intelligence: the likelihood, computed as a probability score, of direct person-to-person flu transmission. We aren't concerned with some big data problem; we're concerned about Josh!

A few years ago attempting to solve a problem so complex would have involved extremely expensive and elaborate socio-technical simulations carried out by people with degrees in nuclear physics. But that was before millions of people took up the habit of broadcasting their present physical condition, their location, and their plans, all at once.

Working off Paul and Dredze's research, Adam Sadilek published two papers in the spring of 2012 in which he showed how to use geo-tagged tweets to discern—in real time—which one of your friends has a cold, deduce where he got it, and predict the likelihood of his giving it to you.

He applied Paul and Dredze's program, which separates sick tweets from benign tweets, to a real-world setting. Sadilek looked at 15 million tweets, about 4 million of which had been geo-tagged, from 632,611 unique Twitter users in the city of New York. About half of those tweets (2.5 million) were from people who posted geo-tagged tweets more than a hundred times per month, so-called geo-active users. There were only about 6,000 of these people, but there were 31,874 friendships between them. Each person had an average of about 5 friends in the group.

That gave Sadilek a window into where these 6,000 people were, what they felt while they were there, with whom they then met, and for how long. Based on that information, Sadilek's algorithm allowed him to predict 18 percent of all the flu cases that would occur among these 6,000 people over the course of the next month. The experiment was a proof of concept to show the algorithm worked.[19] Had he and his colleagues worked at a larger scale, they would have predicted more individual instances of flu transmission in one month than any epidemiologist in history.
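
The intuition behind such a model is not mysterious. If every encounter with a sick person carries some independent chance of passing the virus along, then your probability of getting the flu is one minus the probability of dodging every single exposure. Here is a toy version of that arithmetic; the transmission rates are invented, and Sadilek's real model weighed far more features:

```python
# Toy co-location flu model: treat each exposure as an independent
# chance of transmission, so P(infected) = 1 minus the product of the
# per-exposure "escape" probabilities. The rates below are invented.
P_FRIEND = 0.05  # assumed chance of catching flu from one sick friend
P_COLOC = 0.01   # assumed chance per co-location with a sick stranger

def infection_probability(sick_friends: int, sick_colocations: int) -> float:
    escape = (1 - P_FRIEND) ** sick_friends * (1 - P_COLOC) ** sick_colocations
    return 1 - escape

# Two sick friends plus a subway car shared with three sick strangers.
print(f"{infection_probability(2, 3):.3f}")  # -> 0.124
```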

Naturally, we can see the limitations of this exercise. Sadilek admits that in 80 percent of the instances when one of the subjects actually became sick with the flu, the causality was opaque. And he acknowledges his algorithm is entirely dependent on infectious people geo-tagging complaints about their maladies on a social network. The percentage of people in New York likely to sick-tweet, as in “Don't know which is more congested, this subway car or my nose #sickinuptheFtrain,” is extremely small. Geo-taggers represented 1 in 3,000 New Yorkers at the time of the experiment, by Sadilek's calculation. Research from Brigham Young University and the Pew Research Center also shows that while more than 60 percent of Americans go online to look for health information, only about 15 percent actually post information about how they themselves are feeling, illnesses they've had, reactions to medications, and so on.[20]
This is human nature: when we're sick, we don't share. And geo-taggers are a rare breed. These are people willing to release a ton of information about themselves, where they go, who they see, and how they feel. This isn't typical behavior . . . yet. But the number of people willing to make that sort of information public is much larger than it was three years ago and exponentially larger than it was ten years ago. This, in part, is why the Josh flu scenario is not far-fetched at all.

Breakthroughs such as these won't vanquish the flu. One persistent gap in coverage is flu transmission through surface contact (which accounts for 31 percent of flu transmissions). And these emerging capabilities also open up a new set of contentions and protocol issues. Take the simple school scenario outlined above. If you're a school administrator, you might respond very differently to a 10 percent chance of one child catching the flu than if you are that one child's parent. All administrators want to make the best possible choice for the greatest number of children under their supervision; all parents want to make the best decision for their own child. These motivations are sometimes in harmony but are often in conflict. If several children show up one morning with a high probability of giving a mild influenza to several other children, do you send the infected kids home? Do you alert the parents of the other children? Do you do nothing? If you're an employer and you know that several of your workers are going to come into the office with a mean bug, contaminate the office, and cause a lot of productivity loss across your entire staff, do you instruct them, preemptively, to take a sick day? If you don't offer paid sick leave, do you change your policy, or do you force the workers to take an unpaid day, effectively punishing them for being willing to show up for work while under the weather? What if the sick person isn't your employee but your office mate? How do you ask for time off because someone else has a cold?

In the past, the basic approach that supervisors passed on to their employees took the form of: use your best judgment. People get sick; sometimes they spread their sickness and sometimes they don't. Deal with it. The ultimate costs of one person's cold, the number of people they'll infect, have historically been very hard to calculate. In the naked future, we may not have the luxury of feigning that sort of ignorance. Indeed, we may know exactly how much a particular person's cold will cost the moment it shows up. It's a number that will change depending on the actions that we, as managers, workers, students, and parents, take. The most effective solution for any individual won't be the best solution for someone else. These arguments won't be solved easily. But because we can see the future not just of flu, but of Josh's flu, of yours and mine, at least we can begin having the debate.
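
A back-of-the-envelope version of that calculation is just an expected-value sum: multiply each coworker's chance of catching the bug by the cost of an infection, then add it all up. The probabilities and dollar figure below are invented for illustration:

```python
# Expected cost of one contagious employee coming to work, computed as
# a simple expected value. All numbers are invented for illustration.
COST_PER_INFECTION = 400.0  # assumed: one lost workday plus lost output

# Hypothetical per-coworker transmission probabilities, of the kind a
# model like Sadilek's might supply.
transmission_probs = {"Ana": 0.30, "Ben": 0.15, "Carla": 0.05}

expected_cost = COST_PER_INFECTION * sum(transmission_probs.values())
print(f"Expected cost if they come in: ${expected_cost:.2f}")  # -> $200.00
```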

When we're sick we hide our weaknesses and inflate our strengths because instinct tells us to do so. No one wants to be treated differently or seen as ill. And so we regard with suspicion and amusement those people who will post evidence of their maladies to social networks, to out themselves as sick. We go through life privately feeling one way while showing a different face publicly and calling this behavior normal. We may soon realize that those people willing to share how they feel are committing a noble and selfless act. In some ways, they're healthier than the rest of us.

CHAPTER 4

Fixing the Weather

IT'S September 2011, and Wolfgang Wagner has just published his resignation as the editor of the journal Remote Sensing. To Wagner, this is a sad but simple matter. He has made a mistake. The journal had accepted an article arguing that because of previous errors in satellite modeling there was no clear way to tell if CO2 was causing heat to become trapped in the atmosphere (a phenomenon called radiative forcing) or if the heat increases in the atmosphere were actually caused by clouds (radiative feedback).[1]

Though the article's author, University of Alabama researcher Roy Spencer, had done some important pioneering work in collecting temperature data from orbital satellites, he had repeatedly been forced to publish corrections for many of his major findings. As a result, Spencer was not a well-regarded researcher in the climate science community.

But Roy Spencer is no dummy.

He sought out Remote Sensing precisely because it was not a journal dedicated to climate change but to the study of satellites and the modeling of satellite data, a journal about instruments. His paper was a Trojan horse.

In his resignation letter, Wagner explains that Spencer's article hadn't been properly vetted and that it ignored other previous, contradictory findings. Wagner further says that in accepting the paper he had trusted Spencer's credentials as a former head of NASA's Microwave Sounding Unit, as well as the judgment of the journal's managing editor. He admits that he himself never really read the article before he decided to include it.

“Had I taken a look, I would have been able to see the problems right away,” Wagner told me. Indeed, almost as soon as the paper was published, researchers from around the world were quick to call it seriously flawed.

By that point it was too late. Roy Spencer had scored a major political victory. The appearance of Spencer's paper in a “peer-reviewed” science publication was quickly picked up by conservative media. On April 6 meteorologist Joe Bastardi appeared on Fox and Friends to declare that global warming had effectively been debunked. A representative from the Heartland Institute writing on the Forbes blog said that the inclusion of the paper in a peer-reviewed journal ended the global warming debate once and for all.[2]

In resigning, Wagner hoped to put an end to the controversy and also regain a normal teaching schedule. The job of editing an academic journal demands a great deal of work, and the rewards are mostly personal at best. Wagner was not particularly attached to the title. He took up the position at the publisher's request and worked on a voluntary basis. “I had my own work to get back to,” Wagner told me. “I was very busy at the time.” His decision to remove himself from the masthead seemed an appropriate one in light of the error but not really a significant action.

Days later, Wagner's e-mail in-box was full of death threats. He describes these as “awful,” mostly anonymous, and emanating primarily from the United States, Australia, and the United Kingdom. “I was glad I lived in Vienna,” he says.

Spencer went on to claim on his blog that Wagner (whom he calls simply “the editor”) had been forced to resign by the Intergovernmental Panel on Climate Change (IPCC). “It appears the IPCC gatekeepers have once again put pressure on a journal for daring to publish anything that might hurt the IPCC's politically immovable position that climate change is almost entirely human-caused. I can see no other explanation for an editor resigning in such a situation.”[3]

Wagner insists that he's never met anyone from the IPCC, was never contacted by the organization, and certainly never received any pressure. He's a physicist and a teacher. His expertise is in detecting soil moisture using satellites (which, in practice, is even harder than it sounds). This is not to say he didn't make a mistake. He overextended himself and didn't appreciate that certain voices would rise to champion a discredited methodology as forcefully as one might defend family, country, or love of God. “It's very strange for us here. Climate change is not a controversial subject for us in Central Europe,” says Wagner. “I was shocked.”

Meteorological forecasting was not only the first big data problem but the challenge that actually gave rise to the computer as we understand it today. The question of what we can infer about the future from vast amounts of computational data is therefore tied to the problem of predicting the effects of climate change. It would seem that if science can't solve this problem, it should give up the entire endeavor of trying to predict anything but the results of lab experiments. So why, in spite of all the successes science has made in big-data-aided computational prediction, are scientists such as Wagner still getting death threats? Why, indeed, are we still debating this stuff? Why can't we get it right once and for all?

The Memories of Tubes

The date is June 4, 1944. The setting is Southwick House, the tactical advance headquarters of the Allied forces, just outside Portsmouth, England. The day before was bright and clear, but clouds are moving down from Nova Scotia under the cover of night. The next month will bring with it uncharacteristically high winds and much April-like weather.

General Dwight Eisenhower had been preparing for the D-day invasion until just a few hours ago when his head of meteorology, British group captain James Stagg, informed him that the seas would be too rough for an invasion the next day. Eisenhower is now faced with the prospect of postponing the assault until June 19 and quite possibly losing the element of surprise.

At 4:15 A.M., Eisenhower assembles his top fifteen commanders. Stagg, this time, has better news. Having spoken with one of his meteorologists, a Norwegian forecaster named Sverre Petterssen, Stagg is now convinced the Allies will have a very brief window on June 6 to stage their assault on the beach at Normandy.

Petterssen was a devotee of the relatively new Bergen school of meteorology, which held that weather was influenced by masses of cold and warm air, or “fronts,” colliding miles above the earth's surface. According to the Bergen school, the fluid dynamics of these fronts (their density, water content, velocity of movement, et cetera), when properly observed, would yield a more accurate forecast than earlier, intuitive, historical methods of forecasting (e.g., the simple statistical method that resulted in the original June 5 forecast). Petterssen calculated that an eastward movement of the storm front would result in a brief break in the wind and rain.

When Stagg informs Eisenhower of this, the general takes less than thirty seconds to reach a new course of action. “Okay,” he says. “Let's go.”[4]

James Fleming describes this decision in the Proceedings of the International Commission on History of Meteorology as a pivotal moment in the turning of the war:

Ironically, the German meteorologists, aware of new storms moving in from the North Atlantic, had decided that the weather would be much too bad to permit an invasion attempt. The Germans were caught completely off guard. Their high command had relaxed and many officers were on leave; their airplanes were grounded; their naval vessels absent.[5]

The lesson to future military leaders from Eisenhower's success was clear: better weather data and better forecasting were the difference between victory and defeat.

Skip ahead about a year. It is the fall of 1945 when Hungarian-born mathematician John von Neumann arrives at the office of Admiral Lewis Strauss in Washington, D.C. Von Neumann had been working on the development of the atomic bomb and was rapidly becoming one of the most influential technological minds in government. The point of the visit to Strauss was to request $200,000 for an extremely ambitious project: the construction of a machine capable of predicting the weather.

Accompanying von Neumann is Vladimir Zworykin, an engineer at RCA who had been instrumental in turning vacuum tubes into objects that could store information in an extremely compact form. These would be the essential components in the proposed weather prediction machine.

“It should be possible to assemble electronic memory tubes in a circuit so that enormous amounts of information could be contained,” Zworykin told Strauss during the meeting, a feat that had already been demonstrated in a few prototype machines.[6] “This information would consist of observations on temperature, humidity, wind direction and force, barometric pressure, and many other meteorological facts at various points on the earth's surface and at selected elevations above it.”

At the time, no computer was capable of holding enough information or executing the calculations necessary for weather prediction. Indeed, there was really no such thing as what we consider to be a computer at all. The closest thing was the U.S. Army's Electronic Numerical Integrator and Computer (ENIAC). The ENIAC's user interface was a constellation of dials that had to be reconfigured for every new problem. It took up a ridiculous 1,800 square feet of space, held 17,468 very breakable vacuum tubes, and required 160 kilowatts of electrical power to operate, enough juice to power about fifty houses.[7]

Von Neumann wanted to develop a machine that could compute a wider set of problems than the ENIAC and do so without programmers having to extensively rework it. This automatic computer would keep its program in memory alongside its data, so that the program could be accessed and changed without rewiring the machine, an idea he called the stored-program concept. The idea would later make possible the random access memory (RAM) that is the very backbone of every home computer, every smartphone, every server, and the entire digital universe.
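
For readers who want to see the concept rather than take it on faith, here is a toy stored-program machine. The point is that the instructions and the data sit in one and the same memory, so running a different program means loading different values, not rewiring hardware. This is a pedagogical sketch, not a model of von Neumann's actual design:

```python
# A toy stored-program machine: code and data share one memory list.
def run(memory):
    acc, pc = 0, 0  # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]  # fetch an instruction pair
        pc += 2
        if op == "LOAD":     # copy a memory cell into the accumulator
            acc = memory[arg]
        elif op == "ADD":    # add a memory cell to the accumulator
            acc += memory[arg]
        elif op == "STORE":  # write the accumulator back to memory
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Cells 0-7 hold the program; cells 8-10 hold the data. The program
# computes memory[10] = memory[8] + memory[9].
memory = ["LOAD", 8, "ADD", 9, "STORE", 10, "HALT", 0, 2, 3, 0]
print(run(memory)[10])  # -> 5
```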

Von Neumann didn't anticipate that this idea would transform the world completely and totally. In 1945 he knew only that such a device would need to have some sort of military application if he was to get any money to build it. Weather forecasting lent itself perfectly to this because it had recently proven a decisive element in the D-day invasion.

But long-range weather prediction was not von Neumann's ultimate goal. What he was really after was a new kind of weapon, one of greater strategic advantage than any nuclear bomb. Predicting the weather, von Neumann believed, was the first step in controlling it.

The idea was spelled out in full in Zworykin's Outline of Weather Proposal, which stated: “The eventual goal to be attained is the international organization of means to study weather phenomena as global phenomena and to channel the world's weather, as far as possible, in such a way as to minimize the damage of catastrophic disturbances and otherwise to benefit the world to the greatest extent by improved climatic conditions where possible” (emphasis added).

In broaching this fantastical idea, Zworykin found an eager partner in von Neumann, who wrote in his cover letter on the proposal (dated October 24, 1945), “I agree with you completely . . . this would provide a basis for scientific approach for influencing the weather.” This control would be achieved by perfectly timed and calculated explosions of energy. “All stable processes we shall predict. All unstable processes we shall control.” In weather control, meteorology had a new goal worthy of its greatest efforts.[8]

The meeting in Strauss's office was a success. And so in 1946 the Institute for Advanced Study project was born with the financial help of the U.S. Navy and Air Force.

Two years would pass before von Neumann was able to complete his computer (a team from the University of Manchester would get there first, creating the first working stored-program computer). In the meantime, the institute used the ENIAC to make its weather calculations. The ENIAC was off-line as often as it was online and could only forecast at a very slow rate. If it was Tuesday and you wanted a forecast for the following day, you wouldn't get your forecast until Wednesday had arrived. But the meteorologists von Neumann brought to the project did have a remarkable and historic success: using the ENIAC, they proved that it was indeed possible to predict the weather mechanically. The team was able to predict climate conditions three miles above the earth's surface with realistic accuracy. They soon turned their attention to the problem of improving the models and adding computational power to extend the forecast range and decrease the amount of time required to make projections.

For von Neumann, the progress was plodding. He wasn't satisfied with daily, weekly, or monthly forecasts. He wanted to be able to run a simulation or climate model and come up with a snapshot of the weather at any future time. This “infinite forecast” would reduce the stratospheric inner workings of air, water, and heat to discernible causal relationships, like the workings of a clock.

In 1954 he found encouragement in the work of Norman Phillips, a meteorologist who was experimenting with what could be called the first true climate model capable of making reasonable forecasts of troposphere activity thirty days into the future.

Von Neumann hosted a conference in Princeton, New Jersey, in October 1955 to discuss the importance of Phillips's work. This meeting morphed into perhaps the first major global warming summit, replete with all the discord later climate change conferences would hold. In Turing's Cathedral, his historical account of the field of computer science, historian George Dyson recounts some of what went on:

Consideration was given to the theory that the carbon dioxide content of the atmosphere has been increasing since the beginning of the industrial revolution, and that this increase has resulted in a warming of the atmosphere since that time . . . Von Neumann questioned the validity of this theory, stating that there is reason to believe that most of the industrial carbon dioxide introduced into the atmosphere must already have been absorbed by the ocean.[9]
