In 2012 Nieto represented the return to power of the Partido Revolucionario Institucional (PRI). Despite the insurgent-sounding moniker, the PRI is very much the old-power party in Mexico, having governed the country for seventy-one years until 2000. It has long been associated with chronic corruption and even collusion with drug cartels. Nieto, a young, handsome, not conspicuously bright former governor of the state of México, is seen by many as something of a figurehead for a murky, well-funded machine. Having met him I can attest that he can be very charming, smiles easily, and has a firm handshake. As a governor, he is best known for allowing a particularly brutal army assault on protestors in the city of San Salvador Atenco. The June 30 red-dot cluster over Mexico indicates a lit fuse around the topic of Nieto on Twitter.

At 11:15 P.M., on July 1, as soon as the election is called for the PRI, the student movement group Yo Soy 132 (I Am 132) will spring into action, challenging the results and accusing the PRI of fraud and voter suppression.[17] The next month will be marked by massive protests, marches, clashes with police, and arrests. This is the future that these red dots on Ramakrishnan's monitor foretell.

The cluster in Brazil relates to a sudden rise in the use of “país,” “protest,” “empres,” “ciudad,” and “gobiern.” In a few days twenty-five hundred people will close the Friendship Bridge connecting the Brazilian city of Foz do Iguaçu to the Paraguayan Ciudad del Este, another episode in the impeachment drama of Paraguayan president Fernando Lugo.
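The signal behind those dots is simple in principle: a keyword stem suddenly being used far more often than its recent baseline. Here is a minimal sketch of that kind of burst detection, assuming a hypothetical feed of daily mention counts; the function name, the fourteen-day window, and the three-sigma threshold are illustrative choices, not details of the actual system.

```python
from statistics import mean, stdev

def keyword_bursts(daily_counts, window=14, threshold=3.0):
    """Flag days on which a keyword stem (e.g. "gobiern") is mentioned far more
    often than its trailing baseline.

    daily_counts: list of daily mention counts for one keyword stem.
    window: number of trailing days used as the baseline.
    threshold: standard deviations above the baseline that count as a burst.
    """
    bursts = []
    for day in range(window, len(daily_counts)):
        baseline = daily_counts[day - window:day]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # a perfectly flat baseline can't be scored this way
        z = (daily_counts[day] - mu) / sigma
        if z >= threshold:
            bursts.append((day, round(z, 1)))
    return bursts

# A quiet fortnight followed by a sudden jump in mentions.
counts = [40, 38, 45, 41, 39, 44, 42, 40, 43, 41, 39, 44, 42, 40, 180]
print(keyword_bursts(counts))  # flags the spike on the final day
```

A production system would run something like this over thousands of stems, languages, and locations at once and then cluster the bursts that co-occur in space and time, which is roughly what the red dots on the monitor represent.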

As soon as clusters appear on Ramakrishnan's computer, the system automatically sends an alert to government analysts with the Intelligence Advanced Research Projects Activity (IARPA), which is funding Ramakrishnan through a program called Open Source Indicators (OSI). The program seeks to use available public data to model potential future events before they happen. Ramakrishnan and his team are one of several candidates competing for IARPA funds for further development. The different teams are evaluated monthly on the basis of what their predictions were, how much lead time the prediction provided, confidence in the prediction, and other factors.[18]

The OSI program is a descendant of the intelligence practice of analyzing “chatter,” a method of surveillance that first emerged during the cold war. U.S. intelligence agents would listen in on the Soviet military communication network for clues about impending actions or troop movements. Most of this overheard talk was unremarkable, but when the amount of chatter between missile silo personnel and military headquarters increased, it indicated that a big military exercise was about to get under way.[19] This analysis was a purely human endeavor and a fairly straightforward one, with one enemy, one network to watch, and one set of events to watch out for.

In the post-9/11 world, where—we are told—potential enemies are everywhere and threats are too numerous to mention, IARPA considers any event related to “population-level changes in communication, consumption, and movement” worthy of predicting. That could include a commodity-price explosion, a civil war, a disease outbreak, or the election of a fringe candidate to an allied nation's parliament: anything that could impact U.S. interests, U.S. security, or both. The hope is that if such events can be seen in advance, their potential impact can be calculated, different responses can be simulated, and decision makers can then select the best action.

What this means is that the amount of potentially useful data has grown to encompass a far greater number of signals. For U.S. intelligence personnel, Facebook, Twitter, and other social networks now serve the role that chatter served during the cold war. But as Ramakrishnan admits, Facebook probably is not where the next major national security threat is going to pop up. So intelligence actively monitors about twenty thousand blogs, RSS feeds, and other sources of information in the same way newsroom reporters constantly watch AP bulletins and listen to police scanners to catch late-breaking developments.

In looking for potential geopolitical hot spots, researchers also watch out for many of the broken-window signals that play a role in neighborhood predictive policing, but on a global scale. The number of cars in hospital parking lots in a major city can suggest an emerging health crisis, as can a sudden jump in school absences. Even brush clearing or road building can predict an event of geopolitical consequence.

Spend enough time on Google Maps and you can spot a war in the making.

Between January and April 2011, a group of Harvard researchers with the George Clooney–funded Satellite Sentinel Project (SSP) used publicly available satellite images to effectively predict that the Sudanese Armed Forces (SAF) were going to stage a military invasion of the disputed area of Abyei within the coming months. The giveaway wasn't tank or troop buildup on the border. Sudan began building wider, less flood-prone roads toward the target, the kind you would use to transport big equipment such as oil tankers. But there was no oil near where the SAF was working. “These roads indicated the intent to deploy armored units and other heavy vehicles south towards Abyei during the rainy season,” SSP researchers wrote in their final report on the incident.[20] True to their prediction, the SAF began burning border villages in March and initiated a formal invasion on May 19 of that year.

Correctly forecasting a military invasion in Africa used to be the sort of thing only a superpower could do; now it's a semester project for Harvard students.

Much of this data is hiding in plain sight, in reports already written and filed. In 2012 a group of British researchers applied a statistical model to the diaries and dispatches of soldiers in Afghanistan, obtained through the WikiLeaks project. They created a formula to predict future violence levels based on how troops described their firefights in their diaries. The model correctly (though retroactively) predicted an uptick in violence in 2010.[21]
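The British team's formula isn't reproduced here, but the general shape of such a model is easy to sketch: turn the language of one period's dispatches into a numeric escalation score, then regress the next period's violence on it. Everything below, from the word list to the synthetic history, is an illustrative assumption rather than the researchers' actual method.

```python
# Toy version of the idea: use how often dispatches in one period use
# escalatory language to project the number of violent incidents in the next.
import numpy as np

ESCALATORY = {"ambush", "ied", "casualty", "contact", "rpg"}

def escalation_score(dispatches):
    """Mean count of escalatory terms per dispatch in a given period."""
    counts = [sum(w in ESCALATORY for w in d.lower().split()) for d in dispatches]
    return sum(counts) / max(len(counts), 1)

# Synthetic history: (escalation score this period, incidents the next period)
history = [(0.4, 12), (0.6, 15), (0.9, 22), (1.3, 30), (1.8, 41)]
x = np.array([h[0] for h in history])
y = np.array([h[1] for h in history])
slope, intercept = np.polyfit(x, y, 1)  # ordinary least-squares line

this_quarter = ["Patrol reports contact and ied strike near the district center",
                "Routine convoy, no contact"]
score = escalation_score(this_quarter)
print(round(slope * score + intercept, 1), "incidents projected next period")
```

Retroactive prediction, as in the 2010 result, amounts to fitting a model like this on earlier periods and checking it against later ones that were held back.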

Simple news reports, when observed on a massive scale, can reveal information that isn't explicit in any single news item. As I originally wrote for the Futurist magazine, a researcher named Kalev Leetaru was able to retroactively pinpoint the location of Osama bin Laden within a 124-mile radius of Abbottabad, Pakistan, where the terrorist leader was eventually found. He found that almost half of the news reports mentioning bin Laden included the words “Islamabad” and “Peshawar,” two key cities in northern Pakistan. While only one news report mentioned Abbottabad (in the context of a terrorist player who had been captured there), Abbottabad is located easily within 124 miles of the two key cities. In a separate attempt to predict geopolitical events from news reports, Leetaru also used a thirty-year archive of global news put through a learning algorithm to detect “tone” in the news stories (the number of negatively charged words versus positively charged words) along fifteen hundred dimensions and ran the model on the Nautilus, a large, shared-memory supercomputer capable of running more than 8.2 trillion floating point operations per second. Leetaru's model also retroactively predicted the social uprisings in Egypt, Tunisia, and Libya.[22]
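Leetaru's tone measure ran along fifteen hundred dimensions; a toy version of the core idea, counting positively and negatively charged words and averaging, fits in a few lines. The word lists and function below are hypothetical placeholders, not his lexicon or pipeline.

```python
# Toy tone scorer: fraction of positive minus negative words per story.
# The word lists here are tiny illustrative placeholders, not a real lexicon.
POSITIVE = {"agreement", "celebrate", "growth", "peace", "stable"}
NEGATIVE = {"protest", "riot", "crisis", "violence", "crackdown"}

def tone(story: str) -> float:
    words = story.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

stories = [
    "Government and unions reach agreement, markets stable",
    "Protest turns to riot as police crackdown sparks crisis",
]
for s in stories:
    print(round(tone(s), 3), s)
```

A sustained slide in a country's average news tone, computed over millions of articles rather than two sentences, is the kind of trajectory his model flagged ahead of the uprisings in Egypt, Tunisia, and Libya.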

News reports, tweets, and media tone are correlated with violence. Predicting the actual cause of violence is more difficult. Yet researchers are making progress here as well. In Libya, Tunisia, and Egypt, the price of food, as measured by the food price index of the Food and Agriculture Organization of the United Nations (FAO), clearly plays a critical role in civil unrest. In 2008 an advance in this index of more than sixty base points easily preceded a number of low-intensity “food riots.” Prices collapsed and then bounced back just before the 2011 Arab Spring events in Tunisia, Libya, and Egypt.[23]
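The FAO index lends itself to a very plain kind of early warning: flag any month in which the index has climbed far above its recent average. The sketch below assumes a hypothetical monthly series; the sixty-point jump mirrors the 2008 move described above, but the numbers themselves are invented.

```python
def food_price_alerts(index_by_month, baseline_months=12, jump=60):
    """Flag months in which the food price index has risen more than `jump`
    points above its average over the preceding `baseline_months` months."""
    alerts = []
    for i in range(baseline_months, len(index_by_month)):
        baseline = sum(index_by_month[i - baseline_months:i]) / baseline_months
        if index_by_month[i] - baseline > jump:
            alerts.append((i, index_by_month[i], round(baseline, 1)))
    return alerts

# Illustrative values only: a flat year followed by a sharp run-up.
series = [160, 158, 162, 161, 159, 163, 160, 162, 164, 161, 160, 163,
          170, 185, 205, 228, 231]
print(food_price_alerts(series))  # prints the month(s) that breach the threshold
```

On its own a flagged month only says prices have moved; it is the correlation with unrest, documented after the fact, that makes the signal worth watching.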

If you're a humanitarian NGO, knowing where and when civil unrest is going to strike can help you position relief resources and staff in advance. If you're a company, you can pull your business interests out of a country where the shit's about to hit the fan. But to law enforcement, predicting the time and place of an event of significance is less important than knowing who will be involved.

Unlike predicting an invasion, piecing together a model of what a particular individual will do involves a lot more variables. Not only is it more challenging technically, it's also more costly. Researchers can't just run lab experiments on who will or won't commit a crime, so research has to take place in the real world. But experimentation here runs up against privacy concerns. In recent years researchers have found a clever way around these thorny issues by looking toward captive audiences, individuals in situations where they have effectively relinquished any expectation of privacy.

CHAPTER 10

Crime: Predicting the Who

The date is the Wednesday before Thanksgiving 2025. The setting is Dulles International Airport. Today is the busiest travel day of the year and the airport is crowded with parents dragging children dragging stuffed animals from gate to gate. But while there is no shortage of people in the airport, a single key feature distinguishes it from a similar setting as we would encounter it today. The people aren't standing in line. Nor are they attempting the difficult task of disrobing while moving through an X-ray machine. They aren't carrying their belts or shoes or being patted down by TSA agents. They're just walking to where they need to be or else waiting for their plane to board. There seems to be no security whatsoever.

The only apparent bottleneck in the flow of traffic occurs near the entrance to the departure gates, a spot where, just a few years ago, patrons would have encountered enormous detectors of metal, gas, and powder. Instead, visitors to the future Dulles are asked to walk up to a machine, stare directly into a lens, and answer a few questions.

The visitors approach the kiosk one after another, perform the task, and are moved quickly through . . . save one man, whose responses to the requisite questions are somehow off. The machine has not given him clearance to move forward. This man is the bottleneck. After his third attempt, two broad-shouldered TSA agents appear and stand beside the passenger.

“I think this thing is broken,” the man informs them. The agents smile, shake their heads, take the man firmly by the elbow, and lead him away. The machine is not broken. The other passengers viewing this event don't know where the man is being led and express no concern. They understand only that an inconvenience has been moved from their path. They will catch their flight. Because the secondary search area is used only in rare circumstances and there is no backlog, even the man who has been pulled away will not be delayed too long—assuming, of course, the system determines he poses no legitimate threat.

An early version of the above-described program is already in place in strategically selected airports around the country (the metal detectors have not yet been moved out). The object of this screening is not luggage, exterior clothing, or even people's bodies, but rather people's innermost thoughts.

Today's computerized lie detectors take the form of Embodied Avatar kiosks. These watch eye dilation and other factors to discern whether passengers are being truthful or deceitful. No, the kiosk isn't going to do a cavity search, but it can summon an agent if it robotically determines you're just a bit too shifty to be allowed on a plane without an interview.[1]

Their functioning is based on the work of Dr. Paul Ekman, creator of one of the world's foremost experiments on lie detection, specifically how deception reveals itself through facial expression. Ekman's previous work has shown that with just a bit of training a person can learn to spot active deceit with 90 percent accuracy simply by observing certain visual and auditory cues—wide, fearful eyes and fidgeting, primarily—and do so in just thirty seconds. If you're a TSA agent and have to screen hundreds of passengers at a busy airport, thirty seconds is about as much time as you can take to decide if you want to pull a suspicious person out of line or let her board a plane.[2]

The biometric detection of lies could involve a number of methods, the most promising of which is thermal image analysis for anxiety. If you look at the heat coming off someone's face with a thermal camera, you can see large hot spots in the area around the eyes (the periorbital region). This indicates activity in the sympathetic-adrenergic nervous system; it is a sign of fear, not necessarily lying. Someone standing in a checkpoint line with hot eyes is probably nervous about something.[3] The presence of a high degree of nervousness at an airport checkpoint could be considered enough justification for additional screening. The hope of people in the lie detection business is that very sensitive sensors placed a couple of inches away from a subject's face would provide reliable data on deception.
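As a rough illustration of how a periorbital heat check might be automated, the sketch below compares the mean temperature around the eyes against a whole-face baseline and flags a large gap. The array shapes, the 1.5-degree margin, and the synthetic frame are assumptions made for illustration, not the specification of any deployed kiosk.

```python
import numpy as np

def periorbital_anxiety_flag(thermal_frame, eye_boxes, face_baseline_c, delta_c=1.5):
    """Flag a passenger whose periorbital skin runs well above the rest of the
    face, a crude proxy for the anxiety response described in the text.

    thermal_frame: 2-D array of skin temperatures in degrees Celsius.
    eye_boxes: (row_start, row_end, col_start, col_end) regions around the eyes.
    face_baseline_c: mean temperature over the whole face region.
    delta_c: how much hotter the periorbital region must be to trigger a flag.
    """
    readings = [thermal_frame[r0:r1, c0:c1].mean() for r0, r1, c0, c1 in eye_boxes]
    periorbital_mean = float(np.mean(readings))
    return periorbital_mean - face_baseline_c > delta_c, periorbital_mean

# Synthetic frame: a 34 C face with two hotter patches around the eyes.
frame = np.full((120, 160), 34.0)
frame[40:55, 35:65] += 2.2   # left periorbital region
frame[40:55, 95:125] += 2.3  # right periorbital region
flagged, temp = periorbital_anxiety_flag(frame, [(40, 55, 35, 65), (40, 55, 95, 125)], 34.0)
print(flagged, round(temp, 1))
```

As the text notes, a flag like this indicates arousal or fear rather than deception, which is why it can justify a follow-up interview but not a conclusion.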

In 2006, the TSA began to experiment with live screeners who were being taught to examine people's facial expressions, mannerisms, and so on, for signs of lying as part of a program called SPOT (Screening of Passengers by Observation Techniques).[4, 5]
When an airport-stationed police officer trained in “behavior detection” harassed King Downing, an ACLU coordinator and an African American, an embarrassing lawsuit followed. As Downing's lawyer John Reinstein told New York Times reporter Eric Lipton, “There is a significant prospect this security method is going to be applied in a discriminatory manner. It introduces into the screening system a number of highly subjective elements left to the discretion of the individual officer.”[6]

Later the U.S. Government Accountability Office (GAO) would tell Congress that the TSA had “deployed its behavior detection program nationwide before first determining whether there was a scientifically valid basis for the program.”

DARPA's Larry Willis defended the program before the U.S. Congress, noting that “a high-risk traveler is nine times more likely to be identified using Operational SPOT versus random screening.”[7]

You may feel that computerized behavior surveillance at airports is creepy, but isn't a future where robots analyze our eye movements and face heat maps to detect lying preferable to one where policemen make inferences about intent on the basis of what they see? And aren't both of these methods, cop and robot, better than what we've got, a system that will deny someone a seat on a plane because her name bears a slight similarity to that of someone on a watch list? Probably the worst aspect of our airport security system as it currently exists is that evidence suggests we're not actually getting the security we think we are. As I originally wrote for the Futurist, recent research suggests that ever more strict security measures in place in U.S. airports are making air travel less safe and airports more vulnerable. So much money is spent screening passengers who pose little risk that it's hurting the TSA's ability to identify real threats, according to research from University of Illinois mathematics professor Sheldon H. Jacobson. Consider that for a second. We've finally reached a point where a stranger with a badge can order us to disrobe . . . in public . . . while we're walking . . . we accept this without the slightest complaint . . . and it's not actually making us any safer.
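Jacobson's argument can be made concrete with a stylized back-of-the-envelope model: hold the total screening effort fixed, then compare spreading it evenly over everyone against concentrating it on a small prescreened group. The detection curve and every number below are invented for illustration, and the sketch optimistically assumes that prescreening puts genuinely dangerous travelers in the flagged group.

```python
# Stylized illustration: with a fixed screening budget, spreading effort evenly
# over everyone leaves little time for the few passengers who warrant scrutiny.
# All numbers and the detection curve are assumptions, not Jacobson's model.
import math

def detection_probability(minutes_per_passenger, k=0.4):
    """Toy assumption: more screening time yields diminishing returns."""
    return 1 - math.exp(-k * minutes_per_passenger)

passengers = 600              # passengers per hour at the checkpoint
high_risk = 6                 # the small slice flagged by prescreening
budget = 1200.0               # total screener-minutes available per hour

# Policy A: treat everyone identically.
uniform_minutes = budget / passengers
p_uniform = detection_probability(uniform_minutes)

# Policy B: give low-risk passengers a one-minute pass-through and spend the
# rest of the budget on the flagged few.
low_risk_minutes = 1.0
remaining = budget - low_risk_minutes * (passengers - high_risk)
p_targeted = detection_probability(remaining / high_risk)

print(f"uniform screening:  {p_uniform:.0%} chance of catching a real threat")
print(f"targeted screening: {p_targeted:.0%}")
```

The point isn't the exact percentages; it's that a fixed budget divided among six hundred people leaves roughly two minutes apiece, which is not much time in which to find anything.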

Our present system cannot endure forever. We won't be X-raying our shoes ten years from now. But what will replace it? What is the optimal way of making sure maniacs can't destroy planes while also keeping intercontinental air traffic on schedule?

The best solution, Jacobson's research suggests, is to separate the relatively few high-risk passengers from the vast pool of low-risk passengers long before anybody approaches the checkpoint line. The use of passenger data to separate the sheep from the goats would shorten airport screening lines, catch more threats, and improve overall system efficiency. To realize those three benefits we will all be asked to give up more privacy. We'll grumble at first, write indignant tweets and blog posts as though George Orwell had an opinion on the TSA, but in time we will exhaust ourselves and submit to predictive screening in order to save twenty minutes here or there. Our surrender, like so many aspects of our future, is already perfectly predictable. Here's why:

Our resistance to ever more capable security systems originates from a natural and appropriate suspicion of authority, but also from the fear of being found guilty of some trespass we did not in fact commit, of becoming a “false positive.” This fear is what allows us to sympathize with defendants in a courtroom setting, and indeed, with folks who have been put on the wrong watch list and kept off an aircraft through no fault of their own. In fact, the entire functioning of our criminal justice system depends on all of us, as witnesses, jury members, and taxpayers, caring a lot about false positives. As the number of false positives decreases, our acceptance of additional security actually grows.

Convicting the wrong person for a crime is a high-cost false positive (often of higher cost than the crime). Those costs are borne mostly by the accused individual but also by society. Arresting an innocent bystander is also high cost, but less so. Relatively speaking, pulling the wrong person out of a checkpoint line for additional screening has a low cost, but if you do it often enough, the costs add up. You increase wait time for everyone else (as measured by time that could be spent doing something else), and as Jacobson's model shows, it erodes overall system performance very quickly.
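The arithmetic behind that paragraph is worth making explicit. In the toy ledger below, every cost and count is an invented, illustrative number, not data from Jacobson's research or anywhere else.

```python
# A back-of-the-envelope ledger: individually cheap false positives add up.
false_positives = {
    # kind: (cost per incident in arbitrary "cost units", incidents per year)
    "wrongful conviction":           (1_000_000, 1),
    "wrongful arrest":               (50_000, 40),
    "needless secondary screening":  (25, 200_000),
}

for kind, (unit_cost, count) in false_positives.items():
    print(f"{kind:30s} {unit_cost * count:>12,} total cost units")

total = sum(c * n for c, n in false_positives.values())
print(f"{'all false positives':30s} {total:>12,} total cost units")
```

On these made-up figures the cheapest category dominates the total simply because it happens so often, which is exactly the tyranny of numbers described next.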

Now here's the tyranny of numbers: decrease the number of high-cost false positives and you can afford to make more premature arrests; bring that number down and you can afford more stop-and-frisks or security-check pat downs. Bring that number down again and the balance sheet looks like progress. The average citizen knows only that the system is improving. The crime rate appears to be going down; the security line at the airport seems to be moving faster. Life is getting better. If cameras, robots, big data systems, and predictive analytics played a part in that, then we respond by becoming more accepting of robots, cameras, and systems quantifying our threat potential when we're about to get on a plane. We grow more accustomed to surveillance in general, especially when submitting to extra surveillance has a voluntary component, one that makes submission convenient and resistance incredibly inconvenient. This is why, in the week following the disclosure of the massive NSA metadata surveillance program, a majority (56 percent) of Americans polled by Pew said they believed the tactics that the NSA was employing were acceptable. That's astounding considering that at the time, the media narrative was running clearly in the opposite direction.[8]

Another example of the opt-in surveillance state is the TSA's PreCheck program, which expedites screening for eligible passengers by rating their risk against that of the entire flying population. In order to be eligible for PreCheck, you're required to give the Department of Homeland Security a window into your personal life, including where you go, your occupation, your green card number if you're a legal alien, your fingerprints, and various other facts and tidbits of the sort that you could be forgiven for assuming the TSA had already (certainly the IRS has a lot of it). It's not exactly more invasive than a full body scan but in many respects it is more personal. Homeland Security uses the information it gets to calculate the probability that you might be a security threat. If you, like most people in the United States, are a natural-born citizen and don't have any outstanding warrants, you're not a big risk.

People who use TSA PreCheck compare it with being back in a simpler and more innocent time. But there's a downside, just as there is with customer loyalty programs at grocery stores. Programs such as PreCheck make a higher level of constant surveillance acceptable to more people. Suddenly, individuals who don't want to go along look extra suspicious. By definition, they are abnormal.

Security becomes faster, more efficient, and more effective through predictive analytics and automation, so you should expect to be interacting with predictive screeners in more places beyond the X-ray line at the airport. But for computer programs, clearing people to get on a plane isn't as clear as putting widgets in a box. Trained algorithms are more sensitive than an old-school southern sheriff when it comes to what is “abnormal.” Yet when a deputy or state trooper questions you on the side of the road, he knows only as much about you as he can perceive with his eyes, his ears, and his nose (or perhaps his dog's nose if his dog's at the border). Because the digital trail we leave behind is so extensive, the potential reach of these programs is far greater. And they're sniffing you already. Today, many of these programs are already in use to scan for “insider threats.” If you don't consider yourself an insider, think again.

Abnormal on the “Inside”

The location is Fort Hood, Texas. The date is November 5, 2009. It is shortly after 1 P.M.

Army psychiatrist Major Nidal Hasan approaches the densely packed Soldier Readiness Processing Center where hundreds of soldiers are awaiting medical screening. At 1:20 P.M., Hasan, who is a Muslim, bows his head and utters a brief Islamic prayer. Then he withdraws an FN Herstal 5.7 semiautomatic pistol (a weapon he selected based on the high capacity of its magazine) and another pistol.[9]

As he begins firing, the unarmed soldiers take cover. Hasan discharges the weapon methodically, in controlled bursts. Soldiers hear rapid shots, then silence, then shots. Several wounded men attempt to flee from the building and Hasan chases them. This is how thirty-four-year-old police sergeant Kimberly D. Munley encounters him, walking quickly after a group of bleeding soldiers who have managed to make it out of the center. Hasan is firing on them as though shooting at a covey of quail that has jumped up from a bluff of tall grass. Munley draws her gun and pulls the trigger. Hasan turns, charges, fires, and hits Munley in the legs and wrists before she lands several rounds in his torso and he collapses. The entire assault has lasted seven minutes and has left thirteen dead, thirty-eight wounded.[10]

Following the Fort Hood incident, the Federal Bureau of Investigation, Texas Rangers, and U.S. chattering classes went about the usual business of disaster forensics, piecing together (or inventing) the hidden story of what made Hasan snap, finding the “unmistakable” warning signs in Hasan's behavior that pointed to the crime he was about to commit. After systematic abuse from other soldiers Hasan had become withdrawn. He wanted out of the military but felt trapped. Some of Hasan's superiors had pegged him as a potential “insider threat” years before the Hood shootings, but when they reported their concerns, nothing came of it. The biggest warning signal sounded in the summer of 2009 when Hasan went out shopping for very specific and nonstandard-issue firearms.[11]
