Author: Patrick Tucker
The army had a lot of data on Hasan, much of which could have yielded clues to his intentions. The problem was that the army has a lot of data on everybody in the army. Some sixty-five thousand personnel were stationed at Fort Hood alone. The higher-ups soon realized that if they were to screen every e-mail or text message between soldiers and their correspondents for signs of future violence, it would work out to 14,950,000 people and 4,680,000,000 potential messages. Valuable warning signs of future insider threats were contained in those messages, which had been exchanged on systems and devices to which the army had access. But it was too much data for any human team to work through.
Not long after Fort Hood, army private Bradley Manning was arrested for giving confidential material to the Web site WikiLeaks, material that showed the United States was involved in killing civilians in Iraq. President Obama, who has proven to be exceedingly hard on whistle-blowing, responded with Executive Order 13587, which established an Insider Threat Task Force and mandated that the NSA and DOD each set up their own insider threat program.
DARPA issued a broad agency announcement on October 22, 2010, indicating that it was looking to develop a technology it called Anomaly Detection at Multiple Scales (ADAMS).
The goal of this program is to “create, adapt and apply technology to the problem of anomaly characterization and detection in massive data sets . . . The focus is on malevolent insiders that started out as ‘good guys.’ The specific goal of ADAMS is to detect anomalous behaviors before or shortly after they turn,” to train a computer system to detect the subtle signals of intent in e-mails and text messages of the sort that might have stopped the Fort Hood disaster, the Bradley Manning disclosure of classified information to WikiLeaks, or the Edward Snowden leak to the Guardian newspaper.
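ADAMS itself has never been made public, but the idea of “anomaly characterization and detection in massive data sets” can be made concrete with a small sketch. In the Python example below, the four behavioral features, the 65,000-row synthetic population, and the isolation-forest model are all illustrative assumptions of mine, not anything disclosed about the DARPA program; the point is only to show what scoring every user against the population looks like in practice.

```python
# Illustrative only: score each user's messaging behavior against the whole
# population with an isolation forest. The four features and the 65,000-row
# synthetic population are hypothetical stand-ins, not ADAMS internals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# One row per user: [messages per day, fraction sent after hours,
#                    new contacts per week, fraction with negative sentiment]
population = rng.normal(loc=[40, 0.10, 2.0, 0.05],
                        scale=[10, 0.05, 1.0, 0.02],
                        size=(65_000, 4))

model = IsolationForest(contamination=0.001, random_state=0)
model.fit(population)

# decision_function is high for typical users; negate it so that
# higher = more anomalous, then pull the ten most anomalous rows.
scores = -model.decision_function(population)
most_anomalous = np.argsort(scores)[-10:]
print(most_anomalous, scores[most_anomalous])
```

Even a toy version makes the real difficulty visible: scoring tens of thousands of rows is cheap; deciding which behaviors belong in the matrix, and what counts as someone “turning,” is the hard part.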
Varying bodies have differing definitions of what constitutes an “insider” in a military context, but most agree that an insider is anyone with authorized access to any sensitive information that could be used against U.S. interests if disclosed improperly. What constitutes that sensitive information is a rather open-ended question, but we know that it extends beyond the files, reports, or data that have been officially labeled top secret.
For instance, the 2013 disclosures about the NSA PRISM system showed that several prominent Silicon Valley companies were forced to comply with NSA programs and orders from the secret Foreign Intelligence Surveillance Act (FISA) court. In that instance, an insider would include not just government workers or government contractors such as Edward Snowden but also any person at any of those private companies such as Google, Facebook, or Microsoft who simply knew of the existence of particular FISA orders.
That broadness in the definition of both insider and insider information is important for anyone concerned that an insider threat program could be abused. From the perspective of an algorithm, there is no meaningful difference between someone who is inside the military, inside the TSA PreCheck program, or inside Facebook. The same methods of anomaly detection can be applied to any observable domain.
We are all insiders.
What are the telltale marks of a dangerous traitor? Some studies by military scholars list such seemingly benign traits as “lacks positive identity with unit or country,” “strange habits,” “behavior shifts,” and “choice of questionable reading materials,” phrases that describe virtually every American teenager since Rebel Without a Cause. The more provocative “stores ammunition” and “exhibits sudden interest in particular headquarters” seem of greater use but are something of a lagging indicator. And, naturally, they represent only one specific type of threat. In cyber-sabotage, attempting to gain access to a system or information source unrelated to your job function is often considered “abnormal” behavior. But how do you separate actionable abnormal from regular curiosity?
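One way to see why that question is hard is to write the heuristic down. The sketch below flags accesses that are rare for a user’s job role; the roles, resources, and the one percent cutoff are hypothetical, not drawn from any actual insider threat program. A score like this can rank unusual behavior, but it has no way to tell sabotage from curiosity.

```python
# Illustrative heuristic: flag accesses that are rare for a user's job role.
# The roles, resources, and the one percent cutoff are hypothetical.
from collections import Counter

# Historical access counts per role (resource -> number of past accesses).
role_baseline = {
    "analyst": Counter({"reports_db": 900, "email": 800, "wiki": 400}),
    "sysadmin": Counter({"auth_server": 700, "backups": 500, "wiki": 300}),
}

def rarity_score(role, accessed):
    """Return the fraction of accesses that are rare (<1%) for the role."""
    baseline = role_baseline[role]
    total = sum(baseline.values())
    rare = sum(1 for resource in accessed if baseline[resource] / total < 0.01)
    return rare / len(accessed) if accessed else 0.0

# An analyst who mostly reads reports but also touches the backup system
# gets a nonzero score; the heuristic ranks, but it cannot explain why.
print(rarity_score("analyst", ["reports_db", "reports_db", "backups"]))
```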
No one at DARPA or any of the academic teams applying for the money is eager to discuss their research with reporters. Some of the most interesting work in predicting insider threats that has been made available to the public comes from a team of researchers led by Oliver Brdiczka and several of his colleagues at PARC. Instead of trying to pin down the traits associated with a treasonous personality, they sought to create a telemetric experiment where they could actually observe the threat develop. Here's another example of a simulation that would have been extremely costly a few years ago becoming cheap and relatively easy to perform thanks to more people living more of their lives online.
Brdiczka and his colleagues looked at World of Warcraft, a massively multiplayer online game in which people develop a character, join teams called guilds, and go on quests that can last for days, during which time players effectively shun conventional hygiene practices or real-world contact with the opposite sex. Brdiczka had read the literature on how interpersonal dynamics, little exchanges between coworkers or between workers and supervisors, can predict workplace problems.
World of Warcraft provided a perfect environment to telemetrically explore how group dynamics can turn a happy worker into a player who is willing to sabotage or steal from the members of his guild (a proxy for teammates). The researchers had each subject fill out a twenty-question survey to scan for the key personality traits of extroversion, agreeableness, conscientiousness, risk taking, and neuroticism, and they looked at the subjects’ Facebook pages (and other social networking profiles) for similar personality clues. Then the researchers let them loose in the land of orcs and trolls.
Brdiczka and his team measured everything from which characters played more defensively to how quickly certain players achieved certain goals, what kinds of assignments they took, whether they gave their characters pets, and how likely a subject was to shove a fellow player in front of a dragon to buy time to use a healing potion. Then they ran every verbal or text exchange between characters through a sentiment analysis algorithm to get a sense of how the subjects were communicating. In all, they looked for sixty-eight behavioral features related to how their players played the game. When they coupled those scores with the scores from the surveys and social network profiles, they found they could predict, with 89 percent accuracy, the players most likely to “quit” and thus sabotage their guild within a six-month survey window.
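Without reproducing the PARC team’s actual models, the general shape of such a pipeline is easy to sketch: concatenate the survey scores, the in-game behavioral features, and the chat-sentiment scores into one matrix, then cross-validate a classifier that predicts who quits. Everything in the sketch below is synthetic and illustrative; the 89 percent figure came from their real data, which random numbers obviously cannot reproduce.

```python
# A minimal, synthetic sketch of the pipeline's general shape: survey traits,
# in-game behavior, and chat sentiment are concatenated into one feature
# matrix and a classifier is cross-validated on who quits. This is NOT the
# PARC team's model; the data below is random noise, so the score hovers
# around chance rather than the reported 89 percent.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_players = 500

survey = rng.normal(size=(n_players, 5))      # five trait scores from the survey
behavior = rng.normal(size=(n_players, 68))   # sixty-eight behavioral features
sentiment = rng.normal(size=(n_players, 1))   # mean chat-sentiment score

X = np.hstack([survey, behavior, sentiment])  # one row of features per player
y = rng.integers(0, 2, size=n_players)        # 1 = quit within six months

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")
```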
World of Warcraft functions as a useful proxy for all Internet interaction, and the government believes it has the right to access any of it. In 2012 the FBI created what it calls its Domestic Communications Assistance Center (DCAC) for the purpose of building back doors into the Internet and particularly into social networks, part of a sweeping Electronic Surveillance (ELSUR) Strategy. These online information collection devices join physical sensors, cameras, and scopes in the physical world, and all of it contributes to an ever more revealing picture of our naked future.
We can pass legislation to keep the government from gathering information about us with certain methods, but as this surveillance infrastructure spreads and technology (particularly image recognition) improves, law enforcement won't need to use the most provocative, constitutionally questionable methods to get a credible picture of your present and future activities. When agents began putting GPS trackers on the undercarriage of cars to track the movement of suspects, they were sued and lost. Just a couple of years later the loss proved irrelevant. It turns out that tollbooth, streetlight, and security cameras all working together can track license plates across a city nearly as well as a GPS chip can broadcast a suspect's location.
While our risk of falling victim to violent crime in an American city is diminishing, the risk of being caught in the gears of an ever more powerful law and order apparatus is growing. This is not a trade-off many are willing to make. It’s natural to dread a future in which our civilian power over law enforcement is diminished simply because we can’t see what they can, because we have small amounts of information and they have complete recall of our digital trail. To be an uninformed populace is to be a disarmed one. When our local cops or our national security personnel are not only better armed but also exponentially more intelligent than we are, the chances for abuse of power increase and the challenge of reforming the system becomes greater. That’s either a cause for concern or not, depending on your relationship with law enforcement.
These fears reflect reality, but not completely. In truth, some of us are much better informed than others. And the primary driver of the interconnected physical world is not government but garage entrepreneurs. The bigger threat to our privacy is not Big Brother; it's us.
Remember Guardian Watch from chapter 1? It was an Internet of Things service that allowed anyone with a video phone to stream live footage of a disaster to law enforcement, first responders, and the public. Not long after developing Guardian Watch, creator Gordon Jones realized that for the service to really flourish, to save the life of someone in an emergency, or particularly stanch a disaster affecting an entire city, it had to already be on phones, a lot of them. This presented the classic social start-up catch-22: the density problem. In order for Guardian Watch to become the next Foursquare of disaster response, it had to already be the Foursquare of disaster response; it had to have coverage, lots of users able to supply enough information and content to keep the app relevant.
Problem: the utility of Jones’s creation during an emergency is obvious to anyone who has seen the demo, but nobody joins a social network while literally running for her life. Jones realized that network growth would depend largely on people adopting the service for reasons other than disaster preparedness. He rebranded the service (at the time called 911 Observer) as an enhanced neighborhood-watch network system: immediate help in emergencies, both real and imagined.
His first customer was the Richland County Sheriff’s Department in Augusta, Georgia. In effect, Guardian Watch allows the department to crowd-source some of the more difficult aspects of evidence gathering.
This idea is not without precedent. One of former Texas governor Rick Perry's more creative legislative accomplishments was a program to digitally crowd-source border enforcement. The Texas Virtual Border Watch initiative enabled busybody constituents to monitor stretches of fence on the Texas-Mexico border from the comfort of their duct-taped La-Z-Boys, via live feed. The program was touted as a potential boon to taxpayers. The public was going to do for free what cost millions in pay to extra border guards.
The program failed for reasons having nothing to do with privacy and everything to do with why border patrolling is a hard job even on a good day. Watching a fence all day is boring. The site shut down in 2009 when, after an initial spike, traffic plummeted.
Guardian Watch allocates attentional resources to more interesting curated content, including but not limited to evidence of crime. Members can post pictures and videos from their phones into files such as “assault,” “burglary,” “domestic abuse,” even “suspicious behavior.” Much of the content uploaded thus far is of dubious value to law enforcement. One picture, marked “sexual,” appears to show a nude couple enjoying coitus in a park . . . or a beached whale in a pasture . . . or a dinosaur. Many of the videos in the “suspicious behavior” file appear to show pant legs and shoes, all clearly shot by accident.
But these are the early days for the network, which Jones has marketed very selectively and which boasted thirty-nine hundred users as of summer 2012. If Guardian Watch can attract a following and funding, and can scale up to meet demand, if all the ducks and planets align themselves to favor him, his start-up or one like it could revolutionize not only emergency response but also law enforcement. To understand this potential, simply imagine a future in which geo-tagged pictures and video (images captured in the moment and digitally attached to a location, time, and person) take the place of unreliable witness testimony.
In addition to the clear privacy issues associated with this practice, there are questions of fecundity. The majority of content on social networking sites is personal and benign in nature, the daily annals of parenting and partying (sometimes both at once). A tweet or post about a suspicious person in your neighborhood is buried among a lot of other noise not relevant to law enforcement. The same problem hobbles most crime surveillance programs in urban areas. The United Kingdom has been experimenting with a camera program for years, one very similar to Texas Virtual Border Watch but staffed by professionals who are paid to watch footage. As in the Texas program, the vast majority of the footage is noise. The cost of sifting through it is high, but you can automate it somewhat through algorithms. Guardian Watch represents a clear innovation in the way it enlists human beings to manually select evidence that’s relevant to them.
Almost all the posted pictures pass through an intermediary before the content goes public. Jones wants to get rid of this step; a picture of a suspicious person in your neighborhood is really only valuable in real time. He also wants to further enhance the system with facial recognition capability, enabling it to tag people who show up in posted photos and videos automatically. That's fine if you trust your neighbors. But a vigilante could use real-time video of, say, a lone teen wandering nearby to quickly assemble a posse . . . or a lynch mob.
I pointed this out to Jones in the context of the Trayvon Martin case. Do we really want the George Zimmermans of the world to be more capable than they are now? He argued that the Martin case is a perfect example of the necessity of his system. Had a neighborhood resident been able to use Guardian Watch, Trayvon’s father, Tracy, would have received a text or video about a suspicious person in his neighborhood in real time and seen Trayvon. He could have sent out a text alert to the group before George Zimmerman drew his gun.