The App That Saves You from Mugging

Crimes aren't just incidents that affect property values and insurance claims; they happen to people. Different people are at greater risk depending on various factors. These variables include circumstance (a drug kingpin versus a stay-at-home dad); situation (what you happen to be doing or are involved in at the time of the crime); and environment (where you are and at what time of day).

These factors proceed upward along a continuum, says Esri's Mike King. The lower your risk, as determined by who and where
you are, the less likely you are to experience crime at the hands of a stranger.

That means that if you do become a victim of crime, statistically speaking, someone you know is probably the culprit. Figuring out this sort of thing is an entire subfield in criminology called victimology, the study of how victims and perpetrators relate to one another. Although it's one of the most important aspects of criminal science (and the basis of the entire Law & Order franchise), victimology has never been formally quantified. We know that certain people are more likely to suffer crimes than others; a stay-at-home parent is much less likely to be the victim of a stabbing than a drug dealer. We also know that some areas are more dangerous than others. But we don't have a firm sense of exactly how these variables relate. Is a parent who is for some reason in a bad part of town more likely to be stabbed than a drug dealer in a church parking lot in the suburbs? And if so, how much more likely? The last real attempt to place values on some of these variables was in 1983.
22
It's time to try again.

Certainly, not every crime, perhaps not even most crimes, will fit some neat parameterization. Yet certain aspects of King's victimology continuum (such as victim's occupation and income) coupled with environmental factors (such as location, presence or absence of trees, and time of day) could be scored with enough data. They could make their way into a formula or algorithm that would output a victim-index score for anyone based on who she was, what she was doing, and where. Such an index would be useful for cops looking to establish a viable suspect pool after a crime occurs. But its real utility would be for individuals who could use their victimology score to edit their behavior.
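To make the idea concrete, here is a minimal sketch of how such a victim-index score might be composed as a weighted sum of risk factors. The factor names, weights, and 0-to-100 scale below are invented for illustration; they are not taken from King's continuum or from any published criminological model.

```python
# Purely illustrative victim-index sketch. The weights and scale are
# made-up assumptions, not real criminological values.

VICTIM_FACTOR_WEIGHTS = {
    "occupation_risk": 0.4,   # e.g., drug dealer vs. stay-at-home parent
    "situation_risk": 0.3,    # what you happen to be doing at the moment
    "environment_risk": 0.3,  # where you are and at what time of day
}

def victim_index(factors: dict) -> float:
    """Combine 0-to-1 risk factors into a single 0-to-100 score."""
    score = sum(VICTIM_FACTOR_WEIGHTS[name] * value for name, value in factors.items())
    return round(100 * score, 1)

# A parent walking through a rough neighborhood at night vs. a dealer in a
# quiet suburban parking lot at noon.
print(victim_index({"occupation_risk": 0.1, "situation_risk": 0.2, "environment_risk": 0.9}))
print(victim_index({"occupation_risk": 0.9, "situation_risk": 0.3, "environment_risk": 0.1}))
```

How those weights should actually be set, and whether the relationship is even linear, is exactly the open question the 1983 work left unanswered.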

Imagine that you are about to go out at night to deposit a check. You have a usual ATM in an area you consider safe. You look at your phone and see a score reflecting the probability of getting mugged if you go to your regular spot, or if you walk as opposed to drive. The score is up from just a couple of days ago; your neighborhood isn't as safe as you believed it was.
Your future has been revealed to you, naked and shivering. You elect to go to the ATM in the morning instead and to attend the next neighborhood meeting, maybe fix that broken window down the street. No crime is committed; no one has been harmed and no one's privacy has been violated. There is no victim.

CHAPTER 11

The World That Anticipates Your Every Move

THE year is 1992; the location, the Intel Corporation campus in Santa Clara, California. Jeff Hawkins is giving a speech before several hundred of Intel's top managers. In the audience are Gordon Moore, the originator of Moore's law, and Andrew Grove, the CEO who transformed Intel into the most important computer parts manufacturer in the world. Hawkins arrives at Intel with a message about the future.

He tells his audience that people will soon be relying on something in their pocket as their primary computer, rather than the big desktop machines for which Intel manufactures chips. Furthermore, he says, these pocket computers will be so easy to use, so ubiquitous, and priced so affordably (around $400, he believed) that their popularity will dwarf that of conventional PCs.

Today, he describes the speech as the most poorly received talk he ever gave. The keynote was followed by a tense and brief Q&A session. “Usually when you give a talk you like people to say, ‘Hey, that was great. I love that idea!' I didn't get any of that. I got, like, ‘Oh, well, that was interesting. I don't know if this really makes sense for us.'”

Hawkins became convinced of the bright future of mobile technology after several years tinkering with what would later be known as the world's first tablet PC, the great-great-grandfather of the iPad, a machine called the GRiDPad, which had a touch screen you used via a stylus. “People loved it,” he recalls. “They were just immediately attracted to it. They went, ‘Oh, I want to do things with this.'”

At $3,000 a unit, the GRiDPad was an expensive, power-hungry piece of equipment, not ready for the consumer marketplace. But Hawkins knew there was an audience for something like the device if he could make it smaller and much cheaper. A few months prior to his Intel talk, Hawkins had created a company called Palm. Four years later, he would bring out the PalmPilot, the first commercially successful personal digital assistant.

Time vindicated Hawkins completely. The great giants of the 1990s desktop-computing era, Dell, Gateway, and Hewlett-Packard, are looking as dour as old gray men in last night's rumpled evening suits, whereas mobile devices now comprise more than 61 percent of all the computers shipped.
1
None of the apps described in this book, either real or imagined, would be possible without Hawkins's insight that computers would become pocket-size and that computing would become something people did not just in labs and offices or at desks, but as they went about their lives. The mobile future that Hawkins saw decades ago is the origin of the naked future of today.

In 2012 I go to meet him at the headquarters of his company, Numenta (renamed Grok in 2013). The start-up, situated beside the Caltrain tracks in downtown Redwood City, California, shares an office building with the social networking company Banjo. It's a modest, even shabby office compared with the enormous campuses of Google, Facebook, and Apple that sit a few miles south on Route 82. A foosball table stands beyond the reception area, where a scoreboard indicates that Jeff Hawkins, neuroscientist, inventor, reluctant futurist, is also dominating the rankings. “My team is playing a joke,” says Hawkins. “I think I'm actually in last place.” He plays only very rarely, when friends come to town. On the day I meet him, he has a match scheduled with Richard Dawkins in the afternoon.

The recently renamed company was founded in 2005 but didn't release its core product until 2012, a century in Silicon Valley years. Grok is a name taken from Robert A. Heinlein's 1961 novel Stranger in a Strange Land. It refers to a kind of telepathic commingling of thoughts, feelings, and fears: “Smith had been aware of the doctors but grokked that their intention was benign.” Grok is a cloud-based service that finds patterns in streaming (telemetric) data and uses those patterns to output continuous predictions.

Grok's functioning is modeled on the neocortex. It's a hierarchical learning system made of columns and cells, in the same way that the “new” part of the brain is made of neurons that connect to one another in layers and then connect to neurons above or below.

What is the neocortex? It's the seat of higher-order reasoning and decision making and the last part of the brain to evolve, hence the prefix “neo” (new). This new brain is present in all mammals but is particularly developed in humans. It doesn't look like much by itself, having no distinct shape. It's really more like a coat worn over the central brain. Because it's not well defined, the neocortex doesn't lend itself to easy summary. It's involved in too many different processes for that. But its composition is miraculous.

If you were to lay it out flat on a table and cut it open, you would discover six layers of neurons stacked on top of one another. They form a hierarchy much like a company's organizational chart. The bottom-layer offices of Neocortex Inc. house the customer service reps; they collect customer feedback on a microscale. There are a lot of these neurons down there gathering data continuously from our hands, eyes, ears, and skin, typing it up, and sending it upstairs to the second floor, where a different, less numerous set of neurons whittles down all the information it receives and passes the message upward again. This process repeats until the message hits the sixth floor, the executive offices. Up there, the top-level neurons are tasked with making decisions based on lots of incomplete data. It's a patchwork of reports, sensations, and impressions, a very incomplete picture of what's going on externally. The top-order neurons have to turn this info into something comprehensible and then send out an executable command in response. In order to get the picture to make sense, they have to complete it. To do that, our higher-order neurons draw from a storehouse of previously lived experience. They use it to extrapolate a pattern that combines bits of recent external stimuli from the present with pieces already committed to working and long-term memory from the past. That pattern is essentially a guess about what happens next, a prediction, a strange mix of fact and fiction.

“The brain uses vast amounts of memory to create a model of the world,” Hawkins writes in his seminal 2004 book on the subject, On Intelligence. “Everything you know and have learned is stored in this model. The brain uses this memory-based model to make continuous predictions of future events. It is the ability to make predictions about the future that is the crux of intelligence.”
2
In proposing this theory, Hawkins has also given rise to a new notion of prediction as a mental process that forms the very basis of what makes us human.

Each piece of data that enters the Grok system is processed by a different cell depending on its value and where it occurs in the data sequence. When data reaches a cell (call it cell A), the cell goes into an “active state” and establishes connections to other cells nearby (cell B, if you will). So when cell A becomes active again, cell B enters a “predictive” state. When cell B itself becomes active, it establishes its own connections, and when it becomes active again, the cells it has connected to enter a predictive state, and so on.
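That active/predictive bookkeeping can be sketched in a few lines. The toy class below is not Numenta's actual algorithm (real HTM cells work over sparse distributed representations across many columns); it only illustrates the mechanism the passage describes, and the names, such as ToySequenceMemory, are invented for the example.

```python
# A toy sketch of sequence learning with "active" and "predictive" cells:
# a cell that fires learns a connection to the cell that fires next, and a
# learned connection puts the downstream cell into a predictive state the
# next time its predecessor becomes active.

from collections import defaultdict

class ToySequenceMemory:
    def __init__(self):
        self.connections = defaultdict(set)  # cell -> cells it predicts
        self.previous_active = None          # cell active on the last step
        self.predicted = set()               # cells currently in a predictive state

    def step(self, active_cell):
        """Feed the next element of the stream (here, one 'cell' per value)."""
        was_predicted = active_cell in self.predicted

        # Learning: the previously active cell connects to the cell that
        # became active right after it.
        if self.previous_active is not None:
            self.connections[self.previous_active].add(active_cell)

        # Prediction: cells connected to the newly active cell enter a
        # predictive state for the next step.
        self.predicted = set(self.connections[active_cell])
        self.previous_active = active_cell
        return was_predicted

# After seeing A, B, A, B, ... the model starts anticipating B whenever A fires.
memory = ToySequenceMemory()
for value in ["A", "B", "A", "B", "A", "B"]:
    hit = memory.step(value)
    print(value, "predicted" if hit else "unexpected")
```

The point of the sketch is simply that prediction falls out of remembered sequence order: nothing is stored except which cell tends to follow which.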

The operation of Grok is a bit less like a corporation and more like the old game of Battleship. You begin the game knowing only that your opponent's ships are somewhere on a board that looks like yours, so you call out coordinates. Scoring a direct hit gives you a hint about which nearby square to target next. If you hit the right
sequence, you sink your opponent's battleship. Hawkins calls these little patterns “sparse distributed representations.” They are, he says, how the brain goes about turning memories into future expectations. But in the brain these patterns are incredibly complex. The smell of Chanel No. 5 combined with the sound of distant chatter can trigger the memory of a particular woman in a black dress, moving through a crowded room. She's at a cocktail party. It's New Year's Eve. She is moving toward you and suddenly you look up, expecting to see her. Perhaps she is there again. Perhaps not. But someone is there now, someone wearing Chanel No. 5.

The Grok algorithm will not form unrequited longings toward women on New Year's Eve, but its learning style is surprisingly human, even compared with other neural networks. Grok experiences data sequentially, much the way humans do, as opposed to in batches, which is how most of our computer programs absorb information. The human brain simply can't use singular, static bits of data. We need a continuous stream. We don't recognize the notes of a song until we know exactly the order in which they follow one another, as well as the tempo. We don't know we're touching alligator skin until our fingertips perceive the rough ridges rising and falling in quick succession. When looking at a picture, we may have an immediate recollection of the object depicted, but we won't know how it moves, what it does, or its place in the world until we see the object hurtling through time.

Grok, similarly, adjusts its expectations as quickly as it receives new information, without need of a human programmer to manually enter that data into the model. It's almost alive, in a somewhat primordial sense, constantly revising its understanding of the world based on a stream of sensory input. Says Hawkins, “That's the only way you're going to catch the changes as they occur in the world.” He believes that hierarchical and sequential memory are the two most important elements in a truly humanistic artificial intelligence.

So far, Grok has performed a handful of proof-of-concept trials. The program is helping a big energy company understand what demand on the system will be every two hours, based on data that comes in minute by minute. They're working with a company that runs display ads; that client is looking to figure out what different ad networks will soon be charging per impression, which will allow it to better plan when to run which ads. Grok is helping a windmill company in Germany predict when its machines will need repair. These are all rapid-fire predictions looking at the near future.

“That's exactly what brains do,” says Hawkins. “[There are] several million high-velocity data streams coming into your brain. The brain has to, in real time, build a model of the data and then make predictions and act on the data. And that's what we need to do.”

One consequence of this just-in-time predictive capability is that Grok doesn't retain all the data it receives. Data has a half-life. It's most valuable the moment it springs into being and depreciates exponentially; the faster the data stream, the shorter the half-life.
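As a rough illustration of that half-life idea, here is a minimal sketch of exponential depreciation; the function name and the half-life values are assumptions chosen for the example, not anything taken from Grok.

```python
# Illustrative only: the worth of a data point decays exponentially with
# age, halving every `half_life_seconds`. Faster streams get shorter
# half-lives, so old samples become worthless sooner.

def data_value(initial_value: float, age_seconds: float, half_life_seconds: float) -> float:
    """Exponential depreciation: value halves every half-life."""
    return initial_value * 0.5 ** (age_seconds / half_life_seconds)

# A slow daily report vs. a fast sensor stream, both one hour after arrival.
print(data_value(1.0, age_seconds=3600, half_life_seconds=86400))  # ~0.97: still useful
print(data_value(1.0, age_seconds=3600, half_life_seconds=60))     # ~0.0: effectively forgotten
```

Under that kind of decay schedule, discarding stale samples costs almost nothing, which is why a streaming system can afford to forget.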

After millennia of rummaging about in the dark, creating signals that were lost to the ether the moment we brought them into existence, we've finally developed the capacity to form a permanent record of feelings, behaviors, and movements in a way that just a few years ago would have seemed impossible. We've developed the superpower of infinite memory. But in order to actually use it in a world where everything that was once noise is becoming signal, we must teach our machines how to forget.

“There's this perfect match between what brains do and what the world's data is going to be,” says Hawkins. It's a point about which he's unequivocal. “Streaming is the future of data. It's not storing data someplace. And that's exactly what brains do.”

What Hawkins doesn't mention, but very much knows, is that while it might have taken him decades to create a computer program capable of predicting the future, it took humanity a much longer time to develop the same capability.

Like any organic adaptive progression, human evolution was a clumsy and random process. Some 500 million years ago our pre-amphibian ancestors, inhabitants of the sea, possessed 100
million neurons. About 100 million years later, creatures emerged that processed sensory stimuli with but a few hundred million neurons. The tree of life diversified in branch and foliage. Competition bubbled up from the depths of the prehistoric swamp. Survival began to favor those organisms that could diversify with increasing rapidity. Great reptiles rose up and conquered the plains and jungles, and these creatures, which we today condescendingly call stupid, had brains of several billion neurons. Their dominion was long and orderly compared with what ours has been since. But 80 million years after their departure, a brief intermission in terms of all that had come before, our mammalian ancestors grew, stood, developed the hominoid characteristics that we carry with us today, and evolved brains of 20 billion neurons; and this process continued to accelerate, building upon the pace it had established like an object falling through space, finally culminating 50,000 years ago in the human brain of 100 billion neurons and all the vanity, violence, wonder, delusion, and future gazing that it produces.

That saga stacks up pretty poorly against the evolution of mechanical intelligence, as documented by writer Steven Shaker in the Futurist magazine. The primordial computers of the 1940s were the size of houses but had only 200 or 300 bits of telephone relay storage. Fifteen years later, researchers at IBM were building machines with 100,000 bits of rotating magnetic memory, and then devices with hundreds of millions of bits of magnetic core memory just ten years after that. By 1975 many computers had core memories beyond 10 million bits. That figure increased again by a factor of ten in just ten years. By 1995 larger computer systems had reached several billion bits; and by the year 2000 it was not uncommon to see customized PCs with tens of billions of bits of random access memory.

Is computer data storage and processing in bits comparable with human information encoding in neurons? The former is electronic and travels at nearly the speed of light; the latter is chemically based, slower, and more nuanced. Neurons combine and connect in ways that allow them to hold many memories at once. While we
understand how to engineer memory in mechanical systems, we still have only the faintest notion of how the brain naturally does this so much better than computer systems. This is why the similarity between what Grok does when it makes a prediction and what the human brain does with the future is so significant.
