Emotional robots could also be a matter of life and death. In the future, scientists may be able to create rescue robots—robots that are sent into fires, earthquakes, explosions, etc.
They will have to make thousands of value judgments about who and what to save and in what order. Surveying the devastation all around them, they will have to rank the various tasks they face in order of priority.
Emotions are also essential from the standpoint of the evolution of the human brain. If you look at the gross anatomical features of the brain, you notice that they can be grouped into three large categories.
First, you have the reptilian brain, found near the base of the skull, which makes up most of the brain of reptiles. Primitive life functions, such as balance, aggression, territoriality, searching for food, etc., are controlled by this part of the brain. (Sometimes, when staring at a snake that is staring back at you, you get a creepy sensation. You wonder, What is the snake thinking about? If this theory is correct, then the snake is not thinking much at all, except whether or not you are lunch.)
When we look at higher organisms, we see that the brain has expanded toward the front of the skull. At the next level, we find the monkey brain, or the limbic system, located in the center of our brain. It includes components like the amygdala, which is involved in processing emotions. Animals that live in groups have an especially well-developed limbic system. Social animals that hunt in groups require a high degree of brainpower devoted to understanding the rules of the pack. Since success in the wilderness depends on cooperating with others, and since these animals cannot talk, they must communicate their emotional state via body language, grunts, whines, and gestures.
Finally, we have the front and outer layer of the brain, the cerebral cortex, the layer that defines humanity and governs rational thought. While other animals are dominated by instinct and genetics, humans use the cerebral cortex to reason things out.
If this evolutionary progression is correct, it means that emotions will play a vital role in creating autonomous robots. So far, robots have been created that mimic only the reptilian brain. They can walk, search their surroundings, and pick up objects, but not much more. Social animals, on the other hand, are more intelligent than those with just a reptilian brain. Emotions are required to socialize the animal and for it to master the rules of the pack. So scientists have a long way to go before they can model the limbic system and the cerebral cortex.
Cynthia Breazeal of MIT actually created a robot specifically designed to tackle this problem. The robot is called KISMET, with a face that resembles a mischievous elf’s. On the surface, it appears to be alive, responding to you with facial expressions representing emotions. KISMET can duplicate a wide range of emotions by changing its facial expressions. In fact, women who interact with this childlike robot often speak to KISMET in “motherese,” the style of speech mothers use when talking to babies and children. Although robots like KISMET are designed to mimic emotions, scientists have no illusion that the robot actually feels them. In some sense, it is like a tape recorder programmed not to play sounds but to display facial expressions, with no awareness of what it is doing. But the breakthrough with KISMET is that it does not take much programming to create a robot that mimics humanlike emotions to which humans will respond.
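To get a feel for how little programming this kind of mimicry requires, here is a minimal sketch of the idea: a small table of rules maps a perceived stimulus to a facial expression, with no inner experience anywhere in the loop. All the names here are hypothetical; KISMET’s actual control software was far richer than this.

```python
# A toy rule table: stimulus -> facial expression to display.
# Hypothetical names; KISMET's real software was far more elaborate.
RESPONSES = {
    "face_close": "interest",    # someone leans in toward the robot
    "loud_noise": "fear",        # a sudden sound
    "toy_waved":  "excitement",  # a toy moves into view
    "ignored":    "sadness",     # no stimulus for a while
}

def react(stimulus: str) -> str:
    """Return the facial expression for a stimulus, neutral by default."""
    return RESPONSES.get(stimulus, "calm")

if __name__ == "__main__":
    for s in ["face_close", "loud_noise", "ignored", "something_new"]:
        print(f"{s:>13} -> {react(s)}")
```

A lookup table like this has no understanding of what it is expressing, which is exactly the point: the emotional display is convincing to us long before anything resembling feeling exists inside the machine.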
These emotional robots will find their way into our homes. They won’t be our confidants, secretaries, or maids, but they will be able to perform rule-based tasks guided by heuristics. By midcentury, they may have the intelligence of a dog or cat. Like a pet, they will exhibit an emotional bond with their master, so that they will not be easily discarded. You will not be able to speak to them in colloquial English, but they will understand programmed commands, perhaps hundreds of them. If you tell them to do something that is not already stored in their memory (such as “go fly a kite”), they will simply give you a curious, confused look. (If by midcentury robot dogs and cats can duplicate the full range of animal responses, indistinguishable from real animal behavior, then the question arises whether these robot animals feel or are as intelligent as an ordinary dog or cat.)
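This command model is easy to picture in code. The sketch below (with invented command names) shows the essential behavior: a fixed library of programmed responses, and a “confused look” for anything outside it.

```python
# A hypothetical library of programmed commands for a robot pet.
COMMANDS = {
    "sit":        "lowers hindquarters",
    "fetch ball": "retrieves the ball",
    "come here":  "walks to the speaker",
    # ...perhaps hundreds more preprogrammed entries...
}

def obey(command: str) -> str:
    """Look up a command; anything not in memory earns a confused look."""
    action = COMMANDS.get(command.lower().strip())
    if action is None:
        return "tilts head, looks confused"
    return action

print(obey("sit"))            # lowers hindquarters
print(obey("go fly a kite"))  # tilts head, looks confused
```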
Sony experimented with these emotional robots when it manufactured the AIBO (artificial intelligence robot) dog. It was the first toy to respond emotionally to its master in a realistic, if primitive, way. For example, if you petted the AIBO dog on its back, it would immediately begin to murmur, uttering soothing sounds. It could walk, respond to voice commands, and even learn to a degree, although it could not learn new emotions or emotional responses. (It was discontinued in 2005 for financial reasons, but it has since attracted a loyal following of owners who upgrade its software so AIBO can perform more tasks.) In the future, robotic pets that form an emotional attachment to children may become common.
Although these robot pets will have a large library of emotions and will form lasting attachments with children, they will not feel actual emotions.
By midcentury, we should be able to complete the next milestone in the history of AI: reverse engineering the human brain. Scientists, frustrated in their attempts to build an intelligent robot out of silicon and steel, are also trying the opposite approach: taking apart the brain, neuron by neuron—just as a mechanic might take apart a motor, screw by screw—and then running a simulation of those neurons on a huge computer. These scientists are systematically trying to simulate the firings of neurons in animals, starting with mice and cats and moving up the evolutionary scale. This is a well-defined goal, and it should be achievable by midcentury.
MIT’s Fred Hapgood writes, “Discovering how the brain works—exactly how it works, the way we know how a motor works—would rewrite almost every text in the library.”
The first step in the process of reverse engineering the brain is to understand its basic structure. Even this seemingly simple task has been a long, painful process. Historically, the various parts of the brain were identified during autopsies, without a clue as to their function. This gradually began to change when scientists analyzed people with brain damage and noticed that damage to certain parts of the brain corresponded to changes in behavior. Stroke victims and people suffering from brain injuries or diseases exhibited specific behavioral changes, which could then be matched to injuries in specific parts of the brain.
The most spectacular example of this was in 1848 in Vermont, when a 3-foot, 8-inch-long metal rod was driven right through the skull of a railroad foreman named Phineas Gage. This history-making accident happened when a charge of blasting powder accidentally exploded. The rod entered the side of his face, shattered his jaw, went through his brain, and passed out the top of his head. Miraculously, he survived this horrendous accident, although one or both of his frontal lobes were destroyed. The doctor who treated him at first could not believe that anyone could survive such an accident. Gage was in a semiconscious state for several weeks but eventually recovered. He even survived for twelve more years, taking odd jobs and traveling, before dying in 1860. Doctors carefully preserved his skull and the rod, and they have been intensely studied ever since. Modern techniques, using CT scans, have reconstructed the details of this extraordinary accident.
This event forever changed the prevailing opinions of the mind-body problem. Previously, it was believed even within scientific circles that the soul and the body were separate entities. People wrote knowingly about some “life force” that animated the body, independent of the brain. But widely circulated reports indicated that Gage’s personality underwent marked changes after the accident. Some accounts claim that Gage was a well-liked, outgoing man who became abusive and hostile after the accident. The impact of these reports reinforced the idea that specific parts of the brain controlled different behaviors, and hence the body and soul were inseparable.
In the 1930s, another breakthrough was made by the neurosurgeon Wilder Penfield, who noticed, while performing brain surgery on epilepsy sufferers, that touching parts of the brain with an electrode could stimulate certain parts of the patient’s body. Touching this or that part of the cortex could cause a hand or leg to move. In this way, he was able to construct a crude map of which parts of the cortex controlled which parts of the body. The result was a homunculus, a rather bizarre picture of the human body mapped onto the surface of the brain, which looked like a strange little man with huge fingertips, lips, and tongue but a tiny body.
More recently, MRI scans have given us revealing pictures of the thinking brain, but they are incapable of tracing the specific neural pathways of thought, which may involve only a few thousand neurons. A new field called optogenetics, however, combines optics and genetics to unravel specific neural pathways in animals. Think of it as making a road map: MRI scans are akin to charting the large interstate highways and the flow of traffic on them, while optogenetics can determine the individual roads and pathways. In principle, it even offers scientists the possibility of controlling animal behavior by stimulating these specific pathways.
This, in turn, generated several sensational media stories. The Drudge Report ran a lurid headline that screamed, “Scientists Create Remote-Controlled Flies.” The media conjured up visions of remote-controlled flies carrying out the dirty work of the Pentagon. On the Tonight Show, Jay Leno even talked about a remote-controlled fly that could fly into the mouth of President George W. Bush on command. Although comedians had a field day imagining bizarre scenarios of the Pentagon commanding hordes of insects with the push of a button, the reality is much more modest.
The fruit fly has roughly 150,000 neurons in its brain. Optogenetics allows scientists to light up certain neurons in the brains of fruit flies that correspond to certain behaviors. For example, activating two specific neurons can signal the fruit fly to escape: the fly automatically extends its legs, spreads its wings, and takes off. Scientists were able to breed a genetically engineered strain of fruit flies whose escape neurons fired every time a laser beam was turned on. If you shone a laser beam on these fruit flies, they took off each time.
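The logic of this engineered circuit is almost startlingly simple. Here is a toy model (invented numbers and names, not real fly data) of what the experiment amounts to: light makes the two sensitized escape neurons fire, and when both fire, the stereotyped escape sequence runs automatically.

```python
# A toy model of the optogenetic escape circuit, not real fly data.
def escape_neurons_fire(laser_on: bool) -> tuple[bool, bool]:
    # In the engineered strain, light alone makes both escape neurons fire.
    return (laser_on, laser_on)

def fly_behavior(laser_on: bool) -> list[str]:
    n1, n2 = escape_neurons_fire(laser_on)
    if n1 and n2:
        # The stereotyped escape sequence runs automatically.
        return ["extend legs", "spread wings", "take off"]
    return ["keep walking"]

print(fly_behavior(laser_on=False))  # ['keep walking']
print(fly_behavior(laser_on=True))   # ['extend legs', 'spread wings', 'take off']
```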
The implications for determining the structure of the brain are important. Not only would we be able to slowly tease apart neural pathways for certain behaviors, but we also could use this information to help stroke victims and patients suffering from brain diseases and accidents.
Gero Miesenböck of Oxford University and his colleagues have been able to identify the neural mechanisms behind specific animal behaviors in this way. They can study not only the pathways for the escape reflex in fruit flies but also the reflexes involved in smelling odors. They have studied the pathways governing food-seeking in roundworms and the neurons involved in decision making in mice. They found that while as few as two neurons were involved in triggering behaviors in fruit flies, almost 300 neurons were activated in mice during decision making.
The basic tools they have been using are genes that control the production of certain dyes, as well as molecules that react to light. For example, a gene from jellyfish makes green fluorescent protein. And a variety of molecules, like rhodopsin, respond to light by allowing ions to pass through cell membranes. In this way, shining light on these organisms can trigger certain chemical reactions. Armed with these dyes and light-sensitive chemicals, scientists have been able for the first time to tease apart the neural circuits governing specific behaviors.
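A back-of-the-envelope sketch shows how a light-gated channel can make a neuron fire: when light opens the channel, ions flow in, the membrane voltage climbs, and once it crosses a threshold the neuron spikes. Every constant below is illustrative rather than measured, and the model is deliberately crude.

```python
# Illustrative constants only; not measured biophysical values.
V_REST, V_THRESH = -70.0, -55.0  # membrane voltage, in millivolts
LIGHT_DRIVE = 2.0                # depolarization per time step while lit (mV)
LEAK_FRACTION = 0.05             # fraction of voltage that decays back each step

def simulate(light_on: range, steps: int = 40) -> list[int]:
    """Return the time steps at which the neuron spikes."""
    v, spikes = V_REST, []
    for t in range(steps):
        if t in light_on:
            v += LIGHT_DRIVE                # channel open: inflowing ions depolarize
        v -= LEAK_FRACTION * (v - V_REST)   # passive leak back toward rest
        if v >= V_THRESH:
            spikes.append(t)                # threshold crossed: the neuron fires
            v = V_REST                      # reset after the spike
    return spikes

print(simulate(light_on=range(5, 30)))  # spike times while the light is on
```

Run it and the spikes appear only during the lit interval, which is the whole trick: behavior switched on and off with a beam of light.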
So although comedians like to poke fun at these scientists for trying to create Frankenstein fruit flies controlled by the push of a button, the reality is that scientists are, for the first time in history, tracing the specific neural pathways of the brain that control specific behaviors.
Optogenetics is a first, modest step. The next step is to actually model the entire brain, using the latest in technology. There are at least two ways to solve this colossal problem, which will take many decades of hard work. The first is by using supercomputers to simulate the behavior of billions of neurons, each one connected to thousands of other neurons. The other way is to actually locate every neuron in the brain.
The key to the first approach, simulating the brain, is simple: raw computer power. The bigger the computer, the better. Brute force and inelegant theories may be the key to cracking this gigantic problem. And the computer that might accomplish this herculean task is called Blue Gene, one of the most powerful computers on earth, built by IBM.
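The brute-force approach is conceptually plain: step every neuron’s voltage forward in time, over and over. The sketch below simulates a toy network of 1,000 leaky integrate-and-fire neurons, each randomly wired to roughly a hundred others; the Blue Gene-class problem is essentially this same loop scaled to billions of neurons with thousands of connections each. All parameters are illustrative, and this is not the actual software run on Blue Gene.

```python
# A toy brute-force neural simulation; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, STEPS = 1_000, 200
V_REST, V_THRESH = 0.0, 1.0

# Sparse random wiring: each neuron receives input from ~10% of the others.
weights = rng.random((N, N)) * (rng.random((N, N)) < 0.1) * 0.03

v = np.full(N, V_REST)
for t in range(STEPS):
    fired = v >= V_THRESH                  # which neurons spike this step
    v[fired] = V_REST                      # reset the neurons that fired
    v = v + weights @ fired.astype(float)  # each spike feeds its connected neighbors
    v += rng.random(N) * 0.02              # weak background input
    v -= 0.01 * (v - V_REST)               # leak toward the resting potential
    if t % 50 == 0:
        print(f"step {t}: {int(fired.sum())} neurons fired")
```

At this scale a laptop handles the loop easily; the cost explodes because both the neuron count and the connections per neuron grow by factors of millions, which is why raw computer power is the whole game.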
I had a chance to visit this monster computer when I toured the Lawrence Livermore National Laboratory in California, where they design hydrogen warheads for the Pentagon. It is America’s premier top-secret weapons laboratory, a sprawling, 790-acre complex in the middle of farm country, budgeted at $1.2 billion per year and employing 6,800 people. This is the heart of the U.S. nuclear weapons establishment. I had to pass through many layers of security to see it, since this is one of the most sensitive weapons laboratories on earth.
Finally, after passing a series of checkpoints, I gained entrance to the building housing IBM’s Blue Gene computer, which is capable of computing at the blinding speed of 500 trillion operations per second. Blue Gene is a remarkable sight. It is huge, occupying about a quarter acre, and consists of row after row of jet-black steel cabinets, each one about 8 feet tall and 15 feet long.
When I walked among these cabinets, it was quite an experience. Unlike the computers in Hollywood science fiction movies, with their blinking lights, spinning disks, and bolts of electricity crackling through the air, these cabinets are totally quiet, with only a few tiny lights blinking. You realize that the computer is performing trillions of complex calculations, but you hear nothing and see nothing as it works.