The Singularity Is Near: When Humans Transcend Biology
Ray Kurzweil

However, there is nonetheless a distinct limit to the complexity produced by class 4 automata. The many images of such automata in Wolfram’s book all have a similar look to them, and although they are nonrepeating, they are interesting (and intelligent) only to a degree. Moreover, they do not continue to evolve into anything more complex, nor do they develop new types of features. One could run these automata for trillions or even trillions of trillions of iterations and the image would remain at the same limited level of complexity. They do not evolve into, say, insects or humans or Chopin preludes or anything else that we might consider of a higher order of complexity than the streaks and intermingling triangles displayed in these images.

Complexity is a continuum. Here I define “order” as “information that fits a purpose.”[70] A completely predictable process has zero order. A high level of information alone does not necessarily imply a high level of order either. A phone book has a lot of information, but the level of order of that information is quite low. A random sequence is essentially pure information (since it is not predictable) but has no order. The output of class 4 automata does possess a certain level of order, and it does survive like other persisting patterns. But the pattern represented by a human being has a far higher level of order, and of complexity.

Human beings fulfill a highly demanding purpose: they survive in a challenging ecological niche. Human beings represent an extremely intricate and elaborate hierarchy of other patterns. Wolfram regards any patterns that combine some recognizable features and unpredictable elements to be effectively equivalent to one another. But he does not show how a class 4 automaton can ever increase its complexity, let alone become a pattern as complex as a human being.

There is a missing link here, one that would account for how one gets from the interesting but ultimately routine patterns of a cellular automaton to the complexity of persisting structures that demonstrate higher levels of intelligence. For example, these class 4 patterns are not capable of solving interesting problems, and no amount of iteration moves them closer to doing so. Wolfram would counter that a rule 110 automaton could be used as a “universal computer.”[71] However, by itself, a universal computer is not capable of solving intelligent problems without what I would call “software.” It is the complexity of the software that runs on a universal computer that is precisely the issue.

One might point out that class 4 patterns result from the simplest possible cellular automata (one-dimensional, two-color, two-neighbor rules). What happens if we increase the dimensionality—for example, go to multiple colors or even generalize these discrete cellular automata to continuous functions? Wolfram addresses all of this quite thoroughly. The results produced from more complex automata are essentially the same as those of the very simple ones. We get the same sorts of interesting but ultimately quite limited patterns. Wolfram makes the intriguing point that we do not need to use more complex rules to get complexity in the end result. But I would make the converse point that we are unable to increase the complexity of the end result through either more complex rules or further iteration. So cellular automata get us only so far.
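
To make the mechanics concrete, here is a minimal sketch (not taken from Wolfram’s book) of the simplest class of cellular automaton discussed above: one-dimensional, two colors, with each cell updated from itself and its two neighbors, so an eight-bit rule number such as 110 fully specifies the behavior. The grid width and number of generations are arbitrary choices for display.

```python
# A minimal sketch of a one-dimensional, two-color cellular automaton of the
# kind Wolfram studies. Each cell's next state depends on itself and its two
# neighbors, so an 8-bit rule number (here 110, a class 4 rule) fully
# specifies the update. Grid width and generation count are illustrative.

def step(cells, rule=110):
    """Apply one update of the elementary cellular automaton `rule`."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right   # neighborhood pattern, 0..7
        out.append((rule >> index) & 1)               # look up the rule's output bit
    return out

# Start from a single live cell and print a few generations.
width, generations = 64, 24
cells = [0] * width
cells[width // 2] = 1
for _ in range(generations):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, rule=110)
```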

Can We Evolve Artificial Intelligence from Simple Rules?

So how do we get from these interesting but limited patterns to those of insects or humans or Chopin preludes? One concept we need to take into consideration is conflict—that is, evolution. If we add another simple concept—an evolutionary algorithm—to that of Wolfram’s simple cellular automata, we start to get far more exciting and more intelligent results. Wolfram would say that the class 4 automata and an evolutionary algorithm are “computationally equivalent.” But that is true only on what I consider the “hardware” level. On the software level, the order of the patterns produced is clearly different and of a different order of complexity and usefulness.

An evolutionary algorithm can start with randomly generated potential solutions to a problem, which are encoded in a digital genetic code. We then have the solutions compete with one another in a simulated evolutionary battle. The better solutions survive and procreate in a simulated sexual reproduction in which offspring solutions are created, drawing their genetic code (encoded solutions) from two parents. We can also introduce a rate of genetic mutation. Various high-level parameters of this process, such as the rate of mutation, the rate of offspring, and so on, are appropriately called “God parameters,” and it is the job of the engineer designing the evolutionary algorithm to set them to reasonably optimal values. The process is run for many thousands of generations of simulated evolution, and at the end of the process one is likely to find solutions that are of a distinctly higher order than the starting ones.
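
The following is a minimal sketch of such an evolutionary algorithm, using a deliberately trivial toy problem (maximizing the number of 1 bits in a bit-string genome) purely for illustration; the population size, mutation rate, and generation count stand in for the “God parameters” described above.

```python
# A toy evolutionary (genetic) algorithm: bit-string genomes, survival of the
# fitter half, sexual reproduction by crossover between two parents, and a
# small mutation rate. The fitness function is deliberately trivial.
import random

GENOME_LEN, POP_SIZE, MUTATION_RATE, GENERATIONS = 64, 100, 0.01, 200

def fitness(genome):
    # Toy objective: count of 1 bits. A real application would score how well
    # the decoded solution performs on its problem (e.g., a jet-engine design).
    return sum(genome)

def crossover(parent_a, parent_b):
    # Simulated sexual reproduction: the child draws its code from two parents.
    cut = random.randrange(1, GENOME_LEN)
    return parent_a[:cut] + parent_b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]            # the better solutions survive
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness after", GENERATIONS, "generations:",
      fitness(max(population, key=fitness)))
```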

The results of these evolutionary (sometimes called genetic) algorithms can be elegant, beautiful, and intelligent solutions to complex problems. They have been used, for example, to create artistic designs and designs for artificial life-forms, as well as to execute a wide range of practical assignments such as designing jet engines. Genetic algorithms are one approach to “narrow” artificial intelligence—that is, creating systems that can perform particular functions that used to require the application of human intelligence.

But something is still missing. Although genetic algorithms are a useful tool in solving specific problems, they have never achieved anything resembling “strong AI”—that is, aptitude resembling the broad, deep, and subtle features of human intelligence, particularly its powers of pattern recognition and command of language. Is the problem that we are not running the evolutionary algorithms long enough? After all, humans evolved through a process that took billions of years. Perhaps we cannot re-create that process with just a few days or weeks of computer simulation. This won’t work, however, because conventional genetic algorithms reach an asymptote in their level of performance, so running them for a longer period of time won’t help.

A third level (beyond the ability of cellular processes to produce apparent randomness and genetic algorithms to produce focused intelligent solutions) is to perform evolution on multiple levels. Conventional genetic algorithms allow evolution only within the confines of a narrow problem and a single means of evolution. The genetic code itself needs to evolve; the rules of evolution need to evolve. Nature did not stay with a single chromosome, for example. There have been many levels of indirection incorporated in the natural evolutionary process. And we require a complex environment in which the evolution takes place.
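
One concrete, if modest, illustration of letting the rules of evolution themselves evolve is self-adaptation, a standard device in evolution strategies: each individual carries its own mutation rate as part of its genome, so the parameters of the evolutionary process are subject to selection along with the solutions. The sketch below reuses the toy bit-counting objective from the previous example and illustrates only the idea of multiple levels, not a full realization of it.

```python
# A minimal sketch of one form of "evolving the rules of evolution": each
# individual carries its own mutation rate, so an evolutionary parameter is
# itself selected. This is the self-adaptation technique from evolution
# strategies, used here purely to illustrate multi-level evolution.
import random

def mutate(genome, rate):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def reproduce(individual):
    genome, rate = individual
    # The mutation rate itself mutates, then is used to mutate the genome.
    new_rate = min(0.5, max(0.001, rate * random.uniform(0.8, 1.25)))
    return (mutate(genome, new_rate), new_rate)

population = [([random.randint(0, 1) for _ in range(32)], 0.05) for _ in range(50)]
for _ in range(100):
    population.sort(key=lambda ind: sum(ind[0]), reverse=True)  # same toy objective
    survivors = population[:25]
    population = survivors + [reproduce(random.choice(survivors)) for _ in range(25)]

print("best:", sum(max(population, key=lambda ind: sum(ind[0]))[0]))
```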

To build strong AI we will have the opportunity to short-circuit this process, however, by reverse engineering the human brain, a project well under way, thereby benefiting from the evolutionary process that has already taken place. We will be applying evolutionary algorithms within these solutions just as the human brain does. For example, the fetal wiring is initially random within constraints specified in the genome in at least some regions. Recent research shows that areas having to do with learning undergo more change, whereas structures having to do with sensory processing experience less change after birth.[72]

Wolfram makes the valid point that certain (indeed, most) computational processes are not predictable. In other words, we cannot predict future states without running the entire process. I agree with him that we can know the answer in advance only if somehow we can simulate a process at a faster speed. Given that the universe runs at the fastest speed it can run, there is usually no way to short-circuit the process. However, we have the benefits of the billions of years of evolution that have already taken place, which are responsible for the greatly increased order of complexity in the natural world. We can now benefit from it by using our evolved tools to reverse engineer the products of biological evolution (most importantly, the human brain).

Yes, it is true that some phenomena in nature that may appear complex at some level are merely the result of simple underlying computational mechanisms that are essentially cellular automata at work. The interesting pattern of triangles on a “tent olive” shell (cited extensively by Wolfram) or the intricate and varied patterns of a snowflake are good examples. I don’t think this is a new observation, in that we’ve always regarded the design of snowflakes as deriving from a simple molecular computation-like building process. However, Wolfram does provide us with a compelling theoretical foundation for expressing these processes and their resulting patterns. But there is more to biology than class 4 patterns.

Another important thesis by Wolfram lies in his thorough treatment of computation as a simple and ubiquitous phenomenon. Of course, we’ve known for more than a century that computation is inherently simple: we can build any possible level of complexity from a foundation of the simplest possible manipulations of information.

For example, Charles Babbage’s nineteenth-century mechanical computer (which never ran) offered only a handful of operation codes yet provided (within its memory capacity and speed) the same kinds of transformations that modern computers do. The complexity of Babbage’s invention stemmed only from the details of its design, which indeed proved too difficult for Babbage to implement using the technology available to him.

The Turing machine, Alan Turing’s theoretical conception of a universal computer in 1936, provides only seven very basic commands, yet can be organized to perform any possible computation.[73] The existence of a “universal Turing machine,” which can simulate any possible Turing machine that is described on its tape memory, is a further demonstration of the universality and simplicity of computation.[74] In The Age of Intelligent Machines, I showed how any computer could be constructed from “a suitable number of [a] very simple device,” namely, the “nor” gate.[75] This is not exactly the same demonstration as a universal Turing machine, but it does demonstrate that any computation can be performed by a cascade of this very simple device (which is simpler than rule 110), given the right software (which in this case would include the connection description of the nor gates).[76]
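
That point can be illustrated directly with a small sketch (not the original demonstration from The Age of Intelligent Machines): NOT, OR, and AND are each built from NOR alone, and any Boolean circuit, and hence any computation, can then be assembled from such gates given the appropriate connection description.

```python
# A small sketch of the universality of the NOR gate: NOT, OR, and AND are
# each realized purely from NOR, and any Boolean function can be reduced to
# a cascade of such gates plus a wiring ("connection") description.

def NOR(a, b):
    return int(not (a or b))

def NOT(a):
    return NOR(a, a)

def OR(a, b):
    return NOT(NOR(a, b))

def AND(a, b):
    return NOR(NOT(a), NOT(b))

# Quick check of the derived gates against their truth tables.
for a in (0, 1):
    for b in (0, 1):
        assert OR(a, b) == int(bool(a) or bool(b))
        assert AND(a, b) == int(bool(a) and bool(b))
        assert NOT(a) == int(not a)
print("NOT, OR, and AND all realized from NOR alone")
```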

Although we need additional concepts to describe an evolutionary process that can create intelligent solutions to problems, Wolfram’s demonstration of the simplicity and ubiquity of computation is an important contribution in our understanding of the fundamental significance of information in the world.

 

MOLLY 2004: You’ve got machines evolving at an accelerating pace. What about humans?

RAY: You mean biological humans?

MOLLY 2004: Yes.

CHARLES DARWIN: Biological evolution is presumably continuing, is it not?

RAY: Well, biology at this level is evolving so slowly that it hardly counts. I mentioned that evolution works through indirection. It turns out that the older paradigms such as biological evolution do continue but at their old speed, so they are eclipsed by the new paradigms. Biological evolution for animals as complex as humans takes tens of thousands of years to make noticeable, albeit still small, differences. The entire history of human cultural and technological evolution has taken place on that timescale. Yet we are now poised to ascend beyond the fragile and slow creations of biological evolution in a mere several decades. Current progress is on a scale that is a thousand to a million times faster than biological evolution.

NED LUDD: What if not everyone wants to go along with this?

RAY: I wouldn’t expect they would. There are always early and late adopters. There’s always a leading edge and a trailing edge to technology or to any evolutionary change. We still have people pushing plows, but that hasn’t slowed down the adoption of cell phones, telecommunications, the Internet, biotechnology, and so on. However, the lagging edge does ultimately catch up. We have societies in Asia that jumped from agrarian economies to information economies, without going through industrialization.[77]

NED: That may be so, but the digital divide is getting worse.

RAY: I know that people keep saying that, but how can that possibly be true? The number of humans is growing only very slowly. The number of digitally connected humans, no matter how you measure it, is growing rapidly. A larger and larger fraction of the world’s population is getting electronic communicators and leapfrogging our primitive phone-wiring system by hooking up to the Internet wirelessly, so the digital divide is rapidly diminishing, not growing.

MOLLY 2004: I still feel that the have/have not issue doesn’t get enough attention. There’s more we can do.

RAY: Indeed, but the overriding, impersonal forces of the law of accelerating returns are nonetheless moving in the right direction. Consider that technology in a particular area starts out unaffordable and not working very well. Then it becomes merely expensive and works a little better. The next step is that the product becomes inexpensive and works really well. Finally, the technology becomes virtually free and works great. It wasn’t long ago that when you saw someone using a portable phone in a movie, he or she was a member of the power elite, because only the wealthy could afford portable phones. Or as a more poignant example, consider drugs for AIDS. They started out not working very well and costing more than ten thousand dollars per year per patient. Now they work a lot better and are down to several hundred dollars per year in poor countries.[78] Unfortunately with regard to AIDS, we’re not yet at the working-great-and-costing-almost-nothing stage. The world is beginning to take somewhat more effective action on AIDS, but it has been tragic that more has not been done. Millions of lives, most in Africa, have been lost as a result. But the effect of the law of accelerating returns is nonetheless moving in the right direction. And the time gap between leading and lagging edge is itself contracting. Right now I estimate this lag at about a decade. In a decade, it will be down to about half a decade.
