Homo Deus: A Brief History of Tomorrow

Finally, some scientists concede that consciousness is real and may actually have great moral and political value, but maintain that it fulfils no biological function whatsoever. Consciousness is the biologically useless by-product of certain brain processes. Jet engines roar loudly, but the noise doesn’t propel the aeroplane forward. Humans don’t need carbon dioxide, but each and every breath fills the air with more of the stuff. Similarly, consciousness may be a kind of mental pollution produced by the firing of complex neural networks. It doesn’t do anything. It is just there. If this is true, it implies that all the pain and pleasure experienced by billions of creatures for millions of years is just mental pollution. This is certainly a thought worth thinking, even if it isn’t true. But it is quite amazing to realise that as of 2016, this is the best theory of consciousness that contemporary science has to offer us.

Maybe the life sciences view the problem from the wrong angle. They believe that life is all about data processing, and that organisms are machines for making calculations and taking decisions. However, this analogy between organisms and algorithms might mislead us. In the nineteenth century, scientists described brains and minds as if they were steam engines. Why steam engines? Because that was the leading technology of the day, which powered trains, ships and factories, so when humans tried to explain life, they assumed it must work according to analogous principles. Mind and body are made of pipes, cylinders, valves and pistons that build and release pressure, thereby producing movements and actions. Such thinking had a deep influence even on Freudian psychology, which is why much of our psychological jargon is still replete with concepts borrowed from mechanical engineering.

Consider, for example, the following Freudian argument: ‘Armies harness the sex drive to fuel military aggression. The army recruits young men just when their sexual drive is at its peak. The army limits the soldiers’ opportunities of actually having sex and releasing all that pressure, which consequently accumulates inside them. The army then redirects this pent-up pressure and allows it to be released in the form of military aggression.’ This is exactly how a steam engine works. You trap boiling steam inside a closed container. The steam builds up more and more pressure, until suddenly you open a valve, and release the pressure in a predetermined direction, harnessing it to propel a train or a loom. Not only in armies, but in all fields of activity, we often complain about the pressure building up inside us, and we fear that unless we ‘let off some steam’, we might explode.

In the twenty-first century it sounds childish to compare the human psyche to a steam engine. Today we know of a far more sophisticated technology – the computer – so we explain the human psyche as if it were a computer processing data rather than a steam engine regulating pressure. But this new analogy may turn out to be just as naïve. After all, computers have no minds. They don’t crave anything even when they have a bug, and the Internet doesn’t feel pain even when authoritarian regimes sever entire countries from the Web. So why use computers as a model for understanding the mind?

Well, are we really sure that computers have no sensations or desires? And even if they haven’t got any at present, perhaps once they become complex enough they might develop consciousness? If that were to happen, how could we ascertain it? When computers replace our bus driver, our teacher and our shrink, how could we determine whether they have feelings or whether they are just a collection of mindless algorithms?

When it comes to humans, we are today capable of differentiating between conscious mental experiences and non-conscious brain activities. Though we are far from understanding consciousness, scientists have succeeded in identifying some of its electrochemical signatures. To do so the scientists started with the assumption that whenever humans report that they are conscious of something, they can be believed. Based on this assumption the scientists could then isolate specific brain patterns that appear every time humans report being conscious, but that never appear during unconscious states.

This has allowed the scientists to determine, for example, whether a seemingly vegetative stroke victim has completely lost consciousness, or has merely lost control of his body and speech. If the patient’s brain displays the telltale signatures of consciousness, he is probably conscious, even though he cannot move or speak. Indeed, doctors have recently managed to communicate with such patients using fMRI imaging. They ask the patients yes/no questions, telling them to imagine themselves playing tennis if the answer is yes, and to visualise the location of their home if the answer is no. The doctors can then observe how the motor cortex lights up when patients imagine playing tennis (meaning ‘yes’), whereas ‘no’ is indicated by the activation of brain areas responsible for spatial memory.7
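The decoding rule the doctors use can be reduced to a simple comparison. The sketch below is purely illustrative: the activation numbers and the two-region comparison are hypothetical placeholders, not real fMRI analysis.

```python
# Minimal sketch of the yes/no decoding rule described above.
# The activation values are hypothetical placeholders, not real fMRI data.

def decode_answer(motor_cortex_activation, spatial_memory_activation):
    """Return 'yes' if imagining tennis (motor cortex) dominates,
    'no' if visualising one's home (spatial-memory areas) dominates."""
    if motor_cortex_activation > spatial_memory_activation:
        return "yes"
    return "no"

print(decode_answer(0.9, 0.2))  # patient imagined playing tennis -> yes
print(decode_answer(0.1, 0.8))  # patient visualised their home -> no
```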

This is all very well for humans, but what about computers? Since silicon-based computers have very different structures to carbon-based human neural networks, the human signatures of consciousness may not be relevant to them. We seem to be trapped in a vicious circle. Starting with the assumption that we can believe humans when they report that they are conscious, we can identify the signatures of human consciousness, and then use these signatures to ‘prove’ that humans are indeed conscious. But if an artificial intelligence self-reports that it is conscious, should we just believe it?

So far, we have no good answer to this problem. Already thousands of years ago philosophers realised that there is no way to prove conclusively that anyone other than oneself has a mind. Indeed, even in the case of other humans, we just assume they have consciousness – we cannot know that for certain. Perhaps I am the only being in the entire universe who feels anything, and all other humans and animals are just mindless robots? Perhaps I am dreaming, and everyone I meet is just a character in my dream? Perhaps I am trapped inside a virtual world, and all the beings I see are merely simulations?

According to current scientific dogma, everything I experience is the result of electrical activity in my brain, and it should therefore be theoretically feasible to simulate an entire virtual world that I could not possibly distinguish from the ‘real’ world. Some brain scientists believe that in the not too distant future, we shall actually do such things. Well, maybe it has already been done – to you? For all you know, the year might be 2216 and you are a bored teenager immersed inside a ‘virtual world’ game that simulates the primitive and exciting world of the early twenty-first century. Once you acknowledge the mere feasibility of this scenario, mathematics leads you to a very scary conclusion: since there is only one real world, whereas the number of potential virtual worlds is infinite, the probability that you happen to inhabit the sole real world is almost zero.
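The ‘mathematics’ invoked here is a simple counting argument, which rests on the assumption that you are equally likely to inhabit any of the worlds in question:

```latex
% One real world among N simulated worlds; assuming a uniform chance of
% inhabiting any one of them, the probability of being in the real world is
P(\text{real}) = \frac{1}{N + 1},
% which tends to zero as the number of simulated worlds grows without bound:
\lim_{N \to \infty} \frac{1}{N + 1} = 0.
```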

None of our scientific breakthroughs has managed to overcome this notorious Problem of Other Minds. The best test that scholars have so far come up with is called the Turing Test, but it examines only social conventions. According to the Turing Test, in order to determine whether a computer has a mind, you should communicate simultaneously both with that computer and with a real person, without knowing which is which. You can ask whatever questions you want, you can play games, argue, and even flirt with them. Take as much time as you like. Then you need to decide which is the computer, and which is the human. If you cannot make up your mind, or if you make a mistake, the computer has passed the Turing Test, and we should treat it as if it really has a mind. However, that won’t really be a proof, of course. Acknowledging the existence of other minds is merely a social and legal convention.

The Turing Test was invented in 1950 by the British mathematician Alan Turing, one of the fathers of the computer age. Turing was also a gay man in a period when homosexuality was illegal in Britain. In 1952 he was convicted of committing homosexual acts and forced to undergo chemical castration. Two years later he committed suicide. The Turing Test is simply a replication of a mundane test every gay man had to undergo in 1950s Britain: can you pass for a straight man? Turing knew from personal experience that it didn’t matter who you really were – it mattered only what others thought about you. According to Turing, in the future computers would be just like gay men in the 1950s. It won’t matter whether computers will actually be conscious or not. It will matter only what people think about it.

The Depressing Lives of Laboratory Rats

Having acquainted ourselves with the mind – and with how little we really know about it – we can return to the question of whether other animals have minds. Some animals, such as dogs, certainly pass a modified version of the Turing Test. When humans try to determine whether an entity is conscious, what we usually look for is not mathematical aptitude or good memory, but rather the ability to create emotional relationships with us. People sometimes develop deep emotional attachments to fetishes like weapons, cars and even underwear, but these attachments are one-sided and never develop into relationships. The fact that dogs can be party to emotional relationships with humans convinces most dog owners that dogs are not mindless automata.

This, however, won’t satisfy sceptics, who point out that emotions are algorithms, and that no known algorithm requires consciousness in order to function. Whenever an animal displays complex emotional behaviour, we cannot prove that this is not the result of some very sophisticated but non-conscious algorithm. This argument, of course, can be applied to humans too. Everything a human does – including reporting on allegedly conscious states – might in theory be the work of non-conscious algorithms.

In the case of humans, we nevertheless assume that whenever someone reports that he or she is conscious, we can take their word for it. Based on this minimal assumption, we can today identify the brain signatures of consciousness, which can then be used systematically to differentiate conscious from non-conscious states in humans. Yet since animal brains share many features with human brains, as our understanding of the signatures of consciousness deepens, we might be able to use them to determine if and when other animals are conscious. If a canine brain shows similar patterns to those of a conscious human brain, this will provide strong evidence that dogs are conscious.

Initial tests on monkeys and mice indicate that at least monkey and mouse brains indeed display the signatures of consciousness.8
However, given the differences between animal brains and human brains, and given that we are still far from deciphering all the secrets of consciousness, developing decisive tests that will satisfy the sceptics might take decades. Who should carry the burden of proof in the meantime? Do we consider dogs to be mindless machines until proven otherwise, or do we treat dogs as conscious beings as long as nobody comes up with some convincing counter-evidence?

On 7 July 2012 leading experts in neurobiology and the cognitive sciences gathered at the University of Cambridge, and signed the Cambridge Declaration on Consciousness, which says that ‘Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviours. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.’9 This declaration stops short of saying that other animals are conscious, because we still lack the smoking gun. But it does shift the burden of proof to those who think otherwise.

Responding to the shifting winds of the scientific community, in May 2015 New Zealand became the first country in the world to legally recognise animals as sentient beings, when the New Zealand parliament passed the Animal Welfare Amendment Act. The Act stipulates that it is now obligatory to recognise animals as sentient, and hence attend properly to their welfare in contexts such as animal husbandry. In a country with far more sheep than humans (30 million vs 4.5 million), that is a very significant statement. The Canadian province of Quebec has since passed a similar Act, and other countries are likely to follow suit.

Many business corporations also recognise animals as sentient beings, though paradoxically, this often exposes the animals to rather unpleasant laboratory tests. For example, pharmaceutical companies routinely use rats as experimental subjects in the development of antidepressants. According to one widely used protocol, you take a hundred rats (for statistical reliability) and place each rat inside a glass tube filled with water. The rats struggle again and again to climb out of the tubes, without success. After fifteen minutes most give up and stop moving. They just float in the tube, apathetic to their surroundings.

You now take another hundred rats, throw them in, but fish them out of the tube after fourteen minutes, just before they are about to despair. You dry them, feed them, give them a little rest – and then throw them back in. The second time, most rats struggle for twenty minutes before calling it quits. Why the extra six minutes? Because the memory of past success triggers the release of some biochemical in the brain that gives the rats hope and delays the advent of despair. If we could only isolate this biochemical, we might use it as an antidepressant for humans. But numerous chemicals flood a rat’s brain at any given moment. How can we pinpoint the right one?

For this you take more groups of rats, who have never participated in the test before. You inject each group with a particular chemical, which you suspect to be the hoped-for antidepressant. You throw the rats into the water. If rats injected with chemical A struggle for only fifteen minutes before becoming depressed, you can cross out A on your list. If rats injected with chemical B go on thrashing for twenty minutes, you can tell the CEO and the shareholders that you might have just hit the jackpot.
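The screening logic of this protocol amounts to comparing each drug group’s average struggle time against the untreated baseline. The sketch below is a hypothetical illustration of that comparison: the numbers, chemical names and two-minute margin are invented, and a real screen would use proper statistical tests rather than a raw mean comparison.

```python
# Illustrative sketch of the screening logic described above (forced swim test).
# All values and names are hypothetical; real protocols use proper statistics.

def screen_candidates(control_minutes, drug_results, margin=2.0):
    """Flag chemicals whose rats struggle notably longer than the controls.

    control_minutes: struggle times (minutes) of untreated rats.
    drug_results: dict mapping chemical name -> list of struggle times.
    margin: extra minutes over the control mean required to count as a hit.
    """
    control_mean = sum(control_minutes) / len(control_minutes)
    hits = []
    for chemical, times in drug_results.items():
        if sum(times) / len(times) >= control_mean + margin:
            hits.append(chemical)
    return hits

# Untreated rats give up after roughly fifteen minutes.
control = [14.5, 15.0, 15.5, 15.2]
results = {
    "chemical_A": [15.0, 14.8, 15.3],  # no effect: cross it off the list
    "chemical_B": [19.8, 20.5, 20.1],  # rats persist ~20 minutes: a hit
}
print(screen_candidates(control, results))  # -> ['chemical_B']
```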
