Although philosophers tend to be interested in the majority of cases where genes are not destiny, it is worth remembering that in some situations they most definitely are. If you have two copies of the sickling version of the haemoglobin gene, you will suffer from terrible anaemia and other debilitating symptoms. Nothing in your environment or upbringing seems to be able to alter that. Even more tragically, if you carry a single copy of the Huntingtin gene containing a CAG trinucleotide that is repeated potentially sixty times over, then you will eventually suffer from Huntington’s disease, a neurodegenerative disorder. This genetic disease shows varying symptoms in different individuals, partly as a result of differences in the number of CAG repeats, but it is always fatal.
58
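To make the repeat arithmetic concrete, here is a minimal sketch (with an invented example sequence) of how one might count the longest uninterrupted CAG run in a DNA string. Real diagnostic repeat-sizing is far more careful than this, but expansions of roughly forty or more repeats are the ones associated with Huntington's disease.

```python
import re

def longest_cag_run(seq: str) -> int:
    """Return the length, in repeats, of the longest uninterrupted CAG run."""
    runs = re.findall(r"(?:CAG)+", seq.upper())
    return max((len(r) // 3 for r in runs), default=0)

# Invented sequence with five consecutive CAG repeats flanked by other bases
seq = "GGT" + "CAG" * 5 + "TTA"
print(longest_cag_run(seq))  # → 5
```

This is only an illustration of the counting itself; clinical assays must also handle sequencing errors and interrupted repeat tracts.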
In many cases, however, genes are not the ultimate determiner or cause of biological phenomena. In the example of sex determination in crocodiles given earlier, the genes are constant, and the proportion of male and female crocodiles is determined not directly by the genes but by the way in which the temperature affects the activity of those genes. In that case, the decisive causal factor is temperature, but it does not act alone. Temperature exerts its effects by altering the activity of sex-determining genes, through the production of proteins and RNA molecules that are themselves the product of other genes. Genes need cells, which they create, to realise the conditional instructions that they contain, and the environment has to be permissive. However, in similar conditions, similar effects will tend to be produced. The way in which those effects percolate out into the anatomy, physiology and behaviour of a whole organism can be unpredictable, making it hard to draw a direct line between a particular gene and a particular character.
The behaviour geneticists Doug Wahlsten and John Crabbe explored this problem in 1999 when they got separate laboratories to carry out the same behavioural experiments on the same inbred strains of mice. There were systematic differences in the behaviour of the mice in different laboratories, indicating that the route from gene to behaviour depends on many complex factors, including the experimental set-up and the immediate environment.
59
That does not mean that it is impossible to test reliably for genetic effects on behaviour: in 2006, Wahlsten and Crabbe reported that inbred mouse strains can show very high levels of behavioural consistency over time (for example in locomotor activity or in preference for ethanol), even when the experiments were conducted with a gap of fifty years.
60
These results are not particularly surprising to anyone who has done an experiment on the genetics of behaviour. Organisms are not robots, and their continual interaction with the environment throughout their development and during the experiment creates genetic, physiological and behavioural noise that can affect the results. That does not mean to say that genes are not involved in determining anatomy, physiology and behaviour; it simply means that it is sometimes extremely hard to study these effects.
Attempts to detect genetic factors underlying intelligence have proved particularly problematic. There are clear genetic effects on cognitive ability: no chimpanzee will ever be able to act, speak and think like the average human. That flows from the relatively small differences in our DNA – our genes produce two species with different levels of intellectual ability. The problems begin when it comes to studying the differences in intelligence (whatever that might be) that can be observed between humans: pinning down what part is due to our slightly different sets of genes is very difficult. In 2014, a study of more than 100,000 people sought to correlate genetic variability with variations in cognitive ability and educational attainment.
61
The authors found just three genetic variations across the whole genome that might be implicated in the cognitive differences they were measuring, and these all had extremely small effects. There are undoubtedly genetic differences between humans that affect our intelligence, but it seems probable that there are very many such genetic factors, each contributing a tiny amount, with any individual having a mixture of a wide range of these genes. The lesson of such studies is that if the character that is being investigated is largely determined by the environment, as seems reasonable to imagine is the case for educational attainment, then it will be difficult to detect genetic effects.
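A rough back-of-the-envelope calculation shows why such tiny effects are so hard to detect. With hypothetical numbers (a thousand contributing loci, each with a small additive effect, plus environmental variance), any single variant accounts for a vanishing share of the trait's total variance:

```python
# All numbers below are illustrative assumptions, not estimates from any study
N_VARIANTS = 1000          # many loci, each with a tiny effect
EFFECT = 0.01              # per-variant effect size (arbitrary units)
FREQ = 0.5                 # allele frequency
ENV_VAR = 1.0              # environmental variance dwarfs any single locus

per_variant = FREQ * (1 - FREQ) * EFFECT ** 2   # variance contributed by one locus
genetic = N_VARIANTS * per_variant              # variance from all loci together
share = per_variant / (genetic + ENV_VAR)       # fraction explained by one locus

print(f"one variant explains {share:.6%} of trait variance")
# one variant explains about 0.0024% of trait variance
```

Detecting an effect that small against environmental noise is precisely why studies of this kind need samples of 100,000 people or more.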
If it turns out that there are important genetic factors underlying individual differences in human cognitive ability, the challenge would be for society to decide how to use – or not – that information. However, I would be very surprised if this were the case. The fact that genes contain information that determines the sequence of nucleic acids and proteins does not imply that all characters are genetically determined. Some are, many are not. Biology is complicated.
*
Describing the content of genes as information, and viewing the activity of cells and organisms as involving the movement of information, puts all levels of life into a single framework. As Crick put it, life is characterised by the flow of energy, the flow of matter and the flow of information. Information flow involves the activity of specific molecules and, at the level of a whole organism, of cells or groups of cells. Conceptualising the whole process as having an underlying unity in terms of information provides a context that helps explain how molecules, cells, organs and organisms interact and are coordinated.
This reflects one of the central conceptual approaches in the history of the genetic code and of gene function, the cybernetic vision. Cybernetics – the study of control and apparently purposive behaviour in animals and machines – exerted tremendous influence in the late 1940s and throughout the 1950s, because it appeared that it would form a new science, providing a way of uniting all levels of biology with engineering and mathematics. That did not happen, and the tide of enthusiasm for cybernetics gradually ebbed when it became evident that, beyond its emphasis on control and the existence of negative feedback loops to produce apparently purposive behaviour, cybernetics did not provide a predictive framework for future discoveries.
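The negative feedback loops that cybernetics emphasised can be illustrated with a deliberately generic sketch, not tied to any particular biological system: a state variable is repeatedly nudged toward a set point in proportion to its current error, so deviations damp themselves out. All the numbers here are arbitrary.

```python
def simulate_feedback(set_point: float, start: float, gain: float, steps: int) -> list[float]:
    """Each step applies a correction proportional to the error, opposing it."""
    state = start
    trajectory = [state]
    for _ in range(steps):
        error = set_point - state   # measured deviation from the target
        state += gain * error       # corrective action shrinks the deviation
        trajectory.append(state)
    return trajectory

# With gain 0.5 the error halves at every step
levels = simulate_feedback(set_point=1.0, start=0.0, gain=0.5, steps=10)
# levels[-1] is within 0.001 of the set point after ten steps
```

This self-correcting behaviour is the "apparently purposive" quality the cyberneticians were pointing to: the system looks as though it is seeking a goal, yet it is only obeying a simple feedback rule.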
Nonetheless, cybernetics was important in helping Jacob and Monod understand the data from their operon experiments and thereby contributed to our understanding of gene regulation. In 1970, Monod attempted to explain the organisation of living systems in a book entitled Chance and Necessity.
Even though Monod was writing after the fashion for cybernetics had begun to wane, he still argued that organisms were quite literally cybernetic structures, consisting of patterns of control and feedback that were embodied by the action of specific molecules. He also argued that the components of many cellular molecular networks interact in a way that is based on information, not on chemical structure. For example, when enzymes are induced by the presence of their substrate, this does not occur through a direct chemical link but through the activity of an intermediary protein and the gene that codes for it. The only way of understanding this process is as a flow of information that passes through each component, taking different physical forms as it goes.
62
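Monod's point can be caricatured as conditional logic. The sketch below is a drastic Boolean simplification of lac operon regulation, assuming just two inputs (inducer presence and glucose scarcity); it ignores the graded, partial responses of the real system:

```python
def lacZ_expressed(inducer_present: bool, glucose_low: bool) -> bool:
    """Boolean caricature of lac operon induction (not a molecular model)."""
    # The repressor (LacI) sits on the operator unless the inducer inactivates it
    repressor_bound = not inducer_present
    # Activation by CAP-cAMP requires glucose to be scarce (catabolite repression)
    cap_active = glucose_low
    return (not repressor_bound) and cap_active

print(lacZ_expressed(inducer_present=True, glucose_low=True))  # → True
```

The instructive feature is that the output depends on the pattern of inputs, not on any direct chemical link between substrate and enzyme; the "information" passes through intermediaries, exactly as Monod argued.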
Modern analytical techniques enable scientists to understand such biochemical interactions in exquisite detail. This has given rise to the field known as systems biology, which studies patterns of chemical interaction and gene regulation. Some claim that systems biology will embrace all the levels of life, right up to the ecosystem.
63
For the moment, research under the systems biology label focuses on the chemical processes taking place in the cell. The vast data sets produced by such analyses, and the ability of modern computers to process and model those data, have inevitably led to a resurgence of interest in cybernetic approaches to biochemical processes, with a focus on the importance of feedback.
64
However, despite the confidence of both Jacob and Monod that cybernetics would provide a way of understanding how the genetic code turns into instructions, the influence of cybernetics on modern science remains at the level of broad effects rather than any precise detail. That is even true in the field of neurobiology: although neural networks clearly process information and exert control, for most students and scientists cybernetics is a dimly remembered ancestor, rather than an essential part of their experimental approach.
65
Science, like other parts of human culture, can be influenced by fashion, and by our apparently endless appetite for novelty. When fashions change in science, it is not simply because people become bored and crave change, but because the old approach or technique has at best proved disappointing, at worst a failure. The influence of cybernetics and information theory on genetics can be seen in this way. In the 1940s and 1950s these two related approaches had a massive impact on the development of biology as a whole and on molecular genetics in particular. In the end, their influence waned as they failed to provide a framework that could stimulate further discovery. Both views ended up influencing genetics as vital metaphors and ways of viewing the world, not as essential theoretical foundations. This metaphorical role remains today, and it explains why scientists are so comfortable in saying that genes contain information and that they exert control over cellular networks.
* These first reactions may not have occurred near deep-sea vents, but instead in small vesicles made of fatty acids. This is the view of Jack Szostak, who has been able to create such an artificial protocell and get RNA to replicate spontaneously within it (Adamala and Szostak, 2013).
CONCLUSION
In his book Ways of Knowing, the historian John Pickstone pointed out something that might seem obvious: science is a form of work. He argued that changes in how scientists gain their knowledge of the world can be interpreted in terms of changes in the organisation of work that have also occurred in manufacturing, which in different phases has been dominated in turn by what he called craft, rationalised production and systematic invention.
1
The race to crack the genetic code was mostly a matter of craft. Individuals or small groups were struggling with ideas and concepts as much as they were with facts; they were not only trying to understand what would be the right experiment to answer a question, they had to work out what the question was. Only in its final phases, after the breakthrough of Nirenberg and Matthaei, did craft partially cede pride of place to something like rationalised production, as the answer became visible and knowable, although it had not yet been attained. During those years from 1961 to 1967, cracking the code gradually became as much about biochemical technology as it was about imagination, even if the development and application of that technology required a great deal of craft and insight.
These discoveries created a revolution in our understanding and in our ways of thinking about life, a revolution that changed how science is done, shaping both our present and our future. In many respects we are now in a phase of systematic invention, in which new discoveries are being made in a more coordinated way, often involving large teams. Through the development of technology, we are now able to sequence the genomes of whole organisms in a matter of weeks – and soon even more quickly than that. Norbert Wiener, the founder of cybernetics, was concerned about how automation would alter factory work. It has most certainly transformed how science is done: robots can now decipher our genes, turning our genetic code into digital data that can then be explored anywhere in the world.
In 1991, just as the genome projects were being dreamt up, Wally Gilbert published an article in Nature in which he looked to the future.
2
Quite remarkably, he pretty much described the world we live in, suggesting that computers around the world would be hooked into databases, and that biologists would need to learn computing techniques to cope with the tide of data, investigating gene function first through a comparison of genes in different species rather than in an experiment. Gilbert pointed to skills that had already been lost in the brief history of molecular genetics, such as the ability to isolate restriction enzymes in the lab, which had been rendered obsolete by the availability of commercial products, and he rightly predicted that this process would continue. He also recognised that this rolling change was nothing new – once upon a time scientists blew their own glassware; later they bought it from a catalogue. The advent of automated sequencing of whole genomes is a huge step forwards – few scientists who went through the drudgery of hand-sequencing genes would want to return to those days. Scientists can now think about the biology instead of struggling with the chemistry.
Who those scientists are and how they work together have also changed dramatically. The work that resulted in the cracking of the genetic code was almost entirely carried out by men, with a few exceptions – in chronological order, the women featured here were Harriet Ephrussi-Taylor, Martha Chase, Rosalind Franklin, Marianne Grunberg-Manago, Maxine Singer, Leslie Barnett and Norma Heaton. Some of these women were leading scientists, others were mid-level researchers, still others were technicians. Women now have a far more significant role: most fields of biology include leading female scientists, and it is quite usual for women to run laboratories. Nevertheless, although more women than men generally study biology at university, the numbers of male and female students are approximately equal at PhD level, and the proportion of men then grows as you go up the academic scale, culminating in an overwhelmingly male professoriate. We are still far from equality between the sexes.