The drop in binding energy toward the right of the graph means that the nuclei become less stable the bigger they get. This led to the notion of a large atomic nucleus as rather like a big, wobbling droplet of water, ready to break apart at the slightest poke or prod. In 1938, German physicists Otto Hahn and Fritz Strassmann decided to do some poking and prodding using neutron particles, firing them at a sample of the heavy isotope uranium-235 (so named because each nucleus contains a total of 235 particles). Sure enough,
they found that some of the uranium nuclei broke apart under neutron bombardment to form nuclei of the lighter element barium. When researchers in Paris repeated the experiment, they found that as the uranium turned into barium, it was also giving off neutrons. This made Leo Szilard's ears prick up. Neutrons caused fission and each fission reaction gave off more neutrons, which in turn would cause more fission reactions. It was exactly the mechanism he had envisaged to sustain a nuclear chain reaction. Each fission event released a tiny amount of energy, but with 2.5 million billion billion atomic nuclei in every kilogram of uranium, the potential energy that could be unleashed was colossal: millions of times more energy than burning the same mass of a chemical fuel such as oil.
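Those figures are easy to sanity-check. The short Python sketch below runs the arithmetic; the roughly 200 MeV released per fission and the heating value of oil are standard reference figures assumed here for illustration, not numbers quoted in the text.

```python
# Back-of-envelope check of the energy locked up in 1 kg of uranium-235,
# assuming ~200 MeV released per fission event (a standard textbook figure).
AVOGADRO = 6.022e23           # nuclei per mole
MOLAR_MASS_U235 = 0.235       # kg per mole of uranium-235
MEV_TO_JOULES = 1.602e-13     # joules per MeV
ENERGY_PER_FISSION_MEV = 200  # approximate energy per fission (assumed)

nuclei_per_kg = AVOGADRO / MOLAR_MASS_U235   # ~2.6e24: the "2.5 million billion billion"
energy_per_kg = nuclei_per_kg * ENERGY_PER_FISSION_MEV * MEV_TO_JOULES

OIL_ENERGY_PER_KG = 4.2e7     # J/kg, typical heating value of oil (assumed)
print(f"nuclei per kilogram:     {nuclei_per_kg:.2e}")
print(f"fission energy per kg:   {energy_per_kg:.2e} J")
print(f"ratio to burning oil:    {energy_per_kg / OIL_ENERGY_PER_KG:.1e}")
```

Run it and the ratio lands in the millions, confirming the order of magnitude of the claim above.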
The discovery led Italian-US physicist Enrico Fermi to fire up the world's first ever nuclear reactor at the University of Chicago in 1942. The reactor consisted of a mass of uranium interspersed with blocks of graphite, which slows neutrons to the speeds at which they trigger fission most readily, together with control rods made of cadmium, which soaks up neutrons. By inserting or withdrawing the rods, the rate at which the chain reaction proceeded could be fine-tuned: pushing them in slightly soaked up more neutrons, slowing the reaction down, while pulling them out sped it up. Withdrawing them entirely could allow a runaway reaction: the principle at the heart of the nuclear bomb. With war already raging around the world, this was a fact that wouldn't be ignored for long.
The first nuclear weapon to be used in anger was a uranium bomb, dropped on the Japanese city of Hiroshima on the morning of August 6, 1945. The blast, equivalent to the simultaneous detonation of about 15,000 tons of conventional TNT explosive, destroyed nearly 70 percent of the city. It killed 80,000 people instantly and many tens of thousands more from prolonged radiation-related injuries. The Hiroshima bomb, codenamed “Little Boy,” was a so-called “gun-type” nuclear weapon. Inside the bomb was a long metal tube with a chunk of uranium positioned at each end. Behind one chunk was a conventional high-explosive charge. To set off the bomb, the explosive was detonated, firing one piece of uranium into the other at high speed. The size of the two pieces was carefully calculated so that when they were brought together they exceeded the critical mass needed to trigger a runaway nuclear chain reaction. When the mass of uranium is less than critical, there are not enough fission reactions taking place to generate sufficient neutrons to keep the chain reaction going, and it shuts down. At the critical mass there are just enough neutrons being created to maintain equilibrium, while above it (termed “supercritical”) there are so many neutrons that the reaction rate increases exponentially. Three days after the bombing of Hiroshima, America dropped a second atomic bomb on Japan, this time over the city of Nagasaki. Still a fission device, this
bomb used as fuel the slightly heavier radioactive isotope plutonium-239. And rather than the gun-type detonator of Little Boy, this bomb, called “Fat Man,” used chemical explosives surrounding a sphere of plutonium to compress the fuel, squashing it to high density and mimicking the effect of a larger, supercritical mass of plutonium. This is known as an “implosion-type” nuclear fission device. Fat Man was equivalent to some 21,000 tons of conventional TNT. Despite the higher yield, the hilly terrain disrupted the effects of the blast, making it less devastating than the Hiroshima attack, though it still claimed nearly 40,000 lives. The sphere of plutonium that did all this was about 8 cm (3 in) in diameter.
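The difference between the subcritical, critical and supercritical regimes boils down to a single number: the average count of new fissions that each fission triggers. A toy Python model makes the point; the multiplication factors below are illustrative assumptions, not measured values for any real device.

```python
# Toy model of a fission chain reaction: each generation of fissions
# multiplies the free-neutron population by an effective factor k.
def neutron_population(k, generations=20, start=1000):
    """Track the free-neutron population over successive fission generations."""
    n = float(start)
    history = [n]
    for _ in range(generations):
        n *= k  # each fission's neutrons trigger k further fissions on average
        history.append(n)
    return history

for label, k in [("subcritical", 0.9), ("critical", 1.0), ("supercritical", 1.5)]:
    final = neutron_population(k)[-1]
    print(f"{label:13s} (k={k}): {final:12.0f} neutrons after 20 generations")
```

Below 1 the population shrinks toward zero and the reaction fizzles out; at exactly 1 it holds steady, as in Fermi's reactor; above 1 it grows exponentially, as in the two bombs described above.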
Fusion releases its energy by sticking together lighter atomic nuclei to increase the average binding energy per particle in the nucleus. For example, combining two hydrogen nuclei creates deuterium (or heavy hydrogen, an isotope of hydrogen with an extra neutron in its nucleus) plus a positron (the antimatter counterpart of the electron) plus a lot of energy. However, the atomic nucleus is positively charged, so two nuclei brought close together tend to repel one another. To overcome this repulsion, the nuclei need to be slammed together with considerable force. This is usually achieved by heating the fusion fuel. The kinetic
theory developed in the 18th and 19th centuries ascribes the temperature of a gas to the motion of its atoms and molecules: the higher the temperature, the more vigorously they jiggle around. Heat a gas sufficiently and the collisions between nuclei become forceful enough to surmount their electrical repulsion. The temperatures required are colossal (upward of 8 million °C), which is why fusion reactions are sometimes described as “thermonuclear.” Once this temperature has been reached and fusion has begun, the energy released sustains the process, leading again to a chain reaction.
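To get a feel for what such temperatures mean particle by particle, kinetic theory gives the average kinetic energy of a gas particle as E = (3/2)kT, where k is Boltzmann's constant. A quick sketch; the 15-million-degree figure for the Sun's core is a standard reference value assumed here, not one quoted in the text.

```python
# Average kinetic energy per particle at various temperatures, from the
# kinetic-theory relation E = (3/2) * k_B * T. At millions of degrees the
# difference between °C and kelvin is negligible, so we treat them as equal.
K_BOLTZMANN = 1.381e-23   # J/K
EV = 1.602e-19            # joules per electronvolt

for t_kelvin in (300, 8e6, 15e6):  # room temperature, the text's figure, Sun's core
    energy_ev = 1.5 * K_BOLTZMANN * t_kelvin / EV
    print(f"T = {t_kelvin:>10.0f} K -> average kinetic energy ~ {energy_ev:8.2f} eV")
```

At 8 million degrees each nucleus carries around a thousand electronvolts of kinetic energy, some 25,000 times its share at room temperature.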
Thermonuclear energy is the principal power source of stars, where the core temperature is easily hot enough. In a fusion weapon, such as a hydrogen bomb, these temperatures have to be generated artificially. This is usually achieved using fission to kick-start the process. A small implosion-design fission bomb releases X-rays that compress a cylinder of fusion fuel. The cylinder has a plutonium core, which begins a second fission reaction under compression, and this in turn ignites fusion. The yield from a fusion device can be many times higher than that from a fission bomb. Most of today's nuclear bombs are fusion devices.
Nuclear weapons derive their lethality from three key effects: heat, blast and radiation. Heat is given off by
the chain reaction, creating a fireball where the temperatures can reach thousands (in the case of a fission bomb) or millions (for a fusion device) of degrees C. Heat has the longest reach of all three effects, lighting fires over a wide area. The blast wave causes most deaths, through collapsing buildings and flying debris, and is lethal within a radius about half that affected by heat. Radiation comes in two forms. During the explosion, radiation emitted by the fission process can fatally damage biological cells in the short term, while radioactive ash thrown up by the explosion into the mushroom cloud over the blast site rains back to the ground as fallout, causing long-term health problems including cancer. Mushroom clouds themselves vary greatly in height. The cloud made by Fat Man over Nagasaki rose to around 18 km (11 miles). By contrast, the cloud produced by the largest nuclear weapon ever detonated, the Soviet Tsar Bomba, rose up 64 km (40 miles). Detonated in a test in 1961, this weapon unleashed the same explosive force as 57 million tons of TNT.
Nuclear weapons are a modern-day Pandora's Box. Some argue that the specter of nuclear war has served as a peace-keeping deterrent. That can be little consolation to the children of Hiroshima and Nagasaki, whose forebears were in the wrong place at the wrong time the day the box was opened.
⢠The bright Sun
⢠Matter waves
⢠Solar cells
⢠Solar thermal energy
⢠Star power
The energy pouring out from our Sun in a single second is enough to meet planet Earth's energy demands for over 800,000 years. Even the small proportion of this energy falling on the planet could sustain us for 1,000 years. It's not surprising then that many scientists believe solar energy to be one of the most promising solutions to the world's energy woes. So how does it work and why aren't we using more of it?
Greek philosopher Archimedes is said to have realized the power of the Sun over 2,000 years ago, when he used a complex arrangement of mirrors to turn its energy into a heat ray to fend off an invading Roman army. Indeed, the Sun is a veritable powerhouse,
kicking out energy at the rate of 400 million billion billion watts. That could run an awful lot of 100 W lightbulbs. The Sun derives all this power from nuclear fusion: bonding together atoms of hydrogen in its core to form heavier atomic nuclei and liberate energy in the process (see How to build an atomic bomb). According to Einstein's equation E = mc², which says that to turn a mass (m) into energy (E) you just multiply by the speed of light (c) squared, the Sun's copious power output means it's losing weight at the alarming rate of 4 million tons every second. But it is so massive (2 billion billion billion tons) that it could keep up this drastic weight-loss regime for another 15 million million years, or about a thousand times the present age of the Universe.
The key breakthrough in the development of solar electricity was the discovery of the photoelectric effect by German physicist Heinrich Hertz in 1887. Hertz observed that certain metals could be made to emit electronsâso generating an electric currentâwhen exposed to electromagnetic radiation. His apparatus consisted of a sparking device that would fire when electrons were released from the metal under exposure to sunlight. But he couldn't work out why the size of the spark decreased when glass was placed in front of the metal, but not when a crystal of quartz was put there instead. It was later realized that glass blocks
ultraviolet light, whereas quartz doesn't, meaning that only high-frequency ultraviolet light is able to generate a photoelectric current. Why this should be, though, was still a mystery. Albert Einstein finally solved the problem in 1905, using the idea that light can sometimes behave as particles as well as waves, and that it was collisions with these particles, or “quanta” of light, rather like collisions between billiard balls, that were knocking electrons from the metal.
The realization that light waves could be quantized in this way was one of the very first building blocks of quantum theory, a new way of understanding the physics of subatomic particles that was developed primarily during the first half of the 20th century. The notion was first put forward in 1901 by the German physicist Max Planck, who was developing a theory of heat radiation. Planck found that he could explain the characteristics of the electromagnetic radiation given off by a hot object if he assumed that the radiation was emitted in discrete packets (the quanta), each with an energy given by its frequency multiplied by a tiny number, now known as Planck's constant.
Einstein made the jump to interpreting the quanta of light as actual solid particles, named photons. The energy of each of the light quanta, as calculated by Planck, could then be thought of as the kinetic energy of a solid photon. This is the energy that's possessed by
any solid body as a result of its motion. And, as anyone who's ever caught a cricket ball or a baseball knows, anything with kinetic energy can deliver a forceful impact. Einstein calculated the minimum kinetic energy of a photon needed to turf an electron from a metal, and then worked backward using Planck's equation to find out what frequency the corresponding electromagnetic waves would need to have. His findings explained perfectly Hertz's observation that only waves above a particular threshold frequency (corresponding to ultraviolet light) can stimulate photoelectric emission.
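Einstein's backward calculation can be reproduced in a few lines. The sketch below assumes a minimum ejection energy (the “work function”) of about 4.3 electronvolts, a typical reference value for zinc, a metal used in early photoelectric experiments, though neither the metal nor the number comes from this chapter.

```python
# Einstein's photoelectric reasoning run backward: given the minimum energy
# needed to free an electron from a metal, Planck's E = h * f gives the
# threshold light frequency, and hence the threshold wavelength.
PLANCK = 6.626e-34   # Planck's constant, J s
EV = 1.602e-19       # joules per electronvolt
C = 3.0e8            # speed of light, m/s

work_function_ev = 4.3                           # approx. value for zinc (assumed)
threshold_freq = work_function_ev * EV / PLANCK  # f = E / h
threshold_wavelength_nm = C / threshold_freq * 1e9

print(f"threshold frequency:  {threshold_freq:.2e} Hz")
print(f"threshold wavelength: {threshold_wavelength_nm:.0f} nm")
```

The answer, around 290 nanometers, sits squarely in the ultraviolet, shorter than any wavelength of visible light, just as Hertz's glass-versus-quartz puzzle demanded.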
Later, the French physicist Louis de Broglie extended the idea that light waves can behave as particles by showing that the reverse is true too: particles can sometimes be thought of as “matter waves,” with a characteristic wavelength related to their momentum. This relationship between radiation and matter is known as wave–particle duality, and it's a recurring theme in quantum theory. For example, electron particles undergo diffraction when they pass through the gaps in a crystal lattice, while photons of light exert a measurable force as they rain down on a surface.
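De Broglie's relation makes the electron-diffraction example concrete: a particle's wavelength is Planck's constant divided by its momentum. For an electron carrying 100 electronvolts of energy (an illustrative figure, not one from the text), the wavelength comes out close to the spacing between atoms in a crystal, which is exactly why crystal lattices diffract electron beams.

```python
# De Broglie's matter waves: wavelength = h / p. For a non-relativistic
# electron of kinetic energy E, the momentum is p = sqrt(2 * m * E).
import math

PLANCK = 6.626e-34         # Planck's constant, J s
ELECTRON_MASS = 9.109e-31  # kg
EV = 1.602e-19             # joules per electronvolt

kinetic_energy = 100 * EV                                # a 100 eV electron (assumed)
momentum = math.sqrt(2 * ELECTRON_MASS * kinetic_energy) # p = sqrt(2 m E)
wavelength = PLANCK / momentum                           # de Broglie relation

print(f"electron wavelength: {wavelength:.2e} m (~0.1 nm, about one atom across)")
```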
Modern solar panels work using a variation on the photoelectric effect, called the photovoltaic effect.
This is an electrical phenomenon that takes place in semiconductors: materials that aren't perfect conductors but aren't perfect insulators either. Electrons can flow through a semiconductor to a limited degree, but so can the positively charged “holes” that the mobile electrons leave behind, and this can lead to electrical materials with interesting properties. Doped semiconductors come in two types. Those with an excess of positive holes are known as p-type, while those with an excess of mobile electrons are called n-type.
In a solar cell, the photovoltaic effect takes place at junctions between these two types of semiconductor material, known as p-n junctions. The materials on both sides of such a junction are normally based on silicon that has been “doped” (that is, had impurities added) to skew it toward either n-type or p-type. For instance, doping silicon with phosphorus gives an n-type material; adding boron makes the silicon p-type. A photon that's absorbed by the silicon at the p-n junction in a photovoltaic device won't just generate an electron, as in the photoelectric effect, but an electron-hole pair. The negatively charged electrons then flow toward the positive p-type material in the junction, because opposite charges attract, while, for the same reason, the positive holes flow toward the n-type side of the device. Negative charge flowing, say, to the left and positive charge flowing to the right is equivalent to a large net flow of negative charge, all moving to the left. This is how solar cells generate an electric current.

The photovoltaic effect was discovered in the 19th century. However, the first purpose-built solar cell based on a p-n semiconductor junction wasn't switched on until the 1940s. Early solar cells were woefully inefficient, turning only 1 percent of the radiant energy falling on them into electricity. Today there exist cells with efficiencies of 30 percent. What does that mean in terms of their electricity yield? The amount of sunlight arriving at the surface of the Earth is about 950 watts per square meter. So at 30 percent efficiency, a one-meter-square solar panel can generate about 285 W of electricity: sufficient to run a computer or a TV, but not enough to boil a kettle. As solar panel technology has improved, the price has inevitably dropped too. This has led some enterprising individuals to install solar panels on the roofs of their houses to contribute to their domestic energy requirements. It's estimated that a 2 kW home solar array can provide about half the energy needs of an average family.
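The solar arithmetic above, made explicit: output power is just sunlight intensity times efficiency times panel area. The sketch below uses the figures quoted in the text, and also estimates the panel area a hypothetical 2 kW rooftop array would need under full sun.

```python
# Solar panel yield: power out = sunlight intensity * efficiency * area.
# Intensity and efficiency figures are the ones quoted in the text.
SUNLIGHT_W_PER_M2 = 950  # watts per square meter at the Earth's surface
EFFICIENCY = 0.30        # a high-end modern cell

def panel_output_watts(area_m2):
    """Electrical power from a panel of the given area under full sun."""
    return SUNLIGHT_W_PER_M2 * EFFICIENCY * area_m2

print(f"1 m^2 panel:    {panel_output_watts(1.0):.0f} W")   # ~285 W, as in the text
print(f"area for 2 kW:  {2000 / panel_output_watts(1.0):.1f} m^2")
```

Around seven square meters of panelling, in other words: a substantial but entirely plausible patch of an average roof.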