
In retrospect, a lot of things went right in the first half of 2012 to enable a Higgs discovery earlier than most people expected. The LHC was going full steam, collecting more events in just a few months of operation than it had in all of 2011. Pileup was a challenge, but the data analysts met it heroically, and the overwhelming fraction of events were successfully reconstructed. The higher energy pushed up the rate at which Higgs bosons were produced. And the teams had honed their analysis routines, managing to squeeze more significance out of their data than before. All these improvements ended up giving particle physicists Christmas in July.

What is it?

After the seminars were over, Incandela was reflective. “You often think that, once you’ve discovered something, it’s an end. What I’ve learned in science is that it’s almost always a beginning. There’s almost always something very big, just right there, that is within reach, and you just have to go for it. So you can’t let down your guard!”

There is no question that CMS and ATLAS have found a new particle. There is very little question that the new particle resembles the Higgs boson; its decay rates into different channels match up roughly with what the Standard Model Higgs is expected to do if its mass is 125 GeV or so. But there’s plenty of reason to wonder whether it really is the simplest Higgs, or something more subtle. There are tiny hints in the data that may indicate that this new particle is not just the minimal Higgs. It’s far too early to tell whether those hints are real; they could easily go away, but we can rest assured that the experiments will be following up on them to figure out what’s really going on.

Remember that particles don’t appear in the detector with labels. When we say that we’ve found something consistent with a Higgs boson, we’re referring to the fact that the Standard Model makes very specific predictions once the mass of the Higgs is fixed. There are no other free parameters; knowing that one number allows us to say precisely how many decays there will be into each channel. Saying that we see something like the Higgs is saying that we see the right amount of excess events in all the channels where they should be visible, not just in one.

The figures included in the color insert show the data from ATLAS and CMS in 2011 and early 2012, looking specifically at collisions that created two photons. What we see are the numbers of events in which the two photons total up to a specific energy. Notice how few of these events there actually are. The experiment sees hundreds of millions of interactions per second, of which a couple hundred per second pass through the trigger and are recorded for posterity; but in a year’s worth of data, we get only a thousand or so events at each energy.

The dashed curve in the figure is the prediction for the background—what you would expect without a Higgs. The solid line is what happens when we include the ordinary Standard Model Higgs, with a mass of 125 GeV. Both curves show a small bump with a couple hundred more events than expected. You can’t say which events are Higgs decays, and which decays are background, but you can ask whether there is a statistically significant excess. There is.
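To get a rough feel for what a “statistically significant excess” means here, one crude estimate (not the careful likelihood-based methods the experiments actually use) treats the counts in an energy bin as Poisson-distributed and divides the excess over the background by the expected statistical fluctuation of the background, roughly the square root of the background count. Here is a minimal Python sketch of that idea, using purely illustrative numbers rather than real ATLAS or CMS counts:

    import math

    def naive_significance(observed, expected_background):
        # Crude "number of sigma": the excess over background divided by
        # the expected Poisson fluctuation, sqrt(background). Real analyses
        # use far more sophisticated statistical treatments than this.
        excess = observed - expected_background
        return excess / math.sqrt(expected_background)

    # Illustrative numbers only: about a thousand expected background
    # events in a bin, with a bump of 150 extra events on top of it.
    print(naive_significance(observed=1150, expected_background=1000))
    # roughly 4.7 "sigma" by this crude measure

A bump of a hundred or so events on top of a background of a thousand is therefore already enough to stand well above the expected random wiggles, which is why such modest-looking excesses can be so exciting.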

Closer inspection reveals something funny about these data. One of the reasons we were surprised to find the Higgs so quickly in 2012 is that the experiments actually observed more events than they should have. The significance of the two-photon bump in the ATLAS data is 4.5 sigma, but with the number of collisions analyzed the Standard Model predicts that we should have reached only 2.4 sigma. Likewise, in CMS, the significance was 4.1 sigma, but it was expected to reach only 2.6 sigma.

In other words, there were more excess events with two photons than we should have seen. Not too many more; the sizes of the bumps are a bit bigger than expected but still within the known uncertainties. But the fact that they are consistent between both experiments (and consistent with ATLAS’s result from 2011 alone) is intriguing. There is no question we will need more data to see whether this discrepancy is real or just a tease.

The CMS data presented another small but noticeable puzzle. While ATLAS stuck with the robust channels of two photons or four charged leptons, CMS also analyzed three noisier channels: tau-antitau, bottom-antibottom, and two Ws. As might be expected, the bottom-antibottom and WW channels didn’t give statistically significant results (although more data will certainly improve the situation). The tau-antitau analysis, however, was a puzzle: No excess was seen at 125 GeV, even though the Standard Model predicts there should be one. This was not quite a statistically significant discrepancy, but it’s interesting. Indeed, the slight tension with the tau data is what brought the final significance of the full CMS analysis down to 4.9 sigma, even though the two-photon and four-lepton channels alone had achieved five sigma.

What could be going on? None of these hints is serious enough to be sure that anything at all is going on, so it might not be worth taking the discrepancies too seriously. But as theorists, that’s what we do for a living. Within a day or two after the seminars, theory papers were already appearing online, attempting to sort it all out.

It’s easy to give one simple example of the kind of thing that people are thinking about. Remember how the Higgs decays into two photons. Because photons are massless, and therefore don’t couple directly to the Higgs, the only way this can happen is via some intermediate virtual particle that is both massive (so it couples to the Higgs) and electrically charged (so that it couples to photons).

By the rules of Feynman diagrams, we are instructed to calculate the rate for this process by adding up independent contributions from all the different massive charged particles that could appear in the loop inside this diagram. We know what the Standard Model particles are, so that’s not hard to do. But new particles can easily change the answer by contributing to those virtual processes, even if we’ve not yet been able to detect them directly. So the anomalously large number of events might be the first signal of particles beyond the Standard Model, helping the Higgs decay into two photons.
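Schematically, and with all the numerical factors suppressed, the two-photon rate depends on a coherent sum over every massive charged particle that can run around the loop. A heuristic way to write it (a sketch only, not the full Standard Model expression, which involves mass- and spin-dependent loop functions; the symbols here are just schematic labels) is

    \Gamma(H \to \gamma\gamma) \propto \left| \sum_i N_i \, Q_i^2 \, F_i \right|^2

where the sum runs over the charged particles in the loop, Q_i is each particle’s electric charge, N_i counts its color copies, and F_i is a factor that depends on its mass and spin. Because the contributions are added together before being squared, a new heavy charged particle adds another term to the sum and can shift the predicted rate either up or down.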

Details matter, of course; if the new particles you have in mind also change the rates of other measured processes, you might be in trouble. But it’s exciting to think that by studying the Higgs we might be learning not only about that particle itself but also about other particles yet to be found.

Don’t let down your guard.

TEN

SPREADING THE WORD

In which we draw back the curtain on the process by which results are obtained and discoveries are communicated.

With all the solemn British rectitude he could summon, correspondent John Oliver was putting tough questions to Walter Wagner, the man who had gone to court to stop the Large Hadron Collider from beginning operations. A serious charge had been leveled: the LHC was a hazard to the very existence of life on earth.

JO:
So, roughly speaking, what are the chances the world is going to be destroyed? Is it one in a million, one in a billion?
WW:
Well, the best we can say right now is about a one-in-two chance.
JO:
Hold on a second. It’s . . . fifty-fifty?
WW:
Yeah, fifty-fifty . . . If you have something that can happen, and something that won’t necessarily happen, it’s going to either happen, or it’s going to not happen, and, so, the best guess is one in two.
JO:
I’m not sure that’s how probability works, Walter.

As the LHC was starting up in 2008, physicists tried their best to spread the word that this was a machine that would help us find the Higgs boson, perhaps reveal supersymmetry for the first time, and possibly discover exciting and exotic phenomena such as dark matter or extra dimensions. But against this uplifting story of human curiosity triumphant, a countervailing narrative struggled for people’s attention: The LHC was a potentially dangerous experiment that would re-create the Big Bang and might destroy the world.

At the time, the mad-scientists-out-of-control scenario was winning the competition for attention. It’s not that journalists were willing to ignore the truth and seek out sensationalism for its own sake. (At least, not most of them. In the United Kingdom, the Daily Mail tabloid ran a big headline, ARE WE ALL GOING TO DIE NEXT WEDNESDAY?) Rather, much like the label “God Particle,” the disaster scenarios seemed to be a mandatory part of any news story. Once the idea is posed that just maybe the LHC could kill everyone on earth—even if it was something of a long shot—that’s the question that people wanted to see addressed. Added to the mix was Walter Wagner, a litigious former nuclear safety officer, who brought a quixotic suit against the LHC in Hawaii. After the case was thrown out of court on (fairly evident) jurisdictional grounds, Wagner appealed to federal court. A three-judge panel finally dismissed the case in 2010, with a pithy conclusion:

Accordingly, the alleged injury, destruction of the earth, is in no way attributable to the U.S. government’s failure to draft an environmental impact statement.

CERN and other physics organizations took the need to proceed safely very seriously, sponsoring multiple expert reports on the subject, all of which concluded that the risk of disaster was completely negligible. Oliver’s interview, which allowed Wagner to discredit himself with his own words, was one of the very few news reports to take an appropriate angle on the topic. It appeared on Jon Stewart’s The Daily Show, a satirical news program on the Comedy Central channel. Only a comedy program was smart enough to treat the LHC disaster worry as the farce that it was.

One thing working against the physicists was their natural inclination to be both precise and honest, often to the detriment of getting their point across. The fears that the LHC could destroy the world were based in part on respectable, if speculative, physical theories. If gravity is much stronger than usual at the high energies of an LHC particle collision, for example, it’s possible to make tiny black holes. Everything we know about physics predicts that such a black hole will evaporate harmlessly away. But it’s possible that everything we know is wrong. So maybe black holes are formed and are stable, and the LHC will produce them, and they will settle into the earth’s core and gradually eat at it from the inside, leading to a collapse of the planet over the course of time. You can calculate how much time it would actually take, and the answer turns out to be much longer than the current age of the universe. Of course, your calculations could be incorrect. But in that case, collisions of high-energy cosmic rays should be producing tiny black holes all over the universe. (The LHC isn’t doing anything the universe doesn’t do at much higher energies all the time.) And those black holes should eat up white dwarfs and neutron stars, but we see plenty of white dwarfs and neutron stars in the sky, so that can’t be quite right either.

You get the point. There are many variations on the theme, but the general pattern is universal: We can come up with very speculative scenarios that seem dangerous, but upon closer inspection the most dangerous possibilities are already ruled out by other considerations. But because scientists like to be precise and consider many different possibilities, they tend to dwell lovingly on all the scary-sounding scenarios before reassuring us that they are all quite unlikely. Every time they should have said, “No!” they tended to say, “Probably not, the chance is really very small,” which doesn’t have the same impact. (A shining counterexample is CERN theorist John Ellis, who was asked by The Daily Show what chance there was that the LHC would destroy the earth, and simply replied, “Zero.”)

Imagine opening your refrigerator and reaching for a jar of tomato sauce, planning to make pasta for tonight’s dinner. An alarmist friend grabs you before you can open the lid, saying, “Wait! Are you sure that opening that jar won’t release a mutant pathogen that will quickly spread and wipe out all life on earth?” The truth is, you can’t be sure, with precisely 100 percent certainty. There are all sorts of preposterously small probability disaster scenarios that we ignore in our everyday lives. It’s conceivable that turning on the LHC will start a chain of events that destroys the earth, but many things are conceivable; what matters is whether they are reasonable, and in this case none of them was.

Fighting against the doomsayers turned out to be good practice for the physics community. The level of public scrutiny given to the search for the Higgs boson is unprecedented. Scientists, who are at their best when discussing abstract and highly technical ideas with other scientists, have had to learn to craft a clear and compelling message for the outside world. In the long run, that can only be good news for science.

Making the sausage

One of the biggest misconceptions people have about giant particle physics experiments concerns the journey from taking data to announcing a result. It’s not an easy one. In science, the traditional way that results are communicated and made official is through papers published in peer-reviewed journals. That’s certainly true for ATLAS and CMS, but the complexity of the experiments guarantees that essentially the only competent referees are the collaboration members themselves. To deal with this state of affairs, each experiment has set up an extremely rigorous and demanding procedure that must be carried out before new results can be shared with the public.

The thousands of collaborators on the LHC experiments are mostly not employed by CERN. A typical working physicist will be a student, professor, or postdoc (a research position in between the PhD and a faculty job) at a university or laboratory somewhere in the world, although they may spend a substantial portion of their year in Geneva. Most often, the first step toward a publishable paper is that one of these physicists asks a question. It might be a perfectly obvious question: “Is there a Higgs boson?” Or it could be something more speculative: “Is electric charge really conserved?” “Are there more than three generations of fermions?” “Do high-energy particle collisions create miniature black holes?” “Are there extra dimensions of space?” Questions may be inspired by a new theoretical proposal, or an unexplained feature of some existing data, or simply by the new capabilities of the machine itself. Experimentalists are generally down-to-earth people, at least in their capacity as working scientists, so they tend to ask questions that can be addressed by the flood of data the LHC provides.
