Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon
Kim Zetter
The parallels between ancient and modern Persia were not hard to draw, in light of current events. In 2005, news reports claimed that Iranian president Mahmoud Ahmadinejad had called for Israel to be wiped off the face of the map. Though subsequent reports determined that his words had been mistranslated, it was no secret that Ahmadinejad wished the modern Jewish state to disappear, just as Haman had wanted his Jewish contemporaries to disappear centuries before.
9
And on February 13, 2010, around the same time that Stuxnet’s creators were preparing a new version of their attack to launch against machines in Iran, Rav Ovadia Yosef, an influential former chief rabbi of Israel and a political powerhouse, drew a direct line between ancient Persia and modern Iran in a sermon he gave before Purim. Ahmadinejad, he said, was the “Haman of our generation.”
“Today we have a new Haman in Persia, who is threatening us with his nuclear weapons,” Yosef said. But like Haman and his henchmen before, he said, Ahmadinejad and his supporters would find their bows destroyed and their swords turned against them to “strike their own hearts.”
10
None of this, however, was evidence that the “myrtus” in Stuxnet’s driver was a reference to the Book of Esther. Read another way, as some later suggested, myrtus could just as easily have been “my RTUs”—or “my remote terminal units.” RTUs, like PLCs, are industrial control components used to operate and monitor equipment and processes. Given that Stuxnet was targeting Siemens PLCs, it seemed just as possible that this was its real meaning.
11
But who could say for sure?
The Symantec researchers were careful not to draw any conclusions from the data. Instead, in a blog post written by Chien and a colleague, they said simply, “Let the speculation begin.”
12
1
Despite the fact that Conficker spread so rapidly and so successfully, it never really did anything to most of the machines it infected, leaving an enduring mystery about the motives for creating and unleashing it. Some thought the attackers were trying to create a giant botnet of infected machines to distribute spam or conduct denial-of-service (DoS) attacks against websites—a later variant of Conficker was used to scare some users into downloading a rogue antivirus program. Others feared it might install a “logic bomb” on infected systems that would cause data to self-destruct at a future date. But when none of these scenarios materialized, some thought Conficker might have been unleashed as a test to see how governments and the security industry would respond. The attack code morphed over time and used sophisticated methods to remain several steps ahead of researchers to prevent them from stamping out the worm altogether, leading some to believe the attackers were testing defenses. After Stuxnet was discovered, John Bumgarner, chief technology officer for U.S. Cyber Consequences Unit, a consulting firm with primarily government clients, claimed Conficker and Stuxnet were created by the same attackers, and that Conficker was used as a “smokescreen” and a “door kicker” to get Stuxnet onto machines in Iran. As proof, he cited the timing of the two attacks and the fact that Stuxnet used one of the same vulnerabilities Conficker had used to spread. But Symantec and other researchers who examined Stuxnet and Conficker say they found nothing to support Bumgarner’s claim. What’s more, the first version of Conficker avoided infecting any machines in Ukraine, suggesting this may have been its country of origin.
2
Melissa wasn’t the first prolific attack, however. That honor is reserved for the Morris worm, a self-propagating program created by a twenty-three-year-old computer science graduate student named Robert Morris Jr., who was the son of an NSA computer security specialist. Although many of Stuxnet’s methods were entirely modern and unique, it owes its roots to the Morris worm and shares some characteristics with it. Morris unleashed his worm in 1988 on the ARPAnet, a communications network built by the Defense Department’s Advanced Research Projects Agency in the late 1960s, which was the precursor to the internet. Like Stuxnet, the worm did a number of things to hide itself, such as placing its files in memory and deleting parts of itself once they were no longer needed to reduce its footprint on a machine. But also like Stuxnet, the Morris worm had a few flaws that caused it to spread uncontrollably and be discovered; by most estimates it reached some 6,000 machines, roughly a tenth of the computers then connected to the network. Whenever the worm encountered a machine that was already infected, it was supposed to halt the infection and move on. But because Morris was concerned that administrators would kill his worm by programming machines to tell it they were infected when they weren’t, he had the worm infect every seventh machine it encountered anyway. He forgot to take into account the interconnectedness of the ARPAnet, however, and the worm made repeated rounds to the same machines, reinfecting some of them hundreds of times until they collapsed under the weight of multiple versions of the worm running on them at once. Machines at the University of Pennsylvania, for example, were attacked 210 times in twelve hours. Shutting down or rebooting a computer killed the worm, but only temporarily. As long as a machine was connected to the network, it got reinfected by other machines.
3
Self-replicating worms—Conficker and Stuxnet being notable exceptions—are far rarer than they once were, having largely given way to phishing attacks, in which malware is delivered via e-mail attachments or through links to malicious websites embedded in e-mail.
4
Once virus wranglers extract the keys and match them to the algorithms, they also write a decryptor program so they can quickly decrypt other blocks of code that use the same algorithm. For example, when they receive new versions of Stuxnet or even other pieces of malware that might be written by the same authors and use the same algorithms, they don’t have to repeat this tedious process of debugging all of the code to find the keys; they can simply run their decryptor on it.
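The workflow described above can be sketched in miniature. In this toy Python example, a simple repeating-XOR cipher stands in for a recovered algorithm; the cipher and the key bytes are invented for illustration and are not Stuxnet’s actual scheme:

```python
def make_decryptor(key: bytes):
    """Build a reusable decryptor for a recovered algorithm and key.

    Once analysts have debugged one sample to extract the key, they can
    run this function's output over any later sample that uses the same
    algorithm, skipping the tedious reverse-engineering step.
    """
    def decrypt(blob: bytes) -> bytes:
        # Repeating-XOR: each byte is XORed with the key, cycling the key.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))
    return decrypt


# Hypothetical key, standing in for one recovered from a live sample.
recovered_key = b"\x31\x1f\xa2"
decrypt = make_decryptor(recovered_key)

# XOR is symmetric, so the same function "encrypts" a test block,
# letting us confirm the decryptor round-trips correctly.
encrypted = decrypt(b"payload block")
assert decrypt(encrypted) == b"payload block"
```

The payoff is exactly the one the note describes: the expensive work happens once per algorithm, and every later sample sharing that algorithm costs only a function call.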
5
In some versions of Stuxnet the attackers had increased the time period to ninety days.
6
Nate Lawson, “Stuxnet Is Embarrassing, Not Amazing,” January 17, 2011, available at rdist.root.org/2011/01/17/stuxnet-is-embarrassing-not-amazing/#comment-6451.
7
James P. Farwell and Rafal Rohozinski, “Stuxnet and the Future of Cyber War,” Survival 53, no. 1 (2011): 25.
8
One method for doing this, as Nate Lawson points out in his blog post, is to take detailed configuration data on the targeted machine and use it to derive a cryptographic hash for a key that unlocks the payload. The key is useless unless the malware encounters a machine with the exact configuration, or unless someone is able to brute-force the key by reproducing all possible combinations of configuration data until hitting the correct one. But the latter can be thwarted by deriving the hash from an extensive selection of configuration data that makes brute-forcing infeasible. Stuxnet did a low-rent version of the technique Lawson describes. It used basic configuration data about the hardware it was seeking to trigger a key to unlock its payload, but the key itself wasn’t derived from the configuration data and was independent of it. So once the researchers located the key, they could simply unlock the payload with it, without needing to know the actual configuration. Researchers at Kaspersky Lab did, however, later encounter a piece of malware that used the more sophisticated technique to lock its payload. As a result, that payload has never been deciphered. See this page.
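The stronger technique Lawson describes, sometimes called environmental keying, can be sketched as follows. This is a minimal illustration, not the routine used by any actual malware: the keystream construction, the hash choice, and the configuration strings are all invented for the example, and a real implementation would use a proper authenticated cipher.

```python
import hashlib
import hmac


def keystream(key: bytes, n: int) -> bytes:
    """Expand a key into n pseudorandom bytes (a stand-in for a real cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]


def lock(payload: bytes, config: bytes):
    """Encrypt the payload under a key derived solely from the target's
    configuration data. The key itself is never stored in the malware."""
    key = hashlib.sha256(config).digest()
    ct = bytes(a ^ b for a, b in zip(payload, keystream(key, len(payload))))
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return ct, tag


def try_unlock(ct: bytes, tag: bytes, config: bytes):
    """Recompute the key from the local machine's configuration. On any
    machine but the intended target, the tag check fails and the payload
    stays opaque."""
    key = hashlib.sha256(config).digest()
    pt = bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))
    if hmac.compare_digest(hmac.new(key, pt, hashlib.sha256).digest(), tag):
        return pt
    return None


# Hypothetical configuration fingerprint, invented for the example.
target_config = b"plc=6ES7-315-2;drives=984Hz"
ct, tag = lock(b"attack logic", target_config)
assert try_unlock(ct, tag, target_config) == b"attack logic"
assert try_unlock(ct, tag, b"some other plant") is None
```

Because the key exists only as a hash of the intended environment, an analyst who captures the sample but not the exact configuration has nothing to attack except the configuration space itself, which is why a payload locked this way can resist decryption indefinitely.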
9
University of Michigan Professor Juan Cole and others pointed out that the Persian language has no such idiom as “wipe off the map,” and that what Ahmadinejad actually said was that he hoped the Jewish/Zionist occupying forces of Jerusalem would collapse and be erased from the pages of history.
10
“Rabbi Yosef: Ahmadinejad a New Haman,” Israel National News, February 14, 2010, available at israelnationalnews.com/News/Flash.aspx/180521#.UONaAhimWCU.
11
John Bumgarner, chief technology officer for the U.S. Cyber Consequences Unit, supports this interpretation and also says that “guava” in the driver’s file path likely refers to a flow cytometer made by a California firm called Guava Technologies. Flow cytometers are devices used to count and examine microscopic particles and are used, among other things, to measure uranium isotopes. Bumgarner believes they may have been used at Natanz to help scientists gauge the enrichment levels of uranium hexafluoride gas as the U-238 isotopes are separated from the U-235 isotopes that are needed for nuclear reactors and bombs. Guava Technologies makes a flow cytometer called Guava EasyCyte Plus that can be integrated with PLCs to provide operators with real-time data about the level of isotopes in uranium. Flow cytometers are a controlled product and would have to be registered under the Trade Sanctions Reform and Export Enhancement Act of 2000 before being sold to Iran. See John Bumgarner, “A Virus of Biblical Distortions,” December 6, 2013, available at darkreading.com/attacks-breaches/a-virus-of-biblical-distortions/d/d-id/1141007?.
12
Patrick Fitzgerald and Eric Chien, “The Hackers Behind Stuxnet,” Symantec, July 21, 2010, available at symantec.com/connect/blogs/hackers-behind-stuxnet.
A caravan of black, armor-plated Mercedes sedans sped out of Tehran, heading south toward Natanz at ninety miles an hour. Seated separately in three of the cars were Olli Heinonen; his boss, IAEA director Mohamed ElBaradei; and a third colleague from the agency. It was a crisp winter morning in late February 2003, six months after Alireza Jafarzadeh’s group blew the lid off the covert plant at Natanz, and the inspectors were finally getting their first look at the site. Riding with ElBaradei was an elegant professorial man with white hair and a closely trimmed salt-and-pepper beard: Gholam Reza Aghazadeh, who was vice president of Iran and president of its Atomic Energy Organization.
Two weeks earlier, Iranian president Sayyid Mohammad Khatami had finally acknowledged that Iran was building a uranium enrichment plant at Natanz, confirming what ISIS and others had suspected all along about the facility. Iran was in fact developing a number of facilities for every stage of the fuel-production cycle, the president said in a speech, and Natanz was just one of them. But he insisted that Iran’s nuclear aspirations were purely peaceful.
1
If you had faith, logic, and all the advantages that a great nation like Iran possessed, you didn’t need weapons of mass destruction, he said. What he didn’t say, however, was why, if Iran had nothing to hide, it was burying the Natanz plant deep underground. If nothing illicit was going on, why fortress it beneath layers of cement and dirt? And why enrich uranium at all if fuel for Iran’s nuclear reactors could be purchased from other countries, as most nations with nuclear reactors have done and as Iran had already done in a contract with Russia? These and other questions were lingering in the minds of the IAEA officials as they drove out to Natanz.
The IAEA had come a long way since its inauguration in 1957, when it was created to promote the peaceful development of nuclear technology. Its other role as nuclear watchdog—to ensure that countries didn’t secretly apply that technology to weapons development—was supposed to be secondary. But in the five decades since the agency’s inception, the latter task had gradually become its most critical, as one nuclear crisis arose after another. Unfortunately, the agency’s ability to fulfill this role was often thwarted by its limited authority to investigate or punish countries that violated their safeguards agreements.
Because the agency had no intelligence arm to investigate suspicious activity on its own, it had to rely on intelligence from the thirty-five member states on its board, like the United States—which made it susceptible to manipulation by these countries—or on whatever information inspectors could glean from their visits to nuclear facilities. But since inspectors only, for the most part, visited sites that were on a country’s declared list of nuclear facilities, this left rogue states free to conduct illicit activity at undeclared ones. Even when armed with evidence that a nation was violating its safeguards agreement, the IAEA could do little to enforce compliance. All it could do was refer the offending nation to the UN Security Council, which could then vote on whether to levy sanctions.
2
These weaknesses became glaringly apparent in 1991 after the end of the first Gulf War, when inspectors entered postwar Iraq to sort through the rubble and discovered that Saddam Hussein had built an advanced nuclear weapons program under their noses. Prior to the war, the IAEA had certified that Hussein’s cooperation with the agency was “exemplary.”
3
So inspectors were shocked to discover after the war that they had been completely duped. By some estimates, Iraq had been just a year away from having enough fissile material to produce a nuclear bomb and two to three years away from having a full-scale nuclear arsenal.
4
Even more shocking was the realization that the illicit activity had been conducted in rooms and buildings right next door to the declared facilities the inspectors examined, in spaces that, under the rules, they could not inspect without advance notice.
5
Infuriated by Iraq’s duplicity, the IAEA developed a so-called Additional Protocol to augment the safeguards agreement that countries signed. This increased the kinds of activities they had to report to the IAEA and also granted the agency leeway to ask more probing questions, request access to purchasing records for equipment and materials, and more easily inspect sites where illicit activity was suspected to have occurred. There was just one catch. The Protocol applied only to countries that ratified it, and in 2003 when the inspectors visited Natanz, Iran wasn’t one of them. As a result, the inspectors were limited in the kinds of demands they could place on Iran.
6