Black Code: Inside the Battle for Cyberspace
Author: Ronald J. Deibert
Universities have a special role to play as stewards of an open and secure cyberspace: it was from “the University” that the Internet was born, and from the university that its guiding principles of peer review and transparency derive. Protected by academic freedom, equipped with advanced research resources spanning the social and natural sciences, and distributed across the planet, university-based research networks could be the ultimate custodians of cyberspace.
Finally, stewardship in this realm requires an attitudinal shift among users as to how they approach cyberspace. For most of us, it is William Gibson’s “consensual hallucination” – always on, always working, 24/7, like running water. This attitude shift will not be easy. There are considerable disincentives for average people to “lift the lid” on the technology. While we are given extraordinary powers of creativity with cyberspace, walled gardens restrict what we can actually do with it. Busting down these walls has to be at the heart of every citizen’s approach to cyberspace. We don’t all need to learn computer code, but we do need to move beyond sending emails or tweets out into the ether without understanding with whom, beyond the immediate recipient, they are shared and under what circumstances.
We are at a crossroads. Mounting cyber threats and an escalating arms race are compelling politicians to take urgent action. In the face of these concerns, those who care about liberal democracy on a global scale must begin to articulate a compelling counter-narrative to reflexive state and corporate control over cyberspace. To be sure, distributed security and stewardship are not panaceas. They will not end the exercise of power and competitive advantage in cyberspace. They will not bring malicious networks to their knees, or prevent cutthroat entrepreneurs from exploiting the domain. But, as a vision of ethical behaviour in cyberspace, they will raise the bar, set standards, and challenge the players to justify their acts in more than self-interested terms. Above all, they will focus collective attention on how best to sustain a common communications environment on a planetary scale in an increasingly compressed political space.
Decisions made today could take us down a path where cyberspace continues to evolve into a global commons that empowers individuals through access to information and freedom of speech and association, or they could take us in the opposite direction. Developing models of cyber security that deal with the dark side, while preserving our highest aspirations as citizens, is our most urgent imperative.
People often ask me what the inspiration was for the Citizen Lab. Admittedly, doing what we do – a kind of X-Files meets academia – is highly unusual. But it has been no accident.
Although there have been many formative experiences along the way, one of the most important was an opportunity I had as a graduate student in the 1990s, when I was seconded to the Canadian Ministry of Foreign Affairs as a consultant for an obscure agency called the Verification Research Unit (VRU) headed by a retired Canadian Air Force colonel, Ron Cleminson. Run like a private fiefdom by the iconoclastic veteran, the VRU engaged in groundbreaking studies on arms control, particularly the often troubling question of how to verify whether parties to an arms control agreement were playing by the rules or cheating. Interested in technology and international security as a graduate student, I was contracted by Cleminson to explore how the then emerging commercial market for satellite reconnaissance technology could assist in the verification of arms control agreements.
My VRU experience suggested the potential of revolutionary changes in information and communications technologies to have a major impact on international security. The governments of France, Canada, and other states were launching new satellites whose imagery, only a few years prior, would have been among the intelligence community’s most guarded secrets; now it was being shown to the general public and offered for sale.
The implications of all of this hit me shortly after the Gulf War in the early 1990s. Taken aside by a member of the VRU to a locked, windowless room, I was shown highly sophisticated spy-satellite imagery of a couple of scared Iraqis frantically burying drums in the desert. Laid on the desk before me were high-resolution images taken from a KH-11 U.S. spy satellite, orbiting the earth in synchronicity with the path of the sun so that the surface illumination was nearly the same in every picture. Familiar today to viewers of movies like the Bourne series, the imagery was astoundingly sharp – a ground resolution of six centimetres – so sharp that I could clearly make out the expressions on the Iraqis’ faces. At the time, these images were highly classified, and I did not have clearance to see them.
Looking today at my iPhone’s RunKeeper app, which tracks my jogging route down to the metre in real time, I find that moment in the VRU office quaint. How soon, I wondered, given current technological trajectories, would KH-11 imagery be available to the entire world? How long could it remain in the shadows?
While at the VRU I attended meetings, workshops, and conferences that involved fascinating applied policy work, much of it highly interdisciplinary. Nuclear, chemical, and biological engineers worked alongside policy analysts and lawyers; government officials, private sector representatives, and people from academia, all with vast but very different experiences, collaborated on international security projects. In the mid-1990s – the World Wide Web barely off the ground and cyber security on pretty much no one’s mind – I attended a conference organized by Cleminson with the prescient title “Space and Cyberspace: Prospects for Arms Control.” In attendance, an extraordinary cast: an analyst who had handed John F. Kennedy the overhead imagery from the Cuban missile crisis in 1962; a scientist working at Sandia National Labs tracking down the Aum Shinrikyo cult, the Japanese terrorist group that had dumped sarin nerve gas in the Tokyo subway and who some suspected had purchased property in Australia to test a primitive nuclear device; a technician working on Canada’s RADARSAT satellite, whose synthetic aperture radar imaging could peer through clouds and darkness from space to resolve objects on the surface of the earth.
A major inspiration that would later inform the Citizen Lab’s “mixed methods” approach came via my experiences researching the technical work around the Comprehensive Test Ban Treaty (CTBT) negotiations, which at that time were occurring through the venue of the United Nations Conference on Disarmament. The process involved nuclear, radiological, chemical, seismic, and imagery specialists from about a dozen countries whose mission was to provide a blueprint for a planet-wide surveillance network to verify compliance with a possible CTBT, then under negotiation. The process was highly politicized – with the United States and its allies continuously trying to stall negotiations, in my view – and by the time I dropped into the process, the scientists had been meeting for years and knew each other as close friends. Their plans for total Earth surveillance were so airtight that, as one participant joked, “if an ant farted anywhere on earth, we’d know about it.” The architecture for the CTBT verification system included a worldwide network of seismic sensors; radionuclide sniffing stations that would suck up the air and detect the slightest wisp of anything nuclear; space-based radar, optical, and infrared satellites; and even underwater hydro-acoustic sensors, to capture nuclear tests that might be conducted in the ocean’s depths. Though the CTBT has never received enough state ratifications to enter into force, the image of a worldwide network of sensors combining various technological platforms, from undersea to outer space, all meant to check and constrain cheating around nuclear testing and build confidence and security for the planet, stuck with me deeply and still influences how I think global cyber security should be implemented.
When the Citizen Lab was founded in 2001, I had in mind a similar image, a planetary network with data collected by researchers and field investigators, this time all related to cyberspace openness and security. When we started we were the only game in town, but over time we built up a network of collaborations with individuals and other university research centres that continues to grow.
Now, more than ten years later, the situation has changed substantially. I have just received an invitation from Harvard’s Jonathan Zittrain (also one of the founders of OpenNet Initiative) to attend a preliminary planning meeting for something he is calling the Internet Health Organization (IHO). His vision for the IHO is similar to my own: a distributed network of research centres monitoring the health of the Internet using a variety of methods and approaches. Included in the preliminary meeting are numerous groups who have undertaken highly imaginative and constructive projects in this broad area: Herdict, a project that collects and disseminates real-time, crowd-sourced information about Internet filtering, denial of service attacks, and other blockages; M-Lab, an “open, distributed server platform for researchers to deploy Internet measurement tools”; and StopBadware, which “aims to make the Web safer through the prevention, mitigation, and remediation of badware websites,” among others.
Just after Zittrain’s invitation came another, this time from the European Commission, which was planning a meeting to discuss the development of a “European Capability for Situational Awareness” platform. According to their invitation, the aim is to gather “reliable and real-time or almost real-time information concerning human rights violations and/or restrictions of fundamental freedoms in connection with the digital environment,” and to determine “what is happening in the Net, in terms of network connectivity and traffic alterations or restrictions.”
Projects like these, and numerous others sprouting up around the globe, show that the mission of the Citizen Lab is resonating with others, and that we are not alone. Will these collective efforts have an impact? Will they be enough to ensure cyberspace remains an open and secure commons of information that helps citizens reach their highest aspirations in this increasingly interconnected and constrained political space?
Just as I am about to send my manuscript to the publisher, a major news story breaks: Syria pulls the plug on the Internet. An announcement on Syrian state TV says that “maintenance technicians are working to fix the problems,” but many suspect the drastic measure is a prelude to a major armed assault on the opposition. The Syrian Internet shutdown comes only a few days before a major meeting in Dubai of the International Telecommunication Union, which has stoked fears about the growing role of states and the UN in Internet governance. The two are not unrelated: the forces moving us towards enclosure, secrecy, and an increasingly dangerous arms race are powerful and grow daily. Sometimes it seems futile to resist them.
In hindsight, the organizers of that May 2012 Calgary conference may have been onto something with their title, “Nobody Knows Anything.” We do know an awful lot these days, with data exploding all around us and information at our fingertips as never before. But the fact remains that nobody really knows where the dark forces in cyberspace are driving us, and whether they can be tamed. We can only keep probing beneath the surface, lifting the lid, and trying to get a handle on this domain that we have created, remembering that cyberspace is, after all, what we together make of it.
Portions of Black Code have been inspired by or drawn from previous publications, including “Contesting Cyberspace and the Coming Crisis of Authority” (with Rafal Rohozinski), in Ronald Deibert, John Palfrey, Rafal Rohozinski, and Jonathan Zittrain (eds.), Access Controlled: The Shaping of Power, Rights and Rule in Cyberspace (Cambridge: MIT Press, 2010); “Meet Koobface, Facebook’s Evil Doppelgänger” (with Rohozinski), Globe and Mail (November 12, 2010); “Access Contested: Toward the Fourth Phase of Internet Controls” (with Palfrey, Rohozinski, and Zittrain), in Access Contested: Security, Identity, and Resistance in Asian Cyberspace (Cambridge: MIT Press, 2011); “Liberation vs. Control: The Future of Cyberspace” (with Rohozinski), Journal of Democracy 21, no. 4 (October 2010), pp. 43–57; and “The Growing Dark Side of Cyberspace (… and What To Do About It),” Penn State Journal of Law & International Affairs 1, no. 2 (2012).
1. CSEC, Canada’s version of the U.S. National Security Agency: Communications Security Establishment Canada’s (CSEC) mandate was updated under Canada’s Anti-terrorism Act of December 2001. The Act stipulates that CSEC collect information from “the global information infrastructure” about the “capabilities, intentions, or activities of a foreign individual, state, organization, or terrorist group, as they relate to international affairs, defence, or security.” A second part of its mandate focuses on the security of information infrastructures in Canada, while a third specifies that CSEC should assist federal law enforcement and security agencies “in performance of their lawful duties.” Details are in Anti-terrorism Act, SC 2001, c. 41, s. 102, codified as National Defence Act, RSC 1985, c. N-5, ss. 273.61–273.7.
CSEC is Canada’s partner in the so-called Five Eyes alliance of signals intelligence agencies, which includes the United States (National Security Agency), the United Kingdom (Government Communications Headquarters), Australia (Defence Signals Directorate), and New Zealand (Government Communications Security Bureau). See Martin Rudner, “Canada’s Communications Security Establishment, Signals Intelligence and Counter-terrorism,” Intelligence and National Security (2007); James Bamford, Body of Secrets: Anatomy of the Ultra-Secret National Security Agency (New York: Anchor Books, 2002); and Jeremy Littlewood, “Accountability of the Canadian Security Intelligence Community Post 9/11: Still a Long and Winding Road?” in Daniel Baldino (ed.), Democratic Oversight of Intelligence Services (Annandale, NSW: Federation Press, 2010).