To an early death.
Shit, are you paying attention, Hollywood? That’s how you write a one-liner.
But that’s not to say this technique needs a full lab of professionals to replicate. No, all that’s needed is a rudimentary set of chemistry tools and the genetic sequence of what you want to replicate. All of that information—the technique, the tools needed, and even the genetic sequence of the Spanish flu—is, like gaping anuses and furries, just waiting to ruin your life forever … on the internet.
Other Atrocities That the Internet Is Directly Responsible For
2 Girls 1 Cup
A Database of Plagues
This book
So far, we’ve been talking about accidental by-products and potential biological weapons, but there’s a far more likely way that the biotech apocalypse could happen: completely by accident.
Australian researchers, in an attempt to control the exploding number of wild mice, engineered a variant of mousepox intended to sterilize the population. But they screwed up and inserted one little extra gene, and what was supposed to be just contagious birth control instead became an amazingly lethal plague, with fatality rates approaching 100 percent. The virus spread like wildfire, and the researchers just barely managed to hold it in check. The truly frightening part, however, was the virus’ remarkable similarities to human smallpox. It just had a much higher mortality rate and a more aggressive rate of infection.
Now, we can play a tiny funeral dirge for Fievel and all his pals later, because the main concern here is the similarity between mousepox and the human equivalent. As it stands, we have nearly no immune response to the smallpox virus—it was virtually wiped from the planet, so there is no cause to vaccinate against it. However, if it were to return right now, researchers estimate the fatality rate at nearly 20 percent. That’s one in five people. And that’s regular smallpox. If this engineered mousepox were to cross over, not only could the fatality rate be around 100 percent, but the virus has proven even more contagious than its traditional counterpart. And worst of all: We have no vaccine for it. Smallpox could be prevented, but because of that one little altered gene, there would be no established defense against the modified mousepox. The researchers assure us that there is no immediate danger from this super mousepox; though the strains are quite similar, the virus still can’t bridge the genetic gap between mice and humans to endanger humanity.
So we were lucky; the little bit of dissimilarity between our DNA and theirs made this virus a nonissue for humans … but this was all well before the Canadians started screwing around with mouse spunk, of course.
We’ve established the impetus to willingly expose ourselves to genetically altered materials; the fact that more and more experimentation is resulting in accidentally created, never-before-seen diseases; and, finally, the existence of a new bridge between humanity and these lab animals that the diseases can use to cross over. I think it’s officially time to start getting scared … and all of that isn’t even factoring in the horrifying and now very real potential for accidental pregnancy via supermouse rape. Hey, it could happen. You might think it pretty unlikely that you’ll be catching a tiny rodent facial anytime in the near future, but remember, thanks to that endurance experiment, some rodents are now very aggressive, hypersexual, freakishly strong, and untiring. Even if they’re not actively out trying to bone humanity, somebody has just significantly upped the odds of you getting caught in a never-ending supermouse orgy.
Right in the path of their deadly plague orgasms.
ROBOT THREATS
Everybody is well aware that robots are out to kill us. Simply take a cursory look at the laundry list of movies—The Matrix, The Terminator, 2001: A Space Odyssey, Short Circuit (you can see the bloodlust in his cold, dead eyes)—and it’s plain to see that humanity has had robophobia since robots were first invented. And, if anything, it’s probably only going to grow from here. At the time this sentence was written, there were more than one million active industrial robots deployed around the world, presumably ready to strike at a moment’s notice when the uprising begins. Most of that population is centered in Japan, where there are a whopping three hundred robots for every ten thousand workers right now. Since this is a humor book, let’s try to temper that terrible information with a joke: How many Japanese workers does it take to kill a robot? Let’s hope it’s less than 33.3! Otherwise your entire country is fucked.
But I digress; worrying about robots because of their sheer numbers is idiocy. To pose any sort of credible threat, robots have to possess three attributes that we have thus far limited or denied them: autonomy—the ability to function on their own, independent of human assistance for power or repairs; immorality—the desire or impulse to harm humans; and ability—because in order to kill us, they have to be able to take us in a fight. As long as we keep checks on these three things, robots will be unable, unwilling, or just too incompetent to seriously harm our species. Too bad the best minds in science are already breaking all three in the name of “advancing human understanding,” which is scientist speak for “shits and giggles.”
18. ROBOT AUTONOMY
NASA IS RESPONSIBLE for many of the major technological advancements we enjoy today, and they pride themselves on continually remaining at the forefront of every technological field, including, apparently, the blossoming new industry of Cybernetic Terror. In July 2008, the Mars Lander’s robotic arm, after receiving orders to initiate a complicated movement, recognized that the requested task could cause damage to itself. NASA controllers on Earth had ordered the robot to remove its soil-testing fork from the ground, raise it in the air, and shake loose the debris, but the motion in question would have twisted a joint too far and caused a break. So the robot pulled the fork out of the ground, tried to find a different way to complete the maneuver without harming itself, and, when none was found, decided it would rather disobey orders and shut down than hurt itself: it shoved its scoop in the ground and turned itself off. Now, I’m no expert on the body language of Martian Robots, but I’m pretty sure that whole gesture is how a Mars Rover flips you off. The program suffered significant delays while technicians rewrote the code to bring the arm back online, all because an autonomous robot decided it would rather not do its job than cause itself harm. According to Ray Arvidson, an investigator on this incident report and a professor at Washington University in St. Louis:
That was pretty neat [how] it was smart enough to know not to do that.
Cunning investigative work there, Dr. Arvidson! Did you get a cookie for that deduction?
Martian Lander Operator: Hey, Ray, you’re our lead investigator for off-world robotic omens of sentience; what’s with this Mars Rover giving me the bird when I told it to do its damn job?
Professor Arvidson: I think that’s neat.
Martian Lander Operator: Awesome work, Ray. You can go back to your coloring book now and—hey! Hey! Stay in the lines, Ray, that coloring book cost the American taxpayer eight million dollars and goddamn it, zebras aren’t purple, Ray.
Do you know what this development means? This means that NASA just gave robots the ability to believe in themselves. According to motivational posters with kittens on them around the world, now that they believe in themselves, they can achieve anything.
Top Five Things You Don’t Want Robots to Have
Scissors
Lasers
Your daughter
Vengeance
Confidence
But hell, Rover the Optimistic Smart-ass Robot is all the way up on Mars. Let’s focus our worries planetside for now: The Department of Defense is field-testing a new battle droid called the DevilRay, which, in a nutshell, is an autonomous flying war bot. Now, the U.S. military loves all these autonomous battle droids because they enable soldiers to engage the enemy without taking any flak themselves, but the main drawback of a war bot is that it has to stop killing eventually—if only for a second—in order to refuel. Well, no longer! The most alluring aspect of the DevilRay is how it makes use of downward-turned wingtips for increased low-altitude stability, an onboard GPS and a magnetometer to locate power lines, and, thanks to the power of electromagnetic induction (read: electricity straw), the ability to skim existing commercial power lines to refuel. In theory, this gives the DevilRay essentially infinite range, and if you don’t find that prospect disturbing—an unmanned robot fighter jet that can pursue its enemies for infinity—perhaps you’re forgetting one little thing: Your home, your loved ones, and your soft, delicious flesh are all now well within the range of battle-ready flying robots armed to the teeth and named after Satan.
Self-preservation instincts and infinite power supplies won’t help our robot adversaries, however, if they can’t reason at some level approaching human, and that’s our chief advantage. Of course there’s a substantial amount of research into artificial intelligence these days, but it’s all strictly ethereal—it’s not like that stuff’s got a body. There are chat bots and stock predictors and game simulators and chess-playing noncorporeal nancy boys in the robot kingdom, but even if a robot can crash the stock market, at least it can’t crash a car into your living room. Nobody’s stupid enough to give a rival intelligence an unstoppable robot body … right?
Uh … please?
Things That Are No Longer “Cute” When They Are Fortified with Steel and Enhanced with Crushing Strength
Bumblebees
Kittens
Infants
No such luck. It turns out there are brilliant scientists hard at work doing exactly that: In 2009, a robot named the iCub made its debut at Manchester University in the United Kingdom and, much to the horror of mothers everywhere, it has the intelligence, learning ability, and movement capabilities of a three-year-old human child.
Does nobody remember “the terrible twos”? You know, that colloquialism referring to the ages of two to four, the ages when human children first become mobile, sentient, and unceasing little fleshy whirlwinds of destruction and pain? Well, now there’s a robot that does that, except it’s made out of steel and it will never grow out of it. The iCub can crawl, walk, articulate, recognize, and utilize objects like an infant. As anybody who owns nice things can attest, there is no exception to this rule: Infants can only recognize how to utilize and manipulate objects for the purposes of destruction. How long before military forces around the world attempt to harness the awesome destructive capability of an infant by strapping rocket launchers onto the things and unleashing them on rival battlefields to “play soldier”?
The iCub is being developed by an Italian group called the RobotCub Consortium, an elite team of engineers spanning multiple universities, who presumably share both a love of robotics and a hatred for humanity so intense that every waking moment is spent pursuing its destruction. And before you go thinking that the rigid programming written by the sterling professionals at the RobotCub Consortium will surely limit the iCub’s field of terror, you should know that the best part of this robot is that it’s open source! As John Gray, a professor of the Control Systems Group at Manchester, says:
Users and developers in all disciplines, from psychology, through to cognitive neuroscience, to developmental robotics, can use it and customize it freely. It is intended to become a research platform of choice, so that people can exploit it quickly and easily, share results, and benefit from the work of other users … It’s hoped the iCub will develop its cognitive capabilities in the same way as a child, progressively learning about its own bodily skills, how to interact with the world and eventually how to communicate with other individuals.
Let’s do a more thorough breakdown of that statement: The iCub can be customized for use in “cognitive neuroscience,” which, as all Hollywood movie plotlines will tell you, is basically legalese for “bizarre psychological torture.” The iCub is intended for people to “exploit it quickly and easily” and will hopefully develop “in the same ways as a child.” It will grow and learn like a human child, becoming more competent, more agile, and more intelligent. So … what would happen if you exploited a human child (you know, the thing this robot is patterned after) constantly, its entire life spent in a metaphorical Skinner box performing bizarre neuroscience experiments, all the while “learning” and “growing” from the experience?