Manna: Two Visions of Humanity's Future
by Marshall Brain

Then there were all the unemployed people. Between Manna improving
efficiency and forcing out the managers, plus overseas outsourcing
taking out white collar jobs, plus machines like the automated
checkout lines and burger flippers coming on line and so on, there
were plenty of people who were unemployed. Unemployed people went
around all day applying for jobs. But in a sense, that was pointless.
All of the interconnected Manna systems knew every single person in
the job pool. Manna also knew the performance of every single person
who had ever worked in the system. You were in an incredibly bad spot
if you were unemployed.

Then there were all the people being managed by Manna. They all made
minimum wage. If you were wearing a headset at work you were making
minimum wage and everyone knew it. And everyone knew that if you did
not do what Manna told you to do, as fast as Manna told you to do it,
you would be unemployed and making nothing.

And then there was everyone else -- the doctors, lawyers, accountants,
office workers, executives, politicians. The executives and
politicians made a ton of money and they were never going to be
wearing headsets. Joe Garcia at Burger-G was making $100 million per
year and flaunting it like a rock star.

And Manna was starting to move in on some of the white collar work force.
The basic idea was to break every job down into a series of steps
that Manna could manage. No one had ever realized it before, but just
about every job had parts that could be subdivided out.

HMOs and hospitals, for example, were starting to put headsets on the
doctors and surgeons. It helped lower malpractice problems by making
sure that the surgeon followed every step in a surgical procedure.
The hospitals could also hyper-specialize the surgeons. For example,
one surgeon might do nothing but open the chest for heart surgery.
Another would do the arterial grafts. Another would come in to
inspect the work and close the patient back up. What this then meant,
over time, was that the HMO could train technicians to do the opening
and closing procedures at much lower cost. Eventually, every part of
the subdivided surgery could be performed by a super-specialized
technician. Manna kept every procedure on an exact track that
virtually eliminated errors. Manna would schedule 5 or 10 routine
surgeries at a time. Technicians would do everything, with one actual
surgeon overseeing things and handling any emergencies. They all wore
headsets, and Manna controlled every minute of their working lives.

That same hyper-specialization approach could apply to lots of white
collar jobs. Lawyers, for example. You could take any routine legal
problem and subdivide it -- uncontested divorces, real estate
transactions, most standard contracts, and so on. It was surprising
where you started to see headsets popping up, and whenever you saw
them you knew that the people were locked in, that they were working
every minute of every day and that wages were falling.

A decade later I was getting out of school. I had a BA in education and
a master's degree in educational administration. My plan was to teach
in high school for two or three years so that I had experience "in
the trenches", and then move into an administrative or
government position. I was ready to start teaching and I was looking
forward to it. Education was one area that, so far, had been largely
untouched by Manna, so in that sense I was lucky. I was also lucky
that there were jobs available, and I did not have a lot of problems
finding an open position. That was a miracle.

My graduation year was an important year for me -- I had been working at
Burger-G all through school to make spending money, and now I would
have my first real job free from Manna.

But it turned out to be a pivotal year for America as a whole. It was a
funny coincidence. My graduation year was the year that computer
vision came of age.

No one really thought of the Manna software as a robot at all. To
most people, Manna was just a computer program running on a PC. When
they thought about robots, they pictured the independent, autonomous,
thinking machines they saw in science fiction
films. C-3PO and R2-D2 were powerful robotic images, and people would
not believe they were looking at a robot until robots looked like
C-3PO.

The mechanical chassis for a C-3PO type robot had been around since the
turn of the century. Honda did the trailblazing with its ASIMO robot,
and once Honda had proven the concept many other manufacturers
followed Honda's lead. ASIMO could walk up and down stairs, kick a
ball and so on, and it looked completely natural. The problem was
that ASIMO needed a human operator pushing a joystick to tell it what
to do.

The thing that held robots back was vision. Nearly everything a person
does is aided by vision -- so much so that we take vision completely
for granted. But if you close your eyes and try to do anything, you
realize just how important vision is.

For example, when you enter a room where the light is dim, you think in
your head, "I need to turn on the lights." You use your
eyes to look on the wall for a light switch. When you find it you use
your eyes to guide your hand to the switch. You then use your eyes to
figure out what kind of switch it is. Is it a toggle switch? A
push-button switch? A dimmer switch with a knob? A dimmer switch with
a slider? None of the above? Once you figure it out, you use your
eyes to guide your fingers to manipulate the switch in the
appropriate way. Or maybe you look at the wall and there is no switch
to be found. Now you start looking for a lamp in the room. Is it a
touch lamp? Or is the switch on the base of the lamp? Maybe the
switch is near the bulb, and you have to push it or twist it or pull
a chain... Your vision guides you every step of the way. It is nearly
impossible to do anything in a complex environment without vision.
And turning on a lamp is a very simple thing. It gets a lot more
complicated when you are trying to run through a forest, ride your
bicycle down a busy street or find your way to a particular address
in a large subdivision.

Without vision, robots could not move around or manipulate objects. All of
the other hardware was there. Legs and balance systems to allow
bipedal motion had been in place for decades. Robotic fingers and
hands with very fine motor control were easy to create. AI software
to set goals and make decisions was getting more powerful every day.
Everything was there but the vision system.

You could see that society was ready for the robots to arrive. The first
real robotic system installed in a human position of trust was in the
airline industry. The terrorist attack on the World Trade Center in
2001 had been a wake-up call. Then there was a run of six airline
accidents, all attributed to pilot or ATC error, which made everyone
nervous. Then the unthinkable happened. Two airline pilots, both
sleeper agents for an Asian terrorist organization, flew their planes
into massive U.S. targets almost simultaneously and killed nearly
50,000 people. One hit a basketball arena full of spectators, and the
other ripped through the Democratic national convention. That was the
end of human pilots in the cockpit.

As it turned out, the transition to robotic planes was remarkably easy.
Airplanes were already controlled by autopilots while en route. Radar
systems on the ground and in the planes were already taking off and
landing the planes automatically. An airplane did not need a vision
system -- its "vision" was radar, and radar had been around
for more than half a century. There was also a secondary backup
system that gave airplanes a form of consciousness. Airplanes could
detect their exact location using GPS systems. These GPS systems were
married to very detailed digital maps of the ground and the airspace
over the ground. The maps told the airplane where every single
building and structure was on the ground. So even if the autopilot
failed and told the plane to go somewhere unsafe, a "conscious"
plane would refuse to fly there. It was, quite literally, impossible
for a conscious plane to fly into a building -- the plane "knew"
that flying into a building was "wrong." If the autopilot
went insane, the conscious plane shut it off and radioed for help. If
all the engines failed or fell off, the plane knew what was on the
ground in the vicinity and did its best to crash into an unpopulated
area.

Soon there were no human airline pilots and no human air traffic
controllers in the system. Everything about flying through the air
was automated. The cockpit was stripped out of airplanes and the
space became a lounge or a seating area. With human beings out of the
loop, the safety record of the airline industry improved and people
came to trust the airlines again. No one cared at all that there was
no human pilot in the cockpit -- people actually trusted machines
more than human beings.

The first breakthrough in true computer vision came from a university.
The newest video game consoles came out, and these consoles had
extremely powerful CPUs able to process 10 trillion operations per
second. By adding 100 gigabytes of RAM to each console and then
networking 1,000 of these video game consoles together, a university
research team created a machine able to process 10 quadrillion
operations per second on 100 trillion bytes of RAM. They had created
a $500,000 machine with processing power approaching that of a human
brain. With that much processing power and memory on tap, the
researchers were finally able to start creating real vision
processing algorithms.

Within a year they had two demonstration projects that got a lot of media
attention. The first was an autonomous humanoid robot that, given an
apartment number, could walk through a city, find the building, ride
the elevator or walk up the steps and knock on that apartment door.
The second was a car that could drive itself door-to-door in rush
hour traffic without any human intervention. By combining the walking
robot and the self-driving car, the researchers demonstrated a
completely robotic delivery system for a pizza restaurant. In a
widely reported publicity stunt, the research team ordered a pizza
and had it delivered by robot to their lab 25 minutes later.

A network of 1,000 video game consoles was not exactly portable, so the
demonstration robots that this research team created did not have the
brain on-board. The robots talked to the system through wireless
connections. However, this research team had proven that machine
vision was possible and workable in some of the most complex
real-world tasks imaginable.

The more significant breakthrough came a few years later. Researchers at
a chip company had followed the work of the vision team, and they
realized that the 64-bit floating point operations in the video game
console were not the optimal unit of calculation for a vision
processing machine. Instead, they created a new computer architecture
that could handle the problem much more efficiently. This realization made
massively parallel chip designs for vision very easy to manufacture.
The chip company released its first vision processing module -- a 10
petaop custom vision processor -- shortly thereafter. The OEM
price for the module was $8,000.

That module opened the floodgates. Within a year, hundreds of
manufacturers were showing prototype robots. There were delivery
robots, cleaning robots, cooking robots, construction robots, baggage
handling robots, welding robots, landscaping robots, truck-driving
robots, retail robots, taxi robots, security robots, etc.

Take something as simple as painting a room. You could stick one of the
new painting robots in the room with 5 gallons of paint. Two hours
later the entire room was perfectly painted. You didn't have to cover
the furniture or even move it. The robot did everything,
and the job was perfect. Not one drop of paint was spilled, not one
streak could be seen on the molding. Every line, every corner, every
painted surface was faultless. There were also new robots to frame a
house, side it, stack bricks and put on the roof.

The automotive industry demonstrated cars with the vision and control
systems built right into the vehicle. The new robotic cars could
drive themselves door to door, drop off the passengers and then drive
down the block to park themselves. It meant you could read or watch
TV on your way to work, and the car did all the driving. There was no
reason to have a "driver's seat" and a steering wheel in
these new vehicles, so the interior of a car became much more
functional -- the front seat could face the back of the car, and it
could fold out into a bed. The automated cars promised to reduce
traffic congestion, dramatically improve highway safety and make the
drive to work much more comfortable. There were also automated taxis
and robotic trucks.

In the retail and fast food industries, the number of prototype robots
boggled the mind. Robots could empty a customer's cart, scan the tags
on the products and put them into bags. Robots could stock items on
the shelves. Robots could sweep the floors and clean the restrooms.
Within two years, Burger-G was demonstrating and debugging a
completely robotic Burger-G restaurant at the same location where
they had first deployed Manna. Instead of telling human employees
what to do, Manna told the robots what to do.

All of the hardware and general intelligence for these robots had been in
place for a decade. What was missing was vision. As soon as the
inexpensive vision module became available, the number of robots in
the marketplace exploded.

The effect that the robotic explosion had on the employment landscape was
startling. Most large retailers began replacing human employees with
robots as fast as they could. The robots stocked the shelves, swept
the floors, helped customers with questions and carried the
customers' purchases out to their cars. Every fast food restaurant
was doing the same thing. Construction sites started to switch to
robots for every repetitive task: framing, siding, roofing, painting,
etc. Robotic cars and trucks took to the highways and accident rates
started to decline. It was easy to see that the completely robotic
airport, amusement park, grocery store and factory were on the way.