One obvious question regarding these two competing approaches is whether the collaboration model is sustainable. Even at the relatively low wages (for attorneys) commanded by these workers, the automated approach seems far more cost-effective. As to the low quality of these jobs, you might assume that I’ve simply cherry-picked a rather dystopian example. After all, won’t most jobs that involve collaboration with machines put people in control—so that workers supervise the machines and engage in rewarding work, rather than simply acting as gears and cogs in a mechanized process?
The problem with this rather wishful assumption is that the data does not support it. In his 2007 book Super Crunchers, Yale University professor Ian Ayres cites study after study showing that algorithmic approaches routinely outperform human experts. When people, rather than computers, are given overall control of the process, the results almost invariably suffer. Even when human experts are given access to the algorithmic results in advance, they still produce outcomes that are inferior to the machines acting autonomously. To the extent that people add value to the process, it is better to have them provide specific inputs to the system instead of giving them overall control. As Ayres says, “Evidence is mounting in favor of a different and much more demeaning, dehumanizing mechanism for combining expert and [algorithmic] expertise.”58
My point here is that while human-machine collaboration jobs will certainly exist, they seem likely to be relatively few in number* and often short-lived. In a great many cases, they may also be unrewarding or even dehumanizing. Given this, it seems difficult to justify suggesting that we ought to make a major effort to specifically educate people in ways that will help them land one of these jobs—even if it were possible to pin down exactly what such training might entail. For the most part, this argument strikes me as a way to patch the tires on a very conventional idea (give workers still more vocational training) and keep it rolling for a bit longer. We are ultimately headed for a disruption that will demand a far more dramatic policy response.
SOME OF THE FIRST JOBS to fall to white-collar automation are sure to be the entry-level positions taken by new college graduates. As we saw in Chapter 2, there is already evidence to suggest that this process is well under way. Between 2003 and 2012, the median income of US college graduates with bachelor’s degrees fell from nearly $52,000 to just over $46,000, measured in 2012 dollars. During the same period, total student loan debt tripled from about $300 billion to $900 billion.59
Underemployment among recent graduates is rampant, and nearly every college student seemingly knows someone whose degree has led to a career working at a coffee shop. In March 2013, Canadian economists Paul Beaudry, David A. Green, and Benjamin M. Sand published an academic paper entitled “The Great Reversal in the Demand for Skill and Cognitive Tasks.”60 That title essentially says it all: the economists found that around the year 2000, overall demand for skilled labor in the United States peaked and then went into precipitous decline. The result is that new college graduates have increasingly been forced into relatively unskilled jobs—often displacing nongraduates in the process.
Even those graduates with degrees in scientific and technical fields have been significantly impacted. As we’ve seen, the information technology job market, in particular, has been transformed by the increased automation associated with the trend toward cloud computing as well as by offshoring. The widely held belief that a degree in engineering or computer science guarantees a job is largely a myth. An April 2013 analysis by the Economic Policy Institute found that at colleges in the United States, the number of new graduates with engineering and computer science degrees exceeds the number of graduates who actually find jobs in these fields by 50 percent. The study concludes that “the supply of graduates is substantially larger than the demand for them in industry.”61
It is becoming increasingly clear that a great many people will do all the right things in terms of pursuing an advanced education, but nonetheless fail to find a foothold in the economy of the future.
While some of the economists who focus their efforts on sifting through reams of historical data are finally beginning to discern the impact that advancing technology is having on higher-skill jobs, they are typically quite cautious about attempting to project that trend into the future. Researchers working in the field of artificial intelligence are often far less reticent. Noriko Arai, a mathematician with Japan’s National Institute of Informatics, is leading a project to develop a system capable of passing the Tokyo University entrance examination. Arai believes that if a computer can demonstrate the combination of natural language aptitude and analytic skill necessary to gain entrance to Japan’s highest-ranked university, then it will very likely also be able to eventually perform many of the jobs taken by college graduates. She foresees the possibility of massive job displacement within the next ten to twenty years. One of the primary motivations for her project is to try to quantify the potential impact of artificial intelligence on the job market. Arai worries that the displacement of 10 to 20 percent of skilled workers by automation would be a “catastrophe” and says she “can’t begin to think what 50 percent would mean.” She then adds that it would be “way beyond a catastrophe and such numbers can’t be ruled out if AI performs well in the future.”62
The higher-education industry itself has historically been one of the primary employment sectors for highly skilled workers. Especially for those who aspire to a doctoral degree, a typical career path has been to arrive on campus as a college freshman—and then never really leave. In the next chapter we’ll look at how that industry, and a great many careers, may also be on the verge of a massive technological disruption.
* Stephen Baker’s 2011 book, Final Jeopardy: Man vs. Machine and the Quest to Know Everything, offers a detailed account of the fascinating story that ultimately led to IBM’s Watson.
* In Jeopardy! the clues are considered to be answers and the response must be phrased as a question for which the provided answer would be correct.
* According to Stephen Baker’s 2011 book Final Jeopardy, the Watson project leader, David Ferrucci, struggled with intense pain in one of his teeth for months. After multiple visits to dentists and what ultimately proved to be a completely unnecessary root canal, Ferrucci was finally—largely by happenstance—referred to a doctor in a medical specialty unrelated to dentistry, and the problem was solved. The specific condition was also described in a relatively obscure medical journal article. It was not lost on Ferrucci that a machine like Watson might have produced the correct diagnosis almost instantly.
* This is significantly more advanced than the commonly used statistical technique known as “regression.” With regression (either linear or nonlinear), the form of the equation is set in advance, and the equation’s parameters are optimized so as to fit the data. The Eureqa program, in contrast, is able to independently determine equations of any form using a variety of mathematical components including arithmetic operators, trigonometric and logarithmic functions, constants, etc.
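A toy sketch may help make the distinction concrete. In conventional regression the analyst fixes the equation’s form (say, y = a·x + b) and the fitting procedure only tunes the parameters; a tool like Eureqa also searches over the form itself. The Python below is purely illustrative and is not Eureqa’s actual method (Eureqa evolves expression trees with genetic programming rather than checking a short, hand-written list of candidate forms, and the data and candidate equations here are invented for the example):

```python
# Illustrative sketch only -- not Eureqa's algorithm. It contrasts fitting a
# fixed-form equation with a crude search over candidate equation *forms*.
import math
import random

# Synthetic data generated from a hidden relationship: y = sin(x) + 0.5*x
xs = [i * 0.2 for i in range(1, 61)]
ys = [math.sin(x) + 0.5 * x for x in xs]

def sum_sq_error(f, params):
    return sum((f(x, params) - y) ** 2 for x, y in zip(xs, ys))

def fit_params(f, trials=5000):
    """Crude random search over two parameters; real tools use proper optimizers."""
    best_p, best_err = None, float("inf")
    for _ in range(trials):
        p = (random.uniform(-2, 2), random.uniform(-2, 2))
        err = sum_sq_error(f, p)
        if err < best_err:
            best_p, best_err = p, err
    return best_p, best_err

# Conventional regression: one form, chosen in advance by the analyst.
line = lambda x, p: p[0] * x + p[1]
print("fixed form a*x + b      ->", fit_params(line))

# Form search: several candidate structures built from different components
# (arithmetic, trigonometric, logarithmic); keep whichever fits best.
candidates = {
    "a*x + b":        line,
    "a*sin(x) + b*x": lambda x, p: p[0] * math.sin(x) + p[1] * x,
    "a*log(x) + b":   lambda x, p: p[0] * math.log(x) + p[1],
}
best = min(candidates.items(), key=lambda kv: fit_params(kv[1])[1])
print("best-fitting form found ->", best[0])
```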
* In addition to his work in genetic programming, Koza is the inventor of the scratch-off lottery ticket and the originator of the “constitutional workaround” idea to elect US presidents by popular vote by having the states agree to award electoral-college votes based on the country’s overall popular-vote outcome.
* If you find this type of work appealing but lack the requisite legal training, be sure to check out Amazon’s “Mechanical Turk” service, which offers many similar opportunities. “BinCam,” for example, places cameras in your garbage bin, tracks everything you throw away, and then automatically posts the record to social media. The idea is, apparently, to shame yourself into not wasting food and not forgetting to recycle. As we’ve seen, visual recognition (of types of garbage, in this case) remains a daunting challenge for computers, so people are employed to perform this task. The very fact that this service is economically viable should give you some idea of the wage level for this kind of work.
* In Average Is Over, Tyler Cowen estimates that perhaps 10–15 percent of the American workforce will be well equipped for machine collaboration jobs. I think that in the long run, even that estimate might be optimistic, especially when you consider the impact of offshoring. How many machine collaboration jobs will also be anchored locally? (One exception to my skepticism about machine collaboration jobs may be in health care. As discussed in Chapter 6, I think it might eventually be possible to create a new type of medical professional with far less training than a doctor who would work together with an AI-based diagnostic and treatment system. Health care is a special case, however, because doctors require an extraordinary amount of training and there is likely to be a significant shortage of physicians in the future.)
In March 2013, a small group of academics, consisting primarily of English professors and writing instructors, launched an online petition in response to news that essays on standardized tests were to be graded by machines. The petition, entitled “Professionals Against Machine Scoring of Student Essays in High Stakes Assessment,”1 reflects the group’s argument that algorithmic grading of written essays is, among other things, simplistic, inaccurate, arbitrary, and discriminatory, not to mention that it would be done “by a device that, in fact, cannot read.” Within less than two months, the petition had been signed by nearly four thousand professional educators, as well as public intellectuals, including Noam Chomsky.
Using computers to grade tests is not new, of course; they’ve handled the trivial task of grading multiple-choice tests for years. In that context they are viewed as labor-saving devices. When the algorithms begin to encroach on an area believed to be highly dependent on human skill and judgment, however, many teachers see the technology as a threat. Machine essay grading draws on advanced artificial intelligence techniques; the basic strategy used to evaluate student essays is quite similar to the methodology behind Google’s online language translation. Machine learning algorithms are first trained using a large number of writing samples that have already been graded by human instructors. The algorithms are then turned loose to score new student essays and are able to do so virtually instantaneously.
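As a rough illustration of that train-then-score workflow (and only an illustration: the commercial grading engines use far richer features and proprietary models than anything shown here), a minimal version could be wired up with off-the-shelf machine learning tools. The toy essays, scores, and model choices below are invented for the example:

```python
# Minimal sketch of the workflow described above, assuming scikit-learn is
# available. The feature choice (bag-of-words TF-IDF) and model (ridge
# regression) are illustrative stand-ins, not what real scoring engines use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Step 1: training samples that human instructors have already graded (1-6).
graded_essays = [
    "The author develops a clear argument supported by relevant evidence.",
    "The essay makes some good points, but its organization is weak.",
    "i dont realy no what the book was about it was long",
]
human_scores = [6, 4, 1]

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(graded_essays)
model = Ridge(alpha=1.0).fit(features, human_scores)

# Step 2: once trained, the model scores new essays virtually instantaneously.
new_essay = ["The argument is well organized and supported by strong evidence."]
predicted = model.predict(vectorizer.transform(new_essay))
print(round(float(predicted[0]), 1))  # predicted score on the human scale
```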
The “Professionals Against Machine Scoring” petition is certainly correct in its claim that the machines doing the grading “cannot read.” As we’ve seen in other applications of big data and machine learning, however, that doesn’t matter. Techniques based on the analysis of statistical correlations very often match or even outperform the best efforts of human experts. Indeed, a 2012 analysis by researchers at the University of Akron’s College of Education compared machine grading with the scores awarded by human instructors and found that the technology “achieved virtually identical levels of accuracy, with the software in some cases proving to be more reliable.” The study involved nine companies that offer machine grading solutions and over 16,000 pre-graded student essays from public schools in six US states.2
Les Perelman, a former director of the Massachusetts Institute of Technology’s writing program, is one of the most outspoken critics of machine grading, and one of the primary backers of the 2013 petition opposing the practice. Perelman has, in a number of cases, been able to construct completely nonsensical essays that have tricked the grading algorithms into awarding high scores. It seems to me, however, that if the skill required to put together rubbish designed to fool the software is roughly comparable to the skill needed to write a coherent essay, then this tends to undermine Perelman’s argument that the system could be easily gamed. The real question is whether a student who lacks the ability to write effectively can put one over on the grading software, and the University of Akron study seems to suggest otherwise. Perelman does raise at least one valid concern, however: the prospect that students will be taught to write specifically to please algorithms that he suggests “disproportionately give students credit for length and loquacious wording.”3