  Machines of Loving Grace




  EPIGRAPH

  I like to think

  (it has to be!)

  of a cybernetic ecology

  where we are free of our labors

  and joined back to nature,

  returned to our mammal

  brothers and sisters,

  and all watched over

  by machines of loving grace.

  —Richard Brautigan,

  “All Watched Over by Machines of Loving Grace”

  CONTENTS

  Epigraph

  Preface

  1 | Between Human and Machine

  2 | A Crash in the Desert

  3 | A Tough Year for the Human Race

  4 | The Rise, Fall, and Resurrection of AI

  5 | Walking Away

  6 | Collaboration

  7 | To the Rescue

  8 | “One Last Thing”

  9 | Masters, Slaves, or Partners?

  Acknowledgments

  Notes

  Index

  About the Author

  Also by John Markoff

  Credits

  Copyright

  About the Publisher

  PREFACE

  In the spring of 2014 I parked in front of the small café adjacent to the Stanford Golf Course. As I got out of my car, a woman pulled her Tesla into the next space, got out, and unloaded her robotic golf caddy. She then turned and walked toward the golf course, and the caddy followed her—on its own. I was stunned, but when I feverishly Googled “robot golf carts” I found that there was nothing new about the caddy. The CaddyTrek robot golf cart, which retails at $1,795, is simply one of many luxury items that you might find on a Silicon Valley golf course these days.

  Robots are pervading our daily lives. Cheap sensors, powerful computers, and artificial intelligence software will ensure that they are, increasingly, autonomous. They will assist us and they will replace us. They will transform health care and elder care as they have transformed warfare. Yet even though these machines have been part of our literature and cinema for decades, we are ill-prepared for the new world now in the making.

  The idea that led to this book has its roots in the years between 1999 and 2001, when I was conducting a series of interviews that would ultimately become the book What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry. My original research was an exercise in “anti-autobiography.” I grew up in Palo Alto—a city that would become the heart of Silicon Valley—in the 1950s and the first half of the 1960s, but I moved away during a crucial decade when a set of computing and communications technologies combined to lay the foundation for personal computing and the modern Internet. I returned just in time to see the emergence of a computing era that would soon sweep through the entire world, transforming everything it touched.

  Years later, while doing research for Dormouse, I noted a striking contrast in the intent of the designers of the original interactive computer systems. At the outset of the Information Age, two researchers independently set out to invent the future of computing, establishing research laboratories roughly equidistant from the Stanford University campus. In 1964 John McCarthy, the mathematician and computer scientist who had coined the term “artificial intelligence,” began designing a set of technologies intended to simulate human capabilities, a project he believed could be completed in just a decade. At the same time, on the other side of campus, Douglas Engelbart, a dreamer intent on using his expertise to improve the world, believed that computers should be used to “augment,” or extend, human capabilities rather than to mimic or replace them. He set out to create a system that would permit small groups of knowledge workers to quickly amplify their intellectual powers and work collaboratively.

  One researcher attempted to replace human beings with intelligent machines; the other aimed to extend human capabilities. Together, their work defined both a dichotomy and a paradox. The paradox is that the same technologies that extend the intellectual power of humans can displace them as well.

  In this book, I have attempted to capture the ways in which scientists, engineers, and hackers have grappled with questions about the deepening relationship between human and machine. In some cases I discovered that the designers resist thinking deeply about the paradoxical relationship between artificial intelligence and intelligence augmentation. Often, it comes down to a simple matter of economics. There is now a burgeoning demand for robots with abilities that far exceed those of the early industrial robots of the last half century. Even in already highly automated industries like agriculture, a new wave of “ag robots” is now driving tractors and harvesters, irrigating and weeding fields, conducting surveillance from the air, and generally increasing farm productivity.

  There are also many instances where the researchers think deeply about the paradox, and many of those researchers place themselves squarely in Engelbart’s camp. Eric Horvitz, for example, is a Microsoft artificial intelligence researcher, medical doctor, and past president of the Association for the Advancement of Artificial Intelligence, who has for decades worked on systems to extend human capabilities in the office. He has designed elaborate robots that serve as office secretaries, performing tasks like tracking calendars, greeting visitors, and managing interruptions and distractions. He is building machines that will simultaneously augment and displace humans.

  Others, like German-born Sebastian Thrun, an artificial intelligence researcher, roboticist, and cofounder of the online education company Udacity, are building a world that will be full of autonomous machines. As founder of the Google car project, Thrun led the design of autonomous vehicle technology that may one day displace millions of human drivers, something he justifies by citing the lives it will save and the injuries it will avoid.

  The central topic of this book is the dichotomy and the paradox inherent in the work of designers whose systems alternately augment and replace humans. The distinction is clearest in the contrasting philosophies of Andy Rubin and Tom Gruber. Rubin was the original architect of Google’s robot empire; Gruber is a key designer of Apple’s Siri intelligent assistant. Both are among Silicon Valley’s best and brightest, and their work builds on that of their predecessors: Rubin follows John McCarthy in seeking to replace humans, while Gruber follows Doug Engelbart in seeking to augment them.

  Today, both robotics and artificial intelligence software increasingly evoke memories of the early days of personal computing. Like the hobbyists who created the personal computer industry, AI designers and roboticists are hugely enthusiastic about the technological advances, new products, and new companies that clearly lie ahead. At the same time, many software designers and robot engineers grow uncomfortable when asked about the potential consequences of their inventions and frequently deflect such questions with gallows humor. Yet the questions are essential. There is no blind watchmaker guiding the evolution of machines. Whether we augment or automate is a design decision that will be made by individual human designers.

  It would be easy to cast one group as heroes and the other as villains, yet the consequences are too nuanced to be easily sorted into black-and-white categories. Between their twin visions of artificial intelligence and robotics lies a future that might move toward a utopia, a dystopia, or somewhere in between. Is an improved standard of living and relief from drudgery worthwhile if it also means giving up freedom and privacy? Is there a right or a wrong way to design these systems? The answer, I believe, lies with the designers themselves. One group designs powerful machines that allow humans to perform previously unthinkable tasks, like programming robots for space exploration, while the other works to replace humans with machines, like the developers of artificial intelligence software that enables robots to perform the work of doctors and lawyers. It is essential that these two camps find a way to communicate with each other. How we design and interact with our increasingly autonomous machines will determine the nature of our society and our economy. It will increasingly determine every aspect of our modern world, from whether we live in a more or less stratified society to what it will mean to be human.

  The United States is in the midst of a renewed debate about the consequences of artificial intelligence and robotics and their impact on both employment and the quality of life. It is a strange time: workplace automation has started to strike the white-collar workforce with the same ferocity with which it transformed the factory floor beginning in the 1950s. Yet the return of the “great automation debate” a half century after the first one sometimes feels like a scene from Rashomon: everyone sees the same story but interprets it in a different, self-serving way. Despite ever-louder warnings about the dire consequences of computerization, the number of Americans in the workforce has continued to grow. Analysts look at the same Bureau of Labor Statistics data and simultaneously predict both the end of work and a new labor renaissance. Whether labor is vanishing or merely being transformed, it is clear that this new age of automation is having a profound impact on society. Less clear, despite the vast amounts said and written, is whether anyone truly grasps where technological society is headed.

  Although few people encountered the hulking mainframe computers of the 1950s and 1960s, there was a prevailing sense that these machines exerted some sinister measure of control over their lives. Then in the 1970s personal computing arrived and the computer became something much friendlier—because people could touch these computers, they began to feel that they were now in control. Today, an “Internet of Things” is emerging and computers have once again started to “disappear,” this time blending into everyday objects that have as a result acquired seemingly magical powers—our smoke detectors speak and listen to us. Our phones, music players, and tablets have more computing power than the supercomputers of just a few decades ago.

  With the arrival of “ubiquitous computing,” we have entered a new age of smart machines. In the coming years, artificial intelligence and robotics will have an impact on the world more dramatic than the changes personal computing and the Internet have brought in the past three decades. Cars will drive themselves and robots will do the work of FedEx employees and, inevitably, doctors and lawyers. The new era offers the promise of great physical and computing power, but it also reframes the question first raised more than fifty years ago: Will we control these systems or will they control us?

  George Orwell posed the question eloquently. 1984 is remembered for its depiction of the surveillance state, but Orwell also wrote about the idea that state control would be exercised by shrinking spoken and written language, making it more difficult to express, and thus to conceive of, alternative ideas. He posited a fictional language, “Newspeak,” that effectively limited freedom of thought and self-expression.

  With the Internet offering millions of channels, at first glance we could hardly be further today from Orwell’s nightmare. Yet in a growing number of cases, smart machines are making decisions for us. If these systems merely offered advice, we could hardly call such interactions “controlling” in an Orwellian sense. However, the much-celebrated world of “big data” has resulted in a vastly different Internet from the one that existed just a decade ago. The Internet has become an intimate technology that extends the reach of computing into every facet of our culture, and this neo-Orwellian society presents a softer form of control: unparalleled new freedoms that paradoxically extend control and surveillance far beyond what Orwell originally conceived. Every footstep and every utterance is now tracked and collected, if not by Big Brother then by a growing array of commercial “Little Brothers.” Today our smartphones, laptops, and desktop computers listen to us, supposedly at our command, and cameras gaze from their screens as well, perhaps benignly. The impending Internet of Things is now introducing unobtrusive, always-on, and supposedly helpful countertop robots, like the Amazon Echo and Cynthia Breazeal’s Jibo, into homes across the country.

  Will a world that is watched over by what sixties poet Richard Brautigan described as “machines of loving grace” be a free world? Free, that is, in the sense of “freedom of speech,” rather than “free beer.”1 The best way to answer questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.

  In Silicon Valley it is popular for optimistic technologists to believe that the twin forces of innovation and Moore’s law—the doubling of computing power roughly every two years—are sufficient to account for technical progress. Little thought is given to why one technology wins out over others, or to why a particular technology arises when it does. This deterministic view runs counter to what social scientists call the “social construction of technology”: the understanding that we shape our tools rather than being shaped by them.

  We have centuries of experience with machines such as the backhoe and the steam shovel, which replace physical labor. Smart machines that displace white-collar workers and intellectual labor, however, are a new phenomenon. More than merely replacing humans, information technology is democratizing certain experiences. It is not just that using a personal computer has made it possible to dispense with a secretary. The Internet and the Web have vastly reduced the costs of journalism, for example, not just upending the newspaper industry but fundamentally transforming the process of collecting and reporting the news. Similarly, pitch-correction technologies have made it possible for anyone to sing on key without training, while a variety of computerized music systems allow anyone to become a composer and a musician. In the future, how these systems are designed will foretell either a great renaissance or something darker—a world in which human skills are passed on wholesale to machines. McCarthy’s and Engelbart’s work defined a new era in which digital computers would transform economies and societies as profoundly as did the industrial revolution.

  Recent experiments that guaranteed a “basic income” in some of the poorest parts of the world may also offer a profound insight into the future of work in the face of encroaching, brilliant machines. Their results were striking because they ran counter to the popular idea that economic security undercuts the will to work. A 2013 experiment in an impoverished Indian village that guaranteed basic needs had just the opposite effect: the poor did not rest easy on their government subsidies; instead, they became more responsible and productive. It is quite likely that we will soon have the opportunity to conduct a parallel experiment in the First World. The idea of a basic income is already on the political agenda in Europe. Raised by the Nixon administration in the form of a negative income tax in 1969, the idea is not currently politically acceptable in the United States. That will change quickly, however, if technological unemployment becomes widespread.

  What will happen if our labor is no longer needed? If jobs for warehouse workers, garbage collectors, doctors, lawyers, and journalists are displaced by technology? It is of course impossible to know the future, but I suspect we will find that humans are hardwired to work, or to find some equivalent way to produce something of value. A new economy will create jobs that we are unable to conceive of today. Science-fiction writers have already covered this ground well: read John Barnes’s Mother of Storms or Charlie Stross’s Accelerando for a compelling window into what a future economy might look like. The simple answer is that human creativity is limitless, and if our basic needs are looked after by robots and AIs, we will find new ways to entertain, educate, and care for one another. The answers may be murky, but the questions are increasingly sharp: Will the intelligent machines that interact with and care for us be our allies, or will they enslave us?

  In the pages that follow I portray a diverse set of computer scientists, hackers, roboticists, and neuroscientists. They share a growing sense that we are approaching an inflection point where humans will live in a world of machines that mimic, and even surpass, some human capabilities. They offer a rainbow of sensibilities about our place in this new world.

  During the first half of this century, society will be tasked with making hard decisions about the smart machines that have the potential to be our servants, partners, or masters. At the very dawn of the computer era in the middle of the last century, Norbert Wiener issued a warning about the potential of automation: “We can be humble and live a good life with the aid of the machines,” he wrote, “or we can be arrogant and die.”

  It is still a fair warning.

  John Markoff

  San Francisco, California

  January 2015

  1 | BETWEEN HUMAN AND MACHINE

  Bill Duvall was already a computer hacker when he dropped out of college. Not long afterward he found himself face-to-face with Shakey, a six-foot-tall wheeled robot. Shakey would have its moment in the sun in 1970 when Life magazine dubbed it the first “electronic person.” As a robot, Shakey fell more into the R2-D2 category of mobile robots than the more humanoid C-3PO of Star Wars lore. It was basically a stack of electronic gear equipped with sensors and motorized wheels, first tethered, then later wirelessly connected to a nearby mainframe computer.

  Shakey wasn’t the world’s first mobile robot, but it was the first one that was designed to be truly autonomous. An early experiment in artificial intelligence (AI), Shakey was intended to reason about the world around it, plan its own actions, and perform tasks. It could find and push objects and move around in a planned way in its highly structured world. Moreover, as a harbinger of things to come, it was a prototype for much more ambitious machines that were intended to live, in military parlance, in “a hostile environment.”

  Although the project has now largely been forgotten, Shakey’s designers pioneered computing technologies that are used today by more than a billion people. The mapping software in everything from cars to smartphones is based on techniques first developed by the Shakey team. Their A* search algorithm remains the best-known method for finding the shortest path between two locations. Toward the end of the project, speech control was added as a research task, and today Apple’s Siri speech service is a distant descendant of the machine that began life as a stack of rolling actuators and sensors.
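
  The idea behind A* is compact enough to sketch in code. The short Python program below is a minimal illustration, not the Shakey team’s original implementation: the grid world, the astar function, and the Manhattan-distance heuristic are assumptions of the sketch, standing in for a robot’s map of its surroundings.

      import heapq

      def astar(grid, start, goal):
          """Minimal A* search on a 2D grid of 0 (free) and 1 (blocked).

          Returns a shortest path from start to goal as a list of
          (row, col) cells, or None if no path exists.
          """
          def h(cell):
              # Manhattan distance: an admissible estimate of remaining cost
              # when movement is limited to the four compass directions.
              return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

          rows, cols = len(grid), len(grid[0])
          # Priority queue ordered by f = g + h, where g is cost paid so far.
          frontier = [(h(start), 0, start)]
          came_from = {start: None}
          best_g = {start: 0}

          while frontier:
              _, g, cell = heapq.heappop(frontier)
              if cell == goal:
                  # Walk the chain of predecessors back to the start.
                  path = []
                  while cell is not None:
                      path.append(cell)
                      cell = came_from[cell]
                  return path[::-1]
              r, c = cell
              for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                  nr, nc = nxt
                  if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                      ng = g + 1  # uniform cost per step
                      if ng < best_g.get(nxt, float("inf")):
                          best_g[nxt] = ng
                          came_from[nxt] = cell
                          heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
          return None  # the goal is unreachable

      grid = [[0, 0, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 0, 0],
              [0, 1, 1, 0]]
      print(astar(grid, (0, 0), (3, 3)))

  Run as written, the sketch prints a shortest sequence of (row, column) cells from the top-left corner to the bottom-right. Shakey’s planners worked over far richer representations of the world, but the essential idea, a search guided by an optimistic estimate of the remaining cost, is the same one that routes driving directions on a smartphone today.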