Rolling Stone Examines The Artificial Intelligence Revolution
“Welcome to robot nursery school,” Pieter Abbeel says as he opens the door to the Robot Learning Lab on the seventh floor of a sleek new building on the northern edge of the UC-Berkeley campus. The lab is chaotic: bikes leaning against the wall, a dozen or so grad students in disorganized cubicles, whiteboards covered with indecipherable equations. Abbeel, 38, is a thin, wiry guy, dressed in jeans and a stretched-out T-shirt. He moved to the U.S. from Belgium in 2000 to get a Ph.D. in computer science at Stanford and is now one of the world’s foremost experts on teaching robots to think intelligently. But first, he has to teach them to “think” at all. “That’s why we call this nursery school,” he jokes.

He introduces me to Brett, a six-foot-tall humanoid robot made by Willow Garage, a high-profile Silicon Valley robotics manufacturer that is now out of business. The lab acquired the robot several years ago to experiment with. Brett, which stands for “Berkeley robot for the elimination of tedious tasks,” is a friendly-looking creature with a big, flat head and widely spaced cameras for eyes, a chunky torso, two arms with grippers for hands and wheels for feet. At the moment, Brett is off-duty and stands in the center of the lab with the mysterious, quiet grace of an unplugged robot. On the floor nearby is a box of toys that Abbeel and the students teach Brett to play with: a wooden hammer, a plastic toy airplane, some giant Lego blocks.

Brett is only one of many robots in the lab. In another cubicle, a nameless 18-inch-tall robot hangs from a sling on the back of a chair. Down in the basement is an industrial robot that plays in the equivalent of a robot sandbox for hours every day, just to see what it can teach itself. Across the street in another Berkeley lab, a surgical robot is learning how to stitch up human flesh, while a graduate student teaches drones to pilot themselves intelligently around objects.
“We don’t want to have drones crashing into things and falling out of the sky,” Abbeel says. “We’re trying to teach them to see.”
Industrial robots have long been programmed with specific tasks: Move arm six inches to the left, grab module, twist to the right, insert module into PC board. Repeat 300 times each hour. These machines are as dumb as lawn mowers. But in recent years, breakthroughs in machine learning – algorithms that roughly mimic the human brain and allow machines to learn things for themselves – have given computers a remarkable ability to recognize speech and identify visual patterns. Abbeel’s goal is to imbue robots with a kind of general intelligence – a way of understanding the world so they can learn to complete tasks on their own. He has a long way to go. “Robots don’t even have the learning capabilities of a two-year-old,” he says. So far, Brett has learned to do simple tasks, such as tying a knot or folding laundry. Things that are simple for humans, such as recognizing that a crumpled ball of fabric on a table is in fact a towel, are surprisingly difficult for a robot, in part because a robot has no common sense, no memory of earlier attempts at towel-folding and, most important, no concept of what a towel is. All it sees is a wad of color.
To get around this problem, Abbeel created a self-teaching method inspired by child-psychology tapes of kids constantly adjusting their approach as they work through a task. Now, when Brett sorts through laundry, it does something similar: grabbing the wadded-up towel with its gripper hands, lifting it and turning it to get a sense of its shape and how it might be folded. It sounds primitive, and it is. But then you think about it again: A robot is learning to fold a towel.
All this is spooky, Frankenstein-land stuff. The complexity of tasks that smart machines can perform is increasing at an exponential rate. Where will this ultimately take us? If a robot can learn to fold a towel on its own, will it someday be able to cook you dinner, perform surgery, even conduct a war? Artificial intelligence may well help solve the most complex problems humankind faces, like curing cancer and confronting climate change – but in the near term, it is also likely to empower surveillance, erode privacy and turbocharge telemarketers. Beyond that, larger questions loom: Will machines someday be able to think for themselves, reason through problems, display emotions? No one knows. The rise of smart machines is unlike any other technological revolution because what is ultimately at stake here is the very idea of humanness – we may be on the verge of creating a new life form, one that could mark not only an evolutionary breakthrough, but a potential threat to our survival as a species.