In Search of a Robot More Like Us

MENLO PARK, Calif. — The robotics pioneer Rodney Brooks often begins speeches by reaching into his pocket, fiddling with some loose change, finding a quarter, pulling it out and twirling it in his fingers.

The task requires hardly any thought. But as Dr. Brooks points out, training a robot to do it is a vastly harder problem for artificial intelligence researchers than I.B.M.’s celebrated victory on “Jeopardy!” this year with its Watson question-answering computer system.

Although robots have made great strides in manufacturing, where tasks are repetitive, they are still no match for humans, who can grasp things and move about effortlessly in the physical world.

Designing a robot to mimic the basic capabilities of motion and perception would be revolutionary, researchers say, with applications stretching from care for the elderly to returning overseas manufacturing operations to the United States (albeit with fewer workers).

Yet the challenges remain immense, far greater than artificial intelligence hurdles like speaking and hearing.

“All these problems where you want to duplicate something biology does, such as perception, touch, planning or grasping, turn out to be hard in fundamental ways,” said Gary Bradski, a vision specialist at Willow Garage, a robot development company based here in Silicon Valley.

“It’s always surprising, because humans can do so much effortlessly.”

Now the Defense Advanced Research Projects Agency, or Darpa, the Pentagon office that helped jump-start the first generation of artificial intelligence research in the 1960s, is underwriting three competing efforts to develop robotic arms and hands one-tenth as expensive as today’s systems, which often cost $100,000 or more.

Last month President Obama traveled to Carnegie Mellon University in Pittsburgh to unveil a $500 million effort to create advanced robotic technologies needed to help bring manufacturing back to the United States. But lower-cost computer-controlled mechanical arms and hands are only the first step.

There is still significant debate about how even to begin to design a machine that might be flexible enough to do many of the things humans do: fold laundry, cook or wash dishes. That will require a breakthrough in software that mimics perception.

Today’s robots can often do one such task in limited circumstances, but researchers describe their skills as “brittle.” They fail if the tiniest change is introduced. Moreover, they must be reprogrammed in a cumbersome fashion to do something else.

Many robotics researchers are pursuing a bottom-up approach, hoping that by training robots on one task at a time, they can build a library of tasks that will ultimately make it possible for robots to begin to mimic humans.

Others are skeptical, saying that truly useful machines await an artificial intelligence breakthrough that yields vastly more flexible perception.

The limits of today’s most sophisticated robots can be seen in a towel-folding demonstration that a group of students at the University of California, Berkeley, posted on the Internet last year: In spooky, anthropomorphic fashion, a robot deftly folds a series of towels, eyeing the corners, smoothing out wrinkles and neatly stacking them in a pile.

It is only when the viewer learns that the video is shown at 50 times normal speed that the meager extent of the robot’s capabilities becomes apparent. (The students acknowledged this spring that they were only now beginning to tackle the further challenges of folding shirts and socks.)

Even the most ambitious and expensive robot arm research has not yet yielded impressive results.

In February, for example, Robonaut 2, a dexterous robot developed in a partnership between NASA and General Motors, was carried aboard a space shuttle mission to be installed on the International Space Station. The developers acknowledged that the software required by the system, which is humanoid-shaped from the torso up, was unfinished and that the robot was sent up then only because a rare launching window was available.

“We’re in a funny chicken-and-egg situation,” Dr. Brooks said. “No one really knows what sensors or perceptual algorithms to use because we don’t have a working hand, and because we don’t have a grasping strategy nobody can figure out what kind of hand to design.”

Dr. Brooks is also tackling the problem: In 2008 he founded Heartland Robotics, a Boston-based company that is intent on building a generation of low-cost robots.

[Photo caption: AT YOUR SERVICE — A robot programmed at the University of California, Berkeley, folds laundry, very slowly.]

And the three competing efforts to develop robotic arms and hands with Darpa financing — at SRI International, Sandia National Laboratories and iRobot — offer some reasons for optimism.

Source | NY Times
