Archive for the ‘Robots’ Category
A robot that can control both its own arm and a person’s arm to manipulate objects in a collaborative manner has been developed by Montpellier Laboratory of Informatics, Robotics, and Microelectronics (LIRMM) researchers, IEEE Spectrum Automation reports.
The robot controls the human limb by sending small electrical currents to electrodes taped to the person’s forearm and biceps, which allows the robot to command the elbow and hand to move. In the experiment, the person holds a ball, and the robot holds a hoop; the robot, a small humanoid, has to coordinate the movement of both human and robot arms to successfully drop the ball through the hoop.
The researchers say their goal is to develop robotic technologies that can help people suffering from paralysis and other disabilities to regain some of their motor skills.
Source | Kurzweil AI
Four Wave Gliders — self-propelled robots, each about the size of a dolphin — left San Francisco on Nov. 17 for a 60,000-kilometer journey, IEEE Spectrum Automation reports.
Built by Liquid Robotics, the robots will use waves to power their propulsion systems and the Sun to power their sensors, as a capability demonstration. They will be measuring things like water salinity, temperature, clarity, and oxygen content; collecting weather data; and gathering information on wave features and currents.
The data from the fleet of robots is being streamed via the Iridium satellite network and made freely available on Google Earth’s Ocean Showcase.
Source | Kurzweil AI
Since the 1930s, the theory of computation has profoundly influenced philosophical thinking about topics such as the theory of the mind, the nature of mathematical knowledge and the prospect of machine intelligence. In fact, it’s hard to think of an idea that has had a bigger impact on philosophy.
And yet there is an even bigger philosophical revolution waiting in the wings. The theory of computing is a philosophical minnow compared to the potential of another theory that is currently dominating thinking about computation.
At least, this is the view of Scott Aaronson, a computer scientist at the Massachusetts Institute of Technology. Today, he puts forward a persuasive argument that computational complexity theory will transform philosophical thinking about a range of topics such as the nature of mathematical knowledge, the foundations of quantum mechanics and the problem of artificial intelligence.
Computational complexity theory is concerned with the question of how the resources needed to solve a problem scale with some measure of the problem size, call it n. There are essentially two answers. Either the problem scales reasonably slowly, like n, n^2 or some other polynomial function of n. Or it scales unreasonably quickly, like 2^n, 10000^n or some other exponential function of n.
So while the theory of computing can tell us whether something is computable or not, computational complexity theory tells us whether it can be achieved in a few seconds or whether it’ll take longer than the lifetime of the Universe.
That’s hugely significant. As Aaronson puts it: “Think, for example, of the difference between reading a 400-page book and reading every possible such book, or between writing down a thousand-digit number and counting to that number.”
He notes that it is easy to imagine that once we know whether something is computable, the question of how long it takes is merely one of engineering rather than philosophy. But he then shows how the ideas behind computational complexity can extend philosophical thinking in many areas.
Take the problem of artificial intelligence and the question of whether computers can ever think like humans. Roger Penrose famously argues that they can’t in his book The Emperor’s New Mind. He says that whatever a computer can do using fixed formal rules, it will never be able to ‘see’ the consistency of its own rules. Humans, on the other hand, can see this consistency.
One way to measure the difference between a human and a computer is with a Turing test. The idea is that if we cannot tell the difference between the responses given by a computer and a human, then there is no measurable difference.
But imagine a computer that records all conversations it hears between humans. Over time, this computer will build up a considerable database that it can use to make conversation. If it is asked a question, it looks up the question in its database and reproduces the answer given by a real human.
In this way, a computer with a big enough lookup table can always have a conversation that is essentially indistinguishable from one that humans would have.
“So if there is a fundamental obstacle to computers passing the Turing Test, then it is not to be found in computability theory,” says Aaronson.
Instead, a more fruitful way forward is to think about the computational complexity of the problem. He points out that while the lookup-table approach “works,” it requires computational resources that grow exponentially with the length of the conversation.
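The exponential blow-up in the lookup-table argument is easy to see with a rough count. The alphabet size and conversation lengths below are made-up illustrative parameters, not numbers from the essay:

```python
# Rough count of lookup-table entries needed to cover every possible
# conversation history up to a given length: one entry per string of
# each length from 1 to max_length over the given alphabet.

def table_entries(alphabet_size, max_length):
    """Number of distinct histories the table must store."""
    return sum(alphabet_size ** k for k in range(1, max_length + 1))

print(table_entries(26, 10))    # short exchanges: already over 10^14 entries
print(table_entries(26, 400))   # a long conversation: astronomically many
```

Each extra character multiplies the required table size by the alphabet size, which is exactly the exponential resource growth Aaronson points to.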
Aaronson points out that this leads to a powerful new way to think about the problem of AI. He says that Penrose could say that even though the look up table approach is possible in principle, it is effectively impractical because of the huge computational resources it requires.
By this argument, the difference between humans and machines is essentially one of computational complexity.
That’s an interesting new line of thought and just one of many that Aaronson explores in detail in this essay.
Of course, he acknowledges the limitations of computational complexity theory. Many of the fundamental tenets of the theory, such as P ≠ NP, are unproven; and many of the ideas only apply to serial, deterministic Turing machines, rather than the messier kind of computing that occurs in nature.
But he says these criticisms do not allow philosophers (or anybody else) to arbitrarily dismiss the arguments of complexity theory. Indeed, many of these criticisms raise interesting philosophical questions in themselves.
Computational complexity theory is a relatively new discipline that builds on advances made in the 1970s, ’80s, and ’90s. And that’s why its biggest impacts are yet to come.
Aaronson points us in the direction of some of them in an essay that is thought-provoking, entertaining, and highly readable. If you have an hour or two to spare, it’s worth a read.
Source | Technology Review
Students are challenged to design “climber-bots” that can crawl, grab, and lift themselves straight up an eight-foot-tall vertical wall. Once at the top, robots have to grab onto a “zip-line” and ride it down to the finish line — all without any remote controls or human interference.
The robots are all less than one foot tall and will scale the vertical climbing wall using only the holes and slots cut into the wall. Robots have a limited time to make the climb, and run the risk of dropping from the wall and falling straight to the floor.
Full rules for this year’s competition can be found here.
Source | Kurzweil AI
SHENZHEN, July 29 (Xinhua) — Taiwanese technology giant Foxconn will replace some of its workers with 1 million robots in three years to cut rising labor expenses and improve efficiency, said Terry Gou, founder and chairman of the company, late Friday.
The robots will be used to do simple and routine work such as spraying, welding, and assembling, which is now mainly conducted by workers, said Gou at a workers’ dance party Friday night.
The company currently has 10,000 robots and the number will be increased to 300,000 next year and 1 million in three years, according to Gou.
Foxconn, the world’s largest maker of computer components, which assembles products for Apple, Sony, and Nokia, is in the spotlight after a string of suicides of workers at its massive Chinese plants, which some blamed on tough working conditions.
The company currently employs 1.2 million people, with about 1 million of them based on the Chinese mainland.
Source | xinhuanet
The robot is about the size of a quarter, with ten water-repellent wire legs and two movable, oar-like legs propelled by two miniature motors. Because the weight of the microrobot is equal to that of about 390 water striders, one might expect that it would sink quickly when placed on the water surface. But it stands effortlessly on water surfaces and also walks and turns freely.
It imitates water striders, mosquitoes, and water spiders, which are able to walk on water due largely to their highly water-repellent (superhydrophobic) legs.
The bionic microrobot incorporates improvements over previous devices of this kind, making it a prime candidate for military spy missions, water pollution monitoring, and other applications, the scientists say.
Source | Kurzweil AI
Researchers at MIT are working on small, egg-sized robots designed to dive into nuclear reactors and swim through underground pipes, checking for signs of corrosion. The underwater patrollers, equipped with cameras, are able to withstand a reactor’s extreme, radioactive environment, transmitting images via wireless in real time.
They devised a special valve for switching the direction of a flow with a tiny change in pressure and embedded a network of these Y-shaped valves within the hull, or “skin,” of the small, spherical robot, using 3-D printing to construct the network of valves layer by layer.
As the robot navigates a pipe system, the onboard camera takes images along the pipe’s interior. The researchers are working to equip the robot with wireless underwater communications, using laser optics to transmit images in real time across distances of up to 100 meters. The team is also working on an “eyeball” mechanism that would let the camera pan and tilt in place.
Reactor inspectors currently monitor these pipes remotely by running an electric current through them to find corroded sections or using ultrasonic waves to identify cracks, but the robot can get a much closer view with its on-board camera that takes photos of the pipe’s interior.
In June, The Associated Press released results from a yearlong investigation, revealing evidence of “unrelenting wear” in many of the oldest-running facilities in the United States. That study found that three-quarters of the country’s nuclear reactor sites have leaked radioactive tritium from buried piping that transports water to cool reactor vessels, often contaminating groundwater. According to a recent report by the U.S. Government Accountability Office, the industry has limited methods to monitor underground pipes for leaks.
“We have 104 reactors in this country,” says Harry Asada, the Ford Professor of Engineering in the Department of Mechanical Engineering and director of MIT’s d’Arbeloff Laboratory for Information Systems and Technology. “Fifty-two of them are 30 years or older, and we need immediate solutions to assure the safe operations of these reactors.”
Asada says one of the major challenges for safety inspectors is identifying corrosion in a reactor’s underground pipes. Currently, plant inspectors use indirect methods to monitor buried piping: generating a voltage gradient to identify areas where pipe coatings may have corroded, and using ultrasonic waves to screen lengths of pipe for cracks. The only direct monitoring requires digging out the pipes and visually inspecting them — a costly and time-intensive operation.
Source | Kurzweil AI
Autonomous machines, networks, and robots will self-improve in the future by publishing their own upgrade suggestions
Wednesday, July 20th, 2011
The best way for autonomous machines, networks, and robots to improve in the future will be for them to publish their own upgrade suggestions on the Internet, says Sandor Veres of the University of Southampton’s Faculty of Engineering and the Environment.
This leap to increased autonomy will be facilitated by machines and humans publishing information in a common language online. This can be achieved by the following technical features:
- Some modelling of a changing environment;
- Learning various skills in feedback interaction with the environment;
- Symbolic recognition of events and actions to perform logic-based computation;
- Ability to explain reasons of its own actions to humans; and
- Efficient transfer of rules, goals, values, and skills from human users to the autonomous system.
The last three of these five desirable technical features might be achieved with the natural language programming (NLP) sEnglish (“system English”) system, which enables shared understanding between machines and their users, says Veres.
The sEnglish system is already available, so authors can publish self-contained conceptual structures and procedure sentences in a natural language document in English in HTML and PDF formats. The authors can place these documents on the Internet for autonomous systems to read and share.
The intelligent system will discuss its potential upgrades with its users, lifting this burden from users and manufacturers, Veres envisions. Long after their sale, machines will read technical documents from the Internet to improve their performance. These documents can be published not only by their original manufacturer but also by user communities.
Veres says that some systems need to operate for extended periods of time without the possibility of high-level human supervision. Various unmanned craft, such as underwater survey vehicles, underwater robots, spacecraft, and semi-autonomous aerial vehicles, are examples where loss of communication is either possible or inevitable, so they require autonomous control for extended periods.
Source | Kurzweil AI
An $18.5 million grant to establish an Engineering Research Center for Sensorimotor Neural Engineering based at the University of Washington (UW) has been announced by the National Science Foundation.
Researchers will develop new technologies for amputees and for people with spinal cord injuries, cerebral palsy, stroke, Parkinson’s disease, or age-related neurological disorders.
Scientists at the UW and partner institutions will work to perform mathematical analysis of the body’s neural signals, design and test implanted and wearable prosthetic devices, and build new robotic systems.
“The center will work on robotic devices that interact with, assist and understand the nervous system,” said director Yoky Matsuoka, a UW associate professor of computer science and engineering. “It will combine advances in robotics, neuroscience, electromechanical devices, and computer science to restore or augment the body’s ability for sensation and movement.”
Source | Kurzweil AI
Last winter, NASA technicians sent a humanoid robot dubbed Robonaut 2 to the International Space Station. R2, which has only a torso, sophisticated arms and fingers, and a head full of sensors, was the result of a joint effort by NASA and General Motors to create a robot that could operate safely alongside humans. Robots like R2 could carry out dangerous or tedious tasks on space missions, but they’d also be useful on the ground, where they could assist factory workers. Robots have typically been segregated from humans for safety reasons, but improvements mean they’re now poised to take on a wider variety of tasks.
Although robots have aided manufacturing for decades, they’ve tended to be bulky systems that require precise setup to do large-scale, repetitious tasks such as welding or painting a car door. But improved technologies for vision processing and gripping are leading to a new wave of robots. In June, President Obama announced a $500 million federal investment in manufacturing technology (including $70 million for robotics), to partners that include Ford, Caterpillar, MIT, Carnegie Mellon University, and others. Though the partnership does not include the R2 project, it represents another step in developing robots that can assist with repetitious or physically stressful assembly-line tasks without posing a safety risk.
“In manufacturing facilities, robots are basically in cages like wild animals … so you can’t get in there and get hurt,” says David Bourne, a professor at Carnegie Mellon who works on robotic manufacturing. Having “the robot and person work side-by-side is really scary to a lot of people,” he says. “If it swings around and hits you, it could take your head off.”
R2 uses a popular robotics technology called series elastic actuators in its joints. The actuators have an elastic spring component between the motor and the object the robot has to pick up. The actuators help the robot detect and control the force of its own movements.
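The force-sensing idea behind a series elastic actuator is essentially Hooke's law: the spring between motor and load deflects under contact, and measuring that deflection yields a force estimate. The stiffness, threshold, and positions below are hypothetical values for illustration, not R2's actual parameters:

```python
# Minimal sketch of series-elastic force sensing: the measurable spring
# deflection between the motor side and the load side gives the contact
# force via F = k * x. All constants here are made up for illustration.

SPRING_STIFFNESS = 500.0  # N/m, hypothetical spring constant
FORCE_LIMIT = 10.0        # N, hypothetical safe-contact threshold

def contact_force(deflection_m):
    """Estimate contact force from spring deflection (Hooke's law)."""
    return SPRING_STIFFNESS * deflection_m

def safe_to_continue(motor_pos_m, load_pos_m):
    """True while the estimated contact force stays under the limit."""
    deflection = abs(motor_pos_m - load_pos_m)
    return contact_force(deflection) < FORCE_LIMIT

print(safe_to_continue(0.010, 0.005))  # ~2.5 N of contact force: keep moving
print(safe_to_continue(0.050, 0.020))  # ~15 N: too much, back off
```

Because the spring limits how fast force can build up and makes that force directly observable, a controller can yield to unexpected contact instead of pushing through it — which is why this design suits robots working near people.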
“The use of series elastic actuators changes the whole approach to manufacturing robots. [It] makes the robot able to safely interact with people,” says Rodney Brooks, a cofounder of iRobot and founder of Heartland Robotics, which is developing inexpensive, adaptable manufacturing robots and has licensed the series elastic actuator.
In addition to its force-sensing joints, R2 is covered in soft material in case of accidental collisions, and its head is chock-full of cameras—including an infrared camera for depth sensing—so it can keep track of its human colleagues.
“The fact that Robonaut 2 is on the space station is a great example of how safe it is,” adds Brooks. “It’s very promising for assembly operations.”
Source | Technology Review
MENLO PARK, Calif. — The robotics pioneer Rodney Brooks often begins speeches by reaching into his pocket, fiddling with some loose change, finding a quarter, pulling it out and twirling it in his fingers.
The task requires hardly any thought. But as Dr. Brooks points out, training a robot to do it is a vastly harder problem for artificial intelligence researchers than I.B.M.’s celebrated victory on “Jeopardy!” this year with a computer named Watson.
Although robots have made great strides in manufacturing, where tasks are repetitive, they are still no match for humans, who can grasp things and move about effortlessly in the physical world.
Designing a robot to mimic the basic capabilities of motion and perception would be revolutionary, researchers say, with applications stretching from care for the elderly to returning overseas manufacturing operations to the United States (albeit with fewer workers).
Yet the challenges remain immense, far higher than artificial intelligence hurdles like speaking and hearing.
“All these problems where you want to duplicate something biology does, such as perception, touch, planning or grasping, turn out to be hard in fundamental ways,” said Gary Bradski, a vision specialist at Willow Garage, a robot development company based here in Silicon Valley.
“It’s always surprising, because humans can do so much effortlessly.”
Now the Defense Advanced Research Projects Agency, or Darpa, the Pentagon office that helped jump-start the first generation of artificial intelligence research in the 1960s, is underwriting three competing efforts to develop robotic arms and hands one-tenth as expensive as today’s systems, which often cost $100,000 or more.
Last month President Obama traveled to Carnegie Mellon University in Pittsburgh to unveil a $500 million effort to create advanced robotic technologies needed to help bring manufacturing back to the United States. But lower-cost computer-controlled mechanical arms and hands are only the first step.
There is still significant debate about how even to begin to design a machine that might be flexible enough to do many of the things humans do: fold laundry, cook or wash dishes. That will require a breakthrough in software that mimics perception.
Today’s robots can often do one such task in limited circumstances, but researchers describe their skills as “brittle.” They fail if the tiniest change is introduced. Moreover, they must be reprogrammed in a cumbersome fashion to do something else.
Many robotics researchers are pursuing a bottom-up approach, hoping that by training robots on one task at a time, they can build a library of tasks that will ultimately make it possible for robots to begin to mimic humans.
Others are skeptical, saying that truly useful machines await an artificial intelligence breakthrough that yields vastly more flexible perception.
The limits of today’s most sophisticated robots can be seen in a towel-folding demonstration that a group of students at the University of California, Berkeley, posted on the Internet last year: In spooky, anthropomorphic fashion, a robot deftly folds a series of towels, eyeing the corners, smoothing out wrinkles and neatly stacking them in a pile.
It is only when the viewer learns that the video is shown at 50 times normal speed that the meager extent of the robot’s capabilities becomes apparent. (The students acknowledged this spring that they were only now beginning to tackle the further challenges of folding shirts and socks.)
Even the most ambitious and expensive robot arm research has not yet yielded impressive results.
In February, for example, Robonaut 2, a dexterous robot developed in a partnership between NASA and General Motors, was carried aboard a space shuttle mission to be installed on the International Space Station. The developers acknowledged that the software required by the system, which is humanoid-shaped from the torso up, was unfinished and that the robot was sent up then only because a rare launching window was available.
“We’re in a funny chicken-and-egg situation,” Dr. Brooks said. “No one really knows what sensors or perceptual algorithms to use because we don’t have a working hand, and because we don’t have a grasping strategy nobody can figure out what kind of hand to design.”
Dr. Brooks is also tackling the problem: In 2008 he founded Heartland Robotics, a Boston-based company that is intent on building a generation of low-cost robots.
And the three competing efforts to develop robotic arms and hands with Darpa financing — at SRI International, Sandia National Laboratories and iRobot — offer some reasons for optimism.
Source | NY Times