Archive for the ‘Prosthetics’ Category
When it happened, emotions flashed like lightning.
The nearby robotic hand that Tim Hemmes was controlling with his mind touched his girlfriend Katie Schaffer’s outstretched hand.
One small touch for Mr. Hemmes; one giant reach for people with disabilities.
Tears of joy flowed in an Oakland laboratory that day and continued later when Mr. Hemmes toasted his and University of Pittsburgh researchers’ success at a local restaurant with two daiquiris.
A simple act for most people proved to be a major advance in two decades of research once considered the stuff of science fiction.
Mr. Hemmes’ success in putting the robotic hand in the waiting hand of Ms. Schaffer, 27, of Philadelphia, represented the first time a person with quadriplegia has used his mind to control a robotic arm so masterfully.
The 30-year-old man from Connoquenessing Township, Butler County, hadn’t moved his arms, hands or legs since a motorcycle accident seven years earlier. But Mr. Hemmes had practiced six hours a day, six days a week for nearly a month to move the arm with his mind.
That successful act increases hope for people with paralysis or loss of limbs that they can feed and dress themselves and open doors, among other tasks, with a mind-controlled robotic arm. It’s also improved the prospects of wiring around spinal cord injuries to allow motionless arms and legs to function once again.
“I think the potential here is incredible,” said Dr. Michael Boninger, director of UPMC’s Rehabilitation Institute and a principal investigator in the project. “This is a breakthrough for us.”
Mr. Hemmes? They say he’s a rock star.
In a project led by Andrew Schwartz, Ph.D., a University of Pittsburgh professor of neurobiology, researchers had previously taught a monkey to use its mind to control a robotic arm and feed itself marshmallows. Electrodes had been shallowly implanted in its brain to read signals from neurons known to control arm motion.
Electrocorticography or ECoG — in which an electronic grid is surgically placed against the brain without penetration — less intrusively captures brain signals.
ECoG has been used to locate sites of seizures and do other experiments in patients with epilepsy. Those experiments were prelude to seeking a candidate with quadriplegia to test ECoG’s capability to control a robotic arm developed by Johns Hopkins University.
The still unanswered question was whether the brains of people with long-term paralysis still produced signals to move their limbs.
ECoG picks up an array of brain signals, almost like a secret code or new language, that a computer algorithm can interpret and then move a robotic arm based on the person’s intentions. It’s a simple explanation for complex science.
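To make that “simple explanation” slightly more concrete, here is a minimal sketch of such a decoding pipeline in Python: band-power features extracted from the ECoG grid are mapped linearly to arm velocity. The sampling rate, channel count, frequency band, and linear decoder are illustrative assumptions, not the Pitt team’s actual algorithm.

```python
# Minimal sketch of an ECoG decoding loop: band-power features from a
# grid of electrodes are mapped linearly to robot-arm velocity.
# Filter band, window size, and decoder are illustrative assumptions,
# not the Pitt team's published method.
import numpy as np
from scipy.signal import butter, lfilter

FS = 1000              # sampling rate (Hz), assumed
N_CHANNELS = 32        # electrodes on the ECoG grid, assumed
b, a = butter(4, [70 / (FS / 2), 110 / (FS / 2)], btype="band")  # high-gamma band

def features(ecog_window: np.ndarray) -> np.ndarray:
    """Log band-power per channel for one window of raw ECoG (channels x samples)."""
    filtered = lfilter(b, a, ecog_window, axis=1)
    return np.log(np.mean(filtered ** 2, axis=1) + 1e-12)

# Decoder weights would be fit during training sessions (e.g. regression
# against cursor targets); random values stand in here.
W = np.random.randn(3, N_CHANNELS) * 0.01   # maps features -> x, y, z velocity

def decode_velocity(ecog_window: np.ndarray) -> np.ndarray:
    return W @ features(ecog_window)

# Each control tick, read ~100 ms of signal and nudge the arm.
arm_pos = np.zeros(3)
window = np.random.randn(N_CHANNELS, 100)   # stand-in for live ECoG samples
arm_pos += decode_velocity(window) * 0.1    # integrate velocity over the tick
```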
Mr. Hemmes’ name cropped up so many times as a potential candidate that the team called him to gauge his interest.
He said no.
He already was involved in a research project in Cleveland and feared this project would interfere. But knowing they had the ideal candidate, the team called back. This time he agreed, as long as it would not limit his participation in future phases of research.
Mr. Hemmes became quadriplegic July 11, 2004, apparently after a deer darted onto the roadway, causing him to swerve his motorcycle onto gravel where his shoulder hit a mailbox, sending him flying headfirst into a guardrail. The top of his helmet became impaled on a guardrail I-beam, rendering his head motionless while his body continued flying, snapping his neck at the fourth cervical vertebra.
A passer-by found him with blue lips and no signs of breathing. Mr. Hemmes was flown by rescue helicopter to UPMC Mercy and diagnosed with quadriplegia — a condition in which he had lost use of his limbs and his body below the neck or shoulders. He had to learn how to breathe on his own. His doctor told him it was the worst accident he’d ever seen in which the person survived.
But after the process of adapting psychologically to quadriplegia, Mr. Hemmes chose to pursue a full life, especially after he got a device to operate a computer and another to operate a wheelchair with head motions.
Since January, he has operated the website — www.Pittsburghpitbullrescue.com — to rescue homeless pit bulls and find them new owners.
The former hockey player’s competitive spirit and willingness to face risk were key attributes. Elizabeth Tyler-Kabara, the UPMC neurosurgeon who would install the ECoG in Mr. Hemmes’ brain, said he had strong motivation and a vision that paralysis could be cured.
Ever since his accident, Mr. Hemmes said, he’s had the goal of hugging his daughter Jaylei, now 8. This could be the first step.
“It’s an honor that they picked me, and I feel humbled,” Mr. Hemmes said.
Mr. Hemmes underwent several hours of surgery to place the ECoG grid at a precise location against the brain. Wires running under the skin down to a port near his collarbone — where cables can connect to the robotic arm — left him with a stiff neck for a few days.
Two days after surgery, he began exhaustive training in mentally maneuvering a computer cursor in various directions to reach targets and make them disappear. Next he learned to move the cursor diagonally before working for hours to capture targets in a three-dimensional computer environment.
The U.S. Food and Drug Administration allowed the trial to last only 28 days, after which the ECoG grid was removed. The project, initially funded by UPMC, has received more than $6 million in funding from the Department of Veterans Affairs, the National Institutes of Health, and the U.S. Department of Defense’s Defense Advanced Research Projects Agency, known as DARPA.
Initially Mr. Hemmes tried thinking about flexing his arm to move the cursor. But he had better success visually grabbing the ball-shaped cursor to throw it toward a target on the screen. The “mental eye-grabbing” worked best when he was relaxed.
Soon he was capturing 15 of 16 targets and sometimes all 16 during timed sessions. The next challenge was moving the robotic arm with his mind.
The same mental processes worked, but the arm moved more slowly and in real space. Time was ticking away as the experiment approached its final days last month. With Mr. Hemmes finally moving the arm in all directions, Wei Wang — an assistant professor of physical medicine and rehabilitation at Pitt’s School of Medicine who also has worked on the signaling system — stood in front of him and raised his hand.
The robotic arm that Mr. Hemmes was controlling moved with fits and starts but in time reached Dr. Wang’s upheld hand. Mr. Hemmes gave him a high five.
The big moment arrived.
Katie Schaffer stood before her boyfriend with her hand extended. “Baby,” she said, encouraging him, “touch my hand.”
It took several minutes, but he raised the robotic hand and pushed it toward Ms. Schaffer until its palm finally touched hers. Tears flowed.
“It’s the first time I’ve reached out to anybody in over seven years,” Mr. Hemmes said. “I wanted to touch Katie. I never got to do that before.”
“I have tattoos, and I’m a big, strong guy,” he said in retrospect. “So if I’m going to cry, I’m going to bawl my eyes out. It was pure emotion.”
Mr. Hemmes said his accomplishments represent a first step toward “a cure for paralysis.” The research team is cautious about such statements without denying the possibility. They prefer identifying the goal of restoring function in people with disabilities.
“This was way beyond what we expected,” Dr. Tyler-Kabara said. “We really hit a home run, and I’m thrilled.”
The next phase will include up to six people tested in another 30-day trial with ECoG. A year-long trial will test the electrode array that shallowly penetrates the brain. Goals during these phases include expanding the arm’s degrees of motion to allow people to “pick up a grape or grasp and turn a door knob,” Dr. Tyler-Kabara said.
Anyone interested in participating should call 1-800-533-8762.
Mr. Hemmes says he will participate in future research.
“This is something big, but I’m not done yet,” he said. “I want to hug my daughter.”
The Biological Canvas parades a group of hand-selected artists who articulate their concepts with the body as the primary vessel. Each artist uses the body uniquely, experimenting with body as medium: body as canvas, body as brush, and body as subject matter. Whatever the approach, it is clear that new explorations of the body as canvas are beginning to emerge as commonplace in the 21st century.
There are reasons for this refocusing of the lens toward the body. Living today is an experience quite different from that of a century, generation, decade, or (with new versions emerging daily) even a year ago. The body truly is changing, both biologically and technologically, at an abrupt rate. Traditional understandings of what the body, or even the human, can be are beginning to come under question. Transhuman, Posthuman, Cyborg, Robot, Singularity, Embodiment, Avatar, Brain Machine Interface, Nanotechnology … these are terms we run across in media today. They are the face of the future – the dictators of how we will come to understand our environment, biosphere, and selves. The artists in this exhibition are responding to this paradigm shift with interests in a newfound control over bodies, a moment of self-discovery, or a realization that the body has extended out from its biological beginnings – perhaps that the traditional body has become obsolete.
We see in the work of Orlan and Stelarc that the body becomes a malleable canvas. Here are some of the earliest executions of art by way of designer evolution, in which the artist uses new tools to redesign the body as a statement of controlled evolution. In these works, direct changes to the body open the way to sculpting it to be better suited for today’s world, moving beyond an outmoded body. Stelarc, with his Ear on Arm project, specifically attacks shortcomings in the human body by presenting the augmented sense his third ear brings. Acting as a cybernetic ear, it lets him move beyond subjective hearing and share that aural experience with listeners around the world. Commenting on the practicality of the traditional body living in a networked world, Stelarc takes into his own hands the design of networked senses. Orlan uses her surgical art to conceptualize the practice Stelarc employs – saying that the body has become a form that can be reconfigured, restructured, and applied to suit the desires of the mind within it. Carnal Art, as Orlan terms it, allows the body to become a modifiable ready-made instead of a static object born of the Earth. Through the use of new technologies, human beings are now able to reform parts of their bodies as they deem necessary and appropriate for their own ventures.
Not far from the surgical work of Orlan and Stelarc we come to Natasha Vita-More’s Electro 2011, Human Enhancement of Life Expansion, a project that acts as a guide for advancing the biological self into a more fit machine. Integrating emerging technologies to build a more complete human, transhuman, and eventual posthuman body, Vita-More strives for a human-computer interface that will include neurophysiologic and cognitive enhancements that build on longevity and performance. Included in the enhancement plan are such technologies as atmospheric sensors, solar protective nanoskin, metabrain error correction, and replaceable genes. Vita-More’s Primo Posthuman is the idealized application of what artists like Stelarc and Orlan are beginning to explore with their own reconstructive surgical enhancements.
The use of the body in Nandita Kumar’s Birth of Brain Fly and Suk Kyoung Choi + Mark Nazemi’s Corner Monster reflects how embodiment and techno-saturation are having psychological effects on the human mind. In each work we travel into the imagined world of the mind, where the notion of self, identity, and sense of place struggle to hold on to fixed points of order. Kumar describes her neuroscape as continually morphing as it is placed in new, ever-changing conditions and environments. Beginning with an awareness of one’s own constant programming that leads to a new understanding of self through love, the film journeys through the depths of self, ego, and physical limitation. Kumar’s animations provide an eerie journey through the mind as viewed from the vantage of an artist’s creative eye, all the while postulating an internal neuroscape evolving in accordance with an external electroscape. Corner Monster examines the relationship between self and others in an embodied world. The installation includes an array of visual stimulation in a dark environment. As viewers engage with the world before them, they are hooked up simultaneously (two at a time) to biofeedback sensors, which measure an array of biodata used in the interactive production of the environment before their eyes. The project surveys the psychological self as it is engrossed by surrounding media, leading to occasional systems of organized feedback as well as scattered responses that are the convolutions of an overstimulated mind.
Marco Donnarumma also integrates a biofeedback system in his work, allowing participants to shape musical compositions with their limbs. Moving a particular body part triggers sounds, with volume rising according to the pace of that movement. Here the body acts as brush, literally painting the soundscape through its own creative motion. As performers experiment with each portion of the body, there is a slow realization that the sounds have become analogous to the neurological and biological yearnings of the body, each one seeking a particular upgrade that targets a specific need for that segment of the body. A move of the left arm, for instance, consistently produces a rich vibrato, reminding me of the sound of Vita-More’s solar protective nanoskin.
Our final three artists all use the body in their artwork as a component of the fabricated result, acting like paint in the traditional artistic sense. Marie-Pier Malouin weaves strands of hair together to reference the genetic predisposition all living things come into this world with. Here, Malouin uses the medium to reference suicidal tendencies – looking once again toward the fragility of the human mind, body, and spirit as it exists in a traditional biological state. The hair – a dead mass of growth, which we groom, straighten, smooth, and arrange – resembles the same obsession with which we analyze, evaluate, dissect, and anatomize the nature of suicide. Stan Strembicki also engages with the fragility of the human body in his Body, Soul and Science. In his photographic imagery, Strembicki turns a keen eye on the medical industry and its development over time. As with all technology, Strembicki concludes, the medical industry is one we can see as temporally corrective, making dramatic strides as nascent developments emerge. Perhaps we can take Tracy Longley-Cook’s skinscapes, which she compares to the changing landforms of geology, ecology and climatology, as an analogy for our changing understanding of skin, body and self. Can we begin to mold and sculpt the body much as we have done with the land we inhabit?
There is a tie between the conceptual and material strands of these last few works that we cannot overlook: memento mori. The shortcomings and frailties of our natural bodies are the very components that artists like Vita-More, Stelarc, and Orlan are beginning to treat as solvable through the mastery of human enhancement and advancement. In a world churning out new technologies and creative ideas, it is hard to look toward the future and dismiss the possibilities. Perhaps the worries of fragility and biological shortcoming will be both posed and answered by the scientific and artistic communities – something that is panning out to be very likely, if not certain. As you browse the work of The Biological Canvas, I would like to invite your own imagination to engage. Look at your life, your culture, your world, and draw parallels with the artwork – open your own imagination to what our future may bring or, perhaps more properly stated, what we will bring to our future.
Source | VASA Project
It seems the sci-fi industry has done it again. Predictions made back in the 1980s, in stories like Johnny Mnemonic and novels like Neuromancer, of neural implants linking our brains to machines have become a reality.
Back then it seemed unthinkable that we’d ever have megabytes stashed in our brain as Keanu Reeves’ character Johnny Mnemonic did in the movie based on William Gibson’s short story. Or that The Matrix character Neo could have martial arts abilities uploaded to his brain, making famous the line, “I know Kung Fu.” (Why Keanu Reeves became the poster boy of sci-fi movies, I’ll never know.) But today we have macaque monkeys that can control a robotic arm with thoughts alone. We have paraplegics given the ability to control computer cursors and wheelchairs with their brain waves. Of course, this is about the brain controlling a device. But what about the other direction, where a device might amplify the brain? While the cochlear implant might be the best-known device of this sort, scientists have been working on brain implants with the goal of enhancing memory. This sort of breakthrough could lead to a neural prosthesis to help stroke victims or those with Alzheimer’s. Or, at the extreme, think uploading Kung Fu talent into our brains.
Decades-long work led by Theodore Berger at the University of Southern California, in collaboration with teams from Wake Forest University, has provided a big step toward artificial working memory. The study is published today in the Journal of Neural Engineering. A microchip implanted into a rat’s brain can take on the role of the hippocampus—the area responsible for forming long-term memories—by encoding the brain wave patterns of a memory and then sending that same electrical pattern of signals through the brain. Back in 2008, Berger told Scientific American that if the brain patterns for the sentence “See Spot Run,” or even an entire book, could be deciphered, then we might make uploading instructions to the brain a reality. “The kinds of examples [the U.S. Department of Defense] likes to typically use are coded information for flying an F-15,” Berger is quoted in the article as saying.
In the current study, the scientists had rats learn a task: pressing one of two levers to receive a sip of water. The scientists inserted a microchip into each rat’s brain, with wires threaded into the hippocampus. There the chip recorded electrical patterns from two specific areas, labeled CA1 and CA3, that work together to learn and store the new information of which lever to press to get water. The scientists then shut down CA1 with a drug, built an artificial hippocampal component that could duplicate such electrical patterns between CA1 and CA3, and inserted it into the rat’s brain. With this artificial part, rats whose CA1 had been pharmacologically blocked could still encode long-term memories. And in rats with a normally functioning CA1, the new implant extended the length of time a memory could be held.
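As a rough illustration of that “artificial hippocampal part,” here is a minimal Python sketch in which CA1 activity is predicted from recent CA3 activity and replayed as a stimulation pattern. Berger’s device uses a nonlinear multi-input multi-output (MIMO) model fitted to real recordings; the linear least-squares stand-in, electrode counts, and lag window below are assumptions for illustration only.

```python
# Illustrative sketch of the prosthesis idea: predict CA1 output patterns
# from recorded CA3 activity, then deliver that pattern as stimulation.
# A linear lagged model stands in for Berger's nonlinear MIMO model.
import numpy as np

N_CA3, N_CA1, LAGS = 16, 16, 10   # electrode counts and history length, assumed

def lagged(x: np.ndarray, t: int) -> np.ndarray:
    """Flatten the last LAGS bins of CA3 spike counts up to time t."""
    return x[:, t - LAGS:t].ravel()

# Fit: least squares from lagged CA3 counts to CA1 counts, using data
# recorded while the rat learns the lever task and CA1 is still intact.
ca3 = np.random.poisson(1.0, (N_CA3, 500))   # stand-in training spike counts
ca1 = np.random.poisson(1.0, (N_CA1, 500))
X = np.stack([lagged(ca3, t) for t in range(LAGS, 500)])
Y = ca1[:, LAGS:500].T
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Run: with biological CA1 blocked, the chip computes the pattern CA1
# "would have" produced and drives the stimulating electrodes with it.
def ca1_stimulation_pattern(recent_ca3: np.ndarray, t: int) -> np.ndarray:
    return lagged(recent_ca3, t) @ W
```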
The next step is to test the device in monkeys, and then in humans. Of course, at this early stage a breakthrough like this raises more questions than it answers. Memory is hugely complex, built on our individual experiences and perceptions. If we have the electrical pattern for the phrase “See Spot Run” mentioned above, would it mean the same thing for you as it does for me? How would such a device work within context? As writer Gary Stix asked in the Scientific American article, would “See Spot Run” be misinterpreted as a laundry mishap instead of a trotting dog? Or, as the science journalist John Horgan once put it, you might hear your wedding song, but I hear a stale pop tune.
We are all provided with the same structural blueprint for our brains, but their circuitry is built from experience and genetics, a tapestry unique to each of us. Many scientists feel we will never be able to fully crack and decode it, let alone insert an experiential memory into it.
Source | Smart Planet
Programming robotic arms just got a lot easier thanks to the efforts of Bernhard Kleiner and his team at the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart.
The key breakthrough is a set of inertial sensors in a hand-held input device, and software that ties their inputs together to reconstruct a detailed model of body motion.
The device records precise motions without complicated calibration or configuration — the user makes a gesture, and the robotic arm mimics it. The device can be attached to the thigh of a patient to determine their gait for control of active prosthetic devices and for physical therapy, for example.
Programming a robotic arm to be used in a factory assembly line, for example, is usually a complex process, involving a hand-held baton with a marker point, a laser beam reflected from the marker, and a camera to recreate the motion, after careful calibration and configuration.
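The core idea, fusing inertial sensor streams into one motion estimate, can be illustrated with a classic complementary filter: short-term gyroscope integration blended with the drift-free tilt implied by the accelerometer’s gravity reading. This one-axis Python sketch is a generic illustration of sensor fusion, not the IPA software; the blend factor is an assumption.

```python
# Minimal sketch of inertial sensor fusion: a complementary filter that
# blends gyroscope integration (accurate short-term, drifts long-term)
# with the accelerometer's gravity reading (noisy short-term, drift-free).
import math

ALPHA = 0.98   # trust in the gyro vs. the accelerometer, assumed

def update_tilt(angle: float, gyro_rate: float,
                accel_x: float, accel_z: float, dt: float) -> float:
    gyro_angle = angle + gyro_rate * dt          # integrate angular velocity
    accel_angle = math.atan2(accel_x, accel_z)   # tilt implied by gravity
    return ALPHA * gyro_angle + (1 - ALPHA) * accel_angle

# Example tick: 10 ms sample, slight rotation, gravity mostly on the z-axis.
angle = 0.0
angle = update_tilt(angle, gyro_rate=0.5, accel_x=0.05, accel_z=9.8, dt=0.01)
```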
Source | Kurzweil AI
An Austrian man has voluntarily had his hand amputated so he can be fitted with a bionic limb.
The patient, called “Milo”, aged 26, lost the use of his right hand in a motorcycle accident a decade ago.
After his stump heals in several weeks’ time, he will be fitted with a bionic hand which will be controlled by nerve signals in his own arm.
The surgery is the second such elective amputation to be performed by Viennese surgeon Professor Oskar Aszmann.
The patient, a Serbian national who has lived in Austria since childhood, suffered injuries to a leg and shoulder when he skidded off his motorcycle and smashed into a lamppost in 2001 while on holiday in Serbia.
While the leg healed, what is called a “brachial plexus” injury to his right shoulder left his right arm paralysed. Nerve tissue transplanted from his leg by Professor Aszmann restored movement to his arm but not to his hand.
A further operation involving the transplantation of muscle and nerve tissue into his forearm also failed to restore movement to the hand, but it did at least boost the electric signals being delivered from his brain to his forearm, signals that could be used to drive a bionic hand.
Then three years ago, Milo was asked whether he wanted to consider elective amputation.
“The operation will change my life. I live 10 years with this hand and it cannot be (made) better. The only way is to cut this down and I get a new arm,” Milo told BBC News prior to his surgery at Vienna’s General Hospital.
Milo took the decision after using a hybrid hand fitted parallel to his dysfunctional hand, with which he could experience controlling a prosthesis.
Such bionic hands, manufactured by the German prosthetics company Otto Bock, can pinch and grasp in response to signals from the brain that are picked up by two sensors placed over the skin above nerves in the forearm.
In effect, the patient controls the hand using the same brain signals that would have once powered similar movements in the real hand.
The wrist of the prosthesis can be rotated manually using the patient’s other functioning hand (if the patient has one).
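The two-sensor scheme lends itself to a simple illustration. The Python sketch below assumes each surface sensor yields a rectified EMG amplitude envelope, and whichever channel crosses its threshold drives the hand open or closed; the thresholds, window length, and decision rule are assumptions, not Otto Bock’s actual controller.

```python
# Sketch of two-channel myoelectric control: surface sensors over forearm
# nerves/muscles yield EMG amplitude envelopes, and crossing a threshold
# opens or closes the hand. All values are illustrative assumptions.
import numpy as np

OPEN_THRESHOLD = 0.2    # envelope level that triggers opening, assumed
CLOSE_THRESHOLD = 0.2   # envelope level that triggers closing, assumed

def envelope(emg: np.ndarray) -> float:
    """Mean absolute value of a short EMG window (simple amplitude envelope)."""
    return float(np.mean(np.abs(emg)))

def hand_command(open_channel: np.ndarray, close_channel: np.ndarray) -> str:
    e_open, e_close = envelope(open_channel), envelope(close_channel)
    if e_open > OPEN_THRESHOLD and e_open > e_close:
        return "open"
    if e_close > CLOSE_THRESHOLD:
        return "close"
    return "hold"

# Example: a strong burst on the 'open' electrode versus a quiet 'close' channel.
print(hand_command(np.random.randn(200) * 0.3, np.random.randn(200) * 0.05))
```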
Last year, a 24-year-old Austrian named Patrick was the first patient in the world to choose to have his hand amputated, again by Professor Aszmann, and a bionic replacement fitted. He lost the use of his left hand after being electrocuted at work.
He can now open a bottle quickly and tie his own shoelaces.
“My reaction was ‘Oh my god, I’ve got a new hand!’,” he told BBC News.
“I can do functions which I did with my normal hand with the prosthetic arm,” he said, recalling his response to first being fitted with a bionic hand.
“I think it was very cool – I did not do things with my hand for three years and then you put on the new hand and one moment later, you can move it. It’s great.”
Patrick is already testing a new hand, which its makers say will give him much greater movement. The hand has six sensors fitted over nerves within the lower arm, rather than the two on his current prosthesis.
Multiple signals can be read simultaneously, enabling the patient to twist and flex their wrist back and forward, again using the same brain signals that would have powered similar movement in the real hand.
Professor Oskar Aszmann prefers to call these elective amputations “bionic reconstruction” and has been working closely with Otto Bock, who have a research and production facility in Vienna.
Before the first operation, the professor held a symposium to discuss the procedure, to which senior surgeons and a theologian were invited.
He believes elective amputations are the best option for patients who have lost hand movement and who have no hope of regaining that movement through surgery.
“You see a patient come to you with a tremendous need for hand function and it’s only a thought away to come to the next conclusion,” he said.
“If the patient cannot address his only hand and I can change his anatomy in a way so he can communicate with an artificial hand, then of course I’ll just take away what’s there and provide a technological hand for him.”
But Professor Aszmann has faced opposition in some quarters, with senior colleagues even requesting he cancel this latest operation – requests the professor promptly rejected.
He said the alternative for patients like Milo would be years of pointless surgery.
“Milorad is now 26 years old and he wants to go on with his life. To biologically reconstruct a hand for him would be a never-ending story and in the end he would still have a non-functional hand.
“It is in the patient’s interest to provide him with a solution he can live with properly and successfully, and so I have no problem with cutting off his hand.”
In the event, the amputation itself passed without incident.
Scar tissue from a previous operation was removed and then the hand cut off with a pneumatic saw. Tissue was then taken from the hand and transplanted to the wrist to provide a cushion for the prosthesis.
Speaking from his hospital bed following the surgery, Milo was a little drowsy, but as positive as ever.
“I feel good,” he said, his bandaged arm lying on a cushion beside him.
“I’m happy that it’s over and look forward.”
Source | BBC News
Light-sensitive plastic might be key to repairing damaged retinas. Creating neuro-prosthetic devices such as retinal implants is tricky because biological tissue doesn’t mix well with electronics. Metals and inorganic semiconductor materials can adversely affect the health or function of nerve cells, says Fabio Benfenati at the Italian Institute of Technology in Milan. And over time the body’s natural defences can be incredibly hostile and corrosive to such materials.
The emergence of flexible, organic semiconductor materials now offers an alternative. To test them, Benfenati and colleagues seeded nerve cells onto the surface of a light-sensitive semiconducting polymer similar to those used in some solar cells. The cells grew into extensive networks containing thousands of neurons. “We have proved that the materials are highly biocompatible,” says Benfenati.
What’s more, the presence of the cells did not interfere with the optical properties of the polymer. The team were able to use the neuron-coated polymer as an electrode in a light-driven electrolytic cell.
Artificial colour vision
When short pulses of light were aimed at specific sections of the polymer, only local neurons fired, suggesting the material has the spatial selectivity needed for artificial retinas, says Benfenati.
“It’s very elegant science,” says Robert Greenberg, whose company Second Sight is close to receiving clinical approval for its retinal prosthesis. But Greenberg questions whether the electrical currents generated would be sufficient to stimulate nerve cells in the eye.
It’s still too early to tell, says Benfenati. But he thinks the new material is worth further study, because of another benefit. It can be tuned to respond only to specific wavelengths of light, raising the prospect of creating artificial colour vision, he says.
This could be the year when we quit dragging ourselves to work and send remote-controlled robot avatars instead
Why drag yourself to work through rush-hour traffic when you can stay at home and send a remote-controlled robot instead?
Firms in the US and Japan are already selling robot avatars that allow office workers to be in two places at once. So 2011 could be the year when many of us find ourselves sitting across the desk from an electronic colleague.
The QB, made by Anybots, looks like a small Segway vehicle with a robot head on top. It can travel at 6 kilometres per hour, using a laser scanner to avoid books and other office clutter.
It can be controlled via a web browser from anywhere in the world and has camera eyes to allow you to navigate your robot’s surroundings and see who you are talking to. A small LCD screen on the head means your colleagues can see you too.
You could argue that if you were planning to talk to people in other offices you could just use a videoconferencing system rather than a $15,000 robot. But logging into a robot body allows people to move around in a relatively normal way, says Trevor Blackwell of Anybots.
“If you have a bunch of people who are all used to talking to each other wherever they want to, it is a bit of an imposition to say, ‘OK, from now on all conversations have to be in the videoconferencing room’.”
Talking to a robot colleague might feel strange at first, but people seem to get used to it quite quickly. “Someone recently came to the office asking for me, and a colleague told them they had just seen me,” says Blackwell. “But actually it was the robot they had seen. I was still at home.”
Source | New Scientist
“The law of accelerating returns is the only reliable method I know that allows us to forecast at least certain aspects of the future,” said Ray Kurzweil in “Why Do We Need Predictions?,” a New York Times special feature published Monday.
“A computer that fit inside a building when I was a student now fits in my pocket, and is a thousand times more powerful despite being a million times less expensive. In another quarter century, that capability will fit inside a red blood cell and will again be a billion times more powerful per dollar.”
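The arithmetic behind that claim is easy to check: a billion-fold gain in price-performance over a quarter century implies a doubling roughly every ten months. A few lines of Python confirm it (the billion-fold figure is Kurzweil’s; the doubling-time framing is just a restatement).

```python
# Checking the quoted claim: a billion-fold gain in price-performance
# over 25 years implies a doubling roughly every 10 months.
import math

gain = 1e9
years = 25
doublings = math.log2(gain)        # ~29.9 doublings to reach a billion-fold gain
print(years / doublings * 12)      # ~10.0 months per doubling
```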
Source | Kurzweil AI
No more will soldiers’ vision be limited to the socket-embedded spheres that God intended. The Pentagon now wants troops to see dangers lurking behind them in real time, and be able to tell if an object a kilometer away is a walking stick or an AK-47.
In a solicitation released today, Darpa, the Pentagon’s far-out research branch, unveiled the Soldier Centric Imaging via Computational Cameras effort, or SCENICC. Imagine a suite of cameras that digitally capture a kilometer-wide, 360-degree sphere, representing the image in 3-D (!) onto a wearable eyepiece.
You’d be able to literally see all around you, including behind yourself, and zoom in at will, creating a “stereoscopic/binocular system, simultaneously providing 10x zoom to both eyes.” And you would do all this hands-free, apparently by barking out or pre-programming a command (the solicitation leaves it up to a designer’s imagination) to adjust focus.
Then comes the Terminator-vision. Darpa wants the eyepiece to include “high-resolution computer-enhanced imagery as well as task-specific non-image data products such as mission data overlays, threat warnings/alerts, targeting assistance, etc.” Target identified: Sarah Connor… The “Full Sphere Awareness” tool will provide soldiers with “muzzle flash detection,” “projectile tracking” and “object recognition/labeling,” basically pointing key information out to them.
And an “integrated weapon sighting” function locks your gun on your target when acquired. That’s far beyond an app mounted on your rifle that keeps track of where your friendlies and enemies are.
The imaging wouldn’t just be limited to what any individual soldier sees. SCENICC envisions a “networked optical sensing capability” that fuses images taken from nodes worn by “collections of soldiers and/or unmanned vehicles.” The Warrior-Alpha drone overhead? Its full-motion video and still images would be sent into your eyepiece.
It also has to be ridiculously lightweight, weighing less than 700 grams for the entire system — including a battery powerful enough to “exceed 24 hours [usage] under normal conditions.” That’s about a pound and a half, maximum. The Army’s experimental ensemble of wearable gadgets weighs about eight pounds. And it is to SCENICC what your Roomba is to the T-1000.
Here’s how far advanced SCENICC is compared to bleeding-edge imaging and networking capabilities that the Army is currently developing. Right now, the Army’s asking three different companies — Raytheon, Rockwell Collins and General Dynamics — to build a wearable platform of digital maps, computers and radios, networked with one another. Soldiers would have warzone maps beamed onto helmet-mounted eyepieces.
The system, known as Nett Warrior, needs to weigh less than eight pounds, and it builds on a years-long and ultimately fruitless effort called Land Warrior. (One of the problems with Land Warrior was that it was heavy and cumbersome, owing in part to battery weight.) The Army hopes to choose one of the Nett Warrior designs by March.
By the time it’ll actually roll out Nett Warrior after testing, production and deployment — a few years, optimistically — SCENICC will already be hard at work on its replacement. Darpa wants a hands-free zooming function within two years of work on the contract. By year three, the computer-enhanced vision tool needs to be ready. Year four is for 360-degree vision. Then it’s on to development.
The Army is generally hot for combat-ready smartphones to keep soldiers linked up with each other. And the buzz-generating tool for the soldier of the near future is mapping technology, delivered onto a smartphone or some other handheld mobile device, at least judging from this year’s Association of the U.S. Army confab.
But all of these representation tools are two-dimensional, and require soldiers to look away from their patrols in order to use them. Textron’s SoldierEyes Common Operating Picture, for instance, lets soldiers see icons on a tablet-mounted map telling them where their friends, enemies and neutrals are. It can’t put those icons onto a 3-D picture sent to a soldier’s eyes, let alone allow a 10x zoom for a kilometer-wide, 360-degree field of vision. Why would anyone use a map on a smartphone when they could have SCENICC?
Even with all the advances in digital imaging, it’ll be a tall order to put together 360-degree vision and 10x zoom and mapping software and integration with weapons systems and lightweight miniaturization and network connectivity.
Darpa doesn’t really address how the system’s networked optics would work in low-bandwidth areas like, say, eastern Afghanistan (though maybe drone-borne cell towers can help).
Indeed, judging from the solicitation, while SCENICC is supposed to be networked, it doesn’t seem to have any communications requirements for soldiers to talk through what their optics are sharing with each other. Maybe there’s a role for those new soldier smartphones after all.
Source | Wired
The performance of a brain-machine interface designed to help paralyzed subjects move objects with their thoughts is improved with the addition of a robotic arm that provides sensory feedback, a new study from the University of Chicago finds.
Devices that translate brain activity into the movement of a computer cursor or an external robotic arm have already proven successful in humans. But in these early systems, vision was the only tool a subject could use to help control the motion.
Adding a robot arm that provided kinesthetic information about movement and position in space improved the performance of monkeys using a brain-machine interface in a study published today in The Journal of Neuroscience. Incorporating this sense may improve the design of “wearable robots” to help patients with spinal cord injuries, researchers said.
“A lot of patients that are motor-disabled might have partial sensory feedback,” said Nicholas Hatsopoulos, PhD, Associate Professor and Chair of Computational Neuroscience at the University of Chicago. “That got us thinking that maybe we could use this natural form of feedback with wearable robots to provide that kind of feedback.”
In the experiments, monkeys controlled a cursor without actively moving their arms, via a device that translated activity in the primary motor cortex of their brains into cursor motion. While wearing a sleeve-like robotic exoskeleton that moved their arm in tandem with the cursor, the monkeys controlled the cursor better, hitting targets faster and via straighter paths than without the exoskeleton.
“We saw a 40 percent improvement in cursor control when the robotic exoskeleton passively moved the monkeys’ arm,” Hatsopoulos said. “This could be quite significant for daily activities being performed by a paralyzed patient that was equipped with such a system.”
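That “straighter paths” comparison can be made concrete with a simple path-efficiency metric: straight-line distance from start to target divided by the distance the cursor actually traveled, so 1.0 means perfectly straight. The Python sketch below is a generic illustration of such a metric, not the study’s own analysis code.

```python
# Path efficiency: straight-line distance / distance actually traveled.
# Values near 1.0 indicate a direct path; lower values, a wobbly one.
import numpy as np

def path_efficiency(trajectory: np.ndarray) -> float:
    """trajectory: (timesteps, 2) cursor positions from start to target."""
    straight = np.linalg.norm(trajectory[-1] - trajectory[0])
    traveled = np.sum(np.linalg.norm(np.diff(trajectory, axis=0), axis=1))
    return float(straight / traveled) if traveled > 0 else 0.0

# A wobbly path scores below a direct one.
direct = np.array([[0, 0], [1, 1], [2, 2.0]])
wobbly = np.array([[0, 0], [1.5, 0.2], [1, 1.5], [2, 2.0]])
print(path_efficiency(direct), path_efficiency(wobbly))   # 1.0 vs ~0.70
```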
When a person moves their arm or hand, they use sensory feedback called proprioception to control that motion. For example, if one reaches out to grab a coffee mug, sensory neurons in the arm and hand send information back to the brain about where one’s limbs are positioned and moving. Proprioception tells a person where their arm is positioned, even if their eyes are closed.
But in patients with conditions where sensory neurons die out, executing basic motor tasks such as buttoning a shirt or even walking becomes exceptionally difficult. Paraplegic subjects in the early clinical trials of brain-machine interfaces faced similar difficulty in attempting to move a computer cursor or robot arm using only visual cues. Those troubles helped researchers realize the importance of proprioception feedback, Hatsopoulos said.
“In the early days when we were doing this, we didn’t even consider sensory feedback as an important component of the system,” Hatsopoulos said. “We really thought it was just one-way: signals were coming from the brain, and then out to control the limb. It’s only more recently that the community has really realized that there is this loop with feedback coming back.”
Reflecting this loop, the researchers on the new study also observed changes in the brain activity recorded from the monkeys when sensory feedback was added to the set-up. With proprioceptive feedback, the cell firing patterns of the primary motor cortex carried more information than in trials with only visual feedback, Hatsopoulos said, reflecting an improved signal-to-noise ratio.
The improvement seen from adding proprioception feedback may inform the next generation of brain-machine interface devices, Hatsopoulos said. Already, scientists are developing different types of “wearable robots” to augment a person’s natural abilities. Combining a decoder of cortical activity with a robotic exoskeleton for the arm or hand can serve a dual purpose: allowing a paralyzed subject to move the limb, while also providing sensory feedback.
To benefit from this solution, a paralyzed patient must have retained some residual sensory information from the limbs despite the loss of motor function – a common occurrence, Hatsopoulos said, particularly in patients with ALS, locked-in syndrome, or incomplete spinal cord injury. For patients without both motor and sensory function, direct stimulation of sensory cortex may be able to simulate the sensation of limb movement. Further research in that direction is currently underway, Hatsopoulos said.
“I think all the components are there; there’s nothing here that’s holding us back conceptually,” Hatsopoulos said. “I think using these wearable robots and controlling them with the brain is, in my opinion, probably the most promising approach to take in helping paralyzed individuals regain the ability to move.”
The paper, “Incorporating feedback from multiple sensory modalities enhances brain-machine interface control,” appears in the Dec. 15 issue of The Journal of Neuroscience. Authors on the paper are Aaron J. Suminski, Dennis C. Tkach, and Hatsopoulos of the University of Chicago, and Andrew H. Fagg of the University of Oklahoma.
Funding for the research was provided by the National Institute of Neurological Disorders and Stroke and the Paralyzed Veterans of America Research Foundation.
Source | University of Chicago
The World Health Organization estimates that 25 million people worldwide are affected by over 1,000 genetic conditions. There are approximately 24.6 million people alive today who have been diagnosed with cancer within the last five years. In the United States alone, 101,000 people are currently waiting for an organ transplant, and the number grows by 300 people each month, according to the Mayo Clinic.
Human intelligence has not advanced at the speed of accelerating technologies. What can we do to advance our own human physiology? The following is a list of possible transhuman must-haves for the 21st century:
1. Brain Enhancement: Metabrain prosthetic, which includes:
• an observational feedback field
• AGI decision assistant
• cognitive error correction task pane with auto-correct options
• multiple viewpoints window with drop down elements
2. Body Enhancement: Whole-Body prosthetics, which includes:
• In vivo fiber optic spine
• Atmospheric physical stimuli sensors
• Solar protective nanoskin
• Regenerative organs
• Exoskeleton mobility device
• Replaceable genes
3. Behavior Enhancement: Psychology prosthetic, which includes:
• Awareness Customization
• Connectivity macros
• Empathic ping layers
• Finessed emotions helper
• Persona multiplier and recorder
• Seamless relay between platforms, avatars and syn-bios
4. Style Enhancement: Aesthetics prosthetic, which includes:
• Wearable or in-system options
• Transhuman haute couture clipboard
• Radical style click and drag option
• Day-to-night shape shifting
• Customized tone, texture and hue change
5. System Care: Warranty prosthetic, which includes:
• Additional 24 chromosomal pairing
• Guarantee for genetic or code mutations or defects
• Upgradable immune system and anti-virus system
Natasha Vita-More is a media artist/designer, Founder and Director of Transhumanist Arts & Culture, and Artistic Director of H+ Laboratory.
Source | H+ Magazine
When Wafaa Bilal turns around for the next year, a camera surgically embedded in the back of his head will stare back at you.
The camera, which will be attached to the NYU photo professor’s head via a “piercing-like attachment” when he undergoes surgery in the next couple of weeks, according to the WSJ, will take photos every 60 seconds and beam them to monitors at the Mathaf: Arab Museum of Modern Art in Qatar.
The artwork’s called “The 3rd I,” and it’ll be ongoing for an entire year. While it’s mostly intended as a comment on memory and experience—something that’s changed immensely with the advent of digital storage and the possibilities of limitless memory—interestingly, it’s mostly sparking a debate about privacy: when Bilal’s on campus at NYU, where he’ll be actively teaching, he’s going to keep the camera covered with a lens cap. The thing is, it’s not so far from the realm of possibility that we’ll all be recording nearly every moment of our lives in the not-too-distant future—Microsoft’s already got a camera that tries to.
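The mechanics the piece implies are simple enough to sketch: capture a frame every 60 seconds and upload it to the museum’s display server. In the Python sketch below, the endpoint URL, capture command, and file naming are all hypothetical; the article does not describe the project’s actual plumbing.

```python
# Hypothetical capture-and-transmit loop: one photo per minute, uploaded
# to a display server. Endpoint, capture tool, and paths are invented.
import subprocess
import time

import requests

ENDPOINT = "https://example.org/3rd-i/upload"   # hypothetical museum server

while True:
    path = f"/tmp/frame_{int(time.time())}.jpg"
    # Hypothetical CLI capture command; any camera tool could stand in here.
    subprocess.run(["capture-tool", "--output", path], check=False)
    try:
        with open(path, "rb") as f:
            requests.post(ENDPOINT, files={"image": f}, timeout=10)
    except (OSError, requests.RequestException):
        pass   # lens cap on, or capture/upload failed; skip this minute
    time.sleep(60)
```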
Source | Gizmodo