Archive for the ‘Science of the Mind’ Category
“Someday in the near future, quadriplegic patients will take advantage of this technology not only to move their arms and hands and to walk again, but also to sense the texture of objects placed in their hands, or experience the nuances of the terrain on which they stroll with the help of a wearable robotic exoskeleton,” said study leader Miguel Nicolelis, MD, PhD, professor of neurobiology at Duke University Medical Center and co-director of the Duke Center for Neuroengineering.
Sensing textures of virtual objects
Without moving any part of their real bodies, the monkeys used their electrical brain activity to direct the virtual hands of an avatar to the surface of virtual objects and differentiate their textures. Although the virtual objects employed in this study were visually identical, they were designed to have different artificial textures that could only be detected if the animals explored them with virtual hands controlled directly by their brain’s electrical activity.
The texture of the virtual objects was expressed as a pattern of electrical signals transmitted to the monkeys’ brains. A distinct electrical pattern corresponded to each of the three object textures.
Because no part of the animal’s real body was involved in the operation of this brain-machine-brain interface, these experiments suggest that in the future, patients who are severely paralyzed due to spinal cord lesions may take advantage of this technology to regain mobility and also to have their sense of touch restored, said Nicolelis.
First bidirectional link between brain and virtual body
“This is the first demonstration of a brain-machine-brain interface (BMBI) that establishes a direct, bidirectional link between a brain and a virtual body,” Nicolelis said.
“In this BMBI, the virtual body is controlled directly by the animal’s brain activity, while its virtual hand generates tactile feedback information that is signaled via direct electrical microstimulation of another region of the animal’s cortex. We hope that in the next few years this technology could help to restore a more autonomous life to many patients who are currently locked in without being able to move or experience any tactile sensation of the surrounding world,” Nicolelis said.
“This is also the first time we’ve observed a brain controlling a virtual arm that explores objects while the brain simultaneously receives electrical feedback signals that describe the fine texture of objects ‘touched’ by the monkey’s newly acquired virtual hand.
“Such an interaction between the brain and a virtual avatar was totally independent of the animal’s real body, because the animals did not move their real arms and hands, nor did they use their real skin to touch the objects and identify their texture. It’s almost like creating a new sensory channel through which the brain can resume processing information that cannot reach it anymore through the real body and peripheral nerves.”
The combined electrical activity of populations of 50 to 200 neurons in the monkey’s motor cortex controlled the steering of the avatar arm, while thousands of neurons in the primary tactile cortex were simultaneously receiving continuous electrical feedback from the virtual hand’s palm that let the monkey discriminate between objects, based on their texture alone.
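To make the closed loop concrete, here is a minimal, purely illustrative Python sketch of the two halves of such a brain-machine-brain interface: a hypothetical linear decoder that turns motor-cortex firing rates into avatar hand velocity, and a texture-to-microstimulation lookup. The weights, firing rates, and pulse frequencies are invented for illustration; the study’s actual decoder and stimulation protocol are more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Motor side: decode avatar hand velocity from motor-cortex population activity.
n_neurons = 100                      # within the 50-200 range reported
W = rng.normal(size=(2, n_neurons))  # hypothetical decoder weights for (vx, vy)

def decode_velocity(firing_rates):
    """Map a vector of firing rates (Hz) to a 2-D avatar hand velocity."""
    return W @ firing_rates

# Sensory side: one microstimulation pattern per virtual texture.
# Here each texture is signaled by a distinct pulse-train frequency (invented).
STIM_PATTERNS_HZ = {"texture_A": 50, "texture_B": 100, "texture_C": 200}

def feedback_pulses(texture, contact_ms):
    """Pulses delivered to somatosensory cortex during a contact of given duration."""
    return int(STIM_PATTERNS_HZ[texture] * contact_ms / 1000)

rates = rng.poisson(lam=10, size=n_neurons)  # simulated population firing rates
print("decoded velocity:", decode_velocity(rates))
print("pulses for texture_B over 300 ms:", feedback_pulses("texture_B", 300))
```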
Robotic exoskeleton for paralyzed patients
“The remarkable success with non-human primates is what makes us believe that humans could accomplish the same task much more easily in the near future,” Nicolelis said.
The findings provide further evidence that it may be possible to create a robotic exoskeleton that severely paralyzed patients could wear in order to explore and receive feedback from the outside world, Nicolelis said. The exoskeleton would be directly controlled by the patient’s voluntary brain activity to allow the patient to move autonomously. Simultaneously, sensors distributed across the exoskeleton would generate the type of tactile feedback needed for the patient’s brain to identify the texture, shape and temperature of objects, as well as many features of the surface upon which they walk.
This overall therapeutic approach is the one chosen by the Walk Again Project, an international, non-profit consortium, established by a team of Brazilian, American, Swiss, and German scientists, which aims at restoring full-body mobility to quadriplegic patients through a brain-machine-brain interface implemented in conjunction with a full-body robotic exoskeleton.
The international scientific team recently proposed to carry out its first public demonstration of such an autonomous exoskeleton during the opening game of the 2014 FIFA Soccer World Cup that will be held in Brazil.
Ref.: Joseph E. O’Doherty, Mikhail A. Lebedev, Peter J. Ifft, Katie Z. Zhuang, Solaiman Shokur, Hannes Bleuler, and Miguel A. L. Nicolelis, Active tactile exploration using a brain–machine–brain interface, Nature, October 2011 [doi:10.1038/nature10489]
Source | KurzweilAI
When it happened, emotions flashed like lightning.
The nearby robotic hand that Tim Hemmes was controlling with his mind touched his girlfriend Katie Schaffer’s outstretched hand.
One small touch for Mr. Hemmes; one giant reach for people with disabilities.
The tears of joy that flowed in an Oakland laboratory that day continued later, when Mr. Hemmes toasted his and the University of Pittsburgh researchers’ success at a local restaurant with two daiquiris.
A simple act for most people marked a major advance in two decades of research once considered the stuff of science fiction.
Mr. Hemmes’ success in putting the robotic hand in the waiting hand of Ms. Schaffer, 27, of Philadelphia, represented the first time a person with quadriplegia has used his mind to control a robotic arm so masterfully.
The 30-year-old man from Connoquenessing Township, Butler County, hadn’t moved his arms, hands or legs since a motorcycle accident seven years earlier. But Mr. Hemmes had practiced six hours a day, six days a week for nearly a month to move the arm with his mind.
That successful act increases hope that people with paralysis or loss of limbs will one day feed and dress themselves and open doors, among other tasks, with a mind-controlled robotic arm. It also improves the prospects of wiring around spinal cord injuries to allow motionless arms and legs to function once again.
“I think the potential here is incredible,” said Dr. Michael Boninger, director of UPMC’s Rehabilitation Institute and a principal investigator in the project. “This is a breakthrough for us.”
Mr. Hemmes? They say he’s a rock star.
In a project led by Andrew Schwartz, Ph.D., a University of Pittsburgh professor of neurobiology, researchers taught a monkey to use a mind-controlled robotic arm to feed itself marshmallows. Electrodes had been shallowly implanted in its brain to read signals from neurons known to control arm motion.
Electrocorticography, or ECoG — in which an electrode grid is surgically placed against the brain without penetrating it — captures brain signals less intrusively.
ECoG has been used to locate the sites of seizures and to run other experiments in patients with epilepsy. Those experiments were a prelude to seeking a candidate with quadriplegia to test ECoG’s ability to control a robotic arm developed by Johns Hopkins University.
The still unanswered question was whether the brains of people with long-term paralysis still produced signals to move their limbs.
ECoG picks up an array of brain signals, almost like a secret code or new language, that a computer algorithm can interpret and then move a robotic arm based on the person’s intentions. It’s a simple explanation for complex science.
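In spirit, that interpreting algorithm can be approximated with ordinary regression: record signal features while the user attempts movements toward known targets, then fit a mapping from features to intended velocity. The sketch below is a hedged illustration using least squares on made-up ECoG band-power features; the team’s actual decoding pipeline is not described in this article and is certainly more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration data: ECoG band-power features recorded while
# the user attempts cursor movements toward known target directions.
n_trials, n_features = 200, 32
X = rng.normal(size=(n_trials, n_features))               # feature vectors
true_map = rng.normal(size=(n_features, 2))               # unknown "intent" mapping
Y = X @ true_map + 0.1 * rng.normal(size=(n_trials, 2))   # intended (vx, vy)

# Fit a linear decoder with least squares, as a stand-in for the real algorithm.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def decode(features):
    """Translate one window of ECoG features into a cursor/arm velocity."""
    return features @ W

print("decoded velocity for a new feature window:", decode(rng.normal(size=n_features)))
```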
Mr. Hemmes’ name cropped up so many times as a potential candidate that the team called him to gauge his interest.
He said no.
He was already involved in a research project in Cleveland and feared this one would interfere. But knowing they had found their ideal candidate, the team called back. This time he agreed, as long as it would not limit his participation in future phases of research.
Mr. Hemmes became quadriplegic July 11, 2004, apparently after a deer darted onto the roadway, causing him to swerve his motorcycle onto gravel where his shoulder hit a mailbox, sending him flying headfirst into a guardrail. The top of his helmet became impaled on a guardrail I-beam, rendering his head motionless while his body continued flying, snapping his neck at the fourth cervical vertebra.
A passer-by found him with blue lips and no signs of breathing. Mr. Hemmes was flown by rescue helicopter to UPMC Mercy and diagnosed with quadriplegia — a condition in which he had lost the use of his limbs and his body below the neck or shoulders. He had to learn how to breathe on his own. His doctor told him it was the worst accident he’d ever seen in which the person survived.
But after the process of adapting psychologically to quadriplegia, Mr. Hemmes chose to pursue a full life, especially after he got a device to operate a computer and another to operate a wheelchair with head motions.
Since January, he has operated the website — www.Pittsburghpitbullrescue.com — to rescue homeless pit bulls and find them new owners.
The former hockey player’s competitive spirit and willingness to face risk were key attributes. Elizabeth Tyler-Kabara, the UPMC neurosurgeon who would install the ECoG in Mr. Hemmes’ brain, said he had strong motivation and a vision that paralysis could be cured.
Ever since his accident, Mr. Hemmes said, he’s had the goal of hugging his daughter Jaylei, now 8. This could be the first step.
“It’s an honor that they picked me, and I feel humbled,” Mr. Hemmes said.
Mr. Hemmes underwent several hours of surgery to place the ECoG grid at a precise location against the brain. Wires running under the skin down to a port near his collarbone — where cables can connect to the robotic arm — left him with a stiff neck for a few days.
Two days after surgery, he began exhaustive training in mentally maneuvering a computer cursor in various directions to reach targets and make them disappear. Next he learned to move the cursor diagonally before working for hours to capture targets on a three-dimensional computer display.
The U.S. Food and Drug Administration allowed the trial to last only 28 days, after which the ECoG grid was removed. The project, initially funded by UPMC, has received more than $6 million in funding from the Department of Veterans Affairs, the National Institutes of Health, and the U.S. Department of Defense’s Defense Advanced Research Projects Agency, known as DARPA.
Initially Mr. Hemmes tried thinking about flexing his arm to move the cursor. But he had better success visually grabbing the ball-shaped cursor to throw it toward a target on the screen. The “mental eye-grabbing” worked best when he was relaxed.
Soon he was capturing 15 of 16 targets and sometimes all 16 during timed sessions. The next challenge was moving the robotic arm with his mind.
The same mental processes worked, but the arm moved more slowly and in real space. Time was ticking away as the experiment approached its final days last month. With Mr. Hemmes finally moving the arm in all directions, Wei Wang — an assistant professor of physical medicine and rehabilitation at Pitt’s School of Medicine who also has worked on the signalling system — stood in front of him and raised his hand.
The robotic arm that Mr. Hemmes was controlling moved with fits and starts but in time reached Dr. Wang’s upheld hand. Mr. Hemmes gave him a high five.
The big moment arrived.
Katie Schaffer stood before her boyfriend with her hand extended. “Baby,” she said, encouraging him, “touch my hand.”
It took several minutes, but he raised the robotic hand and pushed it toward Ms. Schaffer until its palm finally touched hers. Tears flowed.
“It’s the first time I’ve reached out to anybody in over seven years,” Mr. Hemmes said. “I wanted to touch Katie. I never got to do that before.”
“I have tattoos, and I’m a big, strong guy,” he said in retrospect. “So if I’m going to cry, I’m going to bawl my eyes out. It was pure emotion.”
Mr. Hemmes said his accomplishments represent a first step toward “a cure for paralysis.” The research team is cautious about such statements without denying the possibility. They prefer identifying the goal of restoring function in people with disabilities.
“This was way beyond what we expected,” Dr. Tyler-Kabara said. “We really hit a home run, and I’m thrilled.”
The next phase will include up to six people tested in another 30-day trial with ECoG. A year-long trial will test the electrode array that shallowly penetrates the brain. Goals during these phases include expanding the degrees of arm motions to allow people to “pick up a grape or grasp and turn a door knob,” Dr. Tyler-Kabara said.
Anyone interested in participating should call 1-800-533-8762.
Mr. Hemmes says he will participate in future research.
“This is something big, but I’m not done yet,” he said. “I want to hug my daughter.”
Since the 1930s, the theory of computation has profoundly influenced philosophical thinking about topics such as the theory of the mind, the nature of mathematical knowledge and the prospect of machine intelligence. In fact, it’s hard to think of an idea that has had a bigger impact on philosophy.
And yet there is an even bigger philosophical revolution waiting in the wings. The theory of computing is a philosophical minnow compared to the potential of another theory that is currently dominating thinking about computation.
At least, this is the view of Scott Aaronson, a computer scientist at the Massachusetts Institute of Technology. Today, he puts forward a persuasive argument that computational complexity theory will transform philosophical thinking about a range of topics such as the nature of mathematical knowledge, the foundations of quantum mechanics and the problem of artificial intelligence.
Computational complexity theory is concerned with the question of how the resources needed to solve a problem scale with some measure of the problem size, call it n. There are essentially two answers. Either the resources scale reasonably slowly, like n, n^2 or some other polynomial function of n, or they scale unreasonably quickly, like 2^n, 10000^n or some other exponential function of n.
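A quick numerical comparison shows how different the two regimes are; at a billion steps per second, the exponential column quickly outruns the age of the Universe. (The snippet below is just an illustration of the scaling, not part of Aaronson’s essay.)

```python
# Polynomial vs. exponential growth in the number of steps needed.
# At 10^9 steps per second, 2^100 steps would take roughly 4 * 10^13 years,
# far longer than the ~1.4 * 10^10-year age of the Universe.
for n in (10, 50, 100):
    print(f"n = {n:>3}:  n^2 = {n**2:<8}  2^n = {2**n:.3e}")
```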
So while the theory of computing can tell us whether something is computable or not, computational complexity theory tells us whether it can be achieved in a few seconds or whether it’ll take longer than the lifetime of the Universe.
That’s hugely significant. As Aaronson puts it: “Think, for example, of the difference between reading a 400-page book and reading every possible such book, or between writing down a thousand-digit number and counting to that number.”
He notes that it’s easy to imagine that, once we know whether something is computable, the question of how long it takes is merely one of engineering rather than philosophy. But he then shows how the ideas behind computational complexity can extend philosophical thinking in many areas.
Take the problem of artificial intelligence and the question of whether computers can ever think like humans. In his book The Emperor’s New Mind, Roger Penrose famously argues that they can’t. He says that whatever a computer can do using fixed formal rules, it will never be able to ‘see’ the consistency of its own rules. Humans, on the other hand, can see this consistency.
One way to measure the difference between a human and computer is with a Turing test. The idea is that if we cannot tell the difference between the responses given by a computer and a human, then there is no measurable difference.
But imagine a computer that records all conversations it hears between humans. Over time, this computer will build up a considerable database that it can use to make conversation. If it is asked a question, it looks the question up in its database and reproduces an answer given by a real human.
In this way, a computer with a big enough lookup table can always have a conversation that is essentially indistinguishable from one that humans would have.
“So if there is a fundamental obstacle to computers passing the Turing Test, then it is not to be found in computability theory,” says Aaronson.
Instead, a more fruitful way forward is to think about the computational complexity of the problem. He points out that while the lookup-table approach “works,” it requires computational resources that grow exponentially with the length of the conversation.
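A back-of-the-envelope calculation makes the point. Model a conversation as a string over a small alphabet; the lookup table must contain an entry for every possible history, and that count grows exponentially with length. (The alphabet size and lengths below are assumptions for illustration.)

```python
# Entries a lookup table needs to cover every possible conversation history,
# assuming (for illustration) a 27-symbol alphabet: 26 letters plus space.
ALPHABET = 27

for length in (10, 40, 80):
    print(f"length {length:>2}: {ALPHABET**length:.3e} possible histories")
# For comparison, the observable Universe holds roughly 1e80 atoms.
```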
Aaronson argues that this opens a powerful new way to think about the problem of AI: Penrose could grant that the lookup-table approach is possible in principle, yet maintain that it is effectively impossible because of the huge computational resources it requires.
By this argument, the difference between humans and machines is essentially one of computational complexity.
That’s an interesting new line of thought and just one of many that Aaronson explores in detail in this essay.
Of course, he acknowledges the limitations of computational complexity theory. Many of the fundamental tenets of the theory, such as P ≠ NP, are unproven; and many of the ideas only apply to serial, deterministic Turing machines, rather than the messier kind of computing that occurs in nature.
But he says these criticisms do not allow philosophers (or anybody else) to arbitrarily dismiss the arguments of complexity theory. Indeed, many of these criticisms raise interesting philosophical questions in themselves.
Computational complexity theory is a relatively new discipline that builds on advances made in the ’70s, ’80s and ’90s. And that’s why its biggest impacts are yet to come.
Aaronson points us toward some of them in an essay that is thought-provoking, entertaining and highly readable. If you have an hour or two to spare, it’s worth a read.
Source | Technology Review
The researchers ran animal subjects through a temporal-order memory task in which a sequence of two visual objects was presented and the subjects were required to retrieve that same sequence after a delay. To perform the task correctly, the subjects needed to remember both the individual visual items (“what”) and their temporal order (“when”). During the experiment, the researchers monitored the activity of individual brain cells in the subjects’ medial temporal lobe (MTL).
Their results showed that two main areas of the MTL are involved in integrating “what” and “when”: the hippocampus and the perirhinal cortex. The hippocampus, which is known to have an important role in a variety of memory tasks, provides an incremental timing signal between key events, giving information about the passage of time from the last event as well as the estimated time toward the next event. The perirhinal cortex appeared to integrate information about what and when by signaling whether a particular item was shown first or second in the series.
Their findings provide insight into the specific patterns of brain activity that enable us to remember both the key events that make up our lives and the specific order in which they happened, the researchers said.
Source | Kurzweil AI
Visual perception can be contaminated by memories of what we have recently seen, impairing our ability to properly understand and act on what we are currently seeing, researchers at Vanderbilt University have found.
The researchers used a visual illusion called “motion repulsion” to learn whether information held in working memory affects perception. This illusion is produced when two sets of moving dots are superimposed, with dots in one set moving in a different direction from those in the other set. Under these conditions, people tend to misperceive the actual directions of motion, and perceive a larger difference between the two sets of motions than actually exists.
Ordinarily this illusion is produced by having people view both sets of motion at the same time. The researchers set out to determine if the illusion would occur when one set of motions, rather than being physically present, was held in working memory.
Participants were shown a random pattern of dots and asked to remember the direction in which the dots were moving. They were then shown a second pattern of moving dots and asked to report the direction of its movement.
The researchers found that the subjects’ reports of the second pattern’s movement were exaggerated, biased by what they had previously seen. If they were first shown dots moving in one direction and later shown dots moving in a direction rotated slightly counterclockwise from the first, they reported the counterclockwise offset as larger than it actually was.
Their findings provide compelling evidence that visual working memory representations directly interact with the same neural mechanisms involved in processing basic sensory events, the researchers said.
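As a toy model of this finding, one can write the reported direction as the actual direction plus a repulsion term pushing it away from the remembered direction. The gain below is an invented parameter, not an estimate from the study:

```python
def reported_direction(actual_deg, remembered_deg, gain=0.3):
    """Toy model: the report is pushed away from the direction held in
    working memory by a fraction (gain) of the actual difference."""
    return actual_deg + gain * (actual_deg - remembered_deg)

# Remembered dots moved at 90 degrees; new dots actually move at 100 degrees,
# i.e. 10 degrees counterclockwise of the memory. The model reports 103 degrees,
# exaggerating the offset to 13 degrees.
print(reported_direction(100, 90))
```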
Source | Kurzweil AI
The Biological Canvas parades a group of hand-selected artists who articulate their concepts with the body as the primary vessel. Each artist uses the body uniquely, experimenting with body as medium: body as canvas, body as brush, and body as subject matter. Whatever the approach, it is clear that new explorations of the body as canvas are becoming commonplace in the 21st century.
There are reasons for this refocusing of the lens toward the body. Living today is an experience quite different from that of a century, generation, decade, or (with new versions emerging daily) even a year ago. The body truly is changing, both biologically and technologically, at an abrupt rate. Traditional understandings of what the body, or even the human, can be are coming into question. Transhuman, Posthuman, Cyborg, Robot, Singularity, Embodiment, Avatar, Brain Machine Interface, Nanotechnology … these are terms we run across in media today. They are the face of the future – the dictators of how we will come to understand our environment, biosphere, and selves. The artists in this exhibition are responding to this paradigm shift with an interest in newfound control over bodies, in moments of self-discovery or the realization that the body has extended beyond its biological beginnings, or perhaps in the sense that the traditional body has become obsolete.
We see in the work of Orlan and Stelarc that the body becomes a malleable canvas. Here are some of the earliest executions of art by way of designer evolution, in which the artist uses new tools to redesign the body as a statement of controlled evolution. In these works, direct changes to the body open onto sculpting it to be better suited to today’s world, moving beyond an outmoded form. Stelarc, with his Ear on Arm project, specifically attacks shortcomings in the human body by presenting the augmented sense his third ear brings. Acting as a cybernetic ear, it lets him move beyond subjective hearing and share that aural experience with listeners around the world. Commenting on the practicality of the traditional body in a networked world, Stelarc takes the design of networked senses into his own hands. Orlan uses her surgical art to conceptualize the practice Stelarc embodies, saying that the body has become a form that can be reconfigured, restructured, and applied to suit the desires of the mind within it. Carnal Art, as Orlan terms it, allows the body to become a modifiable ready-made instead of a static object born of the Earth. Through new technologies, human beings are now able to reform parts of their bodies as they deem necessary and appropriate for their own ventures.
Not far from the surgical work of Orlan and Stelarc we come to Natasha Vita-More’s Electro 2011, Human Enhancement of Life Expansion, a project that acts as a guide for advancing the biological self into a more fit machine. Integrating emerging technologies to build a more complete human, transhuman, and eventual posthuman body, Vita-More strives for a human-computer interface that includes neurophysiologic and cognitive enhancements aimed at longevity and performance. The enhancement plan includes such technologies as atmospheric sensors, solar-protective nanoskin, metabrain error correction, and replaceable genes. Vita-More’s Primo Posthuman is the idealized application of what artists like Stelarc and Orlan are beginning to explore with their own reconstructive surgical enhancements.
The use of the body in Nandita Kumar’s Birth of Brain Fly and Suk Kyoung Choi + Mark Nazemi’s Corner Monster reflects on how embodiment and techno-saturation exert psychological effects on the human mind. In each work we travel into the imagined world of the mind, where notions of self, identity, and sense of place struggle to hold on to fixed points of order. Kumar describes her neuroscape as continually morphing as it is placed in new, ever-changing conditions and environments. Beginning with an awareness of one’s own constant programming that leads to a new understanding of self through love, the film journeys through the depths of self, ego, and physical limitation. Kumar’s animations provide an eerie passage through the mind as viewed from the vantage of an artist’s creative eye, all the while postulating an internal neuroscape evolving in accordance with an external electroscape. Corner Monster examines the relationship between self and others in an embodied world. The installation presents an array of visual stimulation in a dark environment. As viewers engage with the world before them, they are hooked up (two at a time) to biofeedback sensors, which measure an array of biodata used in the interactive production of the environment before their eyes. The project surveys the psychological self as it is engrossed by surrounding media, yielding both occasional systems of organized feedback and scattered responses that are the convolutions of an overstimulated mind.
Marco Donnarumma also integrates a biofeedback system in his work, allowing participants to shape musical compositions with their limbs. Moving a particular body part triggers sounds, their volume rising with the pace of the movement. Here the body acts as brush, literally painting the soundscape through its own creative motion. As performers experiment with each portion of their body, there is a slow realization that the sounds have become analogous to the neurological and biological yearnings of the body, each one seeking a particular upgrade that targets a specific need of that segment. A move of the left arm, for instance, consistently produces a rich vibrato, reminding me of the sound of Vita-More’s solar-protective nanoskin.
Our final three artists all use the body as a component of the fabricated result, acting like paint in the traditional artistic sense. Marie-Pier Malouin weaves strands of hair together to reference the genetic predispositions all living things are born with. Here, Malouin uses the medium to reference suicidal tendencies, looking once again toward the fragility of the human mind, body, and spirit in their traditional biological state. The hair, a dead mass of growth that we groom, straighten, smooth, and arrange, resembles the same obsession with which we analyze, evaluate, dissect, and anatomize the nature of suicide. Stan Strembicki also engages the fragility of the human body in his Body, Soul and Science. In his photographic imagery, Strembicki turns a keen eye on the medical industry and its development over time. As with all technology, Strembicki concludes, the medical industry is temporally corrective, making dramatic strides as new developments emerge. Perhaps we can take Tracy Longley-Cook’s skinscapes, which she compares to the changing landforms of geology, ecology, and climatology, as an analogy for our changing understanding of skin, body, and self. Can we begin to mold and sculpt the body much as we have done with the land we inhabit?
There is a tie between the conceptual and material strands of these last few works that we cannot overlook: memento mori. The shortcomings and frailties of our natural bodies are the very components that artists like Vita-More, Stelarc, and Orlan are beginning to treat as solvable through the mastery of human enhancement. In a world churning out new technologies and creative ideas, it is hard to look toward the future and dismiss the possibilities. Perhaps the worries of fragility and biological shortcoming will be both posed and answered by the scientific and artistic communities; that outcome is panning out to be very likely, if not certain. As you browse the work of The Biological Canvas, I invite your imagination to engage. Look at your life, your culture, your world, and draw parallels with the artwork. Open your imagination to what our future may bring or, perhaps more properly stated, what we will bring to our future.
Source | VASA Project
Neuroscientists at Georgetown University Medical Center (GUMC) have defined, for the first time, three different processing stages that a human brain needs to identify sounds such as speech — and discovered that they are the same as those identified in non-human primates.
With the help of 13 human volunteers who spent time in a functional MRI machine, the researchers showed that both human and non-human primates process speech along two parallel pathways, each of which runs from lower- to higher-functioning neural regions. These pathways are dubbed the “what” and “where” streams and are roughly analogous to how the brain processes sight, but in different regions. The “where” stream localizes sound, and the “what” pathway identifies it.
The researchers identified three distinct areas in the “what” pathway in humans that had been seen in non-human primates; only two had been recognized in previous human studies. The first, and most primary, is the “core,” which analyzes tones at the basic level of simple frequencies. The second area, the “belt,” wraps around the core and integrates several tones, “like buzz sounds,” that lie close to each other, the researchers said. The third area, the “parabelt,” responds to speech sounds such as vowels, which are essentially complex bursts of multiple frequencies.
The discovery could offer important insights into what can go wrong when someone has difficulty speaking, which involves hearing voice-generated sounds, or understanding the speech of others, the researchers said.
Source | Kurzweil AI
A yet-unidentified component of coffee interacts with caffeine, a possible reason why daily coffee intake protects against Alzheimer’s disease, researchers at the University of South Florida have found.
One clue: they found that caffeinated coffee induces an increase in blood levels of a growth factor called GCSF (granulocyte colony-stimulating factor) in mice. GCSF is greatly decreased in patients with Alzheimer’s disease and has been demonstrated to improve memory in Alzheimer’s mice.
The researchers said this effect does not occur with other caffeine-containing drinks or with decaffeinated coffee.
An increasing body of scientific literature indicates that moderate consumption of coffee also decreases the risk of Alzheimer’s, Parkinson’s disease, Type II diabetes, and stroke. Recent studies have reported that drinking coffee in moderation may also significantly reduce the risk of breast and prostate cancers.
The researchers suggest that moderate daily coffee intake (4 to 5 cups a day) starting at least by middle age (30s–50s) is optimal for providing protection against Alzheimer’s disease, although from their studies, starting even in older age appears protective.
Source | Kurzweil AI
Neuroscientists at MIT’s Picower Institute for Learning and Memory have found that cognitive capacity limitations (the ability to hold about four things in our minds at once) reflect a dual model of working memory.
The researchers investigated the neural basis of this capacity limitation in two monkeys performing the same test used to explore working memory in humans. First, the researchers displayed an array of two to five colored squares, then a blank screen, and then the same array in which one of the squares changed color. The task was to detect this change and look at the changed square.
As the monkeys performed this task, the researchers recorded simultaneously from neurons in two brain areas related to encoding visual perceptions (the parietal cortex) and holding them in mind (the prefrontal cortex). As expected, the more squares in the array, the worse the performance.
They determined that monkeys (and by extension humans) do not have a capacity of four objects, but of two in each hemisphere. If the object to remember appears on the right side of the visual space, it does not matter how many objects are on the left side; as long as the right side contains only two, the monkeys can easily remember an object on the right side. Or if the right side contains three objects and the left side only one, their capacity for remembering the key object on the right is exceeded and so they may forget it.
The fact that we have different capacities in each hemisphere implies that we should present information (for example, in heads-up displays) in a way that does not overtax one hemisphere while under-taxing the other, the researchers said.
Source | Kurzweil AI
It seems the sci-fi industry has done it again. Predictions made in novels like Johnny Mnemonic and Neuromancer back in the 1980s of neural implants linking our brains to machines have become a reality.
Back then it seemed unthinkable that we’d ever have megabytes stashed in our brain as Keanu Reeves’ character Johnny Mnemonic did in the movie based on William Gibson’s novel. Or that The Matrix character Neo could have martial arts abilities uploaded to his brain, making famous the line, “I know Kung Fu.” (Why Keanu Reeves became the poster boy of sci-fi movies, I’ll never know.) But today we have macaque monkeys that can control a robotic arm with thoughts alone. We have paraplegics given the ability to control computer cursors and wheelchairs with their brain waves. Of course, this is about the brain controlling a device. But what about the other direction, where a device might amplify the brain? While the cochlear implant may be the best-known device of this sort, scientists have been working on brain implants with the goal of enhancing memory. This sort of breakthrough could lead to a neural prosthesis to help stroke victims or those with Alzheimer’s. Or, at the extreme, think uploading Kung Fu talent into our brains.
Decade-long work led by Theodore Berger at the University of Southern California, in collaboration with teams from Wake Forest University, has provided a big step in the direction of artificial working memory. Their study is finally published today in the Journal of Neural Engineering. A microchip implanted into a rat’s brain can take on the role of the hippocampus—the area responsible for forming long-term memories—encoding memory brain-wave patterns and then sending that same electrical pattern of signals through the brain. Back in 2008, Berger told Scientific American that if the brain patterns for the sentence “See Spot Run,” or even an entire book, could be deciphered, then we might make uploading instructions to the brain a reality. “The kinds of examples [the U.S. Department of Defense] likes to typically use are coded information for flying an F-15,” Berger is quoted in the article as saying.
In the current study the scientists had rats learn a task: pressing one of two levers to receive a sip of water. The scientists inserted a microchip into each rat’s brain, with wires threaded into the hippocampus. There the chip recorded electrical patterns from two specific areas, labeled CA1 and CA3, that work together to learn and store new information such as which lever to press to get water. The scientists then shut down CA1 with a drug, built an artificial hippocampal component that could duplicate the pattern of signaling between CA3 and CA1, and inserted it into the rat’s brain. With this artificial part, rats whose CA1 had been pharmacologically blocked could still encode long-term memories. And in rats with a normally functioning CA1, the implant extended the length of time a memory could be held.
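Conceptually, the prosthesis is an input-output model: observe CA3 activity, predict the CA1 activity that should follow, and deliver that pattern by stimulation when biological CA1 is offline. Below is a drastically simplified sketch of that idea using a linear fit on synthetic data; Berger’s device uses a nonlinear multi-input multi-output (MIMO) model fit to recorded spike trains, and every number here is invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training data: paired population activity recorded while
# the intact hippocampus encodes the lever task.
n_samples, n_ca3, n_ca1 = 500, 16, 16
ca3 = rng.poisson(lam=5, size=(n_samples, n_ca3)).astype(float)
hidden = rng.normal(size=(n_ca3, n_ca1))                   # unknown true coupling
ca1 = ca3 @ hidden + rng.normal(scale=0.5, size=(n_samples, n_ca1))

# Fit a linear CA3 -> CA1 mapping as a stand-in for the nonlinear MIMO model.
M, *_ = np.linalg.lstsq(ca3, ca1, rcond=None)

def prosthetic_ca1(ca3_activity):
    """With biological CA1 blocked, stimulate CA1 with the predicted pattern."""
    return ca3_activity @ M

print("predicted CA1 pattern (first 5 units):", prosthetic_ca1(ca3[0]).round(2)[:5])
```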
The next step is to test the device in monkeys, and then in humans. At this early stage a breakthrough like this raises more questions than it answers. Memory is hugely complex, built on our individual experiences and perceptions. If we have the electrical pattern for the phrase “See Spot Run,” mentioned above, would it mean the same thing for you as it does for me? How would such a device work within context? As writer Gary Stix asked in the Scientific American article, “Would ‘See Spot Run’ be misinterpreted as a laundry mishap instead of a trotting dog?” Or as the science journalist John Horgan once put it, you might hear your wedding song, but I hear a stale pop tune.
We are all provided with the same structural blueprint for our brains, but each brain’s circuitry is built from experience and genetics, a tapestry unique to each of us. Many scientists feel we will never fully crack and decode it, let alone insert an experiential memory into it.
Source | Smart Planet
Mental imagery is related to our perception of the external world, according to a new study of how the brain processes images.
Joel Pearson of the University of New South Wales and colleagues asked participants to imagine a green circle with vertical lines or a red circle with horizontal lines, and rate how vivid the mental image was and how difficult it was to conjure. They then presented the subjects with a binocular rivalry display, where the left and right eyes each see a different pattern, and asked them to report which pattern their brain settled on as dominant.
The researchers found that the pattern that participants reported as being most vivid when imagined was the same as the one that dominated in binocular rivalry. They suggest that this supports the idea that internal mental images are closely related to how brains perceive the external world.
Source | Kurzweil AI
A team of scientists has assembled in Switzerland and Germany to pursue a unique goal – the building of a computer model of a human brain.
Called the Human Brain Project – but perhaps inevitably dubbed ‘Team Frankenstein’ in the media – it is in discussion with the EU for a £1 billion grant.
Scientists claim success may lead to cures for various diseases like Parkinson’s.
It could also lead to intelligent robots and supercomputers which would dwarf those currently in existence.
Henry Markram, a neuroscientist at the École Polytechnique Fédérale de Lausanne in Switzerland, has assembled a team of nine top European scientists for the research effort.
‘This is one of the three grand challenges for humanity. We need to understand earth, space and the brain. We need to understand what makes us human,’ Markram told Germany’s Spiegel magazine.
The scientists and researchers working with the Human Brain Project believe that if they secure the funding, they will be able to replicate mankind’s most vital organ in 12 years.
The applications, if it succeeds, are enormous; drug companies, for instance, would be able to dramatically shorten testing times by trying new medicaments on the computer model instead of on humans.
Supercomputers at the Jülich Research Center near Cologne are earmarked to play a vital role in the research, which Markram says will involve ‘a tsunami of data.’
Jülich neuroscientist Katrin Amunts has begun work on a detailed atlas of the brain, which involved slicing one into 8,000 parts that were then digitized with a scanner.
Markram added: ‘It is not impossible to build a human brain. We can do it in just over 10 years.
‘This will, when successful, help two billion people annually who suffer from some type of brain impairment.’
The project has already created an artificial version of the neocortical column, a brain structure unique to mammals.
We have many of these columns to handle complex cognitive functions, including parenthood and social interactions.
It was digitally constructed using a software model of tens of thousands of neurons.
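For a sense of what a ‘software model’ of neurons involves at the smallest scale, here is a minimal leaky integrate-and-fire simulation, the simplest building block used in large network models. The parameters are generic textbook values, and the Human Brain Project’s models are far more detailed (multi-compartment, ion-channel-level):

```python
# Minimal leaky integrate-and-fire neuron: membrane voltage integrates input
# current with a leak, fires a spike at threshold, then resets.
dt, T = 1e-4, 0.5                       # time step and duration (seconds)
tau, v_rest = 20e-3, -70e-3             # membrane time constant, resting potential
v_thresh, v_reset = -50e-3, -70e-3      # spike threshold and post-spike reset
R, I = 1e8, 2.5e-10                     # membrane resistance (ohm), input current (A)

v, spikes = v_rest, []
for step in range(int(T / dt)):
    v += dt / tau * (v_rest - v + R * I)   # leaky integration of the input
    if v >= v_thresh:                      # threshold crossing: emit a spike
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in {T} s (~{len(spikes) / T:.0f} Hz)")
```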