Archive for November, 2010

Where Cinema and Biology Meet

Tuesday, November 16th, 2010

When Robert A. Lue considers the “Star Wars” Death Star, his first thought is not of outer space, but inner space.

“Luke’s initial dive into the Death Star, I’ve always thought, is a very interesting way to explore the surface of a cell,” he said.

That particular scene has not yet been tried, but Dr. Lue, a professor of cell biology and the director of life sciences education at Harvard, says it is one of many ideas he has for bringing visual representations of some of life’s deepest secrets to the general public.

Dr. Lue is one of the pioneers of molecular animation, a rapidly growing field that seeks to bring the power of cinema to biology. Building on decades of research and mountains of data, scientists and animators are now recreating in vivid detail the complex inner machinery of living cells.

The field has spawned a new breed of scientist-animators who not only understand molecular processes but also have mastered the computer-based tools of the film industry.

“The ability to animate really gives biologists a chance to think about things in a whole new way,” said Janet Iwasa, a cell biologist who now works as a molecular animator at Harvard Medical School.

Dr. Iwasa says she started working with visualizations when she saw her first animated molecule five years ago. “Just listening to scientists describe how the molecule moved in words wasn’t enough for me,” she said. “What brought it to life was really seeing it in motion.”

In 2006, with a grant from the National Science Foundation, she spent three months at the Gnomon School of Visual Effects, an animation boot camp in Hollywood, where, while she worked on molecules, her colleagues, all male, were obsessed with creating monsters and spaceships.

To compose her animations, Dr. Iwasa draws on publicly available resources like the Protein Data Bank, a comprehensive and growing database containing three-dimensional coordinates for all of the atoms in a protein. Though she no longer works in a lab, Dr. Iwasa collaborates with other scientists.

“All that we had before — microscopy, X-ray crystallography — were all snapshots,” said Tomas Kirchhausen, a professor in cell biology at Harvard Medical School and a frequent collaborator with Dr. Iwasa. “For me, the animations are a way to glue all this information together in some logical way. By doing animation I can see what makes sense, what doesn’t make sense. They force us to confront whether what we are doing is realistic or not.” For example, Dr. Kirchhausen studies the process by which cells engulf proteins and other molecules. He says animations help him picture how a particular three-legged protein called clathrin functions within the cell.

If there is a Steven Spielberg of molecular animation, it is probably Drew Berry, a cell biologist who works for the Walter and Eliza Hall Institute of Medical Research in Melbourne, Australia. Mr. Berry’s work is revered for artistry and accuracy within the small community of molecular animators, and has also been shown in museums, including the Museum of Modern Art in New York and the Centre Pompidou in Paris. In 2008, his animations formed the backdrop for a night of music and science at the Guggenheim Museum called “Genes and Jazz.”

“Scientists have always done pictures to explain their ideas, but now we’re discovering the molecular world and able to express and show what it’s like down there,” Mr. Berry said. “Our understanding is just exploding.”

In October, Mr. Berry was awarded a 2010 MacArthur Fellowship, which he says he will put toward developing visualizations that explore the patterns of brain activity related to human consciousness.

The new molecular animators are deeply aware that they are picking up where many talented scientist-artists left off. They are quick to pay homage to pioneers in molecular graphics like Arthur J. Olson and David Goodsell, both at the Scripps Research Institute in San Diego.

Perhaps the pivotal moment for molecular animations came four years ago with a video called “The Inner Life of the Cell.” Produced by BioVisions, a scientific visualization program at Harvard’s Department of Molecular and Cellular Biology, and a Connecticut-based scientific animation company called Xvivo, the three-minute film depicts marauding white blood cells attacking infections in the body. It was shown at the 2006 Siggraph conference, an annual convention of digital animation. After it was posted on YouTube, it garnered intense media attention.

BioVisions’ most recent animation, called “Powering the Cell: Mitochondria,” was released in October. It delves inside the complex molecules that reside in our cells and convert food into energy. Produced in high definition, “Powering the Cell” takes viewers on a swooping roller coaster ride through the microscopic machinery of the cell.

Sophisticated programs like Maya allow animators to create vibrant worlds from scratch, but that isn’t always necessary or desirable in biology. A company called Digizyme in Brookline, Mass., has developed a way for animators to pull data directly into Maya from the Protein Data Bank so that many of the over 63,000 proteins in the database can be easily rendered and animated.
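The Protein Data Bank that Digizyme draws on stores each structure as a plain-text file of fixed-column ATOM records, so any pipeline that pulls protein data into an animation package begins with a simple parse. This is an illustrative sketch only, not Digizyme's actual tool; the two sample records are excerpted from a real PDB entry, and the column offsets follow the published PDB file format:

```python
# Minimal sketch of reading atomic coordinates from PDB-format ATOM records.
# Column layout (atom name in cols 13-16, x in 31-38, y in 39-46, z in 47-54)
# comes from the PDB file-format specification.

SAMPLE = """\
ATOM      1  N   THR A   1      17.047  14.099   3.625  1.00 13.79           N
ATOM      2  CA  THR A   1      16.967  12.784   4.338  1.00 10.80           C
"""

def parse_atoms(pdb_text):
    """Return a list of (atom_name, x, y, z) tuples from ATOM records."""
    atoms = []
    for line in pdb_text.splitlines():
        if line.startswith("ATOM"):
            name = line[12:16].strip()
            x = float(line[30:38])
            y = float(line[38:46])
            z = float(line[46:54])
            atoms.append((name, x, y, z))
    return atoms

atoms = parse_atoms(SAMPLE)
print(atoms[0])  # ('N', 17.047, 14.099, 3.625)
```

Once coordinates are in hand like this, they can be handed to a renderer or an animation package as vertex data; the real pipeline, of course, also handles bonds, chains and the tens of thousands of entries in the database.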

Gaël McGill, Digizyme’s chief executive, says access to this data is critical to scientific accuracy. “For us the starting point is always the science,” Dr. McGill said. “Do we have data to support the image we’re going to create?”

Indeed, while enthusiasm runs high among those directly involved in the field, others in the scientific community are uncertain about the value of these animations for actual scientific research. While acknowledging the potential to help refine a hypothesis, for example, some scientists say that visualizations can quickly veer into fiction.

“Some animations are clearly more Hollywood than useful display,” says Peter Walter, an investigator at the Howard Hughes Medical Institute in San Francisco. “It can become hard to distinguish between what is data and what is fantasy.”

Dr. McGill acknowledges that showing cellular processes can involve a significant dose of conjecture. Animators take liberty with color and space, among other qualities, in order to highlight a particular function or part of the cell. “All the events we are depicting are so small they are below the wavelength of light,” he said.

But he contends that these visualizations will be increasingly necessary in a world awash in data. “In the face of increasing complexity, and increasing data, we’re faced with a major problem,” Dr. McGill said.

Certainly, visualization will play a significant part in education. The Harvard biologist E.O. Wilson is leading a project to develop a next-generation digital biology textbook that integrates complex visualizations as a core part of the curriculum. Called “Life on Earth,” the project will include visualizations from Mr. Berry and is being overseen by Dr. McGill, who believes it could change how students learn biology.

“I think visualization is going to be the key to the future,” Dr. McGill said.

Source | New York Times

Treading the Circuit Boards: Robot Actress Takes to the Stage

Tuesday, November 16th, 2010

A life-like robot called Geminoid F makes its acting debut in Japan. Does this spell the end for “real” performers?

The robot is an exact copy of a woman in her 20s and is able to smile and frown (though not at the same time…so fret not, Keira Knightley!). What’s more, the debut performance in Osaka went down well with the audience, if not with its human peers. “I kind of feel like I’m alone on stage,” said Bryerly Long, the robot’s human co-star in the play “Sayonara.” “There’s a bit of distance … [and not] a human presence.”


Geminoid F is seated for the duration of this short play with her actions controlled from backstage by a human. Japanese roboticist Hiroshi Ishiguro is responsible for Geminoid F and has previously made models of himself and his four-year-old daughter.


And while this might seem like a novelty, the attractive retail price of ¥10 million (roughly $120,000) could persuade low-budget movie producers to take the plunge, since the inevitable publicity and curiosity factor would end up paying for itself. But just wait until the Geminoid Fs of this world start insisting on huge trailers, fresh flowers at all times and warm milk stirred counter-clockwise (something NewsFeed once witnessed in front of A Very Famous Star), and we’ll surely revert to human divas. For now, though, the future is yet again here, and it’s oh so scary.

Source | The News Feed

Lockheed Martin tests next-generation design of its robotic exoskeleton

Sunday, November 14th, 2010

Lockheed Martin recently began laboratory testing of an improved next-generation design of its HULC advanced robotic exoskeleton.

The testing now under way will validate the ruggedized system’s capabilities and reliability in a variety of simulated battlefield conditions, and brings HULC a step closer to readiness to support troops on the ground and others who must carry heavy loads.

HULC is an untethered, battery-powered, hydraulically actuated anthropomorphic exoskeleton capable of performing deep squats, crawls and upper-body lifting with minimal human exertion.

It is designed to transfer the weight from heavy loads to the ground through the robotic legs of the lower-body exoskeleton, taking the weight off of the operator.

An advanced onboard microcomputer ensures the exoskeleton moves in concert with the operator.

Lockheed Martin further refined the HULC’s form and fit, allowing the operator to adapt to the exoskeleton in less time. The ruggedized structure allows for rapid, repeatable adjustments to the torso and thigh length, without special tools, to better suit a wider variety of users.

It also conforms to the body and incorporates lumbar padding for comfort and support. Additionally, the upgraded HULC features improved control software to better track the user’s movements.

Source | Lockheed Martin

How to erase a memory

Sunday, November 14th, 2010

Researchers working with mice have discovered that by removing a protein from the region of the brain responsible for recalling fear, they can permanently delete traumatic memories.

“When a traumatic event occurs, it creates a fearful memory that can last a lifetime and have a debilitating effect on a person’s life,” says Richard L. Huganir, Ph.D., professor and director of neuroscience at the Johns Hopkins University School of Medicine and a Howard Hughes Medical Institute investigator. “Our finding describing these molecular and cellular mechanisms involved in that process raises the possibility of manipulating those mechanisms with drugs to enhance behavioral therapy for such conditions as post-traumatic stress disorder.”

Huganir and postdoctoral fellow Roger Clem focused on the nerve circuits in the amygdala, the part of the brain known to underlie so-called fear conditioning in people and animals. Using sound to cue fear in mice, they observed that certain cells in the amygdala conducted more current after the mouse was exposed to a loud, sudden tone.

In hopes of understanding the molecular underpinnings of fear memory formation, the team further examined the proteins in the nerve cells of the amygdala before and after exposure to the loud tone. They found temporary increases in the amount of particular proteins — calcium-permeable AMPARs — within a few hours of fear conditioning; the increases peaked at 24 hours and disappeared 48 hours later.

Because these particular proteins are uniquely unstable and can be removed from nerve cells, the scientists proposed that combining behavior therapy with protein removal might permanently erase fear, providing a window of opportunity for treatment. “The idea was to remove these proteins and weaken the connections in the brain created by the trauma, thereby erasing the memory itself,” says Huganir.

In further experiments, they found that removal of these proteins depends on the chemical modification of the GluA1 protein. Mice lacking this chemical modification of GluA1 recovered fear memories induced by loud tones, whereas littermates that still had normal GluA1 protein did not recover the same fear memories. Huganir suggests that drugs designed to control and enhance the removal of calcium-permeable AMPARs may be used to improve memory erasure.

“This may sound like science fiction, the ability to selectively erase memories,” says Huganir. “But this may one day be applicable for the treatment of debilitating fearful memories in people, such as post-traumatic stress syndrome associated with war, rape or other traumatic events.”

This study was funded by the National Institutes of Health and the Howard Hughes Medical Institute.

Source | Johns Hopkins Medicine

Learn in your Sleep

Sunday, November 14th, 2010

A new study published in the Journal of Neuroscience by researchers at the University of York and Harvard Medical School suggests that sleep helps people to remember a newly learned word and incorporate new vocabulary into their “mental lexicon.”

During the study, which was funded by the Economic and Social Research Council, researchers taught volunteers new words in the evening, followed by an immediate test. The volunteers slept overnight in the laboratory while their brain activity was recorded using an electroencephalogram, or EEG. A test the following morning revealed that they could remember more words than they did immediately after learning them, and they could recognize them faster, demonstrating that sleep had strengthened the new memories.

When the researchers examined whether the new words had been integrated with existing knowledge in the mental lexicon, they discovered the involvement of a different type of activity in the sleeping brain. Sleep spindles are brief but intense bursts of brain activity that reflect information transfer between different memory stores in the brain — the hippocampus deep in the brain and the neocortex, the surface of the brain.

Memories in the hippocampus are stored separately from other memories, while memories in the neocortex are connected to other knowledge. Volunteers who experienced more sleep spindles overnight were more successful in connecting the new words to the rest of the words in their mental lexicon, suggesting that the new words were communicated from the hippocampus to the neocortex during sleep.

Co-author of the paper, Professor Gareth Gaskell, of the University of York’s Department of Psychology, said: “We suspected from previous work that sleep had a role to play in the reorganization of new memories, but this is the first time we’ve really been able to observe it in action, and understand the importance of spindle activity in the process.”

These results highlight the importance of sleep and the underlying brain processes for expanding vocabulary. But the same principles are likely to apply to other types of learning.

Lead author, Dr Jakke Tamminen, said: “New memories are only really useful if you can connect them to information you already know. Imagine a game of chess, and being told that the rule governing the movement of a specific piece has just changed. That new information is only useful to you once you can modify your game strategy, the knowledge of how the other pieces move, and how to respond to your opponent’s moves. Our study identifies the brain activity during sleep that organizes new memories and makes those vital connections with existing knowledge.”

Source | EurekAlert

First 3D-Printed Car Hits The Road

Sunday, November 14th, 2010

The Urbee has been an Automotive X Prize candidate and will appear on Daily Planet, the flagship program of Canada’s Discovery Channel. The car, designed by Kor Ecologic of Winnipeg, Canada, is an electric/liquid-fuel hybrid that will get the equivalent of over 200 mpg on the highway and 100 mpg in the city.

But it is also the first car ever to have its entire body printed out on a giant 3D printer.

According to a press release from Stratasys:

Urbee is the first prototype car ever to have its entire body 3D printed with an additive process. All exterior components – including the glass panel prototypes – were created using Dimension 3D Printers and Fortus 3D Production Systems at Stratasys’ digital manufacturing service – RedEye on Demand.

The designers at Kor point out the benefits of Fused Deposition Modelling:

“Our goal in designing it was to be as ‘green’ as possible throughout the design and manufacturing processes. FDM technology from Stratasys has been central to meeting that objective. FDM lets us eliminate tooling, machining, and handwork, and it brings incredible efficiency when a design change is needed. If you can get to a pilot run without any tooling, you have advantages.”

The implications for building prototypes are obvious; you go straight from computer to finished part in a lot less time. But imagine a few years down the road, when everyone might order up the car body of their choice from a catalogue and just bolt it on a standard chassis. Ding the side? Just print up a replacement.

Goals of the Urbee Project:

1. Use the least amount of energy possible for every kilometre traveled.
2. Cause as little pollution as possible during manufacturing, operation and recycling of the car.
3. Use materials available as close as possible to where the car is built.
4. Use materials that can be recycled again and again.
5. Use parts and materials that last as long as possible.
6. Be simple to understand, build, and repair.
7. Be as safe as possible to drive.
8. Meet the standards and regulations applicable to traditional cars.
9. Be buildable in small quantities so we don’t have to wait for it to become more widely accepted before we can begin manufacturing it for the public.
10. Be mass-producible so it can be built more economically once it becomes more widely accepted.
11. Be affordable.
12. Be visually appealing.

Source | Treehugger

Three-dimensional moving holograms breakthrough announced

Sunday, November 14th, 2010

A team led by University of Arizona (UA) optical sciences professor Nasser Peyghambarian has developed a new type of “holographic telepresence” that allows remote projection of a three-dimensional, moving image without the need for special eyewear such as 3D glasses or other auxiliary devices.

The technology is likely to take applications ranging from telemedicine, advertising, updatable 3D maps and entertainment to a new level.

The journal Nature chose the technology to feature on the cover of its Nov. 4 issue.

“Holographic telepresence means we can record a three-dimensional image in one location and show it in another location, in real-time, anywhere in the world,” said Peyghambarian, who led the research effort.

“Holographic stereography has been capable of providing excellent resolution and depth reproduction on large-scale 3D static images,” the authors wrote, “but has been missing dynamic updating capability until now.”

“At the heart of the system is a screen made from a novel photorefractive material, capable of refreshing holograms every two seconds, making it the first to achieve a speed that can be described as quasi-real-time,” said Pierre-Alexandre Blanche, an assistant research professor in the UA College of Optical Sciences and lead author of the Nature paper.

The prototype device uses a 10-inch screen, but Peyghambarian’s group is already successfully testing a much larger version with a 17-inch screen. The image is recorded using an array of regular cameras, each of which views the object from a different perspective. The more cameras that are used, the more refined the final holographic presentation will appear.

That information is then encoded onto a fast-pulsed laser beam, which interferes with another beam that serves as a reference. The resulting interference pattern is written into the photorefractive polymer, creating and storing the image. Each laser pulse records an individual “hogel” in the polymer. A hogel (holographic pixel) is the three-dimensional version of a pixel, the basic unit that makes up a picture.

The hologram fades away by natural dark decay after a couple of minutes or seconds, depending on experimental parameters. Or it can be erased by recording a new 3D image, creating a new diffraction structure and deleting the old pattern.

Peyghambarian explained: “Let’s say I want to give a presentation in New York. All I need is an array of cameras here in my Tucson office and a fast Internet connection. At the other end, in New York, there would be the 3D display using our laser system. Everything is fully automated and controlled by computer. As the image signals are transmitted, the lasers inscribe them into the screen and render them into a three-dimensional projection of me speaking.”

The overall recording setup is insensitive to vibration because of the short pulse duration, and is therefore suited to industrial environments without any special vibration, noise or temperature control.

One of the system’s major hallmarks never achieved before is what Peyghambarian’s group calls full parallax: “As you move your head left and right or up and down, you see different perspectives. This makes for a very life-like image. Humans are used to seeing things in 3D.”

The work is a result of a collaboration between the UA and Nitto Denko Technical, or NDT, a company in Oceanside, Calif. NDT provided the polymer sample and media preparation. “We have made major advances in photorefractive polymer film fabrication that allow for the very interesting 3D images obtained in our Nature article,” said Michiharu Yamamoto, vice president at NDT and co-author of the paper.

Potential applications of holographic telepresence include advertising, updatable 3D maps and entertainment. Telemedicine is another potential application: “Surgeons at different locations around the world can observe in 3D, in real time, and participate in the surgical procedure,” the authors wrote.

The system is a major advance over computer-generated holograms, which place high demands on computing power and take too long to be generated to be practical for any real-time applications.

Currently, the telepresence system can present in one color only, but Peyghambarian and his team have already demonstrated multi-color 3D display devices capable of writing images at a faster refresh rate, approaching the smooth transitions of images on a TV screen. These devices could be incorporated into a telepresence setup in the near future.

Source | University of Arizona

Phantom images stored in flexible network throughout brain

Sunday, November 14th, 2010

Brain research over the past 30 years has shown that if a part of the brain controlling movement or sensation or language is lost because of a stroke or injury, other parts of the brain can take over the lost function – often as well as the region that was lost.

New research at the University of California, Berkeley, shows that this holds true for memory and attention as well, though — at least for memory — the intact brain helps out only when needed and conducts business as usual when it’s not.

These results support the hypothesis that memory is not stored in one place, but rather, is distributed in many regions of the brain, which means that damage to one storage area is easier to compensate for.

“It’s not just specific regions, but a whole network, that’s supporting memory,” said Bradley Voytek, a UC Berkeley postdoctoral fellow in the Helen Wills Neuroscience Institute and first author of two recent journal articles describing EEG (electroencephalogram) studies of people with strokes. Voytek recently completed his Ph.D. in neuroscience at UC Berkeley.

“The view has always been, if you lose point A, point B will be on all the time to take over,” said co-author Dr. Robert Knight, UC Berkeley professor of psychology and head of the Wills Institute. “Brad has shown that’s not true. It actually only comes on if it’s needed.

“Most of the time, it acts like a normal piece of brain tissue. It only kicks into hyperdrive when the bad part of the brain is particularly challenged, and it does it in less than a second. This is a remarkably fluid neural plasticity, but it isn’t the standard ‘B took over for A,’ it’s really ‘B will take over if and when needed.’”

One of the papers, published Nov. 3 in the online edition of Neuron and scheduled for the Nov. 4 print issue of the journal, describes a study of stroke patients who have lost partial function in their prefrontal cortex, the area at the top front of each hemisphere of the brain that governs memory and attention.

Voytek put electrodes on the scalps of six stroke patients as well as six controls with normal prefrontal cortex function, and showed each patient a series of pictures to test his or her ability to remember images for a brief time, so-called visual working memory. Visual working memory is what allows us to compare two objects, keeping one in memory while we look at another, as when we choose the ripest of two bananas.

“We presented each subject with a really quick flash of a visual stimulus and then showed them a second one a little while later, and they had to say whether it was the same as the first,” Voytek explained. “The idea is that you’re building a representation of your visual world somehow in your brain — and we don’t know how that happens — so that later you can compare this internal phantom representation you’re holding in your mind to a real world visual stimulus, something you actually see. These patients can’t do that as well.”

EEGs provide millisecond measurements of brain activity, though they do not pinpoint active areas as precisely as other techniques, such as functional magnetic resonance imaging (fMRI). On the other hand, fMRI averages brain activity over seconds, making it impossible to distinguish split-second brain processes or even tell which occur first.

The neuroscientists discovered that when images were shown in the visual field opposite the lesion (input from the left visual field goes to the right hemisphere, and vice versa), the damaged prefrontal cortex did not respond, but the intact prefrontal cortex on the same side as the image responded within 300 to 600 milliseconds.

“EEG, which is very good for looking at the timing of activity in the brain, showed that part of the brain is compensating on a subsecond basis,” Voytek said. “It is very rapid compensation: Within a second of challenging the bad side, the intact side of the brain is coming online to pick up the slack.”
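The latency figures above come from the standard event-related analysis of EEG: many stimulus-locked epochs are averaged sample by sample so that random noise cancels while the time-locked response survives, and the peak latency is then read off in milliseconds. The sketch below is purely illustrative (synthetic data, made-up numbers), not the study's actual pipeline:

```python
import random

# Illustrative sketch: averaging stimulus-locked EEG epochs yields an
# event-related potential whose peak latency can be read off in ms.
# All signal parameters here are invented for demonstration.

FS = 1000                      # sampling rate in Hz -> 1 sample per ms
EPOCH_MS = 800                 # epoch length after stimulus onset
random.seed(0)

def simulated_trial(peak_ms=450):
    """One noisy epoch with a triangular response peaking at `peak_ms`."""
    return [random.gauss(0, 1) + 5 * max(0, 1 - abs(t - peak_ms) / 100)
            for t in range(EPOCH_MS)]

trials = [simulated_trial() for _ in range(50)]

# Average across trials sample-by-sample to suppress uncorrelated noise.
erp = [sum(tr[t] for tr in trials) / len(trials) for t in range(EPOCH_MS)]

# Locate the peak latency within the 300-600 ms window of interest.
peak_latency = max(range(300, 600), key=lambda t: erp[t])
print(peak_latency)  # near the simulated 450 ms peak
```

With 50 trials the trial-averaged noise shrinks by a factor of about seven, which is what lets a sub-second response like the 300-600 ms compensation effect stand out.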

“This has implications for what physicians measure to see if there’s effective recovery after stroke,” Knight said, “and suggests that you can take advantage of this to train the area you would like to take over from a damaged area instead of just globally training the brain.”

In a second paper that appeared online Oct. 4 in the journal Proceedings of the National Academy of Sciences, Voytek and Knight looked at visual working memory in patients with damage not only to the prefrontal cortex, but also to the basal ganglia. The basal ganglia are a pair of regions directly below the brain’s cortex that are involved in motor control and learning and that are impaired in patients with Parkinson’s disease.

The patients with stroke damage to the prefrontal cortex had, as suspected, problems when images were presented in the visual field opposite the lesion. Those with basal ganglia damage, however, had problems with visual working memory no matter where in the visual field the image appeared.

“The PNAS paper shows that the basal ganglia lesions cause a more broad network deficit, whereas the prefrontal cortex lesions cause a more within-hemisphere deficit in memory,” Voytek said. “This demonstrates, again, that memory is a network phenomenon rather than a specifically regional phenomenon.”

“If you take out one basal ganglia, the logic would be that you would be Parkinsonian on half your body. But you’re not,” Knight said. “One basal ganglia on one side is able to somehow control fluid movement on both sides.”

“Brad’s data show that for cognitive control, it’s just the opposite. One small basal ganglia lesion on one side has global effects on both sides of your body,” he added. “This really points out that for this deep subcortical basal ganglia area, you need all of it to function normally. I don’t think anybody would have really suspected that.”

Knight hopes to conduct follow-up studies using direct recordings from electrodes in the brain to further explore the various brain regions involved in visual memory and other types of memory and attention governed by the prefrontal cortex.

“Cognition and memory are the highest forms of human behavior,” Knight said. “It is not just about raising or lowering your hand, or whether you can or cannot see. These are the things that make us human, and that is what makes it so interesting for us.”

Other coauthors of the Neuron paper are Matar Davis and Elena Yago of UC Berkeley’s Helen Wills Neuroscience Institute; Francisco Barceló of the Institut Universitari d’Investigació en Ciències de la Salut at the Universitat de les Illes Balears in Palma de Mallorca, Spain; and Edward K. Vogel of the University of Oregon in Eugene.

The work was supported by the National Institute of Neurological Disorders and Stroke of the National Institutes of Health, and by an American Psychological Association Diversity Program in Neuroscience grant to Voytek.

Source | UC Berkeley

Researchers Engineer Miniature Human Livers in the Lab

Sunday, November 14th, 2010

Researchers at the Institute for Regenerative Medicine at Wake Forest University Baptist Medical Center are the first to use human liver cells to successfully engineer miniature livers that function – at least in a laboratory setting — like human livers. The next step is to see if the livers will continue to function after transplantation in an animal model.

The ultimate goal of the research, presented at the annual meeting of the American Association for the Study of Liver Diseases in Boston and published in an upcoming issue of the journal Hepatology, is to provide a solution to the shortage of donor livers available for patients who need transplants. Laboratory-engineered livers could also be used to test the safety of new drugs.

“We are excited about the possibilities this research represents, but must stress that we’re at an early stage and many technical hurdles must be overcome before it could benefit patients,” said Shay Soker, Ph.D., professor of regenerative medicine and project director. “Not only must we learn how to grow billions of liver cells at one time in order to engineer livers large enough for patients, but we must determine whether these organs are safe to use in patients.”

Pedro Baptista, PharmD, Ph.D., lead author on the study, said the project is the first time that human liver cells have been used to engineer livers in the lab. “Our hope is that once these organs are transplanted, they will maintain and gain function as they continue to develop,” he said.

The engineered livers, which are about an inch in diameter and weigh about 0.2 ounces, would have to weigh about one pound to meet the minimum needs of the human body, the scientists said. Even at this larger size, the organs wouldn’t be as large as human livers, but would likely provide enough function. Research has shown that human livers functioning at 30 percent of capacity can sustain the human body.
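Taking the article's figures at face value (a roughly 0.2-ounce prototype versus a one-pound target) and assuming roughly constant tissue density, a quick back-of-the-envelope calculation shows what that scale-up means in linear dimensions:

```python
# Back-of-the-envelope scale-up check using the figures quoted above.
# Assumes roughly constant tissue density, so mass scales with volume.

prototype_oz = 0.2
target_oz = 16.0                         # about one pound

mass_ratio = target_oz / prototype_oz    # ~80x more tissue needed
linear_ratio = mass_ratio ** (1 / 3)     # cube root: volume -> length

print(round(mass_ratio))        # 80
print(round(linear_ratio, 1))   # ~4.3: a 1-inch organ grows to ~4.3 inches
```

In other words, the challenge Dr. Soker describes is growing about 80 times the mass of cells, even though the finished organ would only be a few times wider than the current prototype.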

To engineer the organs, the scientists used animal livers that were treated with a mild detergent to remove all cells (a process called decellularization), leaving only the collagen “skeleton” or support structure. They then replaced the original cells with two types of human cells: immature liver cells known as progenitors, and endothelial cells that line blood vessels.

The cells were introduced into the liver skeleton through a large vessel that feeds a system of smaller vessels in the liver. This network of vessels remains intact after the decellularization process. The liver was next placed in a bioreactor, special equipment that provides a constant flow of nutrients and oxygen throughout the organ.

After a week in the bioreactor system, the scientists documented the progressive formation of human liver tissue, as well as liver-associated function. They observed widespread cell growth inside the bioengineered organ.

The ability to engineer a liver with animal cells had been demonstrated previously. However, the possibility of generating a functional human liver was still in question.

The researchers said the current study suggests a new approach to whole-organ bioengineering that might prove critical not only for treating liver disease, but also for growing organs such as the kidney and pancreas. Scientists at the Wake Forest Institute for Regenerative Medicine are working on these projects, as well as on many other tissues and organs, and are also developing cell therapies to restore organ function.

Bioengineered livers could also be useful for evaluating the safety of new drugs. “This would more closely mimic drug metabolism in the human liver, something that can be difficult to reproduce in animal models,” said Baptista.

Source | Wake Forest University Baptist Medical Center

Air Force Wants Neuroweapons to Overwhelm Enemy Minds

Sunday, November 14th, 2010

It sounds like something a wild-eyed basement-dweller would come up with, after he complained about the fit of his tinfoil hat. But military bureaucrats really are asking scientists to help them “degrade enemy performance” by attacking the brain’s “chemical pathway[s].” Let the conspiracy theories begin.

Late last month, the Air Force Research Laboratory’s 711th Human Performance Wing revamped a call for research proposals examining “Advances in Bioscience for Airmen Performance.” It’s a six-year, $49 million effort to deploy extreme neuroscience and biotechnology in the service of warfare.

One suggested research thrust is to use “external stimulant technology to enable the airman to maintain focus on aerospace tasks and to receive and process greater amounts of operationally relevant information.” (Something other than modafinil, I guess.) Another asks scientists to look into “fus[ing] multiple human sensing modalities” to develop the “capability for Special Operations Forces to rapidly identify human-borne threats.” No, this is not a page from The Men Who Stare at Goats.

But perhaps the oddest, and most disturbing, of the program’s many suggested directions is the one that notes: “Conversely, the chemical pathway area could include methods to degrade enemy performance and artificially overwhelm enemy cognitive capabilities.” That’s right: the Air Force wants a way to fry foes’ minds — or at least make ‘em a little dumber.

It’s the kind of official statement that’s seized on by anyone who is sure that the CIA planted a microchip in his head, or thinks that the Air Force is controlling minds with an antenna array in Alaska. The same could be said about the 711th’s call to “develo[p] technologies to anticipate, find, fix, track, identify, characterize human intent and physiological status anywhere and at anytime.”

The ideas may sound wild. They are wild. But the notions aren’t completely out of the military-industrial mainstream. For years, armed forces and intelligence community researchers have toyed with ways of manipulating minds. During the Cold War, the CIA and the military allegedly plied the unwitting with dozens of psychoactive drugs, in a series of zany (and sometimes dangerous) mind-control experiments. More recently, the Pentagon’s most revered scientific advisory board warned in 2008 that adversaries could develop enhancements to their “cognitive capabilities … and thus create a threat to national security.” The National Research Council and Defense Intelligence Agency followed suit, pushing for pharma-based tactics to weaken enemy forces. In recent months, the Pentagon has funded projects to optimize troops’ minds, prevent injuries, preemptively assess vulnerability to traumatic stress, and even conduct “remote control of brain activity using ultrasound.”

The Air Force is warning potential researchers that this project “may require top secret clearance.” They’ll also need a high tolerance for seemingly loony theories — sparked by the military itself.

Source | Wired

Scientists make human blood from human skin

Sunday, November 14th, 2010

In a major breakthrough, scientists at McMaster University in Canada have discovered how to make human blood from adult human skin.

The discovery, published Sunday in Nature, could mean that in the foreseeable future, people needing blood for surgery, cancer treatment or treatment of other blood conditions like anemia will be able to have blood directly created from a patch of their own skin to provide transfusions. Clinical trials could begin as soon as 2012.

Making blood from skin does not require the intermediate step of changing a skin stem cell into a pluripotent stem cell, which could make many other types of human cells, and then turning that into a blood stem cell. The direct route avoids several problems with embryonic stem cells, including ethical objections, immune rejection, limited hospital resources, and limited quantities.

“We have shown this works using human skin. We know how it works and believe we can even improve on the process,” said Mick Bhatia, scientific director of McMaster’s Stem Cell and Cancer Research Institute in the Michael G. DeGroote School of Medicine. The team plans to develop other human cell types from skin.

The discovery was replicated several times over two years using human skin from both young and old people to show that it works for people of any age.

John Kelton, hematologist and dean and vice-president of health sciences for McMaster University said: “I find this discovery personally gratifying for professional reasons. During my 30 years as a practicing blood specialist, my colleagues and I have been pleased to help care for cancer patients whose lives were saved by bone marrow transplants. For all physicians, but especially for the patients and their families, the illness became more frustrating when we were prevented from giving a bone marrow transplant because we could not find a perfect donor match in the family or the community. Dr. Bhatia’s discovery could permit us to help this important group of patients.”

“The pioneering findings published today are the first to demonstrate that human skin cells can be directly converted into blood cells, via a programming process that bypasses the pluripotent stage,” said Cynthia Dunbar, MD, Head, Molecular Hematopoiesis Section, Hematology Branch, National Heart, Lung and Blood Institute, U.S. National Institutes of Health. “Producing blood from a patient’s own skin cells has the potential of making bone marrow transplant HLA matching and paucity of donors a thing of the past.

“Bhatia’s approach detours around the pluripotent stem cell stage and thus avoids many safety issues, increases efficiency, and also has the major benefit of producing adult-type blood cells instead of fetal blood cells, a major advantage compared to the thus far disappointing attempts to produce blood cells from human ESCs or iPSCs.”

This research was funded by the Canadian Institutes of Health Research, the Canadian Cancer Society Research Institute, the Stem Cell Network and the Ontario Ministry of Research and Innovation.

Source | McMaster University

Electrical brain stimulation improves math skills

Sunday, November 14th, 2010

Are you bad at sums? Get muddled at the market? If so, you could benefit from a machine that improves your mathematical abilities. It’s not such a strange suggestion. Stimulating a particular area of the brain, it turns out, can improve numeracy for at least six months.

In 2007, Roi Cohen Kadosh at the University of Oxford and colleagues pinned down the area of the brain responsible for mathematical ability to the right parietal lobe, just above the right ear.

His team “short-circuited” this area using transcranial magnetic stimulation (TMS), a stream of magnetic pulses that temporarily disables a targeted area of the brain. The result, they found, was that people’s ability to perform numerical tasks fell. In fact, their performance resembled that of people with dyscalculia, who have difficulty comprehending mathematics.

Now they have done the reverse, and improved the brain’s arithmetical abilities. To do this the team applied transcranial direct current stimulation (tDCS), a way of enhancing brain activity using an electric current, to the right parietal cortex while simultaneously using the opposite current to subdue activity in the left parietal cortex.

Number puzzle

tDCS changes the voltage across neurons and can make them more or less likely to fire. Cohen Kadosh’s team zapped volunteers while they were shown made-up symbols representing the numbers 1 to 9. The volunteers had no idea which symbol stood for which number at the start of the test, but they gradually worked this out through trials in which they were asked which of two symbols was numerically higher; once they had given their answer, they were shown the correct one.

After each session, which involved hundreds of such calculations, they were given tests to see how well they could perform mathematical calculations using the symbols. Those given tDCS learned the symbols faster and did better in the tests than those subjected to a sham procedure.
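The training loop described above can be sketched in a few lines. This is a hypothetical simulation, not the study’s protocol or code: nine stand-in symbols map to hidden values, and simply tallying the trial-by-trial feedback is enough to recover the full ordering the participants had to learn.

```python
import random

# Hypothetical simulation of the training task described above (not the
# study's code): nine arbitrary symbols stand for the hidden numbers 1-9.
# On each trial the "volunteer" sees two symbols, and feedback reveals
# which one was numerically higher. Tallying that feedback recovers the
# full ordering, which is what the participants gradually learned.

random.seed(0)
symbols = list("ABCDEFGHI")                          # stand-ins for the made-up glyphs
hidden = {s: i + 1 for i, s in enumerate(symbols)}   # true values, unknown to the learner

wins = {s: 0 for s in symbols}   # times each symbol was the correct "higher" answer
for _ in range(20000):           # many sessions of "hundreds of such calculations"
    a, b = random.sample(symbols, 2)
    correct = a if hidden[a] > hidden[b] else b
    wins[correct] += 1           # the feedback given after every trial

learned_order = sorted(symbols, key=lambda s: wins[s])
true_order = sorted(symbols, key=lambda s: hidden[s])
print(learned_order == true_order)
```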

When the subjects were tested six months later, those who had been given tDCS still did better than those who hadn’t. “It is already known that tDCS affects neurotransmitters involved in learning, memory and plasticity, so we presume that these are being manipulated in this study to cause long-term changes in the brain,” says Cohen Kadosh.

While the results show an enhanced association between arbitrary symbols and numbers, they don’t necessarily isolate number skills because the effects of brain stimulation weren’t compared with a non-numerical task, says Christopher Chambers at Cardiff University, UK. “So while the results are exciting, I think it remains to be seen whether the effects are specific for numerical competence, or whether they translate to other abilities that depend on learning.”

Improved numeracy

“This isn’t going to turn you into a genius,” says Cohen Kadosh, “but it could be turned into a device to help children with poor numeracy skills improve their mathematical abilities”.

“I think this is a truly brilliant finding from an outstanding team and one which could have profound ramifications for future investigations of enhanced cognition by non-invasive brain stimulation,” says Allan Snyder, director of the Centre for the Mind at the University of Sydney, Australia, who was not involved in the study. “Repeated applications of tDCS are believed by some to have had a long-term effect on mitigating depression, so I am not altogether surprised by their finding of long-lasting effects.”

Source | New Scientist

Home security robots hit the market

Sunday, November 14th, 2010

When Robert Oschler, a programmer, leaves his home, he knows it is secure. And if he ever has cause for concern, he can open his laptop and survey the house through the eyes of his watchdogs.

“I don’t have any pets. I just have pet robots, and they’re pretty well behaved,” Oschler said. “Fortunately I’ve never logged in and seen a human face.”

His robot, a modified version of the Rovio from WowWee, has a camera, microphone and speakers atop a three-wheeled platform. From anywhere with a Net connection, he can send his robot zipping around the house, returning a video signal along the way.

“As creepy as it sounds, you could even talk to the guy and say, ‘Get out of there. There’s nothing valuable. I’m calling the police,’ ” he said.

For all its power and ability, the Rovio is usually found in a store’s toy section for about $170. Other robots from toy makers, like Meccano, are there as well. Outfitting a house with a fleet of robot guards is no longer just for those with the wealth of Bond villains.

Home security is blossoming for toy makers who can match the technical power and flexibility of the computer industry with the mass-market prices that come from large production runs. The low prices are a trade-off, however: many people find that the reliability of the cheaper robots is adequate for home experimentation but far from sufficient for guarding Fort Knox.

“You should buy two,” said Oschler, who lives in South Florida.

The off-the-shelf unit is ready to explore after a simple installation involving the computer, but Oschler added a few enhancements to the software, which he distributes online. His version improves the audio and video quality and offers more sophisticated programming options that create routines and paths for the robots to follow.

Oschler has even wired his robot to a headset that picks up the subtle electrical activity produced by his brain.

“When I tilt my head, the robot goes left. When you do that, it’s a Matrix-like moment,” he said proudly.

Other robot owners have modified their guard-bots, too. Peter Redmer, of Illinois, an online community manager, said his site gathered the collective wisdom of toy-robot hobbyists. One hobbyist in China, Qiaosong Wang, posted pictures of his Rovio after he added a small fire extinguisher and software that can detect the shape of fire.

“One of the goals is to create something that the consumer can enjoy without pricing it at $5,000 or $10,000 with military-grade technology,” Redmer said. Others have experimented with adding software for aiming the camera or enhancements like better lights for patrolling at night.

Redmer said he was most interested in the Parrot AR.Drone, a flying robot priced at $300. “It flies. How much cooler does it get?” he asked.

Not all of the innovation is attached to something that moves. Several companies are matching sophisticated artificial intelligence algorithms with video cameras. These systems monitor the video feed and sound alarms when objects of a certain shape appear.

I tried some software called Vitamin D that lets me watch my office. It raises flags — by beeping — whenever anyone walks in.
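Vitamin D’s detection algorithms are proprietary, but the basic trigger such monitoring software builds on can be sketched with simple frame differencing. Everything below is illustrative, not the product’s actual code; real systems layer object-shape recognition on top of a change detector like this.

```python
# Hypothetical sketch of the basic trigger behind motion-alarm software
# (not Vitamin D's actual, proprietary algorithms): compare successive
# grayscale frames and flag any frame whose pixels changed more than a
# threshold. Real products add object-shape recognition on top of this.

def motion_score(prev, curr):
    """Mean absolute pixel difference between two equal-sized frames."""
    total = sum(abs(p - c)
                for row_p, row_c in zip(prev, curr)
                for p, c in zip(row_p, row_c))
    return total / (len(prev) * len(prev[0]))

def detect_motion(frames, threshold=10.0):
    """Yield indices of frames that differ noticeably from the one before."""
    for i in range(1, len(frames)):
        if motion_score(frames[i - 1], frames[i]) > threshold:
            yield i   # a real system would beep or raise a flag here

# Toy example: a static 4x4 scene, then a bright "object" enters frame 2.
static = [[0] * 4 for _ in range(4)]
moved = [row[:] for row in static]
moved[1][1] = moved[1][2] = 255
print(list(detect_motion([static, static, moved])))   # frame 2 triggers
```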

Source | Mercury News

Chip-in-a-pill may be approved in 2012

Sunday, November 14th, 2010

The first application of the chip-in-a-pill — or as it is officially known, the Ingestible Event Marker (IEM) — is expected to be for transplant patients, to help avoid organ rejection. A common problem after transplant operations is that the dose and timing of anti-rejection drugs has to be monitored and frequently adjusted to prevent rejection of the transplanted organ, such as a kidney. The IEM would overcome this problem, since it would closely monitor patients to determine whether the drugs are being taken at the right time and in the correct dosage.
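The kind of adherence check described here can be sketched simply. Proteus has not published its data formats, so the timestamps, interval, and function name below are all hypothetical: each swallowed chip reports an ingestion time, and software compares the gaps between doses against the prescribed schedule.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the adherence check described above (Proteus's
# actual data formats and thresholds are not public): each swallowed
# chip reports an ingestion timestamp, and software compares the gaps
# between doses against the prescribed interval.

def flag_late_doses(events, interval_hours=12, tolerance_hours=2):
    """Return indices of doses taken too long after the previous one."""
    limit = timedelta(hours=interval_hours + tolerance_hours)
    return [i for i in range(1, len(events))
            if events[i] - events[i - 1] > limit]

events = [datetime(2010, 11, 14, 8, 0),
          datetime(2010, 11, 14, 20, 15),   # on time for a 12-hour schedule
          datetime(2010, 11, 15, 11, 30)]   # more than 15 hours later: flagged
print(flag_late_doses(events))   # [2]
```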

In January this year Novartis spent $24 million on securing access to the ingestible medical microchip technology, which was invented and developed by a privately owned Californian company, Proteus Biomedical. Licensing the technology puts Novartis ahead of all its competitors. The Proteus microchip is capable of collecting a range of biometric data such as heart rate, body temperature and body movements, which may indicate if drugs are working as intended.

Dr. Trevor Mundel, the company’s Global Head of Development, said Novartis does not expect full clinical trials of the “smart pills” to be needed, because the microchips will be added to existing drugs; instead, the company intends to carry out bioequivalence tests to show that the effects of the pills are unchanged by the addition of a tiny microchip.

Mundel said the regulators had been encouraging and liked the concept, but “they want to understand” how patients’ privacy will be protected in a system in which information is transmitted via wireless or Bluetooth technology from inside their bodies, and which could presumably therefore be intercepted by someone other than the doctor for whom it was intended.

Mundel said the first application of the technology would be anti-rejection drugs for transplant patients, but added that he sees “the promise as going much beyond that.”

Source | Physorg

Augmented Reality Goggles

Sunday, November 14th, 2010

I held a black-and-white square of cardboard in my hand and watched as a dragon the size of a puppy appeared on top of it and roared at me. I watched a tiny Earth orbit around a real soda can, saw virtual balls fall through a digital gap in a table, and viewed a life-sized virtual human sitting in an empty chair.

What made these impressive special effects possible was a pair of augmented reality (AR) glasses: specifically, the Wrap 920AR glasses from Vuzix. Whereas virtual reality shows you only a digital landscape, AR mixes virtual information, like text or images, into your view of the real world in real time.

In the last few years, AR has started appearing on smart phones. In that context, software superimposes information on top of your view of the world as seen through the device’s screen. But AR eyewear, which provides a more immersive experience, has been confined to academic research and niche applications like medical and military training. That’s been largely because older AR hardware has been so bulky and has cost tens of thousands of dollars.

The Wrap 920AR from Vuzix, based in Rochester, New York, costs $1,995—about half the price of other AR goggles with similar image resolution. The company hopes that the glasses will appeal to gamers, animators, architects, and software developers, and it has developed software for building AR environments, which is included with the glasses.

Wearing the 920AR means looking at the world through a pair of LCD video displays. The 920AR is heavier than a regular pair of glasses but far lighter than other head-mounted virtual-reality displays I’ve tried. The displays are connected to two video cameras that sit outside the glasses, in front of the eyes. The screens show each eye a slightly different view of the world, mimicking natural human vision, which allows for depth perception. Accelerometers, gyro sensors, and magnetometers track the direction in which the wearer is looking. The glasses also come with ports that let users plug the glasses into an iPhone for portable power and controls, such as loading a particular AR object or environment.

The Vuzix software can recognize and track visual markers (like the black-and-white piece of cardboard I held), or lock onto a certain object or color (like the soda can). Tracking works well as long as the pattern or object being tracked is visible to the cameras; tilting a tracking pattern too far will cause the virtual image to flicker. By tracking head movements, the software can make sure that virtual objects are perfectly positioned atop the real world.
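The core of marker-based AR can be sketched in miniature. The snippet below is a hypothetical illustration, not Vuzix’s software: once the tracker reports a marker’s pose (simplified here to a 2-D position and rotation), every vertex of the virtual model is transformed into that pose on each frame, so the object appears glued to the marker.

```python
import math

# Hypothetical sketch of the core of marker-based AR (not Vuzix's code):
# once the tracker reports a marker's pose, simplified here to a 2-D
# position and rotation, every vertex of the virtual model is transformed
# into that pose each frame, so the object appears glued to the marker.

def place_on_marker(vertices, marker_pos, marker_angle):
    """Rotate, then translate, model vertices into the marker's frame."""
    c, s = math.cos(marker_angle), math.sin(marker_angle)
    return [(c * x - s * y + marker_pos[0],
             s * x + c * y + marker_pos[1]) for x, y in vertices]

# A unit-square "model" rendered on a marker detected at (100, 50) and
# rotated 90 degrees. If the marker tilts out of the cameras' view, this
# pose update stops arriving and the virtual image flickers.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
placed = place_on_marker(square, (100, 50), math.pi / 2)
print([(round(x, 3), round(y, 3)) for x, y in placed])
```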

“There are other folks who make stereo, see-through eyewear, but there’s no one making anything near the price point of Vuzix’s,” says Steve Feiner, professor of computer science at Columbia University, and a lead AR researcher since the 1990s. Feiner says that the integration of cameras and motion sensors into the display makes the glasses less bulky.

Blair MacIntyre, an associate professor at the Georgia Institute of Technology who works on AR games, notes that most researchers and companies are focusing on smart phones. “Very few people have been making head-mounted displays [for consumers] since cell phones became powerful,” he says.

However, MacIntyre notes that AR glasses are still more practical than phones in many situations. “Anything tool-oriented—medical, military, maintenance repair—will require head-worn displays,” he says, because people’s hands need to be free to do such tasks. MacIntyre also points out that discovering information about the world using AR would require looking through a device constantly, which is too cumbersome to do with a phone.

For AR glasses to become really popular, MacIntyre says, they will need to get lighter and better looking, and there will need to be worthwhile applications. “No one’s going to pay even $100 if there’s no application,” he says. MacIntyre thinks gaming could be a killer app for AR, and he says business or social media applications may also be popular. The Vuzix glasses are “kind of an intermediate step,” he says. “There won’t be a million people buying them, but I do think it’s a lot closer to what we need than anything else has been.”

Ultimately, it may be practical to incorporate AR into glasses without a bulky display, by superimposing an image on a lens using optical components. “Clear glasses are a very old idea that goes back to the earliest days of AR,” says Feiner. But it is more difficult to track the image that a person sees, and to accurately superimpose virtual objects on a clear display. Optical displays also have difficulty competing with ambient light.

MacIntyre believes even those who do not normally wear glasses may eventually find AR glasses appealing. “Ten years ago, if I told you that people would wear a big thing on their ear that blinks, no one would imagine that,” he says, referring to Bluetooth headsets. “The value outweighed the lack of aesthetics and the awkwardness.”

Source | Technology Review