Archive for December, 2009

Fascinated with Synthetic Biology

Monday, December 28th, 2009

Building a Search Engine of the Brain, Slice by Slice

Monday, December 28th, 2009

SAN DIEGO — On a gray Wednesday afternoon here in early December, scientists huddled around what appeared to be a two-gallon carton of frozen yogurt, its exposed top swirling with dry-ice fumes.

As the square container, fixed to a moving platform, inched toward a steel blade mounted level with its surface, the group held its collective breath. The blade peeled off the top layer, rolling it up in slow motion like a slice of pale prosciutto.

“Almost there,” someone said.

Off came another layer, another, and another. And then there it was: a pink spot at first, now a smudge, now growing with every slice like spilled rosé on a cream carpet — a human brain. Not just any brain, either, but the one that had belonged to Henry Molaison, known worldwide as H. M., an amnesic who collaborated on hundreds of studies of memory and died last year at age 82. (Mr. Molaison agreed to donate his brain years ago, in consultation with a relative.)

“You can see why everyone’s so nervous,” said Jacopo Annese, an assistant professor of radiology at the University of California, San Diego, as he delicately removed a slice with an artist’s paintbrush and placed it in a labeled tray of saline solution. “I feel like the world is watching over my shoulder.”

And so it was: thousands logged on to view the procedure via live Webcast. The dissection marked a culmination, for one thing, of H. M.’s remarkable life, which was documented by Suzanne Corkin, a memory researcher at the Massachusetts Institute of Technology who had worked with Mr. Molaison for the last five decades of his life.

But it was also a beginning of something much larger, Dr. Annese and many other scientists hope. “The advent of brain imaging opened up so much,” said Sandra Witelson, a neuroscientist with the Michael G. DeGroote School of Medicine at McMaster University in Canada, who manages a bank of 125 brains, including Albert Einstein’s. “But I think in all the excitement people have forgotten how important the anatomical study of brain tissue still is, and this is the sort of project that could really restart interest in this area.”

The Brain Observatory at U.C. San Diego, set up to accept many donated brains, is an effort to bridge past and future. Brain dissection is a craft that goes back centuries and has helped scientists to understand where functions like language processing and vision are clustered, to compare gray and white matter and cell concentrations across different populations and to understand the damage done in ailments like Alzheimer’s disease and stroke.

Yet there is no single standard for cutting up a brain. Some researchers slice from the crown of the head down, parallel to the plane that runs through the nose and ears; others cut the organ into several chunks, and proceed to section areas of interest. No method is perfect, and any cutting can make it difficult, if not impossible, to reconstruct circuits that connect cells in disparate areas of the brain and somehow create a thinking, feeling mind.

To create as complete a picture as possible, Dr. Annese cuts very thin slices — 70 microns each, paper-thin — from the whole brain, roughly parallel with the plane of the forehead, moving from front to back. Perhaps the best-known pioneer of such whole-brain sectioning is Dr. Paul Ivan Yakovlev, who built a collection of slices from hundreds of brains now kept at a facility in Washington.

But Dr. Annese has something Dr. Yakovlev did not: advanced computer technology that tracks and digitally reproduces each slice. An entire brain produces some 2,500 slices, and the amount of information in each one, once microscopic detail is added, will fill about a terabyte of computer storage. Computers at U.C.S.D. are now fitting all those pieces together for Mr. Molaison’s brain, to create what Dr. Annese calls a “Google Earth-like search engine,” the first entirely reconstructed, whole-brain atlas available to anyone who wants to log on.
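The scale involved can be checked with back-of-the-envelope arithmetic, using only the figures the article itself gives (about 2,500 slices, roughly a terabyte each once microscopic detail is added):

```python
# Rough storage estimate for one whole-brain atlas, from the article's figures.
slices = 2500
bytes_per_slice = 1 * 10**12  # ~1 TB per fully digitized slice

total_bytes = slices * bytes_per_slice
total_petabytes = total_bytes / 10**15
print(f"Whole-brain atlas: ~{total_petabytes:.1f} PB")
```

Roughly 2.5 petabytes per brain, which is why a zoomable, tiled viewer in the style of Google Earth, rather than bulk download, is the natural way to serve it.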

“We’re going to get the kind of resolution, all the way down to the level of single cells, that we have not had widely available before,” said Donna Simmons, a visiting scholar at the Brain Architecture Center at the University of Southern California. The thin whole-brain slicing “will allow much better opportunities to study the connection between cells, the circuits themselves, which we have so much more to learn about.”

Experts estimate that there are about 50 brain banks in the world, many with organs from medical patients with neurological or psychiatric problems, and some with a stock donated by people without disorders. “Ideally, anyone with the technology could do the same with their own specimens,” Dr. Corkin said.

The technical challenges, however, are not trivial. To prepare a brain for dissection, Dr. Annese first freezes it in a formaldehyde and sucrose solution, to about minus 40 degrees Celsius. The freezing in the case of H. M. was done over four hours, a few degrees at a time: the brain, like most things, becomes more brittle when frozen. It can crack.

Mr. Molaison lost his ability to form new memories after an operation that removed a slug-size chunk of tissue from deep in each hemisphere of his brain, making it more delicate than most.

“A crack would have been a disaster,” Dr. Annese said. It did not happen.

With the help of David Malmberg, a mechanical engineer at U.C.S.D. who had designed equipment for use in the Antarctic, the laboratory fashioned a metal collar to keep the suspended brain at just the right temperature. A few degrees too cold and the blade would chatter instead of cutting cleanly; a few degrees too warm, and it would dip into the tissue. Mr. Malmberg held the temperature steady by pumping ethanol through the collar continually, at minus 40 degrees. He suspended the hoses using surfboard leashes picked up days before the dissection.

Now that the slicing and storing is complete, a process that took some 53 hours, Dr. Annese’s laboratory will begin the equally painstaking work of mounting each slice on a glass slide. The lab will stain slides at regular intervals, to illustrate the features of the reconstructed organ. And it plans to provide slides for study: outside researchers can request samples and use their own methods to stain and analyze the composition of specific high-interest areas.

“For the work I do, looking at which genes are preferentially expressed in different areas of the brain, this will be an enormous resource,” Dr. Simmons said.

If all goes as planned, and the Brain Observatory catalogs a diverse collection of normal and abnormal brains — and if, crucially, other laboratories apply similar techniques to their own collections — brain scientists will have data that will keep them busy for generations. In her own work, Dr. Witelson has found interesting anatomical differences between male and female brains; and, in Einstein’s brain, a parietal lobe, where spatial perception is centered, that was 15 percent larger than average.

“With more of this kind of data,” Dr. Witelson said, “we’ll be able to look at all sorts of comparisons, for example, comparing the brain of people who are superb at math with those who are not so good.”

“You could take someone like Wayne Gretzky, for example,” she added, “who could know not only where the puck was but where it was going to be — who was apparently seeing a fourth dimension, time — and see whether he had any special anatomical features.” (For the time being, Mr. Gretzky is still using his brain.)

So it is that Mr. Molaison, who kicked off the modern study of memory by cooperating in studies in the middle of the 20th century, may help inaugurate a new era in the 21st century. That is, as soon as Dr. Annese and his lab team finish sorting the slices they have collected.

“It’s very exciting work to talk about,” Dr. Annese said. “But to see it being done, it’s like watching the grass grow.”

Source | NY Times

The Body Electric

Monday, December 28th, 2009

Two years ago, in his book “Rocketeers,” Michael Belfiore celebrated the pioneers of the budding private space industry. Now he has returned to explore a frontier closer to home. The heroes of his new book, “The Department of Mad Scientists,” work for the Defense Advanced Research Projects Agency, better known as Darpa, a secretive arm of the United States government. And the revolution they’re leading is a merger of humans with machines.

The revolution is happening before our eyes, but we don’t recognize it, because it’s incremental. It starts with driving. Cruise control transfers regulation of your car’s speed to a computer. In some models, you can upgrade to adaptive cruise control, which monitors the surrounding traffic by radar and adjusts your speed accordingly. If you drift out of your lane, an option called lane keeping assistance gently steers you back. For extra safety, you can get extended brake assistance, which monitors traffic ahead of you, alerts you to collision threats and applies as much braking pressure as necessary.

With each delegation of power, we become more comfortable with computers driving our cars. Soon we’ll want more. An insurance analyst tells Belfiore that aging baby boomers will lead the way, enlisting robotic drivers to help them get around. For younger drivers, the problem is multitasking. Why put down your cellphone when you can let go of the wheel instead? Reading, texting, talking and eating in the car aren’t distractions. Driving is the distraction. Let the car do it.

That’s where Darpa comes in. Belfiore traces the agency’s origins and exploits from the 1957 Sputnik launching (which shocked the United States government into technological action) to the 1969 birth of the original Internet, known as Arpanet, to Total Information Awareness, the controversial 2002 project that was supposed to scan telecommunications data for signs of terrorism. His tone is reverential and at times breathless, but he captures the agency’s essential virtues: boldness, creativity, agility, practicality and speed.

The Army needs vehicles that can move cargo without exposing human operators to bombs or enemy fire. To encourage development of such vehicles, Darpa sponsored a 2007 contest in which cars designed by 35 teams navigated a simulated urban war zone. The cars used systems like those already in consumer vehicles: GPS, lane guidance, calibrated braking. But instead of routing their information and advice through human drivers, the cars simply acted on it.

Belfiore recounts several low-impact crashes caused by the limited ability of current software to understand complex traffic situations. But with each successive contest since Darpa’s first robot-car race, the Grand Challenge, in 2004, performance has improved. In some respects, the robot cars already surpass us. Their reaction speed is better. They can see at night, thanks to laser range-finders. They have no blind spots. And when networked, they can read one another’s intentions.

So maybe we’ll let robots drive our cars. But would you let a robot cut you open? That’s Darpa’s next project. In minimally invasive surgery, doctors insert very thin instruments through keyhole-size incisions. This minimizes pain, blood loss, infection risk and recovery time, but it’s hard. Surgeons have to manipulate their instruments indirectly and watch them on a video monitor. They might as well use a machine. It could execute their commands, give better video feedback and hold the instruments more steadily.

More than 850 hospitals already use such operating machines. Surgeons sit across the room from patients, connected to their instruments by game-style controls and three-dimensional video binoculars. When the machines meet resistance, the surgeons feel it. The goal is to engage the doctors’ senses as fully as if the mechanical eyes and hands were theirs. In fact, they are theirs. The surgeons’ minds map, orchestrate and experience the machine like an infant taking possession of its own body.

But if sensory feedback can extend a surgeon’s body across a room, why stop there? A new version of the machine adds Ethernet, freeing the doctor to inhabit a mechanical body anywhere with a good cable or wireless connection. By digitizing surgical commands, we’ve already created transitional moments in which maneuvers have been described but not executed. Why not extend this transition, playing out the surgery in virtual reality and then editing out any errors? That’s the next step: surgery with a word processor, so to speak, instead of a typewriter.
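The “described but not executed” idea in that last paragraph amounts to a command buffer: maneuvers are recorded as data, reviewed, and optionally edited out before anything moves. A minimal sketch of the pattern, with all names and maneuver formats invented for illustration, not taken from any real surgical system:

```python
class ManeuverBuffer:
    """Record maneuvers as descriptions first; execute only after review."""

    def __init__(self):
        self.pending = []    # described, not yet executed
        self.executed = []   # committed maneuvers

    def describe(self, maneuver):
        # At this point the maneuver exists only as a description.
        self.pending.append(maneuver)

    def edit_out(self, is_error):
        # Remove any described maneuver flagged as an error, before execution.
        self.pending = [m for m in self.pending if not is_error(m)]

    def commit(self):
        # Only now do the surviving maneuvers "happen".
        self.executed.extend(self.pending)
        self.pending = []


buf = ManeuverBuffer()
buf.describe({"tool": "grasper", "move_mm": (2, 0, 1)})
buf.describe({"tool": "scalpel", "move_mm": (40, 0, 0)})   # an overshoot
buf.edit_out(lambda m: abs(m["move_mm"][0]) > 10)          # edited out in review
buf.commit()
print(len(buf.executed))  # 1
```

The word-processor analogy is exactly this: the erroneous keystroke is deleted from the draft before it ever reaches the page.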

Full Article | NY Times

Touchable Holography

Saturday, December 26th, 2009

Singularity Movie (Trailer)

Saturday, December 26th, 2009

First Robot Hand

Saturday, December 26th, 2009

ASIMO’s new artificial intelligence. (ASIMO is learning!)

Saturday, December 26th, 2009

Robot Flowers

Saturday, December 26th, 2009

Controlling the TV with a wave of the hand

Saturday, December 26th, 2009

A TriplePoint illustration shows a man working an interactive TV screen.

Touchscreens are so yesterday. Remote controls? So last century. The future is controlling your devices with a simple wave of the hand.

A wiggle of the fingers will change television channels or turn the volume up or down. In videogames, your movements will control your onscreen digital avatar.

It’s called 3D gesture recognition, and while it may not be in stores this Christmas, a number of technology companies are promising that it will be by next year.

Softkinetic, a Brussels-based software company, is one of the leaders in the gesture-control field and has teamed up with US semiconductor giant Texas Instruments and others to make this touchless vision of the future a reality.

Besides TI, Softkinetic has forged partnerships with France’s Orange Vallee for interactive TV; with Optrima, another Belgian firm that makes 3D cameras and sensors; and with Connecting Technology, a French home automation company.

“On the consumer side you have three markets — television, videogames and personal computers,” Softkinetic chief executive Michel Tombroff told AFP in a telephone interview.

“The objective is to be on the consumer market at the end of next year, by Christmas, so people can buy these things,” he said.

“In the same way that the Wii completely changed the way that people play videogames, this technology will allow us to completely transform the way people interact with television,” Tombroff said.

Roger Kay, president of Endpoint Technologies Associates, said he believes that gesture is “directionally correct because anything leading to a more natural interface for a human is better.

“We’re in that transition to a time when gestural input will be quite natural,” Kay said. “From what I’ve seen of the demos they’re pretty close.”

On the gaming front, “using a camera in real time to capture motion and then take the representative avatar from that and play it on a screen with other elements in a game is a pretty compelling experience,” he said.

US software giant Microsoft demonstrated a gesture recognition program called “Project Natal” for its Xbox 360 videogame console in June and has announced plans to launch it next year.

Tombroff said Softkinetic’s gesture recognition solutions involve using a 3D camera that “looks like a little webcam” and is mounted on top of a television set or computer monitor.

“It looks at the scene and it can analyze gestures without you holding anything in your hand or wearing any special equipment,” he said. “It’s really the ultimate gesture-based solution.

“With the Wii you need to hold something in your hand,” Tombroff said. “With this we look at your full body. You don’t need to hold anything.

“You just stand up or just move your hand,” he said. “We let you interact without any intermediate component.”

Tombroff said the technology has the capability of transforming television.

“It will become an active component of the living room,” he said. “It’s not just about sitting in the living room, turning it on and watching.

“It’s about interacting. The TV will recognize you. If you step in front of it, the camera will recognize it’s you,” Tombroff said.

“Maybe it will start with a quick recap of your email, the weather, and the traffic because it knows you need to go to the office,” he said.

“That’s the personalization,” Tombroff said. “After that it may propose interactive programs. So instead of just sitting and watching TV you’ll be able to play games or enter into programs.

“In the same way that the iPhone completely transformed the user experience as far as the phone is concerned this will transform the way people experience television,” he said.

Source | Physorg


How Intelligent Vehicles Will Increase the Capacity of Our Roads

Saturday, December 26th, 2009

Adaptive cruise control systems work by monitoring the road ahead using a radar or laser-based device and then using both the accelerator and brake to maintain a certain distance from the vehicle ahead. (Other variations can bring the car to a halt in the event of a potential accident.)
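That behavior, using both accelerator and brake to hold a gap to the car ahead, is in essence a feedback controller. A minimal proportional-control sketch; the gains, gap, and comfort limits here are illustrative values, not parameters of any real system:

```python
def acc_accel(gap_m, own_speed, lead_speed,
              desired_gap_m=40.0, k_gap=0.1, k_speed=0.5):
    """Return an acceleration command (m/s^2) from radar measurements.

    Positive means accelerator, negative means brake. This is plain
    proportional control on gap error and relative speed; production
    systems are far more sophisticated.
    """
    gap_error = gap_m - desired_gap_m        # negative when too close
    closing_speed = lead_speed - own_speed   # negative when closing in
    accel = k_gap * gap_error + k_speed * closing_speed
    return max(-3.0, min(1.5, accel))        # clamp to comfort limits

# Tailgating a slower car: the command is (clamped) braking.
print(acc_accel(gap_m=20.0, own_speed=30.0, lead_speed=25.0))  # -3.0
```

At the desired gap with matched speeds, the same function returns zero: the car simply coasts.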

These devices have been available on upmarket cars for ten years or more and are now becoming increasingly common. If you drive regularly on freeways, the chances are you regularly come across other vehicles being driven by these devices, especially in Europe and Japan (there, the density of traffic means that ordinary cruise control has never caught on in the way it has in the U.S.).

So how does the presence of computer-controlled vehicles affect traffic dynamics? Today, Arne Kesting and pals at the Technical University of Dresden in Germany provide an answer of sorts using a model of traffic flow in which both human and computer-driven cars share the road.

They say that the presence of computer-driven cars increases the amount of traffic that can flow on a road before jamming occurs. And the more of these cars, the greater the capacity becomes. “1 percent more [computer-controlled] vehicles will lead to an increase of the capacities by about 0.3 percent,” they say.
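Taken at face value, the quoted figure gives a simple linear estimate; extending it beyond small penetration rates is my extrapolation, not a claim from the paper:

```python
def capacity_gain_pct(penetration_pct, gain_per_pct=0.3):
    # Kesting et al.'s figure: ~0.3% more road capacity per 1% of
    # ACC-equipped vehicles, assumed linear here for illustration.
    return penetration_pct * gain_per_pct

print(capacity_gain_pct(10))  # a 10% equipped share -> ~3% more capacity
```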

That’s interesting but there have been other studies suggesting that computer-controlled cars can lead to greater congestion and it’s not at all clear why Kesting and company’s analysis is superior.

Either way, the argument is probably moot. Computer-controlled cars are just the first step in what many expect to be a revolution in car travel. The big increases in traffic capacity are likely to come when cars are able to communicate with each other. This should allow entire platoons of vehicles to travel as one unit, with just a few centimetres gap between cars and the vehicle in the front communicating its intentions to all the others. Platooning should improve fuel efficiency, too.
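The key mechanism in platooning is that the lead vehicle broadcasts its intentions, so followers act on the leader's plan rather than reacting, one by one, to its movements. A toy sketch of that feed-forward idea; the message format and gain are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class IntentMessage:
    """What the lead vehicle broadcasts: not just its state, but its plan."""
    speed_mps: float
    planned_accel: float  # what the leader is ABOUT to do

def follower_accel(msg: IntentMessage, own_speed: float, k: float = 0.5):
    # Apply the leader's planned acceleration directly (feed-forward),
    # plus a small correction to match speed. Because the plan arrives
    # before the leader actually moves, every car in the platoon can
    # respond simultaneously, allowing gaps of just a few centimetres.
    return msg.planned_accel + k * (msg.speed_mps - own_speed)

msg = IntentMessage(speed_mps=25.0, planned_accel=-2.0)  # leader about to brake
print(follower_accel(msg, own_speed=25.0))  # -2.0: followers brake in unison
```

Without the broadcast, each car could only react after seeing the one ahead slow down, and the resulting lag is exactly why human-driven cars need such large gaps.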

Of course, that won’t be possible until there is a critical mass of computer-controlled cars on the roads. Even then, there is a bigger hurdle to overcome: creating the legal framework in which all this can happen. Imagine the insurance claims if one of these platoons were to crash.

The biggest challenge for the makers of cars that drive themselves is no longer technical but legal.

Source | Technology Review

Scientists take important step toward the proverbial fountain of youth

Saturday, December 26th, 2009

Going back for a second dessert after your holiday meal might not be the best strategy for living a long, cancer-free life, say researchers from the University of Alabama at Birmingham. That’s because they’ve shown exactly how restricted calorie diets — specifically in the form of restricted glucose — help human cells live longer. This discovery, published online in The FASEB Journal, could help lead to drugs and treatments that slow human aging and prevent cancer.

“Our hope is that the discovery that reduced calories extends the lifespan of normal human cells will lead to further discoveries of the causes for these effects in different cell types and facilitate the development of novel approaches to extend the lifespan of humans,” said Trygve Tollefsbol, Ph.D., a researcher involved in the work from the Center for Aging and Comprehensive Cancer Center at the University of Alabama at Birmingham. “We would also hope for these studies to lead to improved prevention of cancer as well as many other age-related diseases through controlling calorie intake of specific cell types.”

To make this discovery, Tollefsbol and colleagues used normal human lung cells and precancerous human cells that were at the beginning stages of cancer formation. Both sets of cells were grown in the laboratory and received either normal or reduced levels of glucose (sugar). As the cells grew over a period of a few weeks, the researchers monitored their ability to divide, and kept track of how many cells survived over this period. They found that the normal cells lived longer, and many of the precancerous cells died, when given less glucose. Gene activity was also measured under these same conditions.

The reduced glucose caused normal cells to have a higher activity of the gene that dictates the level of telomerase, an enzyme that extends their lifespan, and lower activity of a gene (p16) that slows their growth. Epigenetic effects (effects not due to gene mutations) were found to be a major cause in changing the activity of these genes as they reacted to decreased glucose levels.

“Western science is on the cusp of developing a pharmaceutical fountain of youth,” said Gerald Weissmann, M.D., Editor-in-Chief of The FASEB Journal. “This study confirms that we are on the path to persuading human cells to let us live longer, and perhaps cancer-free, lives.”

Source | Physorg

Gallery: Looking Back at the 100 Best Innovations of 2009

Saturday, December 26th, 2009


X-Flex bonds so tightly, it helps walls keep their shape after blast waves. Two layers are strong enough to stop a blunt object, like a flying 2×4, from knocking down drywall. During our tests, just a single layer kept a wrecking ball from smashing through a brick wall. The wallpaper’s strength and ductility are derived from a layer of Kevlar-like material sandwiched by sheets of elastic polymer wrap. The combination works so well that the Army is now considering wallpapering bases in Iraq and Afghanistan. Civilians could soon start remodeling too—Berry Plastics plans to develop a commercial version next year.

See a video of X-Flex in action at the Best of What’s New 2009 site.

Source (the other 99) | Popular Science

Do computers understand art?

Friday, December 25th, 2009


This is a painting of a seated woman with bent knee by Egon Schiele (1917).

A team of researchers from the University of Girona and the Max Planck Institute in Germany has shown that some mathematical algorithms provide clues about the artistic style of a painting. The composition of colours or certain aesthetic measurements can already be quantified by a computer, but machines are still far from being able to interpret art in the way that people do.

How does one place an artwork in a particular artistic period? This is the question raised by scientists from the Laboratory of Graphics and Image in the University of Girona and the Max Planck Institute for Biological Cybernetics, in Germany. The researchers have shown that certain algorithms mean a computer can be programmed to “understand” an image and differentiate between artistic styles based on low-level pictorial information. Human classification strategies, however, include medium and high-level concepts.

Low-level pictorial information encompasses aspects such as brush thickness, the type of material and the composition of the palette of colours. Medium-level information differentiates between certain objects and scenes appearing in a picture, as well as the type of painting (landscape, portrait, still life, etc.). High-level information takes into account the historical context and knowledge of the artists and artistic trends.

“It will never be possible to precisely determine mathematically an artistic period nor to measure the human response to a work of art, but we can look for trends”, Miquel Feixas, one of the authors of the study, published in the journal Computers and Graphics, tells SINC.

The researchers analysed various artificial vision algorithms used to classify art, and found that certain aesthetic measurements (calculating “the order” of the image based on analysing pixels and colour distribution), as well as the composition and diversity of the palette of colours, can be useful.

The team also worked with people with little knowledge of art, showing them more than 500 paintings done by artists from 11 artistic periods. The participants were “surprisingly good” at linking the artworks with their corresponding artistic period, showing the high capacity of human perception.

Beyond the implications for philosophy and art, the scientists want to apply their research in developing image viewing and analysis tools, classifying and searching for collections in museums, creating public informative and entertainment equipment, and in order to better understand the interactions between people, computers and works of art.

Beauty, order and complexity

The earliest work of this kind was done in 1933, when the mathematician George D. Birkhoff tried to formalise the notion of beauty with an aesthetic measurement defined as the relationship between order and complexity. After this, the philosopher Max Bense converted this into a measurement of information based on entropy (disorder or diversity).

According to Bense, the creative process is a selective process (“to create is to select”), within a range of elements (a palette of colours, sounds, phonemes, etc.). The creative process can be seen as a channel transmitting information from the palette, through the artist, to the objects or features of an image. This concept provides a powerful tool for analysing the composition and the visual attention (“saliency”) of a painting.
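Birkhoff's measure and Bense's entropy reformulation can be made concrete in a few lines. In this sketch the Shannon entropy of the colour distribution stands in for "complexity" (Bense's diversity), and the palette's unused information capacity stands in for "order"; that is one common reading of the idea, not the only one, and the toy "images" below are just lists of colour labels:

```python
from collections import Counter
from math import log2

def palette_entropy(pixels):
    """Shannon entropy (bits) of the colour distribution: Bense's diversity."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def birkhoff_measure(pixels, palette_size):
    """Birkhoff's M = order / complexity, with entropy as complexity.

    'Order' here is the unused capacity of the palette: the maximum
    possible entropy minus the observed entropy.
    """
    h = palette_entropy(pixels)
    h_max = log2(palette_size)
    return (h_max - h) / h if h > 0 else float("inf")

# A near-monochrome image scores higher (more 'order') than a noisy one.
calm = ["red"] * 14 + ["blue"] * 2
noisy = ["red", "blue", "green", "gold"] * 4
print(birkhoff_measure(calm, 4) > birkhoff_measure(noisy, 4))  # True
```

Low-level measurements of this kind are exactly what the classification algorithms in the study quantify; the medium- and high-level concepts people use remain out of reach.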

Source | Physorg

Robot doppelgangers for sale

Monday, December 21st, 2009

Department store operator Sogo & Seibu has announced plans to sell two humanoid robots custom-built to look like the people who purchase them.


The mechanical doppelgangers are available for a limited time as part of a special New Year’s promotional sale at Sogo, Seibu, and Robinson’s department stores. They will be built by Japanese robotics firm Kokoro, which is perhaps best known for its line of Actroid receptionist humanoids.

In addition to providing the robot with the owner’s face, body, hair, eyes and eyelashes, Kokoro will model the robot’s facial expressions and upper body movements after the buyer. The robot’s speech will be based on recordings of the owner’s voice.

Orders will be accepted from January 1 to 3 at any of Japan’s 28 Sogo, Seibu, or Robinson’s department stores. Only two robot twins are available, but given the hefty price tag of 20.1 million yen ($223,000) each, the stores will likely be hard-pressed to find any takers. If more than two orders are received, the purchasers will be selected in a random drawing.

Source | IT Media

Robovie-II helps with the grocery shopping

Monday, December 21st, 2009

A robot designed to help with the grocery shopping is being tested at a Kyoto-area supermarket.

The robotic assistant — an advanced version of the Robovie-II android developed by Advanced Telecommunications Research Institute International (ATR) — is the centerpiece of a networked system of robots, sensors and digital technology designed to make shopping more convenient and entertaining for the elderly. ATR is testing the experimental system at the Apita-Seikadai supermarket in Kyoto until March 2010.

To use the system, shoppers first create a shopping list at home using a special mobile device (they simply tell the robot’s on-screen avatar what they want to buy before going to the supermarket). Later, when the customer arrives at the store, sensors automatically detect the mobile device. The user’s data is wirelessly transmitted to a waiting robot, which greets the customer by name and says, “Let’s start shopping.”
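The store-side flow described above, sensor detects a known device, the user's data reaches a waiting robot, and the robot greets the customer by name and loads the list, is a straightforward lookup-and-dispatch pattern. A toy sketch; the device IDs, names, and list contents are invented, and the real ATR system is of course far more involved:

```python
# Invented registry mapping mobile-device IDs to shopper profiles.
registered_users = {
    "device-041": {
        "name": "Sato",
        "shopping_list": ["mandarin oranges", "broccoli"],
    },
}

def on_device_detected(device_id):
    """Called when an entrance sensor spots a known mobile device."""
    user = registered_users.get(device_id)
    if user is None:
        return None  # unknown device: no robot is dispatched
    greeting = f"Hello, {user['name']}. Let's start shopping."
    return greeting, user["shopping_list"]

greeting, items = on_device_detected("device-041")
print(greeting)  # Hello, Sato. Let's start shopping.
```

Everything beyond the lookup, the greeting by name, the reminders as each item comes up, rides on the list transmitted before the customer ever reaches the store.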

In the video above, which shows part of a test conducted on December 10, the child-sized robot accompanies a 67-year-old woman while she shops for mandarin oranges and broccoli. In addition to carrying the woman’s shopping basket, the robot reminds her to get the mandarin oranges, recommends the apples (which the robot says are delicious this season), reminds her to get the broccoli, and suggests including lettuce in her salad along with the broccoli. On several occasions, the robot remarks on how delicious the items look.

When asked her impression of the system after the demonstration, the woman said she felt almost as if she were shopping with her grandchild, and she said it was fun talking with the robot.

Source | Pink Tentacle