Archive for February, 2011

Researchers aim to ‘print’ human skin

Saturday, February 19th, 2011

Each cell type is placed in a vial, rather than in cartridges. The cells are then "printed" directly on the wound.

Researchers are developing a specialized skin “printing” system that could be used in the future to treat soldiers wounded on the battlefield.

Scientists at the Wake Forest Institute for Regenerative Medicine were inspired by standard inkjet printers found in many home offices.

“We started out by taking a typical desktop inkjet cartridge. Instead of ink we use cells, which are placed in the cartridge,” said Dr. Anthony Atala, director of the institute.

The device could be used to rebuild damaged or burned skin.

The project is in pre-clinical phases and may take another five years of development before it is ready to be used on human burn victims, he said.

Other universities, including Cornell University and the Medical University of South Carolina, Charleston, are working on similar projects and will speak on the topic on Sunday at the American Association for the Advancement of Science conference in Washington. These university researchers say organs — not just skin — could be printed using similar techniques.

Burn injuries account for 5% to 20% of combat-related injuries, according to the Armed Forces Institute of Regenerative Medicine. The skin printing project is one of several projects at Wake Forest largely funded by that institute, which is a branch of the U.S. Department of Defense.

Wake Forest will receive approximately $50 million from the Defense Department over the next five years to fund projects, including the skin-creating system.

Researchers developed the skin “bio-printer” by modifying a standard store-bought printer. One modification is the addition of a three-dimensional “elevator” that builds on damaged tissue with fresh layers of healthy skin.

The skin-printing process involves several steps. First, a small piece of skin is taken from the patient. The sample is about half the size of a postage stamp, and it is taken from the patient by using a chemical solution.

Those cells are then separated and replicated on their own in a specialized environment that catalyzes this cell development.

“We expand the cells in large quantities. Once we make those new cells, the next step is to put the cells in the printer, on a cartridge, and print on the patient,” Atala said.

The printer is then placed over the wound at a distance so that it doesn’t touch the burn victim. “It’s like a flat-bed scanner that moves back and forth and puts cells on you,” said Atala.

Once the new cells have been applied, they mature and form new skin.

Specially designed printer heads in the skin bio-printer use pressurized nozzles — unlike those found in traditional inkjet printers.

The pressure-based delivery system allows for a safe distance between the printer and the patient and can accommodate a variety of body types, according to a 2010 report from the Armed Forces Institute of Regenerative Medicine.

The device can fabricate healthy skin in anywhere from minutes to a few hours, depending on the size and type of burn, according to the report.

“You are building up the cells layer after layer after layer,” Atala said.
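Taken together, the steps Atala describes amount to a scan-and-deposit loop: map the wound, then pass over it repeatedly, laying down one layer of cells per pass. The Python sketch below is a conceptual illustration of that loop only, not the Wake Forest control software; the grid, the cell-type names and the two-layer structure are assumptions made for the example.

# Conceptual sketch only, not the Wake Forest device's software.
# A toy layer-by-layer deposition loop: scan a grid like a flat-bed
# scanner and deposit cells only where the tissue is damaged.
# Cell-type names, grid size and layer count are illustrative assumptions.

def print_skin(wound, width, height, layers=("fibroblasts", "keratinocytes")):
    """Return a deposition plan: one pass over the grid per cell layer."""
    plan = []
    for depth, cell_type in enumerate(layers):      # build up layer after layer
        for x in range(width):                      # raster scan, back and forth
            for y in range(height):
                if (x, y) in wound:                 # print only on damaged tissue
                    plan.append((cell_type, x, y, depth))
    return plan

# Example: a small burn patch covering three grid cells, two cell layers deep.
wound = {(0, 0), (0, 1), (1, 1)}
plan = print_skin(wound, width=3, height=3)
print(len(plan), "deposition commands")   # 6 = 3 damaged sites x 2 layers

In the real system, the three-dimensional “elevator” and the pressurized nozzles described above handle depth and the safe standoff distance from the patient.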

Acquiring an adequate sample can be a challenge in victims with extensive burns, since there is sometimes “not enough (skin) to go around with a patient with large burns,” Atala said.

The biopsy sample would be used to grow new cells, which are then placed in the printer cartridge, Atala said.

Researchers said it is difficult to speculate when the skin printer may be brought to the battlefield, because of the stringent regulatory steps for a project of this nature. Once the skin-printing device meets federal regulations, military officials are optimistic it will benefit the general population as well as soldiers.

“We’re not making anything military-unique,” said Terry Irgens, a program director at the U.S. Army Medical Materiel Development Activity.

“We hope it will benefit both soldier and civilian,” he said.

In the meantime, researchers said they’re pleased with results of preliminary laboratory testing with the skin printer.

Atala said the researchers already have been able to make “healthy skin.”

Source | CNN

IBM’s Watson: A Hard Case

Friday, February 18th, 2011

Watson: Rooted in place

Gilbert Ryle once wrote that:

engineers stretch, twist, compress and batter bits of metal until they collapse, but it is just by such tests that they determine the strains which the metal will withstand. In somewhat the same way, philosophical arguments bring out the logical powers of the ideas under investigation, by fixing the precise forms of logical mishandling under which they refuse to work.

If that’s the work of philosophy, then Artificial Intelligence (AI) is one of philosophy’s branches. Rod Brooks, for many years director of MIT’s AI Lab, and one of AI’s great plain talkers, not to mention visionaries, defines artificial intelligence something like this: it’s when a machine does something that, if it were done by a person, we’d say it was intelligent, thoughtful, or human.

Wait a second! What does “what we would say” have to do with whether a machine is thinking?

But that’s just the point. AI is applied philosophy. AI curates opportunities for us to think about what we would say about the hard cases. At its best, AI gives us new hard cases. That’s what IBM’s Jeopardy-winning Watson is.

But first, a real-world case: ants remove their dead from the nest and so avoid contamination. This looks like smart behavior. Now dead ants, it turns out, give off oleic acid, and experimenters have been able to demonstrate that ants will eject even live healthy ants from the nest if (thanks to meddling scientists) they have been daubed with oleic acid. What had at first appeared to be a sensitive response of the ants to the threat of harmful bacteria turns out to be a brute response triggered by the presence of a chemical.
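Put as code, the experiments suggest the ant’s behaviour reduces to a single chemical trigger. The sketch below is purely illustrative (the data structure and names are mine), but it makes the point that nothing in the rule checks whether the nestmate is actually dead or dangerous.

# Purely illustrative: the "brute trigger" the oleic acid experiments point to.
def ant_response(nestmate):
    # The only test applied is the chemical cue, not health or death.
    if nestmate["smells_of_oleic_acid"]:
        return "eject from nest"
    return "ignore"

print(ant_response({"alive": True,  "smells_of_oleic_acid": True}))   # a healthy, daubed ant gets ejected
print(ant_response({"alive": False, "smells_of_oleic_acid": False}))  # a corpse without the cue would be ignored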

Is the ant smart? Or stupid? Maybe neither. Or, most intriguingly of all, maybe it is both? Is there an experimentum crucis that we might perform to settle a question like this once and for all?

No. Intelligence isn’t like that. It isn’t something that happens inside the bug, or inside us. If intelligence is anything, it is an appropriate and autonomous responsiveness to the world around us. Flexible, real-time sensitivity to actual situations is what we have in mind when we talk about intelligence. And this means that intelligence is always going to be not just a matter of degree, but one of interpretation.

So back to Watson: it won! Watson produced answers to real questions, and it did so quickly and in ways that could only dimly be anticipated or understood by its designers. It beat its human opponents! This is a stunning achievement. A dazzling display of real-world, real-time responsiveness in action. Watson can think!

But hold on. Not so fast. Even if Watson is bristling and buzzing with intelligence, we can legitimately wonder whether it’s the natural intelligence of its programmers that is in evidence, rather than that of Watson.

And then there’s the issue of that little pronoun. People wonder whether it’s legitimate to talk of Watson as a He, but really the more pressing question is whether we can even speak of an It. In an important sense, there is no Watson. If Watson is a machine, then it is a machine in the way that a nuclear power plant is a machine. Watson is a system, a distributed local network. The avatar, the voice, the name — these are sleights of hand. The Watson System is staged to manipulate strings of symbols which have no meaning for it. At no point, anywhere in its processes, does the meaning, or context, or point of what it is doing, ever get into the act. The Watson System no more understands what’s going on around it, or what it is itself doing, than the ant understands the public health risks of decomposition. It may be a useful tool for us to deploy (for winning games on Jeopardy, or diagnosing illnesses, or whatever — mazal tov!), but it isn’t smart.

But again, we need to slow down. Think of the ants once more. The ants do have good reasons to eject the oleic acid ants from the nest, even if they aren’t clever enough to understand that they do. Natural selection built the ants to act in accord with reasons they cannot themselves understand. And so with Watson. The IBM design team led by David Ferrucci built Watson to act as if it understood meanings that are, in fact, not available to it. And maybe that’s the upshot of what Dan Dennett has called Darwin’s dangerous idea; that’s the way, the only way, meaning and thinking get into the world, through natural (or artificial) design. Watson is surely nothing like us, as we fantasize ourselves to be. But if Darwin and Dennett are right, we may turn out to be a lot more like Watson than we ever imagined.

Whatever we say about Dennett’s elegant and beautiful theory, you’d have to be drunk on moonshine to take seriously the idea that Watson exhibits a human-like mindfulness. And the reason is, the Watson System fails to exhibit even an animal-like mindfulness.

Remember: animals are basically plants that move. Plants are deaf, blind and dumb. Vegetables have little option but to take what comes. Animals, in contrast, can give chase, take cover, seek out both prey and mates, and hide from predators. Animals need to be perceptually sensitive to the places they happen to find themselves, and they need to make choices about what they want or need. In short, animals need to be smart.

Now here’s the rub. Watson, biologically speaking, if you get my drift, is a plant. Watson is big and it is rooted. Like all plants, it is deaf, blind, and immobile; it is basically incapable of directing action of any kind on the world around it. But now we come up against Ryle’s question as to just how much logical mishandling the concept of intelligence can tolerate. For it is right there — in the space that opens up between the animal and the world, in the situations that require of the animal that it shape and guide and organize its own actions and interactions with its surroundings — that intelligence ever enters the scene.

It’s important to appreciate that language is no work-around here. Language is just one of the techniques animals use to manage their dealings with the world around them. Giving a plant a camera won’t make it see, and giving it language won’t let it think. Which is just a way of reminding us that Watson understands no language. Unlike the ant, who acts as though it has reasons for its actions, Watson acts like a plant that talks.

Source | NPR

Meet Affetto, a Child Robot With Realistic Facial Expressions

Monday, February 14th, 2011

Hisashi Ishihara, Yuichiro Yoshikawa, and Prof. Minoru Asada of Osaka University in Japan have developed a new child robot platform called Affetto. Affetto can make realistic facial expressions so that humans can interact with it in a more natural way.

Watch:





Prof. Asada is the leader of the JST ERATO Asada Project and his team has been working on “cognitive developmental robotics,” which aims to understand the development of human intelligence through the use of robots. (Learn more about the research that led to Affetto in this interview with Prof. Asada.)

Affetto is modeled after a one- to two-year-old child and will be used to study the early stages of human social development. There have been earlier attempts to study the interaction between child robots and people and how that relates to social development, but the lack of realistic child appearance and facial expressions has hindered human-robot interaction, with caregivers not attending to the robot in a natural way.

Here are some of the expressions that Affetto can make to share its emotions with the caregiver.

The researchers presented a paper describing the development of Affetto’s head at the 28th Annual Conference of the Robotics Society of Japan last year.

The video and photo below reveal the mechatronics inside Affetto. It might be a good idea not to show this to caregivers before they meet the robot — or ever.





Source | IEEE Spectrum

China building a city for cloud computing

Saturday, February 12th, 2011

This rendering shows the planned city-sized cloud computing and office complex being built in China. (Image: IBM)

China is building a city-sized cloud computing and office complex that will include a mega data center, one of the projects fueling that country’s double-digit growth in IT spending.

The entire complex will cover some 6.2 million square feet, with the initial data center space accounting for approximately 646,000 square feet, according to IBM, which is collaborating with a Chinese company to build it.

In sheer scale, this project, first announced late last month, is nearly the size of the Pentagon, although in China’s case it is spread over multiple buildings similar to an office park and — from the rendering — may include some residential areas. But it may be a uniquely Chinese approach that brings data centers and developers together.

These big projects, whether supercomputers or sprawling software development office parks, can garner a lot of attention. But China’s overall level of IT spending, while growing rapidly, is only one-fifth that of the U.S.

According to market research firm IDC, China’s IT spending, which includes hardware, packaged software and services, is forecast to total about $112 billion this year, up 15.6% from $97 billion in 2010. By comparison, U.S. IT spending is expected to reach $564 billion this year, a 5.9% increase from 2010.
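Those figures are straightforward to sanity-check; the short calculation below simply re-derives the growth rate and the roughly one-fifth ratio from the IDC numbers quoted above.

# Sanity check of the IDC figures quoted above.
china_2010, china_2011 = 97e9, 112e9     # China IT spending, US dollars
us_2011 = 564e9                          # U.S. IT spending forecast

print(round((china_2011 / china_2010 - 1) * 100, 1))  # ~15.5% growth (IDC reports 15.6% on unrounded figures)
print(round(china_2011 / us_2011, 2))                  # ~0.20, i.e. about one-fifth of U.S. spending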

China’s IT industry isn’t that big at this point and “there is a lot of reliance on the vendors” to design data centers, said Dale Sartor, an engineer at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory, who visited about eight data centers in China last year.

Sartor, who leads a team of energy efficiency specialists, is on a project to “scope” out the possibility of helping the Chinese on data center energy efficiency issues, something the Energy Department has already been doing in India for several years.

Among the things Sartor is working on, in an effort that includes the China Electronics Standardization Institute, is data center standards development. He said there is a lot more regulation in China on data center design, but these regulations “haven’t to date paid a lot of attention to energy efficiency.”

Sartor expects to see accelerating data center development in China, particularly involving very large centers delivering cloud services. Large data centers may soon be the norm.

“I got a sense that the cloud is going to be huge in China for both efficiency reasons as well as the ability to control,” said Sartor. “If everything was cloud computing and the government owns it, it’s much easier to keep your finger on the Internet and other issues than [by using] a very distributed model.”

China will be using IBM’s data center design services, among other services, in the Hebei Province complex. It is working with Range Technology Development Co. on the project.

China’s rapid IT growth has been a plus for IBM, which said its growth in that country in 2010 was up 25% over the year before.

The first phase of the Hebei Province plan calls for building seven low-slung data centers. But the data center space could easily expand to more than a million square feet. The plan calls for another six data center buildings, three on either side of the initial seven, if they’re needed. The center is expected to be completed in 2016, IBM said.

In terms of size, the data centers will be among the world’s biggest. The largest known data center complex is a 1.1-million-square-foot facility in Chicago owned by Digital Realty Trust, according to Data Center Knowledge, which has ranked the data centers by size.

Source | Computer World

Augmented reality, machine learning and the cleric

Saturday, February 12th, 2011

An augmented reality app for the Apple iPhone from the Museum of London lays historic images over London landmarks Photo: Museum of London

When Presbyterian minister and mathematician Thomas Bayes put quill to paper in the 18th century, little could he know that one day his equations would help meld the virtual and physical world.

But more than 200 years after Bayes’ death, Mike Lynch, CEO of Europe’s second largest software company, Autonomy, is arguing that machine-learning software built on Bayes’ theorem on probabilistic relationships will underpin the next major shift in computing – the move to augmented reality (AR).

“One of the biggest areas [of computing] is going to be in the area of augmented reality – it takes the online world and slaps it right in the middle of the real world,” Lynch said, speaking to silicon.com at the recent Intellect Annual Regent Conference 2011.

Today, augmented reality apps run on smartphones, layering digital information over video of the real world taken by the phone camera in real time. But in future, the apps could lay digital information directly over everything we see, using screens built into glasses or contact lenses.

Smartphone apps already exist that do things such as lay historic photos over images of London landmarks, but Lynch said AR will eventually permeate our lives – putting the digital world at the heart of everyday interactions.

“Perhaps a printed poster on the wall becomes animated, and you can click on it and buy the DVD – suddenly what was a simple ad becomes a way you can buy something,” he said.

“Or you’re walking around London and you hold up your phone to a statue of Eros and it tells you the history of it.

“Or you meet someone on the street, hold up your phone and it tells you about what they’re interested in, and maybe in the virtual world they also have a parrot sitting on their shoulder.

“It’s a completely different way of interacting with vast amounts of information in situ and in context.”

Machine learning

Many AR apps available today rely on GPS and digital compasses to work out what the phone is pointing at and what information to display, but future AR apps will increasingly need to understand what the user is looking at and what digital information they want to see, a process that will require machine learning.

“Everything we are talking about comes down to the ability of the computer to understand what something means,” Lynch said.

“It’s the mathematics of Thomas Bayes that allows computers to learn what things mean.

“It’s a self-learning system – so basically by reading the newspapers [a machine] learns all about our world. For example, a computer could learn that ‘Becks’ is David Beckham, and that he’s married to ‘Posh’, that he’s very good at football and a bit of a fashion icon.”
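Bayes’ theorem says, roughly, that the probability of a meaning given the evidence is proportional to how likely that evidence is under the meaning, times how common the meaning is. The sketch below is a minimal illustration of that idea, not Autonomy’s technology: a toy naive Bayes classifier that guesses what “becks” refers to from the surrounding words. The categories, training snippets and counts are invented for the example.

# Toy naive Bayes classifier, purely illustrative (not Autonomy's software).
# It guesses whether "becks" in a snippet refers to the footballer or to the
# beer, based on co-occurring words. The training snippets are invented.
from collections import Counter
import math

training = [
    ("footballer", "becks scores free kick for england"),
    ("footballer", "posh and becks attend fashion week"),
    ("footballer", "becks transfer rumour football club"),
    ("beer",       "a cold becks with dinner"),
    ("beer",       "becks lager on tap at the pub"),
]

class_counts = Counter()
word_counts = {}
for label, text in training:
    class_counts[label] += 1
    word_counts.setdefault(label, Counter()).update(text.split())
vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        # log P(label) + sum of log P(word | label), with add-one smoothing
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("becks free kick"))    # footballer
print(classify("becks at the pub"))   # beer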

Drowning in data

Lynch’s vision of the near future is a nice fit for Autonomy and its specialism in machine-learning and pattern-recognition software that can analyse unstructured data – information that has not been labelled and linked to other information inside a database, where it can be read and understood by a machine.

Since Autonomy was founded in Cambridge in 1996, the company has been helping businesses tackle the tide of unstructured information that flows into a modern business.

Today, Autonomy has a market capitalisation of $7bn and a customer list that includes more than 20,000 major organisations worldwide – including BAE Systems, the BBC, GlaxoSmithKline, Nasa and the Houses of Parliament.

The amount of unstructured information – whether it is text in an email or an audio recording of a phone call – is growing so quickly that Lynch believes organisations will soon have no choice but to task machines with analytical work that previously would have been the preserve of humans.

“Some 85 per cent of what you deal with at work is unstructured information,” Lynch said.

“You can replace people in lots of tasks where people are looking at unstructured information – for example, reading an email and routing it to someone else, looking at security camera footage or going through documents to find which are relevant to a law suit.

“If you can get a computer to do [those tasks] then that’s a phenomenal saving, and it frees up the human to do something more interesting.

“It’s going to have to be that way because the amount of unstructured information is growing at 67 per cent [each year] – so if you are going to use people you better get breeding.”
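Compounded year on year, that growth rate is what makes the joke land: the short calculation below shows that 67 per cent annual growth multiplies the volume of unstructured information roughly thirteen-fold in five years.

# What 67% annual growth in unstructured information compounds to.
growth = 1.67
for years in (1, 3, 5, 10):
    print(years, "years:", round(growth ** years, 1), "times today's volume")
# 5 years -> ~13x; 10 years -> ~169x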

Perhaps in a nod to the rise of AR, Lynch said the most valuable lesson he had learnt since starting Autonomy was that the tech industry is built on shifting sands.

“We always think everything is set in stone and this is how it is. For example, Microsoft dominates the industry. The one thing you learn is nothing is set in stone, all the stones are moving, there’s incredible opportunity all over the place and the fat lady has not sung,” he said.

But even as technology accelerates the pace of change, and the digital world becomes intertwined with the physical, Lynch takes comfort that, AR future or not, some things will never change.

“I live in Suffolk and the nice thing about Suffolk is that the conversation down the pub is the same as it has been for the last 500 years, which is ‘How do you get rid of moles?’,” he said.

And as confident as he is in taming the world’s information, Lynch admits this is one challenge that has got him beat, conceding: “It probably will always be an unsolvable problem.”

Source | Silicon

Brain ‘network maps’ reveal clue to mental decline in old age

Saturday, February 12th, 2011

The human brain operates as a highly interconnected small-world network, not as a collection of discrete regions as previously believed, with important implications for why many of us experience cognitive declines in old age, a new study shows.

Using graph theory, Australian researchers have mapped the brain’s neural networks and for the first time linked them with specific cognitive functions, such as information processing and language. Results from the study are published in the prestigious Journal of Neuroscience.

The researchers from the University of New South Wales are now examining what factors may influence the efficiency of these networks in the hope they can be manipulated to reduce age-related decline.

“While particular brain regions are important for specific functions, the capacity of information flow within and between regions is also crucial,” said study leader Scientia Professor Perminder Sachdev from UNSW’s School of Psychiatry.

“We all know what happens when road or phone networks get clogged or interrupted. It’s much the same in the brain.

“With age, the brain network deteriorates and this leads to slowing of the speed of information processing, which has the potential to impact on other cognitive functions.”

The advent of new MRI technology and increased computational power had allowed the development of the neural maps, resulting in a paradigm shift in the way scientists view the brain, Professor Sachdev said.

“In the past when people looked at the brain they focused on the grey matter in specific regions because they thought that was where the activity was. White matter was the poor cousin. But white matter is what connects one brain region to another and without the connections grey matter is useless,” he said.

In the study, the researchers performed magnetic resonance imaging (MRI) scans on 342 healthy individuals aged 72 to 92, using a new imaging technique called diffusion tensor imaging (DTI).

Using a mathematical technique called graph theory, they plotted and measured the properties of the neural connectivity they observed.
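In this kind of analysis, each cortical region becomes a node and each white-matter connection an edge, and “efficiency” is typically computed from shortest-path lengths across the resulting graph. The snippet below is a generic illustration of such a measure using the networkx library, not the authors’ pipeline; the toy network and region names are invented.

# Generic illustration (not the UNSW analysis pipeline): global efficiency,
# a standard graph-theory measure based on shortest path lengths, computed
# for a toy network whose nodes stand for brain regions and whose edges
# stand for white-matter connections.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("frontal", "parietal"),
    ("parietal", "occipital"),
    ("frontal", "temporal"),
    ("temporal", "occipital"),
])
print(nx.global_efficiency(G))   # mean inverse shortest-path length over all node pairs

# Losing a connection, as white matter deteriorates with age, lowers the
# efficiency of the whole network.
G.remove_edge("temporal", "occipital")
print(nx.global_efficiency(G))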

“We found that the efficiency of the whole brain network of cortical fibre connections had an influence on processing speed, visuospatial function – the ability to navigate in space – and executive function,” said study first author Dr Wei Wen.

“In particular greater processing speed was significantly correlated with better connectivity of nearly all the cortical regions of the brain.”

Professor Sachdev said the findings help explain how cognitive functions are organised in the brain, and the more highly distributed nature of some functions over others.

“We are now examining the factors that affect age-related changes in brain network efficiency – whether they are genetic or environmental – with the hope that we can influence them to reduce age-related decline,” Professor Sachdev said.

“We know the brain is not immutable; that if we work on the plasticity in these networks we may be able to improve the efficiency of the connections and therefore cognitive functions.”

Source | University of New South Wales

‘Smartest Machine on Earth’ on NOVA

Saturday, February 12th, 2011

“Jeopardy!” challenges even the best human minds. Can a computer win at “Jeopardy!”? Aired February 9, 2011 on PBS.




JPEG for the mind: How the brain compresses visual information

Saturday, February 12th, 2011

Most of us are familiar with the idea of image compression in computers. File extensions like “.jpg” or “.png” signify that millions of pixel values have been compressed into a more efficient format, reducing file size by a factor of 10 or more with little or no apparent change in image quality. The full set of original pixel values would occupy too much space in computer memory and take too long to transmit across networks.
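For a concrete sense of those numbers, the sketch below compares the raw pixel storage of a one-megapixel image with its size after JPEG compression, using the Pillow imaging library. It is illustrative only; the exact ratio depends heavily on image content and quality settings, and a uniform synthetic image compresses far more than the roughly ten-fold typical of photographs.

# Illustrative only: raw pixel storage vs. JPEG-compressed size for a
# one-megapixel image. Real photographs typically compress about ten-fold;
# this uniform synthetic image compresses far more.
import io
from PIL import Image

img = Image.new("RGB", (1000, 1000))   # a blank 1-megapixel RGB image
raw_bytes = 1000 * 1000 * 3            # 3 bytes per pixel, uncompressed

buf = io.BytesIO()
img.save(buf, format="JPEG", quality=85)
print("raw:", raw_bytes, "bytes  jpeg:", buf.tell(), "bytes")
print("compression factor:", round(raw_bytes / buf.tell()))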

The brain is faced with a similar problem. The images captured by light-sensitive cells in the retina are on the order of a megapixel. The brain does not have the transmission or storage capacity to deal with a lifetime of megapixel images. Instead, the brain must select out only the most vital information for understanding the visual world.

In a study published online today, a Johns Hopkins team led by neuroscientists Ed Connor and Kechen Zhang describes what appears to be the next step in understanding how the brain compresses visual information down to the essentials.

They found that cells in area “V4,” a midlevel stage in the primate brain’s object vision pathway, are highly selective for image regions containing acute curvature. Experiments by doctoral student Eric Carlson showed that V4 cells are very responsive to sharply curved or angled edges, and much less responsive to flat edges or shallow curves.

To understand how selectivity for acute curvature might help with compression of visual information, co-author Russell Rasquinha (now at University of Toronto) created a computer model of hundreds of V4-like cells, training them on thousands of natural object images. After training, each image evoked responses from a large proportion of the virtual V4 cells — the opposite of a compressed format. And, somewhat surprisingly, these virtual V4 cells responded mostly to flat edges and shallow curvatures, just the opposite of what was observed for real V4 cells.

The results were quite different when the model was trained to limit the number of virtual V4 cells responding to each image. As this limit on responsive cells was tightened, the selectivity of the cells shifted from shallow to acute curvature. The tightest limit produced an eight-fold decrease in the number of cells responding to each image, comparable to the file size reduction achieved by compressing photographs into the .jpeg format. At this level, the computer model produced the same strong bias toward high curvature observed in the real V4 cells.

Why would focusing on acute curvature regions produce such savings? Because, as the group’s analyses showed, high-curvature regions are relatively rare in natural objects, compared to flat and shallow curvature. Responding to rare features rather than common features is automatically economical.
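The economy argument can be made concrete with invented numbers: if detectors are tuned to a feature that appears in most images, most detectors fire for every image, whereas detectors tuned to a rare feature stay mostly silent, so each image is represented by a much sparser, cheaper code.

# Invented toy numbers, purely to illustrate why coding rare features is cheap.
# Assume each of 100 detectors is tuned to a feature, and a detector fires
# whenever its feature is present in the image.
n_detectors = 100
for name, frequency in (("flat/shallow edges", 0.80), ("acute curvature", 0.10)):
    expected_active = n_detectors * frequency
    print(f"{name}: present in {frequency:.0%} of images ->",
          f"~{expected_active:.0f} of {n_detectors} detectors active per image")
# 80 active vs. 10 active: an eight-fold reduction, in line with the study's figure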

Despite the fact that they are relatively rare, high-curvature regions are very useful for distinguishing and recognizing objects, said Connor, a professor in the Solomon H. Snyder Department of Neuroscience in the School of Medicine, and director of the Zanvyl Krieger Mind/Brain Institute.

“Psychological experiments have shown that subjects can still recognize line drawings of objects when flat edges are erased. But erasing angles and other regions of high curvature makes recognition difficult,” he explained.

Brain mechanisms such as the V4 coding scheme described by Connor and colleagues help explain why we are all visual geniuses.

“Computers can beat us at math and chess,” said Connor, “but they can’t match our ability to distinguish, recognize, understand, remember, and manipulate the objects that make up our world.” This core human ability depends in part on condensing visual information to a tractable level. For now, at least, the .brain format seems to be the best compression algorithm around.

Source | Physorg