Archive for March, 2011

A brain–computer interface allows paralysed patients to play music with brainpower alone

Saturday, March 26th, 2011

The brain–computer interface allows paralysed patients to play music just by thinking about it. (Image: ICCMR Research Team, University of Plymouth)

A pianist plays a series of notes, and the woman echoes them on a computerized music system. The woman then goes on to play a simple improvised melody over a looped backing track. It doesn’t sound like much of a musical challenge — except that the woman is paralysed after a stroke, and can make only eye, facial and slight head movements. She is making the music purely by thinking.

This is a trial of a computer-music system that interacts directly with the user’s brain, by picking up the tiny electrical impulses of neurons. The device, developed by composer and computer-music specialist Eduardo Miranda of the University of Plymouth, UK, working with computer scientists at the University of Essex, should eventually help people with severe physical disabilities, caused by brain or spinal-cord injuries, for example, to make music for recreational or therapeutic purposes. The findings are published online in the journal Music and Medicine.

“This is an interesting avenue, and might be very useful for patients,” says Rainer Goebel, a neuroscientist at Maastricht University in the Netherlands who works on brain-computer interfacing.

Therapeutic use

Evidence suggests that musical participation can be beneficial for people with neurodegenerative diseases such as dementia and Parkinson’s disease. But people who have almost no muscle movement have generally been excluded from such benefits, and can enjoy music only through passive listening.

The development of brain–computer interfaces (BCIs) that enable users to control computer functions by mind alone offers new possibilities for such people (see Mental ping-pong could aid paraplegics). In general, these interfaces rely on the user’s ability to learn how to self-induce particular mental states that can be detected by brain-scanning technologies.

Miranda and his colleagues have used one of the oldest of these systems: electroencephalography (EEG), in which electrodes on the scalp pick up faint neural signals. The EEG signal can be processed quickly, allowing fast response times, and the instrument is cheaper and more portable than brain-scanning techniques such as magnetic resonance imaging and positron-emission tomography.

Previous efforts using BCIs have focused on moving computer screen icons such as cursors, but Miranda’s team sought to achieve the much more complex task of enabling users to play and compose music. Miranda says that he first became aware of the then-emerging field of BCIs more than a decade ago while researching how to make music using brainwaves. “When I realized the potential of a musical BCI for the wellbeing of severely disabled people,” he says, “I couldn’t leave the idea alone. Now I can’t separate this work from my activities as a composer.”

The trick is to teach the user how to associate particular brain signals with specific tasks by presenting a repeating stimulus — auditory, visual or tactile — and getting the user to focus on it. This elicits a distinctive, detectable pattern in the EEG signal. Miranda and his colleagues show several flashing ‘buttons’ on a computer screen, which each trigger a musical event. The users push a button just by directing their attention to it.

For example, a button could be used to generate a melody from a preselected set of notes. The user can alter the intensity of the control signal – how ‘hard’ the button is pressed – by varying the intensity of attention, and the result is fed back to them visually as a change in the button’s size. In this way, any one of several notes can be selected by mentally altering the intensity of pressing.
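The attention-driven selection described above can be sketched in a few lines. The note names, the four-level layout and the normalised control signal below are illustrative assumptions, not details of the Plymouth system:

```python
# Toy sketch of intensity-based note selection. A real system would
# derive control_signal from the EEG response to the flashing button;
# here it is simply a number between 0 and 1.

NOTES = ["C", "E", "G", "B"]  # one note per level of 'pressing' intensity

def select_note(control_signal: float) -> str:
    """Map a normalised attention-intensity signal (0..1) to a note.

    The signal plays the role of how 'hard' the on-screen button is
    pressed: stronger focus selects a different note, as if pressing
    a piano key with more force.
    """
    level = min(int(control_signal * len(NOTES)), len(NOTES) - 1)
    return NOTES[level]

def button_size(control_signal: float, base: int = 40) -> int:
    """Visual feedback: the button grows with signal intensity (pixels)."""
    return base + int(control_signal * base)

# Weak focus picks the first note; strong focus picks the last.
weak_choice = select_note(0.1)
strong_choice = select_note(0.95)
```

The feedback loop matters: because the user sees the button grow or shrink with their own signal, they can learn to hold a target intensity, which is what makes multi-note selection practical.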

With a little practice, this allows users to create a melody as if they were selecting keys on a piano. And, as with learning an instrument, say the researchers, “the more one practices the better one becomes”.

Back in control

The researchers trialled their system on a female patient who has locked-in syndrome, a form of almost total paralysis caused by brain lesions, at the Royal Hospital for Neuro-disability in London. During a two-hour session, she got the hang of the system and was eventually playing along with a backing track. She reported that “it was great to be in control again”.

Goebel points out that the patients still need to be able to control their eye movements, which people with total locked-in syndrome cannot. In such partial cases, he says, “one can usually use gaze directly for controlling devices, instead of an EEG system”. But Miranda points out that eye-gazing alone does not permit variations in the intensity of the signal. “Eye gazing is comparable to a mouse or joystick,” he says. “Our system adds another dimension, which is the intensity of the choice. That’s crucial for our musical system.”

Miranda says that although increasing the complexity of the musical tasks is not a priority, music therapists have suggested it would be better if the system were more like a musical instrument — for instance, with an interface that looks like a piano keyboard. He admits that it is not easy to raise the number of buttons or keys beyond four, but is confident that “we will get there eventually”.

“The flashing thing does not need to be on a computer screen,” he says. It could, for example, be a physical electronic keyboard with light-emitting diodes on the keys. “You could play it by staring at the keys,” he says.

Source | Nature

Charlie Rose interview with Ray Kurzweil and director Barry Ptolemy now online

Saturday, March 26th, 2011

Ray Kurzweil

Ray Kurzweil and Barry Ptolemy appeared on the Charlie Rose show Friday night to discuss the movie Transcendent Man, directed by Barry Ptolemy. You can see the interview here. You can also watch “In Charlie’s Green Room with Ray Kurzweil,” recorded the same evening.

Transcendent Man by Barry Ptolemy focuses on the life and ideas of Ray Kurzweil. It is currently available on iTunes in the United States and Canada and on DVD. Tickets to the London and San Francisco screenings in April are available.

Surveillance robots know when to hide

Saturday, March 26th, 2011

The creation of robots that can hide from humans while spying on them brings autonomous spy machines one step closer

The spy approaches the target building under cover of darkness, taking a zigzag path to avoid well-lit areas and sentries. He selects a handy vantage point next to a dumpster, taking cover behind it when he hears the footsteps of an unseen guard. Once the coast is clear, he is on the move again – trundling along on four small wheels.

This is no human spy but a machine, a prototype in the emerging field of covert robotics. It was being put through its paces at a demonstration late last year by Lockheed Martin’s Advanced Technology Laboratories at Cherry Hill, New Jersey. With an aerial drone to their credit (see “Unseen watcher in the sky”), the company now wants to design autonomous robots that can operate around humans without being detected.

What makes the robot special is its ability to build a computer model of its surroundings, incorporating information on lines of sight. The robot is fitted with a laser scanner to allow it to covertly map its environment in 3D. It also has a set of acoustic sensors which it uses to distinguish nearby footsteps and their direction.

Lead engineer Brian Satterfield says the robot was designed to operate within four constraints: “Avoiding visible detection by sentries of known locations, avoiding potential detection by sentries whose positions were unknown, avoiding areas in which the robot would have no means of escape, and, as this robot was designed to run at night, avoiding areas that were well lit.” To make it hard to spot in the dark, the robot was painted black.

If the robot believes it is in danger of being detected by an approaching sentry, it will try to get to a place where it can hide, Satterfield says. His comment is an example of how natural it is for us to talk about such robots as if they understand how they are perceived and have a “theory of mind”.

“Lockheed Martin’s approach does include a sort of basic theory of mind, in the sense that the robot makes assumptions about how to act covertly in the presence of humans,” says Alan Wagner of the Georgia Institute of Technology in Atlanta, who works on artificial intelligence and robot deception.

But the level at which the robot’s software operates is probably limited to task-specific instructions such as, “if you hear a noise, scurry to the nearest dark corner”, he says. That’s not sophisticated enough to hide from humans in varied environments.

“Significant AI will be needed to develop a robot which can act covertly in a general setting,” Wagner says. “The robot will need to consider its own shape and size, to have the ability to navigate potential paths, [to be aware of] each person’s individual line of view, the impact that its movement will have on the environment, and so on.”

Satterfield’s robot was built with off-the-shelf components. Both he and Wagner say that specialised hardware which is more compact and quieter will improve future robots’ mobility and their ability to stay hidden. “There are very few fundamental limits that would prevent robots from eventually conducting extended covert missions and evading detection by humans,” Satterfield says.

Lockheed Martin’s work looks ready to emerge, albeit quietly, into the real world. The US army recently solicited proposals for a “persistent surveillance” robot with concealment capabilities and suited for extended deployments. Later this year, the US Department of Defense is expected to back that up with cash awards for working designs.

Source | New Scientist

I Took the Turing Test

Saturday, March 26th, 2011

In his landmark 1950 paper “Computing Machinery and Intelligence,” the mathematician, philosopher and code breaker Alan Turing proposed a method for answering the question “Can machines think?”: an “imitation game” in which an “interrogator,” C, interviews two players, A and B, via teleprinter, then decides on the basis of the exchange which is human and which is a computer.

Turing’s radical premise was that the question “Can a machine win the imitation game?” could replace the question “Can machines think?” — an upsetting idea at the time, as the neurosurgeon Sir Geoffrey Jefferson asserted in 1949: “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain — that is, not only write it but know that it had written it.” Turing demurred: if the only way to be certain that a machine is thinking “is to be the machine and to feel oneself thinking,” wouldn’t it follow that “the only way to know that a man thinks is to be that particular man”? Nor was the imitation game, for Turing, a mere thought experiment. On the contrary, he predicted that in 50 years, “it will be possible to program computers . . . to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”

Well, he was almost right, as Brian Christian explains in “The Most Human Human,” his illuminating book about the Turing test. In 2008, a computer program called Elbot came just one vote shy of breaking Turing’s 30 percent silicon ceiling. The occasion was the annual Loebner Prize Competition, at which programs called “chatterbots” or “chatbots” face off against human “confederates” in scrupulous enactments of the imitation game. The winning chatbot is awarded the title “Most Human Computer,” while the confederate who elicits “the greatest number of votes and greatest confidence from the judges” is awarded the title “Most Human Human.”

It was this title that Christian — a poet with degrees in computer science and philosophy — set out, in 2009, to win. And he was not about to go “head-to-head (head-to-motherboard?) against the top A.I. programs,” he writes, without first getting, as it were, in peak condition. After all, for Elbot to have fooled the judges almost 30 percent of the time into believing that it was human, its rivals had to have failed almost 30 percent of the time to persuade the judges that they were human. To earn the “Most Human Human” title, Christian realized, he would have to figure out not just why Elbot won, but why humanity lost.

His quest is, more or less, the subject of “The Most Human Human,” an irreverent picaresque that follows its hero from the recondite arena of the “Nicomachean Ethics” to the even more recondite arena of legal deposition to perhaps the most recondite arena of all, that of speed dating — and on beyond zebra. What Christian learns along the way is that if machines win the imitation game as often as they do, it’s not because they’re getting better at acting human; it’s because we’re getting worse.

Take, for example, the loathsome infinite regress of telephone customer service. You pummel your way through a blockade of menu options only to find that the live operator, once you reach her, talks exactly like the automated voice you’re trying to escape. And why is this? Because, Christian discovers, that’s how operators are trained to talk. Nor is this emulation of the electronic limited to the commercial realm. In chess, he notes, the “victory” of the computer program Deep Blue over Garry Kasparov had the paradoxical effect of convincing a whole generation of young chess players that the route to a grandmaster title was through rote memorization of famous matches. Whereas in the past these chess players might have dreamed of growing up to be Kasparov, master of strategy, now they dream of growing up to be Deep Blue, master of memory.

So how do you win the imitation game? “Just be yourself,” a past confederate advises Christian. But what does it mean to “be yourself”? In pursuing the question, Christian finds his way to Nietzsche, who “held the startling opinion that the most important part of ‘being oneself’ was — in the Brown University philosopher Bernard Reginster’s words — ‘being one self, any self.’ ” Which, as it turns out, is immensely challenging for computers. For instance, to circumvent the difficulty, the program known as Cleverbot “crowdsources” selfhood, borrowing intelligence from the humans who visit its Web site; it’s from this “conversational purée” that it draws its remarks and retorts, thereby generating the illusion of what Christian calls “coherence of identity.” But while Cleverbot can speak persuasively about “the things to which there is a right answer independent of the speaker,” if you ask it where it lives, “you get a pastiche of thousands of people talking about thousands of places.” What you realize, in other words, isn’t that “you aren’t talking with a human,” but that “you aren’t talking with *a* human”: there is no single self on the other end.

And that’s precisely the difficulty. In a wiki-age that privileges the collective over the personal, Christian suggests, we have become tone deaf to the difference between the human voice and the chatbot voice. Nor is the effect limited to the Loebner Prize. From smartphones whose predictive-text algorithms auto-correct the originality out of our language (“the more helpful our phones get, the harder it is to be ourselves”) to “super-automatic” espresso machines that sidestep the nuanced maneuvers of the human barista, technology militates against Ford Madox Ford’s “personal note,” Nietzsche’s “single taste”: against selfhood itself.

Christian is at his best when he is at his most hortatory. “Cobbled-together bits of human interaction do not a human relationship make,” he inveighs early on. “Not 50 one-night stands, not 50 speed dates, not 50 transfers through the bureaucratic pachinko. No more than sapling tied to sapling, oak though they may be, makes an oak. Fragmentary humanity isn’t humanity.” And later: “For everyone out there fighting to write idiosyncratic, high-entropy, unpredictable, unruly text, swimming upstream of spell-check and predictive auto-completion: Don’t let them banalize you.”

As “The Most Human Human” demonstrates, Christian has taken his own words to heart. An authentic son of Frost, he learns by going where he has to go, and in doing so proves that both he and his book deserve their title.

David Leavitt’s books include “The Man Who Knew Too Much: Alan Turing and the Invention of the Computer” and “The Indian Clerk.”

Source | New York Times

Turing machine built from wood and scrap metal

Saturday, March 26th, 2011

A mechanical machine that can run the same algorithms as a modern computer has now been built out of wood and scrap metal. Created by software engineer Jim MacArthur, it works by using levers and cams and only requires electricity to power a small motor (see video above).

The machine is a close physical model of the theoretical Turing machine – a device first described by Alan Turing in 1937 as a thought experiment to understand the limits of mechanical computation. According to the theory, the machine performs calculations using a set of rules to manipulate symbols on an infinite strip of tape.

Instead of using tape, this machine’s memory uses ball bearings placed on a steel grid. A ball can represent one of five different symbols based on its position on the grid. The machine reads and writes data by repositioning the balls into different cells. It does this by moving along the grid, lifting ball bearings with magnets and then depositing them into a new position based on a set of rules.
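The rule-driven read-write cycle described above can be sketched as a minimal simulator. The unary-increment rule table below is an illustrative example, not MacArthur’s actual rule set:

```python
# Minimal Turing machine simulator. The tape is a dict so it can grow
# in both directions, standing in for the (theoretically infinite)
# strip -- or the steel grid of ball bearings in the mechanical version.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move)."""
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, 0)               # read the current cell
        state, tape[head], move = rules[(state, symbol)]
        head += {"L": -1, "R": 1}[move]          # move the read/write head
    return tape

# Example: append one mark to a block of 1s (unary increment).
increment = {
    ("start", 1): ("start", 1, "R"),   # skip over the existing marks
    ("start", 0): ("halt", 1, "R"),    # write one more mark, then stop
}
tape = run_turing_machine(increment, {0: 1, 1: 1, 2: 1})
```

Every step is a pure table lookup on (state, symbol), which is why the same logic can be realised in levers, cams and magnets just as well as in software.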

A true Turing machine requires an infinite track or tape to run on but according to MacArthur, his machine is as close as you can get to a physical replica. It has no practical computing applications and would take months to add a few numbers together but MacArthur says it was fun to build. “Since you can see this computer working, it could be useful for educational purposes,” he says.

His machine was showcased earlier this month at Maker Faire UK in Newcastle.

If you enjoyed this video, you might also like to see the world’s oldest computer recreated from Lego or a reconstruction of the computer that broke German code during the second world war.

Source | New Scientist

Optical imaging method shows brain multiplexing

Saturday, March 26th, 2011

Visualization of how the primary visual cortex encodes both orientation and retinotopic motion of a visual object simultaneously

Researchers have developed a real-time optical imaging method that exploits a specific voltage-sensitive dye to demonstrate brain multiplexing in the visual cortex, says Dr. Dirk Jancke, neuroscientist at the Ruhr-University in Bochum, Germany.

Neurons synchronize with different partners at different frequencies. Optical imaging allows fine-grained resolution of cortical activity patterns, producing maps in which local groups of active nerve cells represent grating orientation. A particular grating orientation activates different groups of nerve cells, resulting in unique patchy patterns.

The researchers used simple oriented gratings with alternating black-and-white stripes drifting at constant speed across a monitor screen. They detected brain activity that signals both the grating’s orientation and its motion simultaneously.

They used a voltage-sensitive dye that changes fluorescence whenever nerve cells receive or send electrical signals. High-resolution camera systems simultaneously captured the activities of millions of nerve cells across several square millimeters of the brain.

The study showed that motion direction and speed can be estimated independently from orientation maps. This resolves ambiguities occurring in visual scenes of everyday life, and starts to show how the brain handles complex data to create a stable perception at a given moment in time, says Jancke.

Ref.: “Independent encoding of grating motion across stationary feature maps in primary visual cortex visualized with voltage-sensitive dye imaging,” Neuroimage, January 4, 2011.

Fruit Flies Could Hold Key to Future Internet

Saturday, March 26th, 2011

When I watched IBM Corp. (NYSE: IBM)’s Watson on Jeopardy a few weeks ago, it occurred to me that part of why the drubbing was possible was the computing power created by thousands of processors working in parallel. Exciting news, since advances in parallel processing will allow for more intelligent routing on the Internet.

Recently, I stumbled on some research related to this. Apparently, getting multiple processors to play nice is a work in progress. And developing truly effective rules for the road (in the form of distributed computing algorithms) remains a major challenge for computer scientists.

That’s certainly understandable. What I wasn’t prepared for was where scientists are looking for answers: fruit flies. Yep, fruit flies.

Not being a microbiologist or computer scientist, I thought it best to ask someone who’s both about what the heck’s going on.

Fortunately for me, I found Dr. Ziv Bar-Joseph, associate professor at Carnegie Mellon and lead author of the research paper. Here’s what he had to say:

Kassner: Your research is focused on determining optimal communications paths in digital environments such as multiprocessor arrays. What’s with fruit flies?

Bar-Joseph: Network applications rely on organizing nodes to determine routing and how to control processors. One method uses a Maximal Independent Set [MIS], a technique that identifies a subset of computers that together connect to every other node in the network and provide structure.

Determining how to select a MIS is difficult and has been under scrutiny for many years. It turns out that fruit flies solve a similar problem. During brain development, a process called Sensory Organ Precursor [SOP] selection occurs.

As in computer networks, some cells (SOP) in the brain will become local leaders (MIS) and convey information from the environment to neighboring cells.

It’s true, methods for selecting an MIS in computer networks exist. But, until our work with flies, selecting the MIS could not be solved without knowing how many neighbors each network node has. Since flies solve the problem without relying on such knowledge, determining how they do it becomes an important question. The answer could lead to robust and efficient computational methods.
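The fly-inspired election can be sketched as a randomised simulation. The graph, firing probability and round count below are illustrative assumptions; the point of the approach discussed above is that no node ever consults its neighbour count:

```python
# Sketch of a fly-inspired randomised MIS election. In each round,
# every undecided node 'fires' (broadcasts) at random; a node that
# fires alone among its neighbours becomes a leader -- like an SOP
# cell in the fly's developing nervous system -- and its neighbours
# drop out of the election.

import random

def elect_mis(neighbours, rounds=200, p=0.2, seed=0):
    """neighbours: node -> set of adjacent nodes. Returns the leader set."""
    rng = random.Random(seed)
    undecided = set(neighbours)
    leaders = set()
    for _ in range(rounds):
        if not undecided:
            break
        firing = {n for n in undecided if rng.random() < p}
        for n in firing:
            # Fired with no firing neighbour: become a leader. Two
            # adjacent nodes can never both win the same round.
            if not (firing & neighbours[n]):
                leaders.add(n)
        for n in leaders:
            undecided.discard(n)
            undecided -= neighbours[n]
    return leaders

# A 5-node path graph: 0-1-2-3-4.
graph = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
mis = elect_mis(graph)
```

The result is independent (no two leaders are adjacent) and, once every node has decided, maximal (every node is a leader or borders one) without any node knowing its degree.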

Kassner: I have to ask, what made you look at fruit flies in the first place?

Bar-Joseph: It was an accident. While visiting [co-author] Naama Barkai, I met one of her students. His project happened to involve the sensory-nerve cells of fruit flies, specifically how cell-fate determination (very experimental) is used to organize the cells into an efficient parallel-processing sensor array.

I felt something similar could solve the MIS problem. After further discussion, it became clear. The fruit-fly cells are doing something unique.

Kassner: I’ve heard about Carnegie Mellon’s FireFly Sensor Network. Is this something that might benefit from your research?

Bar-Joseph: Yes, sensor networks would benefit from our new algorithm. For example, ad-hoc networks consisting of relatively cheap sensing devices are used to monitor environmental hazards.

When activated on site, they need to establish a network hierarchy — exactly what our algorithm does. Another concern is saving energy. Our fruit-fly-derived algorithm is more efficient than any known method. And, hopefully, it will become the method of choice for sensor-network applications.

Kassner: I’m fascinated by what appears to be a convergence between biology and computing. Do you have any thoughts about that? Are you currently looking at any other relationships?

Bar-Joseph: Computing has relied on ideas from biology for a long time, but in a limited way. We now know more, allowing us to develop computational methods not possible before.

Biological systems address many challenges presented by computer networking. For instance, biological processes are often distributed, as are communication systems used by computers. Thus, I believe, solutions for many computer-network problems can be based on what we learn from biological systems.

For example, we know that cells provide fault tolerance in a clever way. Since handling failures is an issue in computer networks, understanding the mechanism could help create better algorithms for handling fault tolerances.

While biology doesn’t necessarily try to find the optimal solution every time, solutions it does come up with (at least the ones that survive) are often robust and adaptable. Currently, this is something that is lacking in computer systems. If we can improve computer technology based on insights from biology, I think it would be great.

Kassner: Thanks, Dr. Bar-Joseph — that’s some amazing stuff. Now… what do you know about mosquitoes?

— Michael Kassner is a writer and consultant specializing in information security.

Source | Internet Evolution

New Film on Alan Turing

Saturday, March 26th, 2011

An international production team has created a teaser for a feature-length drama documentary on Alan Turing, says biographer Dr. Andrew Hodges.

Turing was the British WW II code breaker and early pioneer of computer science and artificial intelligence who proposed an operational test of intelligence as a replacement for the philosophical question, “Can machines think?”

Historians believe that his WW II code-breaking work helped save millions of lives and shortened the war by two years. He founded three new scientific fields: computer science, artificial intelligence, and morphogenesis.

Funding is currently being raised for the film, with a goal of completion in mid-2012, to coincide with the centennial of Turing’s birth.

The dangers of ‘e-personality’

Monday, March 14th, 2011

Excessive use of the Internet, cell phones, and other technologies can cause us to become more impatient, impulsive, forgetful and narcissistic, says psychiatrist Elias Aboujaoude, MD, clinical associate professor of psychiatry and behavioral sciences and director of Stanford University’s impulse control and obsessive-compulsive disorder clinics, in his new book, Virtually You: The Dangerous Powers of the E-Personality.

Drawing from his clinical work and personal experience, he discusses the Internet’s psychological impact and how our online traits are unconsciously being imported into our offline lives.

Source | Kurzweil AI

Predicting future appearance

Monday, March 14th, 2011

A computer program that ages photographic images of people’s faces has been developed by Concordia University’s Department of Computer Science and Software Engineering.

Most face-aged images are currently rendered by forensic artists. Although these artists are trained in the anatomy and geometry of faces, they rely on art rather than science.

“We pioneered a novel technique that combines two previous approaches, known as active appearance models (AAMs) and support vector regression (SVR),” says Khoa Luu, a PhD candidate.

Luu used a combination of AAMs and SVR methods to interpret faces and to “teach” aging rules to the computer. Then, he input information from a database of facial characteristics of siblings and parents taken over an extended period. Using this data, the computer can predict an individual’s facial appearance at a future period.

According to Luu, this technology could serve as a new tool in missing-child investigations and matters of national security — an advance that could help to identify missing kids and criminals on the lam.

Luu’s work appears in the volume series Lecture Notes in Computer Science.

Source | Concordia University

A Search Engine for the Human Body

Monday, March 14th, 2011

Inside out: A close-up of a CT scan processed by new software from Microsoft.

A new search tool developed by researchers at Microsoft indexes medical images of the human body, rather than the Web. On CT scans, it automatically finds organs and other structures, to help doctors navigate and work with 3-D medical imagery.

CT scans use X-rays to capture many slices through the body that can be combined to create a 3-D representation. This is a powerful tool for diagnosis, but it’s far from easy to navigate, says Antonio Criminisi, who leads a group at Microsoft Research Cambridge, U.K., that is attempting to change that. “It is very difficult even for someone very trained to get to the place they need to be to examine the source of a problem,” he says.

When a scan is loaded into Criminisi’s software, the program indexes the data and lists the organs it finds at the side of the screen, creating a table of hyperlinks for the body. A user can click on, say, the word “heart” and be presented with a clear view of the organ without having to navigate through the imagery manually.

Once an organ of interest has been found, a 2-D and an enhanced 3-D view of structures in the area are shown to the user, who can navigate by touching the screen on which the images are shown. A new scan can also be automatically and precisely matched up alongside a past one from the same patient, making it easy to see how a condition has progressed or regressed.

Criminisi’s software uses the pattern of light and dark in the scan to identify particular structures; it was developed by training machine-learning algorithms to recognize features in hundreds of scans in which experts had marked the major organs. Indexing a new scan takes only a couple of seconds, says Criminisi. The system was developed in collaboration with doctors at Addenbrookes Hospital in Cambridge, U.K.
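The training idea described above, learning what each organ’s pattern of light and dark looks like from expert-labelled scans, can be sketched with a toy stand-in: a nearest-centroid rule on two simple intensity features takes the place of the machine-learning machinery of the real system, and the intensity values below are invented:

```python
# Toy organ labelling from voxel intensities. Each patch of a scan is
# summarised by (mean brightness, spread); training patches labelled
# by experts give each organ a feature centroid; new patches are
# assigned to the nearest centroid.

def patch_features(patch):
    """Summarise a patch of voxel intensities as (mean, spread)."""
    n = len(patch)
    mean = sum(patch) / n
    spread = sum(abs(v - mean) for v in patch) / n
    return mean, spread

def train(labelled_patches):
    """labelled_patches: list of (organ, patch). Returns organ centroids."""
    sums = {}
    for organ, patch in labelled_patches:
        f = patch_features(patch)
        total, count = sums.get(organ, ((0.0, 0.0), 0))
        sums[organ] = ((total[0] + f[0], total[1] + f[1]), count + 1)
    return {o: (t[0] / c, t[1] / c) for o, (t, c) in sums.items()}

def classify(patch, centroids):
    """Assign the patch to the organ with the nearest centroid."""
    f = patch_features(patch)
    return min(centroids,
               key=lambda o: (centroids[o][0] - f[0]) ** 2
                           + (centroids[o][1] - f[1]) ** 2)

training = [
    ("lung", [-850, -800, -790, -860]),   # air-filled: very dark
    ("lung", [-820, -840, -810, -830]),
    ("heart", [30, 45, 40, 35]),          # soft tissue: mid-grey
    ("heart", [38, 42, 36, 44]),
    ("bone", [700, 900, 850, 750]),       # dense: bright
    ("bone", [780, 820, 880, 720]),
]
centroids = train(training)
```

Two hand-picked features are enough for this caricature; the real software learns far richer intensity-context features automatically, which is why it can separate organs with similar brightness.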

The Microsoft research group is exploring the use of gestures and voice to control the system. They can plug in the Kinect controller, ordinarily used by gamers to control an Xbox with body movements, so that surgeons can refer to imagery in mid-surgery without compromising their sterile gloves by touching a keyboard, mouse, or screen.

Kenji Suzuki, an assistant professor at the University of Chicago whose research group works on similar tools, says the Microsoft software has the potential to improve patient care, provided it really does make scans easier to navigate. “As medical imaging has advanced, so many images are produced that there is a kind of information overload,” he explains. “The workload has grown a lot.”

Suzuki says Microsoft’s approach is a good one, but that medical professionals might be more receptive to the design if it indexed signs of disease, not just organs. His own research group has developed software capable of recognizing potentially cancerous lung nodules; in trials, it made half as many mistakes as a human expert.

Criminisi sticks by the notion of using organs as a kind of navigation system but says that disease-spotting capability is also under development. He says, “We are working to train it to detect differences between different grades of glioma tumor”—a type of brain tumor.

The Microsoft group also intends the tool to be used at large scales. It could automatically index a collection of 3-D scans or other images, making possible new ways of tracking medical records, says Criminisi. Today, records are kept as text that describes scans and other information. A search tool that finds the word “heart”, for example, would not know if that meant it appeared in a scan or was mentioned in another context. If a hospital’s computer system indexed new scans, the Microsoft software could automatically record what was imaged in a person’s records and when.
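The record-keeping idea above can be sketched as a simple inverted index from organs to scans, which answers “which scans actually show the heart?” in a way free-text search cannot. The patient IDs and organ lists below are invented examples:

```python
# Once each scan has been auto-indexed, map each organ to the set of
# scans in which it was actually imaged (not merely mentioned in text).

from collections import defaultdict

def build_index(scans):
    """scans: list of (scan_id, organs_found). Returns organ -> scan ids."""
    index = defaultdict(set)
    for scan_id, organs in scans:
        for organ in organs:
            index[organ].add(scan_id)
    return index

scans = [
    ("pt1-2011-01", {"heart", "lungs", "liver"}),
    ("pt1-2011-03", {"heart", "lungs"}),
    ("pt2-2011-02", {"liver", "kidneys"}),
]
index = build_index(scans)
```

A lookup like `index["heart"]` then returns exactly the scans in which the organ appears, the distinction the article draws between an imaged organ and a word in a report.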

Source | Technology Review

World’s First Eye-controlled Laptop Presented At CeBIT 2011

Sunday, March 6th, 2011

Computer manufacturer Lenovo has partnered with Swedish startup Tobii Technology to launch the world’s first eye-controlled laptop, which is on display from today at CeBIT in Hannover.

The prototype is fully functional and, according to the manufacturer, provides a more intuitive interface: it relies on the human eyes to point, select and scroll, and complements, rather than replaces, existing control interfaces.

Henrik Eskilsson, CEO of Tobii Technology, says it is only a matter of years before the technology becomes an integral part of the average computer: the eye-tracking technology is mature enough, and only needs to be miniaturised and mass produced to cut the price.

Only 20 eye-controlled laptops have been produced, for demonstration and development purposes; one of the more obvious applications of the laptop would be to help people with special needs.

Others include the capability to zoom into pictures or maps and automatically centre on the area you wish to look at, or to glance at an icon or widget to bring up more information.

In addition, the screen’s brightness can be automatically dimmed and brightened as the system detects whether the user’s eyes are on the screen, in order to save power.
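The dimming behaviour just described can be sketched in a few lines. This is a minimal illustration, not Tobii's implementation; the class, the brightness levels, and the grace period are all invented here. The idea is simply that the screen restores full brightness the moment the tracker sees the user's eyes, and only dims after the gaze has been absent longer than a short grace period, so brief glances away cost nothing.

```python
FULL, DIM = 1.0, 0.2  # brightness levels, as fractions of maximum

class Dimmer:
    def __init__(self, grace_seconds=5.0):
        self.grace = grace_seconds  # tolerate brief glances away
        self.last_seen = 0.0        # time the eyes were last detected

    def brightness(self, eyes_detected, t):
        """Target brightness at time t (seconds), given the tracker's report."""
        if eyes_detected:
            self.last_seen = t
            return FULL
        # Eyes absent: dim only once the grace period has elapsed.
        return DIM if t - self.last_seen > self.grace else FULL
```

In use, a loop would poll the eye tracker a few times a second, pass its report and the current clock time to `brightness()`, and hand the result to the display driver.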

Tobii’s technology relies on 13 patent families that cover aspects such as sensor technology, illumination methods, data transfer mechanisms and eye control interaction techniques.

Among them is what Tobii calls TrueEye, a physiological 3D model of each individual’s eyes, something that could also be used for biometric applications.

Barbara Barclay, general manager of Tobii North America, hinted at future collaborations saying that “what we find most exciting are the opportunities that eye control as part of multi-modal interfaces offer consumer electronics manufacturers in a range of product categories”.

These could include more intuitive user interfaces for smartphones and mobile devices as well as gaming applications (integration with accessories like the Kinect for example).

Source | ITProPortal

Do-It-Yourself Health Care With Smartphones

Sunday, March 6th, 2011

SINGAPORE — For more and more people, computers and software are becoming a critical part of their health care.

Thanks to an array of small devices and applications for smartphones that gather vital health information and store it electronically, consumers can take a more active role in managing their own care, often treating chronic illnesses — and preventing acute ones — without the direct aid of a physician.

“Both health care providers and consumers are embracing smartphones as a means to improving health care,” said Ralf-Gordon Jahns, head of research at research2guidance, which follows the mobile industry.

He added that the firm’s findings “indicate that the long-expected mobile revolution in health care is set to happen.”

With a rapidly aging population in some parts of the world and curbs on government spending, the use of computer-compatible devices and online tools as part of a program of preventive medicine is a growing industry.

A report by Parks Associates in February estimated that in the United States alone, revenue from digital health technology and services would exceed $5.7 billion in 2015, compared with $1.7 billion in 2010, fueled by devices that monitor chronic conditions like hypertension and diabetes and by wellness and fitness applications and programs.

In January, the French start-up Withings introduced a Wi-Fi-enabled cuff that can take your blood pressure and pulse and that connects to an iPhone to synchronize the data with records kept online. The data can be securely stored on a personal page on the Withings Web site or with other personal health record, or P.H.R., service providers, like Google Health and Microsoft’s HealthVault, where it can then be accessed by your doctor.

In February, Entra Health Systems announced a deal with the Swedish mobile phone company Doro to make its MyGlucoHealth service available on their senior-friendly cellphones. With a small device, blood glucose level readings can be sent by text message to a secure MyGlucoHealth portal, which provides instant advice to users on what to eat. MyGlucoHealth, which the company introduced in 2008, is also compatible with Google Health and HealthVault.
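The flow described above — a glucose reading texted to a portal that replies with instant dietary advice — can be sketched as follows. The message format, function names, and thresholds are all invented for illustration and are not MyGlucoHealth's actual protocol or clinical guidance.

```python
def parse_reading(sms_body):
    """Parse a message like 'GLU 142 mg/dL' into a number, or None if malformed."""
    parts = sms_body.split()
    if len(parts) >= 2 and parts[0] == "GLU":
        try:
            return float(parts[1])
        except ValueError:
            return None
    return None

def advice(mg_dl):
    """Return simple threshold-based advice for a blood glucose reading."""
    if mg_dl < 70:
        return "Low: take fast-acting carbohydrate and re-test."
    if mg_dl <= 140:
        return "In range: keep to your usual meal plan."
    return "High: favour low-glycaemic foods and re-test later."
```

A real service would additionally store each parsed reading against the user's account (which is what makes the Google Health and HealthVault integrations possible) rather than only replying to the incoming message.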

Though MyGlucoHealth is already available around the world using various smartphones and simple mobile phones with Bluetooth, early promotion and partnerships centered on Britain, Australia, Germany, the United States and India. The service is about to be heavily promoted in Asia. John Hendel, chairman of Entra Health Systems, said it would be available in Hong Kong on March 29, in partnership with PCCW, and in Singapore, Taiwan and South Korea over the next three months.

“Asia has a very high number of people with mobile phones and with diabetes,” Mr. Hendel said. “It’s a market where there is a lot of genetic predisposition to diabetes, the health care system is typically underfunded and paid for by the patients, and so by coming up with a great cost-effective solution it allows us to capture a big market piece that is just as important as the U.S. market, if not more important.”

A report in November by research2guidance estimated there were more than 17,000 mobile health applications designed for smartphones and that many were aimed at and being adopted by health care professionals. It forecast mobile and wireless health care services would expand significantly to reach 500 million mobile users, or about 30 percent of an estimated 1.4 billion smartphone subscribers worldwide, by 2015.

Microsoft’s HealthVault and Google Health, introduced in the United States in 2007 and 2008 respectively, offer similar open platforms that allow people to store and manage their health information, including immunizations, disease history and prescriptions in one place, with access to the records possible via various devices or mobile applications.

The personal information is stored in a secure, encrypted database and the privacy controls are set entirely by the individual, including what information goes in and who gets to see it.

“We decided to integrate with Google Health and HealthVault because, based on their potential, a lot of people are using them as a destination site,” Mr. Hendel said. “So instead of having to click on our site and then someone else’s site, they’re acting as depository of connected links.”

HealthVault already integrates 170 health care applications, ranging from one that helps triathletes monitor their training and diet to software for managing diabetes, and the platform works with about 90 medical devices, said Mark Johnston, Microsoft’s international business development lead, from Sydney.

HealthVault’s software development kit has been downloaded 30,000 times. “I think this is the leading indicator of what will come forward,” Mr. Johnston said. “And as we continue to land new contracts and open new markets, our value proposition to our development community is only enhanced because we become the distribution channel and they can go to the market with us.”

While Google Health is at least for now confined to the United States, HealthVault has slowly rolled out its platform internationally, linking with local partners. It was deployed in Canada in 2009 through a partnership with Telus, in Germany through a deal with Siemens and in Britain through a deal with Nuffield Health in 2010.

In October, HealthVault also signed an agreement with a large systems integrator in China, iSoftStone, and the two companies are now focusing on a government program in Wuxi, Jiangsu Province.

“They have a very specific ambition which involves diabetes and hypertension patients, and we’re bringing to market a set of applications and connected devices that are being driven by our partners in China,” Mr. Johnston said. “The entire experience will be localized to their needs to also build a high level of consumer trust.”

He added that HealthVault had been concentrating since the beginning on patients with long-term conditions and chronic diseases. “And we’re trying to go really deep on application pairings with devices to treat those patients because there is a very strong value proposition for those,” Mr. Johnston said, given that governments and other health organizations incur most of their costs from these patients.

So far, adoption of online personal health records by consumers has grown slowly. According to a survey by Knowledge Networks conducted in the United States last year, 10 percent of the public now uses P.H.R.’s, compared with 3 percent in 2008. A similar proportion of doctors said they offered P.H.R. tools to their patients.

Jay Chandran, associate research director of health care at Frost & Sullivan, a research and consulting firm, said he believed patients needed to have an incentive to maintain their online personal health records. Otherwise individuals were unlikely to find the motivation to keep their medical records up to date.

“In my opinion, P.H.R. products will become more successful if data can be captured at the transaction level itself, that is, if a P.H.R. can be a subset of electronic health records, where health care service providers provide the real time information,” Mr. Chandran said. “If the records are maintained and entered by health care service providers, the credibility and utility of such data is higher.”

Source | New York Times

Wearable Sensor Reveals What Overwhelms You

Sunday, March 6th, 2011

The Q Sensor, a device made by Affectiva, constantly checks for signs of anxiety, indicated by skin conductance. A USB cable connects to a computer to analyze the data.

Astronaut scientists for hire open new research frontier in space

Sunday, March 6th, 2011

At a press conference Monday at the Next-Generation Suborbital Researchers Conference, held jointly with Virgin Galactic, XCOR, SwRI, and others, Astronauts for Hire Inc. announced the selection of its third class of commercial scientist-astronaut candidates to conduct experiments on suborbital flights.

Among those selected was Singularity University inaugural program faculty advisor, teaching fellow, and track chair Christopher Altman, a graduate fellow at the Kavli Institute of Nanoscience, Delft University of Technology.

“The selection process was painstaking,” said Astronauts for Hire Vice President and Membership Chair Jason Reimuller. “We had to choose a handful of applicants who showed just the right balance of professional establishment, broad technical and operational experience, and a background that indicates adaptability to the spaceflight environment.”

“With the addition of these new members to the organization, Astronauts for Hire has solidified its standing as the premier provider of scientist-astronaut candidates,” said its President Brian Shiro. “Our diverse pool of astronauts in training represent more than two dozen disciplines of science and technology, speak sixteen languages, and hail from eleven countries. We can now handle a much greater range of missions across different geographic regions.”

Altman completed Zero-G and High-Altitude Physiological Training under the Reduced Gravity Research Program at NASA Ames Research Center in Silicon Valley and NASA Johnson Space Center in Houston, and was tasked to represent NASA Ames at the joint US-Japan space conference (JUSTSAP) and the launch conference (PISCES) for an astronaut training facility on the slopes of Mauna Kea Volcano on the Big Island of Hawaii.

Altman’s research has been highlighted in international press and publications including Discover Magazine and the International Journal of Theoretical Physics. He was recently awarded a fellowship to explore the foundations and future of quantum mechanics at the Austrian International Akademie Traunkirchen with Anton Zeilinger.

“The nascent field of commercial spaceflight and the unique conditions afforded by space and microgravity environments offer exciting new opportunities to conduct novel experiments in quantum entanglement, fundamental tests of spacetime, and large-scale quantum coherence,” said Altman.

Source | Kurzweil AI