Archive for the ‘Augmented Reality’ Category

Formatting Gaia + Technological Symbiosis

Friday, December 2nd, 2011

Patrick Millard | Formatting Gaia + Technological Symbiosis from vasa on Vimeo.

Bidirectional brain signals sense and move virtual objects

Saturday, October 15th, 2011

In the study, monkeys moved and felt virtual objects using only their brain (credit: Duke University)

Two monkeys trained at the Duke University Center for Neuroengineering have learned to employ brain activity alone to move an avatar hand and identify the texture of virtual objects.

“Someday in the near future, quadriplegic patients will take advantage of this technology not only to move their arms and hands and to walk again, but also to sense the texture of objects placed in their hands, or experience the nuances of the terrain on which they stroll with the help of a wearable robotic exoskeleton,” said study leader Miguel Nicolelis, MD, PhD, professor of neurobiology at Duke University Medical Center and co-director of the Duke Center for Neuroengineering.

Sensing textures of virtual objects

Without moving any part of their real bodies, the monkeys used their electrical brain activity to direct the virtual hands of an avatar to the surface of virtual objects and differentiate their textures. Although the virtual objects employed in this study were visually identical, they were designed to have different artificial textures that could only be detected if the animals explored them with virtual hands controlled directly by their brain’s electrical activity.

The texture of the virtual objects was expressed as a pattern of electrical signals transmitted to the monkeys’ brains, with a distinct electrical pattern corresponding to each of the three object textures.

Because no part of the animal’s real body was involved in the operation of this brain-machine-brain interface, these experiments suggest that in the future, patients who are severely paralyzed due to a spinal cord lesion may take advantage of this technology to regain mobility and also to have their sense of touch restored, said Nicolelis.

First bidirectional link between brain and virtual body

“This is the first demonstration of a brain-machine-brain interface (BMBI) that establishes a direct, bidirectional link between a brain and a virtual body,” Nicolelis said.

“In this BMBI, the virtual body is controlled directly by the animal’s brain activity, while its virtual hand generates tactile feedback information that is signaled via direct electrical microstimulation of another region of the animal’s cortex. We hope that in the next few years this technology could help to restore a more autonomous life to many patients who are currently locked in without being able to move or experience any tactile sensation of the surrounding world,” Nicolelis said.

“This is also the first time we’ve observed a brain controlling a virtual arm that explores objects while the brain simultaneously receives electrical feedback signals that describe the fine texture of objects ‘touched’ by the monkey’s newly acquired virtual hand.

“Such an interaction between the brain and a virtual avatar was totally independent of the animal’s real body, because the animals did not move their real arms and hands, nor did they use their real skin to touch the objects and identify their texture. It’s almost like creating a new sensory channel through which the brain can resume processing information that cannot reach it anymore through the real body and peripheral nerves.”

The combined electrical activity of populations of 50 to 200 neurons in the monkey’s motor cortex controlled the steering of the avatar arm, while thousands of neurons in the primary tactile cortex were simultaneously receiving continuous electrical feedback from the virtual hand’s palm that let the monkey discriminate between objects, based on their texture alone.
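
To make that loop concrete, here is a minimal sketch in Python of one cycle of such a brain-machine-brain interface: a toy linear decoder turns motor-cortex spike counts into avatar hand motion, and whenever the virtual hand overlaps an object, the corresponding texture is returned as a stimulation pattern that would drive microstimulation of the sensory cortex. The decoder weights, pulse rates, and object layout below are invented placeholders, not values from the study.

    import numpy as np

    # Hypothetical stimulation patterns: one pulse-train rate (Hz) per virtual texture.
    # The study used three distinct patterns; these particular values are illustrative only.
    TEXTURE_PATTERNS_HZ = {"smooth": 0.0, "medium": 10.0, "coarse": 20.0}

    def decode_velocity(spike_counts, weights):
        """Toy linear decoder: map a population spike-count vector to a 2-D hand velocity."""
        return weights @ spike_counts  # weights shape (2, n_neurons) -> velocity (vx, vy)

    def bmbi_step(hand_pos, spike_counts, weights, objects, dt=0.05):
        """One cycle of the loop: decode motion, move the avatar hand, and pick the
        stimulation pattern for whichever virtual object the hand is currently over."""
        hand_pos = hand_pos + decode_velocity(spike_counts, weights) * dt
        stim_hz = 0.0  # no stimulation unless the hand is touching an object
        for obj in objects:  # each obj: {"center": array, "radius": float, "texture": str}
            if np.linalg.norm(hand_pos - obj["center"]) < obj["radius"]:
                stim_hz = TEXTURE_PATTERNS_HZ[obj["texture"]]
                break
        return hand_pos, stim_hz  # stim_hz would drive microstimulation of the sensory cortex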

Robotic exoskeleton for paralyzed patients

“The remarkable success with non-human primates is what makes us believe that humans could accomplish the same task much more easily in the near future,” Nicolelis said.

The findings provide further evidence that it may be possible to create a robotic exoskeleton that severely paralyzed patients could wear in order to explore and receive feedback from the outside world, Nicolelis said. The exoskeleton would be directly controlled by the patient’s voluntary brain activity to allow the patient to move autonomously. Simultaneously, sensors distributed across the exoskeleton would generate the type of tactile feedback needed for the patient’s brain to identify the texture, shape and temperature of objects, as well as many features of the surface upon which they walk.

This overall therapeutic approach is the one chosen by the Walk Again Project, an international, non-profit consortium established by a team of Brazilian, American, Swiss, and German scientists, which aims to restore full-body mobility to quadriplegic patients through a brain-machine-brain interface implemented in conjunction with a full-body robotic exoskeleton.

The international scientific team recently proposed to carry out its first public demonstration of such an autonomous exoskeleton during the opening game of the 2014 FIFA Soccer World Cup that will be held in Brazil.

Ref.: Joseph E. O’Doherty, Mikhail A. Lebedev, Peter J. Ifft, Katie Z. Zhuang, Solaiman Shokur, Hannes Bleuler, and Miguel A. L. Nicolelis, Active tactile exploration using a brain–machine–brain interface, Nature, October 2011 [doi:10.1038/nature10489]

Source | KurzweilAI

Brain-Computer Interface for Disabled People to Control Second Life With Thought Available Commercially Next Year

Friday, August 12th, 2011

This is an awesome use of a brain-computer interface developed to let disabled people navigate the 3D virtual world of Second Life through a simple interface controlled by the user’s thought:

Developed by an Austrian medical engineering firm called G.Tec, the prototype in the video above was released last year, but since New Scientist wrote about the project recently, and since it’s one of the few real-world applications of Second Life that’s already showing tangible, scalable, incredibly important social results, I checked with the company for an update:

“The technology is already on the market for spelling,” G.Tec’s Christoph Guger tells me, pointing to a company called Intendix. “The SL control will be on the market in about one year.” I imagine there are many disabled people in SL right now who would benefit from this, and many more not in SL who could, once it’s on the market. (A Japanese academic created a similar brain-to-SL interface in 2007, but to my knowledge, there are no commercial plans for it as yet.)

Guger shared some insights on how the technology works, and the disabled volunteers who helped them develop it:

G. Tec test volunteers and interface, courtesy Christoph Guger

Above is a pic of the main G. Tec interface with all the basic SL commands. There are other UIs for chatting (with 55 commands) and searching (with 40 commands).

Not surprisingly, Guger tells me their disabled volunteers enjoyed flying in Second Life most. “It is of course slower than with the keyboard/mouse,” Guger allows, “but the big advantage is that you appear as a normal user in SL, even if you are paralyzed.”

This brain-to-SL interface literally gives housebound disabled people a world to explore, and a means to meet and interact with as many people there as live in San Francisco; that in itself is an absolute good. But beyond that, Guger sees other medical applications: “First of all you can use it for monitoring, if the patient is still engaged and as a tool to measure his performance. Beside that, it gives access to many other people, which would not be possible otherwise. New games are also developed for ADHD children for example.”

Source | New World Notes

The Biological Canvas

Tuesday, July 19th, 2011

Curatorial Statement

The Biological Canvas parades a group of hand-picked artists who articulate their concepts with body as the primary vessel.  Each artist uses body uniquely, experimenting with body as the medium: body as canvas, body as brush, and body as subject matter.  Whatever the approach, it is clear that new explorations of the body as canvas are beginning to emerge as commonplace in the 21st century.

There are reasons for this refocusing of the lens or eye toward body.  Living today is an experience quite different from that of a century, generation, decade, or (with new versions emerging daily) even a year ago.  The body truly is changing, both biologically and technologically, at a rapid rate.  Traditional understandings of what body, or even what human, can be defined as are beginning to come into question.  Transhuman, Posthuman, Cyborg, Robot, Singularity, Embodiment, Avatar, Brain Machine Interface, Nanotechnology… these are terms we run across in media today.  They are the face of the future – the dictators of how we will come to understand our environment, biosphere, and selves.  The artists in this exhibition are responding to this paradigm shift with interests in a newfound control over bodies, a moment of self-discovery or realization that the body has extended out from its biological beginnings, or perhaps that the traditional body has become obsolete.

We see in the work of Orlan and Stelarc that the body becomes the malleable canvas.  Here we see some of the earliest executions of art by way of designer evolution, where the artist can use new tools to redesign the body to make a statement of controlled evolution.  In these works, direct changes to the body open the way to sculpting a body better suited for today’s world, one that moves beyond an outmoded form.  Stelarc, with his Ear on Arm project, specifically attacks shortcomings in the human body by presenting the augmented sense that his third ear brings.  With the third ear acting as a cybernetic ear, he can move beyond subjective hearing and share that aural experience with listeners around the world.  Commenting on the practicality of the traditional body living in a networked world, Stelarc begins to take into his own hands the design of networked senses.  Orlan uses her surgical art to conceptualize the practice Stelarc is using – saying that the body has become a form that can be reconfigured, structured, and applied to suit the desires of the mind within that body.  Carnal Art, as Orlan terms it, allows for the body to become a modifiable ready-made instead of a static object born out of the Earth.  Through the use of new technologies human beings are now able to reform selections of their body as they deem necessary and appropriate for their own ventures.

Not far from the surgical work of Orlan and Stelarc we come to Natasha Vita-More’s Electro 2011, Human Enhancement of Life Expansion, a project that acts as a guide for advancing the biological self into a more fit machine.  Integrating emerging technologies to build a more complete human, transhuman, and eventual posthuman body, Vita-More strives for a human-computer interface that will include neurophysiologic and cognitive enhancements that build on longevity and performance.  Included in the enhancement plan we see such technologies as atmospheric sensors, solar protective nanoskin, metabrain error correction, and replaceable genes.  Vita-More’s Primo Posthuman is the idealized application of what artists like Stelarc and Orlan are beginning to explore with their own reconstructive surgical enhancements.

The use of body in the artwork of Nandita Kumar’s Birth of Brain Fly and Suk Kyoung Choi + Mark Nazemi’s Corner Monster reflects on how embodiment and techno-saturation are having psychological effects on the human mind.  In each of their works we travel into the imagined world of the mind, where the notion of self, identity, and sense of place begin to struggle to hold on to fixed points of order.  Kumar talks about her neuroscape continually morphing as it is placed in new conditions and environments that are ever changing.  Beginning with an awareness of one’s own constant programming that leads to a new understanding of self through love, the film goes on a journey through the depths of self, ego, and physical limitations.  Kumar’s animations provide an eerie journey through the mind as viewed from the vantage of an artist’s creative eye, all the while postulating an internal neuroscape evolving in accordance with an external electroscape.  Corner Monster examines the relationship between self and others in an embodied world.  The installation includes an array of visual stimulation in a dark environment.  As viewers engage with the world before them they are hooked up simultaneously (two at a time) to biofeedback sensors, which measure an array of biodata to be used in the interactive production of the environment before their eyes.  This project surveys the psychological self as it is engrossed by surrounding media, leading both to occasional systems of organized feedback and to scattered responses that are convolutions of an overstimulated mind.

Marco Donnarumma also integrates a biofeedback system in his work to allow participants to shape musical compositions with their limbs.  Moving a particular body part triggers sounds, with the volume rising or falling depending on the pace of that movement.  Here we see the body acting as brush, literally painting the soundscape through its own creative motion.  As the performer experiments with each portion of their body there is a slow realization that the sounds have become analogous to the neurological and biological yearning of the body, each one seeking a particular upgrade that targets a specific need for that segment of the body.  For instance, a move of the left arm constantly provides a rich vibrato, reminding me of the sound of Vita-More’s solar protective nanoskin.

Our final three artists all use body in their artwork as components of the fabricated results, acting like paint in a traditional artistic sense.  Marie-Pier Malouin weaves strands of hair together to reference the genetic predisposition that all living things come into this world with.  Here, Malouin uses the medium to reference suicidal tendencies – looking once again toward the fragility of the human mind, body and spirit as it exists in a traditional biological state.  The hair, a dead mass of growth, which we groom, straighten, smooth, and arrange, resembles the same obsession with which we analyze, evaluate, dissect and anatomize the nature of suicide.  Stan Strembicki also engages with the fragility of the human body in his Body, Soul and Science.  In his photographic imagery Strembicki turns a keen eye on the medical industry and its developments over time.  As with all technology, Strembicki concludes, the medical industry is one we can see as temporally corrective, making dramatic strides as nascent developments emerge.  Perhaps we can take Tracy Longley-Cook’s skinscapes, which she compares to the changing landforms of geology, ecology and climatology, as an analogy for our changing understanding of skin, body and self.  Can we begin to mold and sculpt the body much like we have done with the land we inhabit?

There is a tie between the conceptual and material strands of these last few works that we cannot overlook: memento mori, the shortcomings and frailties of our natural bodies – those components that artists like Vita-More, Stelarc, and Orlan are beginning to interpret as resolvable through the mastery of human enhancement and advancement.  In a world churning out new technologies and creative ideas it is hard to look toward the future and dismiss the possibilities.  Perhaps the worries of fragility and biological shortcomings will be both posed and answered by the scientific and artistic community, something that is panning out to be very likely, if not certain.  As you browse the work of The Biological Canvas I would like to invite your own imagination to engage.  Look at your life, your culture, your world and draw parallels with the artwork – open your imagination to what our future may bring, or, perhaps more properly stated, what we will bring to our future.

Patrick Millard

Source | VASA Project

Augmented reality has potential to reshape our lives

Saturday, June 11th, 2011

That virtual yellow first-down line superimposed on an actual football field is one of the more visible examples of a technology that is still not well known. But augmented reality is quickly emerging from obscurity and could soon dramatically reshape how we shop, learn, play and discover what is around us.

In simple terms, augmented reality is a visual layer of information — tied to your location — that appears on top of whatever reality you’re seeing. Augmented reality (AR) apps have been increasingly popping up on smartphones and camera-equipped tablets such as the iPad 2. Versions of AR also work in conjunction with webcams, special eyewear and game consoles such as Microsoft’s Xbox 360 via Kinect or the Nintendo 3DS handheld that went on sale recently.
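
As a rough illustration of that “layer of information tied to your location,” the sketch below (Python, with made-up landmarks and coordinates) shows the core calculation a simple AR browser performs: given the phone’s GPS position and compass heading, work out which nearby points of interest fall inside the camera’s field of view and where across the screen their labels belong.

    import math

    # Hypothetical points of interest: name, latitude, longitude (invented for illustration).
    POIS = [
        ("Ferry Building", 37.7955, -122.3937),
        ("Coit Tower",     37.8024, -122.4058),
    ]

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial bearing from the viewer to a point, in degrees clockwise from north."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360

    def visible_labels(lat, lon, heading_deg, fov_deg=60):
        """Return (name, horizontal screen offset in [-1, 1]) for POIs inside the camera's view."""
        labels = []
        for name, plat, plon in POIS:
            rel = (bearing_deg(lat, lon, plat, plon) - heading_deg + 180) % 360 - 180
            if abs(rel) <= fov_deg / 2:
                labels.append((name, rel / (fov_deg / 2)))
        return labels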

“Extraordinary possibilities are right around the corner,” says Microsoft computer scientist Jaron Lanier. “We’re closing in on it.”

Imagine:

• Pointing your phone at a famous landmark and almost instantly receiving relevant historic or current information about your surroundings.

• Fixing a paper jam in a copy machine by pointing a device at the copier and, directed by the virtual arrows that appear, pressing in sequence the right buttons and levers.

• Visualizing what you’ll look like in a wedding dress without trying it on.

Today, luminaries of the field are gathering at the ARE 2011 (Augmented Reality Event) conference kicking off in Santa Clara, Calif., to discuss AR’s future in e-commerce, mobile, real-time search and storytelling, among other areas.

In one form or another, AR technology dates back at least 30 years, says Ramesh Raskar of the Massachusetts Institute of Technology’s Media Lab, where some of the pioneering work was done. Now, a confluence of ever-improving technologies — cellphone cameras, more powerful processors, graphics chips, touch-screens, compasses, GPS and location-based technologies — is helping drive AR forward. GeoVector, Layar, Metaio, Quest Visual, Shotzoom Software, Viewdle, Total Immersion and even Google Goggles are weighing in with AR-based smartphone browsers or apps.

A recent report from Juniper Research in the United Kingdom found that an increasing number of leading brands, retailers and mobile vendors are investing in mobile augmented reality applications and services. Global revenue is expected to approach $1.5 billion by 2015, up from less than $2 million in 2010. And Juniper found that the installed base of AR-capable smartphones had increased from 8 million in 2009 to more than 100 million in 2010.

Steven Feiner, a professor of computer science at Columbia University, and one of the gurus of the field, says augmented reality can exploit all the senses, including touch and hearing. For example, imagine a virtual character following you around and whispering relevant information in your ear.

Augmented reality already has real-world applications:

Games: For some consumers, their first encounter with AR is likely to be at play. The NBA’s Dallas Mavericks recently teamed with Qualcomm and Big PlayAR on a promotion that turns a ticket into an interactive basketball game when viewed through an Android phone. The game is the first commercial application to take advantage of a mobile augmented reality platform launched recently for Android developers by Qualcomm.

Nintendo 3DS offers an archery game that also takes advantage of AR. Aim the handheld’s camera at an innocuous-looking AR card placed on a coffee table, and watch fire-breathing three-dimensional dragons appear to rise from the surface.

Gaming publisher Ogmento’s Paranormal Activity: Sanctuary is a location-based multiplayer iPhone game that lets you project ghosts and other supernatural effects onto a real world scene.

Shopping: The Swivel Virtual Dressing Room, under development from FaceCake Marketing and scheduled for retail stores and perhaps your own bedroom closet, promises to let you try on virtual duds and accessories in real time. Swivel was demonstrated recently at the Demo high-tech conference. Among the scenarios CEO Linda Smith talks about: taking consumers virtually from a store floor in Atlanta to the streets of Paris to envision what they’d look like wearing the latest spring dress in front of the Eiffel Tower. A shopper might watch rain bounce off a virtual umbrella.

EBay Classifieds takes the shopping experience in a different direction. It worked through Metaio’s mobile Junaio augmented reality browser to deliver an Android and iPhone experience that lets you point a smartphone at houses along your block and see pop-ups of any items your neighbors have put up for sale.

EBay also has an AR app that lets you try on virtual sunglasses before choosing which, if any, to buy.

EBay Mobile Vice President Steve Yankovich says the goal was to make the utility of the app 80% to 90% of the experience, and the wow or gee-whiz factor 10% to 20%. If it were the other way around, he asks, “What is the point?”

Frank Cooper, chief consumer engagement officer for PepsiCo Americas Beverages, concurs: “The most powerful form of AR may not be the flashy examples,” but rather “the ones that serve basic needs of people: information, entertainment, social connections.”

Still, Pepsi has shown off flash. In one early-stage example, the company worked with Rihanna on an augmented reality promotion in which you could hold a webcam in front of a code on a bag of Doritos and project an image of the singer performing a new track. Might there be similar efforts? “That’s one area we’re exploring aggressively,” Cooper says.

Still a learning curve

Still, for all of AR’s promise, its future success is by no means a slam dunk. Some of the early AR apps on smartphones are clumsy to use and unnatural. Eyewear for consumer use hasn’t been perfected. “The optics and display trickery to get the thing right — that’s not easy,” says Microsoft’s Lanier.

“For better or worse, a lot of what has been perceived as mobile AR is gimmicky,” says Jay Wright, director of business development at Qualcomm. “The challenge with AR is to find uses that solve a real problem and enable something fundamentally new, useful or uniquely entertaining.”

Bruno Uzzan, CEO of Total Immersion, the company whose technology is behind the eBay Fashion sunglasses app, says AR stops being a gimmick “when my client says I’m making more sales with AR than without it.” One such client is Hallmark Cards, which produces AR cards that come alive with animations when you hold them up to a webcam.

AR adoption won’t come easily. “In the first case, the hurdle is education — not just for consumers but for brands, developers and services providers,” says Windsor Holden, a U.K.-based analyst for Juniper Research. “There is still a pretty widespread lack of awareness as to what AR is.”

Forrester Research analyst Thomas Husson also says mobile AR is not yet delivering on its promise. But “in the years to come, this will be disruptive technology that changes the way consumers interact with their environment.”

Varied developments

The disruptions are likely to evolve in many different ways. At the MIT Media Lab, Raskar is working on 3-D motion-tracking Second Skin technology, in which tiny sensors and a microcontroller are bound to the body through a lightweight wearable suit and used to augment and teach motor skills. Say you’re learning to dance or to juggle. The system can track your movement and provide tactile feedback that corrects your position as you go.

“Think of Second Skin as your real-time assistant,” Raskar says. “I call it an experience overlay. I’m not playing a TV game where I’m learning how to juggle. I’m doing real juggling.”

Raskar says the technology could cost as little as $1,000 and be on the market within a year. It could have broad reach into health and education; for example, teaching someone to perform surgery.
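
A minimal sketch of the corrective-feedback step described above (Python, with invented joint names and thresholds; the actual Second Skin design is not detailed here): compare the wearer’s measured joint angles with a reference pose and, for any joint that is too far off, decide which vibration motor to fire and how strongly.

    # Hypothetical joint names and a reference pose for one instant of a dance move.
    REFERENCE_POSE = {"left_elbow": 90.0, "right_elbow": 45.0, "left_knee": 170.0}

    def feedback_commands(measured_pose, tolerance_deg=10.0):
        """Compare measured joint angles to the reference and return, per joint, which
        vibration motor to fire and how strongly (0..1), so the suit can nudge the wearer."""
        commands = {}
        for joint, target in REFERENCE_POSE.items():
            error = measured_pose.get(joint, target) - target
            if abs(error) > tolerance_deg:
                commands[joint] = {
                    "direction": "extend" if error < 0 else "flex",
                    "intensity": min(abs(error) / 90.0, 1.0),
                }
        return commands

    # Example: the left elbow is bent 30 degrees too far, so only it gets a correction.
    print(feedback_commands({"left_elbow": 120.0, "right_elbow": 50.0, "left_knee": 168.0}))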

At Columbia, one of Feiner’s areas of focus is maintenance and repair. “I’d like to be within the task itself. If you had AR with proper (virtual) documentation, you could look at a machine, and it would show you first do this, then do that, with a little bit of extra highlighting to walk you through.”

Gazing further out, Microsoft’s Lanier says he’d like to see the road he’s driving on augmented with signs of where there’ve been accidents and traffic jams. He’d love to be able to walk into a neighborhood and see what it was like back in time — San Francisco during the Gold Rush, say.

Lanier also expects, within 15 years or so, a new futuristic outdoor national sport to materialize with virtual game elements that don’t necessarily resemble any of our current pastimes.

And he predicts that, way out in the future, you’ll be able to experience AR versions of physical products you might want to buy: a chair, for example. When you find one you like, you’ll make a payment, a machine will chug, materials will somehow be piped in, and the new chair will be in your house.

For now, it seems like a pipe dream, fodder for a Jetsonian age. But consumer product strategists are already paying attention to AR.

As Cooper of PepsiCo warns his peers: “Ignore AR at your own peril.”

Source | USA Today

Metaio and Layar pinpoint next steps for augmented reality

Saturday, June 11th, 2011

Until now, mobile augmented reality has been all about smartphones, with the creation of AR content restricted to developers with specific skills. Announcements today from startups Metaio and Layar show how both companies are keen to move beyond this.

Metaio thinks that tablets will become increasingly important devices for AR, describing them as “the perfect enabler for augmented reality” as it published a video showcasing its Junaio AR technology running on slate devices.

Metaio’s bullishness is about more than just the iPad: the company thinks the new wave of tablets running Google’s Android 3.0 operating system – starting with the Motorola Xoom – will create new opportunities for innovative AR applications.

“The extreme light weight, the multiple sensors such as compass or GPS, the large screen and perfectly positioned twin cameras of the new tablets make them fascinating machines,” says Metaio in what’s a cross between a press release and a manifesto.

It also cites dual-core processors as a key factor enabling tablets to be used for AR applications including instructional guides, product information, e-commerce, entertainment and gaming.

“If you want to display for example rich media content triggered by printed material like newspapers or magazines, you need to recognize the object, process the image and render the content into the video stream tightly connected to the original image. By capturing the object on one core and by handling tracking (recognition and initialisation) on the other core, performance and user experience will be so much better.”
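
A minimal sketch of that split, with Python threads standing in for the two cores (the camera, recognizer, and renderer objects are placeholder interfaces, not Metaio’s API): one loop does nothing but capture frames, while the other recognizes the printed target, estimates its pose, and renders the digital content into the video stream.

    import queue
    import threading

    frames = queue.Queue(maxsize=2)  # small buffer so the tracker always sees a recent frame

    def capture_loop(camera, stop):
        """Core 1: grab camera frames and hand them to the tracker, dropping stale ones."""
        while not stop.is_set():
            frame = camera.read()
            try:
                frames.put(frame, timeout=0.1)
            except queue.Full:
                pass  # drop the frame rather than stall the camera

    def tracking_loop(recognizer, renderer, stop):
        """Core 2: recognize the printed page in each frame, estimate its pose, draw the overlay."""
        while not stop.is_set():
            frame = frames.get()
            target = recognizer.match(frame)            # which poster or magazine page is in view?
            if target is not None:
                pose = recognizer.estimate_pose(frame, target)
                renderer.draw(frame, target.content, pose)  # composite the rich media onto the print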

Metaio’s view is that AR is “more than a marketing gimmick or hype, it’s actually an interface revolution”. However, there are currently relatively few companies able to take part in this revolution, since creating AR content remains the preserve of developers willing and able to get to grips with the tools.

That’s something Metaio’s rival Layar is hoping to change with its own announcement today of an initiative called Layar Connect: a set of tools, built with the help of external companies, that lets more people create content and services around Layar’s AR platform.

“We’re focused on the democratisation of augmented reality and want to make it easier to create and publish AR content for all,” says Maarten Lens-FitzGerald, co-founder and general manager of Layar, in a statement.

“With Layar Connect, we are the first in the industry to move management and publication of AR content to third parties. This creates opportunities for Layar partners to add increased value to their business – a big step in the professionalisation of the AR industry.”

Augmented reality itself is hardly a young technology by web standards, but the buzz around mobile augmented reality is a more recent phenomenon, thanks to the growing popularity of smartphones (and yes, now tablets) with the grunt to handle AR – not to mention the faster connectivity and GPS sensors.

Companies including BuildAR, Poistr, Visar and Poiz – the AR space is thoroughly Web 2.0 in its startup naming conventions – are already using Layar Connect, with more to come.

Metaio’s point about augmented reality being a new interface with many uses rather than a specific type of app is key, though. Layar’s decision to open up the creation of AR content to a wider audience can only reinforce that.

Metaio is building its own network of developers and brands using its own technology. The competition between the two, along with Qualcomm, Google and other companies training their sights on augmented reality, should fuel a host of innovative ideas in the months to come.

Source | The Guardian

NASA’s Next-Gen Spacesuit Could Have an In-Helmet Display

Tuesday, June 7th, 2011

Recon Instruments’ Transcend Goggles

Though NASA holds the keys to some of the most sophisticated technologies ever to make it into low Earth orbit, the spacesuits that astronauts wear up there are still in many ways similar to those worn during the Apollo missions of the 1960s and 1970s. Fortunately for future astronauts, they may get a next-gen visual upgrade via a piece of technology that is coming down from the mountaintop at this year’s Desert Research and Technology Studies (RATS).

Vancouver-based Recon Instruments, maker of GPS-enabled ski goggles with in-goggle displays tucked into the periphery, is sending its technology to NASA for potential inclusion in the next generation of spacesuit helmets, in which mission-critical information and checklists could appear right before astronauts’ eyes. NASA’s spacesuit designers have been toying with the idea of an in-helmet display for a while now, and considering that spacewalking astronauts currently rely on paper checklists taped to their arms, such a display represents a pretty big technological leap forward.

Recon has some experience packing display tech into small, lightweight packages. Its current technology tucks a tiny LCD screen right into the frame of a ski goggle and lets downhill daredevils access information like GPS location, temperature, altitude, and maps right on the edge of their fields of view. For astronauts, the idea would be quite similar, though given the increased real estate inside a spacesuit visor, the possibilities for the display are even more wide-ranging.

Say an astronaut is performing repairs on the outside of the International Space Station or on an orbiting satellite. The astronaut could call up his mission checklists when needed (voice commands? Yeah, let’s see if we can integrate some voice commands in there) and put them out of sight when they aren’t necessary. But crew inside the ship/station could also beam him or her diagrams, schematics, and detailed instructions on how to perform repairs on the fly, making missions more nimble.

It’s conceivable that a full heads-up display and even augmented reality might at some point be integrated into the helmet, making it easier for astronauts to identify mission targets and components and quickly find things they are looking for. Engineers could even upload a demo to an astronaut’s HUD, talking him or her through an unexpected repair that wasn’t covered in training.

But one step at a time. Desert RATS takes place every year in Arizona and gives NASA engineers the opportunity to work alongside researchers and scientists from around the country on technology development and to field test technologies that NASA might potentially want to tap. If Recon’s HUD technology makes the cut, those paper checklists might soon (finally) get a space-age upgrade.

Source | Popular Science

Humanity+ @ Parsons The New School For Design, Transhumanism Meets Design

Tuesday, April 26th, 2011

Patrick Millard | Formatting Gaia + Embodiment

Tuesday, April 12th, 2011

Video games and VR help stroke patients recover mobility

Sunday, April 10th, 2011

Sensor glove may help stroke patients recover hand motion

Two Canadian university research programs have found significant improvements in hand motion and strength in stroke patients.

A Biomedical Sensor Glove that helps stroke patients recover hand motion by playing video games has been developed by engineers at McGill University.

The glove allows patients to exercise in their own homes with minimal supervision. Patients can monitor their own progress using software to generate 3D models and display them on the screen, while sending information to a treating physician.

Similar gloves currently cost up to $30,000. By using more accurate and less expensive sensors, the engineers were able to develop a glove that currently costs $1000 to produce. They hope it will eventually go on the market for about $500.

Researchers at the Stroke Outcomes Research Unit at St. Michael’s Hospital at the University of Toronto have also shown that video games, as well as VR systems, lead to significant improvement in arm strength following stroke.

Researchers analyzed seven observational and five randomized trials, representing a total of 195 patients, ages 26 to 88, who had suffered mild to moderate strokes. Each study had investigated the effects of electronic games on upper arm strength and function.

Most patients played 20 to 30 hours during four to six weeks of therapy on one of several computer-based technology systems: three traditional video game systems and nine virtual reality systems, including Virtual Teacher, CyberGlove, VR Motion, PneuGlove, and Wii.

The researchers found that there was an average 14.7 percent improvement in muscular strength after playing virtual reality games, and a 20 percent average improvement in the ability to perform standard tasks.

Ref.: Gustavo Saposnik et al., Virtual Reality in Stroke Rehabilitation: A Meta-Analysis and Implications for Clinicians, April 7 online edition, Stroke: Journal of the American Heart Association

Augmented reality app overlays designs on a landscape

Sunday, April 10th, 2011

Sheffield gets a virtual makeover

It is the dream app for Nimbys everywhere: an augmented reality (AR) iPhone app that allows you to visualise what new developments will look like. That means you can complain, if necessary, before construction begins, which could make life easier for town planners.

Interested parties can view a 3D digital model of the proposed build in situ, so they can work out how it might affect them, says Eckart Lange, head of landscape planning at the University of Sheffield in the UK.

He has been looking at different visualisation tools as part of a project called the Urban River Corridors and Sustainable Living Agendas, which aims to regenerate urban rivers. With the Walkabout 3D Mobile app installed on their iPhone or iPad, visitors to a building site can view the 3D model, created with Google’s tool SketchUp, overlaid on the landscape. They can check if the work will overlook their property, block out sunlight or simply be an eyesore, he says.

However, unlike some AR apps, this one doesn’t actually let you virtually walk through the area, says Ed Morgan of Deliverance Software, the app’s creator. This is because 3D models are often extremely large files, far too big for a mobile device to continuously update in real time. Instead, a digital map guides you to the locations where it will work. There, you move your iPhone or iPad around and the inbuilt digital compass and GPS locator let you view virtual, static 3D panoramic views of the site, downloaded to your device over the 3G network, Morgan says.
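
A rough sketch of how such an app can decide what to show (Python; the viewpoint coordinates and panorama file names are invented, and Walkabout 3D’s actual logic may differ): find the prepared viewpoint nearest to the visitor’s GPS fix and, if they are standing close enough, serve the pre-rendered panorama oriented to the compass heading.

    import math

    # Hypothetical pre-rendered viewpoints: (latitude, longitude, panorama file).
    VIEWPOINTS = [
        (53.3811, -1.4701, "riverside_north.pano"),
        (53.3796, -1.4689, "riverside_south.pano"),
    ]

    def distance_m(lat1, lon1, lat2, lon2):
        """Approximate ground distance in metres (good enough over a few hundred metres)."""
        dx = (lon2 - lon1) * 111_320 * math.cos(math.radians(lat1))
        dy = (lat2 - lat1) * 110_540
        return math.hypot(dx, dy)

    def panorama_for(lat, lon, heading_deg, max_dist_m=30):
        """Return the nearest prepared panorama and the slice to show for the current heading,
        or None if the visitor is not standing at a prepared viewpoint."""
        best = min(VIEWPOINTS, key=lambda v: distance_m(lat, lon, v[0], v[1]))
        if distance_m(lat, lon, best[0], best[1]) > max_dist_m:
            return None
        return {"file": best[2], "yaw_deg": heading_deg % 360}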

While it is unlikely to be used any day soon for minor developments, such as kitchen extensions or loft conversions, this kind of visualisation tool could prove to be very helpful to planners for larger projects, says Tom Wilde, director of South Yorkshire Forest Partnership, part of Sheffield City Council’s planning department, where he has been trying out the app.

Planners often have to use models that are not very realistic or interactive, says Wilde. “It’s really valuable to be able to show people in the fresh air what future landscapes will look like alongside existing ones.”

So far the AR app has been used as part of the planning application to build a new park in the heart of Sheffield, which has walls that can act as flood defences.

Source | New Scientist

Store data in your body

Saturday, April 2nd, 2011

Floppy discs? Too 1980s. Thumb drives? Too easy to lose. Anyway, who needs a thumb drive when you can store data in your thumb? A new program called Sparsh lets you transfer files from one device to another simply by touching the screen – and you don’t have to join the Borg collective first.

Transferring files from one computer to another is a major pain. Even cloud-based storage like Dropbox is still irritatingly complicated. Now Pranav Mistry of the Media Lab at the Massachusetts Institute of Technology has the solution. He gets that what we really want is to just pick up stuff from one machine and put it in the other, as we do with a physical object.

Mistry has designed a system to make this as simple as it could possibly be. “The user touches a data item they wish to copy from a device, conceptually saving it in the user’s body,” he says. “Next, the user touches the other device to which they want to paste the saved content.”

For example, say you look up the phone number for the local pizza place on your laptop. Normally you then have to type all those numbers into your phone, but if both devices are running Sparsh, you simply touch the phone number on your laptop’s screen, then touch your smartphone’s keypad. The system knows that what you have transferred is the phone number and automatically dials it.

It’s like magic

Behind the scenes, the first touch copies the phone number to a temporary file in either a Dropbox or an FTP account. The second touch retrieves the data. This requires both devices to be running the software and for a user to be signed into their Dropbox or FTP account. It works for any type of data, be it a photo, an address or a link to a YouTube clip.
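
In outline, the flow is little more than a cloud-backed clipboard keyed to the signed-in user. The sketch below (Python) captures the idea; the REST endpoint is a stand-in invented for illustration, whereas Sparsh itself writes to the user’s own Dropbox or FTP account.

    import json
    import urllib.request

    # Placeholder endpoint standing in for the user's cloud clipboard file (illustrative only).
    CLIPBOARD_URL = "https://example.com/clipboard/{user}"

    def copy_on_touch(user, kind, value):
        """First touch: stash the touched item (e.g. a phone number) in the user's cloud clipboard."""
        payload = json.dumps({"kind": kind, "value": value}).encode()
        req = urllib.request.Request(CLIPBOARD_URL.format(user=user), data=payload, method="PUT")
        urllib.request.urlopen(req)

    def paste_on_touch(user):
        """Second touch, on another device: fetch the item and decide what to do with it."""
        with urllib.request.urlopen(CLIPBOARD_URL.format(user=user)) as resp:
            item = json.loads(resp.read())
        if item["kind"] == "phone_number":
            return f"dial:{item['value']}"   # the receiving phone dials the number
        return item["value"]                  # photos, links and addresses are simply handed over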

Right now Sparsh runs as an application on smartphones, tablets and other computers. But, Mistry says, “the ideal home for Sparsh is to be built into an OS, so that it can provide the copy-paste feature across all applications”. He says it’s currently possible to incorporate this into Google’s Android mobile operating system and that his team has also implemented a browser-based version.

Sparsh was presented at the Computer Supported Cooperative Work conference in Hangzhou, China, last week. It is not known if it will work for Time Lords, or as a pleasurable cure for hay fever, however.

Source | New Scientist

Charlie Rose interview with Ray Kurzweil and director Barry Ptolemy now online

Saturday, March 26th, 2011

Ray Kurzweil

Ray Kurzweil and Barry Ptolemy appeared on the Charlie Rose show Friday night to discuss the movie Transcendent Man, directed by Barry Ptolemy. You can see the interview here. You can also watch “In Charlie’s Green Room with Ray Kurzweil,” recorded the same evening.

Transcendent Man by Barry Ptolemy focuses on the life and ideas of Ray Kurzweil. It is currently available on iTunes in the United States and Canada and on DVD. Tickets to the London and San Francisco screenings in April are available.

The dangers of ‘e-personality’

Monday, March 14th, 2011

Excessive use of the Internet, cell phones, and other technologies can cause us to become more impatient, impulsive, forgetful and narcissistic, says psychiatrist Elias Aboujaoude, MD, clinical associate professor of psychiatry and behavioral sciences and director of Stanford University’s impulse control and obsessive-compulsive disorder clinics, in his new book on “e-personality,” Virtually You: The Dangerous Powers of the E-Personality.

Drawing from his clinical work and personal experience, he discusses the Internet’s psychological impact and how our online traits are unconsciously being imported into our offline lives.

Source | Kurzweil AI

Augmented reality, machine learning and the cleric

Saturday, February 12th, 2011

An augmented reality app for the Apple iPhone from the Museum of London lays historic images over London landmarks Photo: Museum of London

When Presbyterian minister and mathematician Thomas Bayes put quill to paper in the 18th century, little could he know that one day his equations would help meld the virtual and physical world.

But more than 200 years after Bayes’ death, Mike Lynch, CEO of Europe’s second largest software company, Autonomy, is arguing that machine-learning software built on Bayes’ theorem on probabilistic relationships will underpin the next major shift in computing – the move to augmented reality (AR).

“One of the biggest areas [of computing] is going to be in the area of augmented reality – it takes the online world and slaps it right in the middle of the real world,” Lynch said, speaking to silicon.com at the recent Intellect Annual Regent Conference 2011.

Today, augmented reality apps run on smartphones, layering digital information over video of the real world taken by the phone camera in real time. But in future, the apps could lay digital information directly over everything we see, using screens built into glasses or contact lenses.

Smartphone apps already exist that do things such as lay historic photos over images of London landmarks, but Lynch said AR will eventually permeate our lives – putting the digital world at the heart of everyday interactions.

“Perhaps a printed poster on the wall becomes animated, and you can click on it and buy the DVD – suddenly what was a simple ad becomes a way you can buy something,” he said.

“Or you’re walking around London and you hold up your phone to a statue of Eros and it tells you the history of it.

“Or you meet someone on the street, hold up your phone and it tells you about what they’re interested in, and maybe in the virtual world they also have a parrot sitting on their shoulder.

“It’s a completely different way of interacting with vast amounts of information in situ and in context.”

Machine learning

Many AR apps available today rely on GPS and digital compasses to work out what the phone is pointing at and what information to display, but future AR apps will increasingly need to understand what the user is looking at and what digital information they want to see, a process that will require machine learning.

“Everything we are talking about comes down to the ability of the computer to understand what something means,” Lynch said.

“It’s the mathematics of Thomas Bayes that allows computers to learn what things mean.

“It’s a self-learning system – so basically by reading the newspapers [a machine] learns all about our world. For example, a computer could learn that ‘Becks’ is David Beckham, and that he’s married to ‘Posh’, that he’s very good at football and a bit of a fashion icon.”
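
A toy version of that idea is easy to write down. The sketch below (Python) applies Bayes’ rule with invented co-occurrence counts to decide whether “Becks” in a snippet of text means the footballer or the beer, based on the surrounding words; a real system would learn such statistics from large volumes of text rather than hard-code them.

    import math

    # Toy corpus statistics: how often each context word appears with each sense of "Becks".
    # Counts are invented for illustration only.
    SENSE_PRIORS = {"david_beckham": 0.6, "beer_brand": 0.4}
    WORD_COUNTS = {
        "david_beckham": {"football": 40, "posh": 25, "fashion": 20, "pint": 1},
        "beer_brand":    {"football": 5,  "posh": 1,  "fashion": 1,  "pint": 30},
    }

    def most_likely_sense(context_words):
        """Naive Bayes: pick the sense s maximising P(s) * product of P(word | s)."""
        best_sense, best_score = None, -math.inf
        for sense, prior in SENSE_PRIORS.items():
            counts = WORD_COUNTS[sense]
            total = sum(counts.values())
            score = math.log(prior)
            for w in context_words:
                score += math.log((counts.get(w, 0) + 1) / (total + len(counts)))  # add-one smoothing
            if score > best_score:
                best_sense, best_score = sense, score
        return best_sense

    print(most_likely_sense(["football", "fashion"]))  # -> "david_beckham"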

Drowning in data

Lynch’s vision of the near future is a nice fit for Autonomy and its specialism in machine-learning and pattern-recognition software that can analyse unstructured data – information that has not been labelled and linked to other information inside a database, where it can be read and understood by a machine.

Since Autonomy was founded in Cambridge in 1996, the company has been helping businesses tackle the tide of unstructured information that flows into a modern business.

Today, Autonomy has a market capitalisation of $7bn and a customer list that includes more than 20,000 major organisations worldwide – including BAE Systems, the BBC, GlaxoSmithKline, Nasa and the Houses of Parliament.

The amount of unstructured information – whether it is text in an email or an audio recording of a phone call – is growing so quickly that Lynch believes organisations will soon have no choice but to task machines with analytical work that previously would have been the preserve of humans.

“Some 85 per cent of what you deal with at work is unstructured information,” Lynch said.

“You can replace people in lots of tasks where people are looking at unstructured information – for example, reading an email and routing it to someone else, looking at security camera footage or going through documents to find which are relevant to a lawsuit.

“If you can get a computer to do [those tasks] then that’s a phenomenal saving, and it frees up the human to do something more interesting.

“It’s going to have to be that way because the amount of unstructured information is growing at 67 per cent [each year] – so if you are going to use people you better get breeding.”

Perhaps in a nod to the rise of AR, Lynch said the most valuable lesson he had learnt since starting Autonomy was that the tech industry is built on shifting sands.

“We always think everything is set in stone and this is how it is. For example, Microsoft dominates the industry. The one thing you learn is nothing is set in stone, all the stones are moving, there’s incredible opportunity all over the place and the fat lady has not sung,” he said.

But even as technology accelerates the pace of change, and the digital world becomes intertwined with the physical, Lynch takes comfort that, AR future or not, some things will never change.

“I live in Suffolk and the nice thing about Suffolk is that the conversation down the pub is the same as it has been for the last 500 years, which is ‘How do you get rid of moles?’,” he said.

And as confident as he is in taming the world’s information, Lynch admits this is one challenge that has got him beat, conceding: “It probably will always be an unsolvable problem.”

Source | Silicon