Archive for the ‘Avatars’ Category

Formatting Gaia + Technological Symbiosis

Friday, December 2nd, 2011

Patrick Millard | Formatting Gaia + Technological Symbiosis from vasa on Vimeo.

Bidirectional brain signals sense and move virtual objects

Saturday, October 15th, 2011

In the study, monkeys moved and felt virtual objects using only their brain (credit: Duke University)

Two monkeys trained at the Duke University Center for Neuroengineering have learned to employ brain activity alone to move an avatar hand and identify the texture of virtual objects.

“Someday in the near future, quadriplegic patients will take advantage of this technology not only to move their arms and hands and to walk again, but also to sense the texture of objects placed in their hands, or experience the nuances of the terrain on which they stroll with the help of a wearable robotic exoskeleton,” said study leader Miguel Nicolelis, MD, PhD, professor of neurobiology at Duke University Medical Center and co-director of the Duke Center for Neuroengineering.

Sensing textures of virtual objects

Without moving any part of their real bodies, the monkeys used their electrical brain activity to direct the virtual hands of an avatar to the surface of virtual objects and differentiate their textures. Although the virtual objects employed in this study were visually identical, they were designed to have different artificial textures that could only be detected if the animals explored them with virtual hands controlled directly by their brain’s electrical activity.

The texture of the virtual objects was expressed as a pattern of electrical signals transmitted to the monkeys’ brains. A different electrical pattern corresponded to each of the three object textures.

Because no part of the animal’s real body was involved in the operation of this brain-machine-brain interface, these experiments suggest that in the future, patients who are severely paralyzed due to a spinal cord lesion may take advantage of this technology to regain mobility and also to have their sense of touch restored, said Nicolelis.

First bidirectional link between brain and virtual body

“This is the first demonstration of a brain-machine-brain interface (BMBI) that establishes a direct, bidirectional link between a brain and a virtual body,” Nicolelis said.

“In this BMBI, the virtual body is controlled directly by the animal’s brain activity, while its virtual hand generates tactile feedback information that is signaled via direct electrical microstimulation of another region of the animal’s cortex. We hope that in the next few years this technology could help to restore a more autonomous life to many patients who are currently locked in without being able to move or experience any tactile sensation of the surrounding world,” Nicolelis said.

“This is also the first time we’ve observed a brain controlling a virtual arm that explores objects while the brain simultaneously receives electrical feedback signals that describe the fine texture of objects ‘touched’ by the monkey’s newly acquired virtual hand.

“Such an interaction between the brain and a virtual avatar was totally independent of the animal’s real body, because the animals did not move their real arms and hands, nor did they use their real skin to touch the objects and identify their texture. It’s almost like creating a new sensory channel through which the brain can resume processing information that cannot reach it anymore through the real body and peripheral nerves.”

The combined electrical activity of populations of 50 to 200 neurons in the monkey’s motor cortex controlled the steering of the avatar arm, while thousands of neurons in the primary tactile cortex were simultaneously receiving continuous electrical feedback from the virtual hand’s palm that let the monkey discriminate between objects, based on their texture alone.
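The loop described here, population activity decoded into avatar movement while microstimulation carries texture information back, can be pictured with a minimal sketch. The linear read-out below and the texture-to-stimulation mapping are illustrative assumptions only; the weights, firing rates and pulse parameters are hypothetical and are not the decoder or stimulation values used in the Duke study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 100 motor-cortex neurons; firing rates in spikes/s.
n_neurons = 100
firing_rates = rng.poisson(lam=20, size=n_neurons).astype(float)

# Illustrative linear decoder: each neuron contributes a weighted "vote" to the
# 2-D velocity of the avatar hand (in practice the weights would be fit to
# training data, e.g. by regressing rates against observed movements).
decoder_weights = rng.normal(scale=0.01, size=(2, n_neurons))
baseline = firing_rates.mean()
hand_velocity = decoder_weights @ (firing_rates - baseline)  # [vx, vy]

# Illustrative texture feedback: when the virtual hand touches an object, its
# texture index selects one of three microstimulation pulse patterns
# (frequency in Hz, train duration in ms) delivered to somatosensory cortex.
STIM_PATTERNS = {0: (100, 50), 1: (200, 50), 2: (400, 50)}  # hypothetical values

def feedback_for(texture_id: int) -> tuple:
    """Return the (frequency, duration) stimulation pattern for a texture."""
    return STIM_PATTERNS[texture_id]

print("decoded hand velocity:", hand_velocity)
print("stimulation pattern for texture 2:", feedback_for(2))
```

In the actual experiment the decoding and the microstimulation ran concurrently, which is what makes the interface bidirectional rather than a simple one-way controller.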

Robotic exoskeleton for paralyzed patients

“The remarkable success with non-human primates is what makes us believe that humans could accomplish the same task much more easily in the near future,” Nicolelis said.

The findings provide further evidence that it may be possible to create a robotic exoskeleton that severely paralyzed patients could wear in order to explore and receive feedback from the outside world, Nicolelis said. The  exoskeleton would be directly controlled by the patient’s voluntary brain activity to allow the patient to move autonomously. Simultaneously, sensors distributed across the exoskeleton would generate the type of tactile feedback needed for the patient’s brain to identify the texture, shape and temperature of objects, as well as many features of the surface upon which they walk.

This overall therapeutic approach is the one chosen by the Walk Again Project, an international, non-profit consortium, established by a team of Brazilian, American, Swiss, and German scientists, which aims at restoring full-body mobility to quadriplegic patients through a brain-machine-brain interface implemented in conjunction with a full-body robotic exoskeleton.

The international scientific team recently proposed to carry out its first public demonstration of such an autonomous exoskeleton during the opening game of the 2014 FIFA Soccer World Cup that will be held in Brazil.

Ref.: Joseph E. O’Doherty, Mikhail A. Lebedev, Peter J. Ifft, Katie Z. Zhuang, Solaiman Shokur, Hannes Bleuler, and Miguel A. L. Nicolelis, Active tactile exploration using a brain–machine–brain interface, Nature, October 2011 [doi:10.1038/nature10489]

Source | KurzweilAI

How to communicate better in virtual worlds

Saturday, October 15th, 2011

The experimental setup. Left: The participants wore a total of six tracked objects; right: the corresponding virtual environment, showing the avatars in the self-animated third-person perspective. (Credit: Trevor J. Dodds et al./PLoS One)

Mapping real-world motions to “self-animated” virtual avatars, using body tracking to communicate a wide range of gestures, helps people communicate better in virtual worlds like Second Life, say researchers from the Max Planck Institute for Biological Cybernetics and Korea University.

They conducted two experiments to investigate whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication; and whether body gestures are used to help in communicating the meaning of a word. Participants worked in pairs and played a communication game, where one person had to describe the meanings of words to the other.
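A “self-animated” avatar in this sense simply replays the wearer’s tracked motion on a virtual body every frame. The sketch below illustrates that idea under stated assumptions: six tracked points reported as 3-D positions, and a hypothetical mapping of tracker names to avatar joints. It is not the apparatus or software used in the study.

```python
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

# Hypothetical mapping from tracked objects to avatar joints.
TRACKER_TO_JOINT = {
    "head": "avatar_head",
    "left_hand": "avatar_left_hand",
    "right_hand": "avatar_right_hand",
    "hips": "avatar_pelvis",
    "left_foot": "avatar_left_foot",
    "right_foot": "avatar_right_foot",
}

def retarget(tracked: Dict[str, Vec3], scale: float = 1.0) -> Dict[str, Vec3]:
    """Copy each tracked position onto the corresponding avatar joint,
    optionally scaling to account for the avatar's different size."""
    return {
        TRACKER_TO_JOINT[name]: (x * scale, y * scale, z * scale)
        for name, (x, y, z) in tracked.items()
    }

# One frame of made-up tracker data, in metres.
frame = {"head": (0.0, 1.7, 0.0), "left_hand": (-0.4, 1.2, 0.3),
         "right_hand": (0.4, 1.2, 0.3), "hips": (0.0, 1.0, 0.0),
         "left_foot": (-0.15, 0.0, 0.0), "right_foot": (0.15, 0.0, 0.0)}
print(retarget(frame))
```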

Ref.: Trevor J. Dodds et al., Talk to the Virtual Hands: Self-Animated Avatars Improve Communication in Head-Mounted Display Virtual Environments, PLoS One, DOI: 10.1371/journal.pone.0025759 (free access)

Source | KurzweilAI

The Biological Canvas

Tuesday, July 19th, 2011

Curatorial Statement

The Biological Canvas parades a group of hand-selected artists who articulate their concepts with body as the primary vessel.  Each artist uses body uniquely, experimenting with body as the medium: body as canvas, body as brush, and body as subject matter.  Whatever the approach, it is clear that new explorations of the body as canvas are beginning to emerge as commonplace in the 21st century.

There are reasons for this refocusing of the lens or eye toward body.  Living today is an experience quite different from that of a century, generation, decade, or (with new versions emerging daily) even a year ago.  The body truly is changing, both biologically and technologically, at a rapid rate.  Traditional understandings of what the body, or even the human, can be defined as are beginning to come under scrutiny.  Transhuman, Posthuman, Cyborg, Robot, Singularity, Embodiment, Avatar, Brain Machine Interface, Nanotechnology… these are terms we run across in media today.  They are the face of the future – the dictators of how we will come to understand our environment, biosphere, and selves.  The artists in this exhibition are responding to this paradigm shift with interests in a newfound control over bodies, a moment of self-discovery or realization that the body has extended out from its biological beginnings, or perhaps that the traditional body has become obsolete.

We see in the work of Orlan and Stelarc that the body becomes the malleable canvas.  Here we see some of the earliest executions of art by way of designer evolution, where the artist can use new tools to redesign the body to make a statement of controlled evolution.  In these works the direct changes to the body open the way to sculpting a body better suited for today’s world, moving beyond an outmoded one.  Stelarc, with his Ear on Arm project, specifically attacks shortcomings in the human body by presenting the augmented sense that his third ear brings.  Acting as a cybernetic ear, it lets him move beyond subjective hearing and share that aural experience with listeners around the world.  Commenting on the practicality of the traditional body living in a networked world, Stelarc begins to take into his own hands the design of networked senses.  Orlan uses her surgical art to conceptualize the practice Stelarc is using – saying that the body has become a form that can be reconfigured, structured, and applied to suit the desires of the mind within that body.  Carnal Art, as Orlan terms it, allows for the body to become a modifiable ready-made instead of a static object born out of the Earth.  Through the use of new technologies human beings are now able to reform selections of their body as they deem necessary and appropriate for their own ventures.

Not far from the surgical work of Orlan and Stelarc we come to Natasha Vita-More’s Electro 2011, Human Enhancement of Life Expansion, a project that acts as a guide for advancing the biological self into a more fit machine.  Integrating emerging technologies to build a more complete human, transhuman, and eventual posthuman body, Vita-More strives for a human-computer interface that will include neurophysiologic and cognitive enhancements that build on longevity and performance.  Included in the enhancement plan we see such technologies as atmospheric sensors, solar protective nanoskin, metabrain error correction, and replaceable genes.  Vita-More’s Primo Posthuman is the idealized application of what artists like Stelarc and Orlan are beginning to explore with their own reconstructive surgical enhancements.

The use of body in Nandita Kumar’s Birth of Brain Fly and Suk Kyoung Choi + Mark Nazemi’s Corner Monster reflects on how embodiment and techno-saturation are having psychological effects on the human mind.  In each of their works we travel into the imagined world of the mind, where the notion of self, identity, and sense of place begins to struggle to hold on to fixed points of order.  Kumar talks about her neuroscape continually morphing as it is placed in new conditions and environments that are ever changing.  Beginning with an awareness of one’s own constant programming that leads to a new understanding of self through love, the film goes on a journey through the depths of self, ego, and physical limitations.  Kumar’s animations provide an eerie journey through the mind as viewed from the vantage of an artist’s creative eye, all the while postulating an internal neuroscape evolving in accordance with an external electroscape.  Corner Monster examines the relationship between self and others in an embodied world.  The installation includes an array of visual stimulation in a dark environment.  As viewers engage with the world before them they are hooked up simultaneously (two at a time) to biofeedback sensors, which measure an array of biodata to be used in the interactive production of the environment before their eyes.  This project surveys the psychological self as it is engrossed by surrounding media, leading to both occasional systems of organized feedback as well as scattered responses that are convolutions of an overstimulated mind.

Marco Donnarumma also integrates a biofeedback system in his work to allow participants to shape musical compositions with their limbs.  Moving a particular body part triggers sounds, and the volume increases with the pace of that movement.  Here we see the body acting as brush, literally painting the soundscape through its own creative motion.  As the performer experiments with each portion of their body there is a slow realization that the sounds have become analogous to the neurological and biological yearnings of the body, each one seeking a particular upgrade that targets a specific need for that segment of the body.  For instance, a move of the left arm constantly provides a rich vibrato, reminding me of the sound of Vita-More’s solar protective nanoskin.
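Described this way, the piece amounts to a mapping from how fast each limb moves to whether its sound plays and how loud it is. The sketch below is a minimal illustration of such a mapping, not Donnarumma’s actual instrument; the sample names, frame rate and thresholds are assumptions.

```python
from typing import Optional
import numpy as np

# Hypothetical assignment of a sound sample to each limb.
LIMB_SOUNDS = {"left_arm": "vibrato.wav", "right_arm": "drone.wav",
               "left_leg": "pulse.wav", "right_leg": "noise.wav"}

def movement_speed(positions: np.ndarray, dt: float) -> float:
    """Mean speed (m/s) of a limb over a short window of 3-D positions."""
    return float(np.linalg.norm(np.diff(positions, axis=0), axis=1).mean() / dt)

def trigger(limb: str, positions: np.ndarray, dt: float = 1 / 30,
            threshold: float = 0.05) -> Optional[dict]:
    """Trigger the limb's sound if it is moving; volume scales with pace."""
    speed = movement_speed(positions, dt)
    if speed < threshold:           # limb essentially still: no sound
        return None
    volume = min(1.0, speed / 2.0)  # assumed mapping of speed to gain
    return {"sample": LIMB_SOUNDS[limb], "volume": volume}

# Example: a left arm sweeping at roughly 1 m/s, sampled at 30 fps.
path = np.cumsum(np.full((30, 3), 0.02), axis=0)
print(trigger("left_arm", path))
```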

Our final three artists all use body in their artwork as components of the fabricated results, acting like paint in a traditional artistic sense.  Marie-Pier Malouin weaves strands of hair together to reference the genetic predisposition with which all living things come into this world.  Here, Malouin uses the medium to reference suicidal tendencies – looking once again toward the fragility of the human mind, body and spirit as it exists in a traditional biological state.  The hair, a dead mass of growth, which we groom, straighten, smooth, and arrange, resembles the same obsession with which we analyze, evaluate, dissect and anatomize the nature of suicide.  Stan Strembicki also engages with the fragility of the human body in his Body, Soul and Science.  In his photographic imagery Strembicki turns a keen eye on the medical industry and its developments over time.  As with all technology, Strembicki concludes, the medical industry is one we can see as temporally corrective, making dramatic strides as new developments emerge.  Perhaps we can take Tracy Longley-Cook’s skinscapes, which she compares to the earth’s changing landforms of geology, ecology and climatology, as an analogy for our changing understanding of skin, body and self.  Can we begin to mold and sculpt the body much like we have done with the land we inhabit?

There is a tie between the conceptual and material strands of these last few works that we cannot overlook: memento mori.  The shortcomings and frailties of our natural bodies are the very components that artists like Vita-More, Stelarc, and Orlan are beginning to interpret as being resolved through the mastery of human enhancement and advancement.  In a world churning out new technologies and creative ideas it is hard to look toward the future and dismiss the possibilities.  Perhaps the worries of fragility and biological shortcomings will be both posed and answered by the scientific and artistic community, something that is panning out to be very likely, if not certain.  As you browse the work of The Biological Canvas I would like to invite your own imagination to engage.  Look at your life, your culture, your world and draw parallels with the artwork – open your own imagination to what our future may bring, or, perhaps more properly stated, what we will bring to our future.

Patrick Millard

Source | VASA Project

Humanity+ @ Parsons The New School For Design, Transhumanism Meets Design

Tuesday, April 26th, 2011

Patrick Millard | Formatting Gaia + Embodiment

Tuesday, April 12th, 2011

Charlie Rose interview with Ray Kurzweil and director Barry Ptolemy now online

Saturday, March 26th, 2011

Ray Kurzweil

Ray Kurzweil and Barry Ptolemy appeared on the Charlie Rose show Friday night to discuss the movie Transcendent Man, directed by Barry Ptolemy. You can see the interview here. You can also watch “In Charlie’s Green Room with Ray Kurzweil,” recorded the same evening.

Transcendent Man by Barry Ptolemy focuses on the life and ideas of Ray Kurzweil. It is currently available on iTunes in the United States and Canada and on DVD. Tickets to the London and San Francisco screenings in April are available.

2011 preview: Enter the robot self

Monday, January 3rd, 2011

Your new colleague

This could be the year when we quit dragging ourselves to work and send remote-controlled robot avatars instead

Why drag yourself to work through rush-hour traffic when you can stay at home and send a remote-controlled robot instead?

Firms in the US and Japan are already selling robot avatars that allow office workers to be in two places at once. So 2011 could be the year when many of us find ourselves sitting across the desk from an electronic colleague.

Californian company Willow Garage is developing a telepresence robot called Texai, while Anybots, also in California, recently launched the QB office bot.

The QB, which looks like a small Segway vehicle with a robot head on top, can travel at 6 kilometres per hour, using a laser scanner to avoid books and other office clutter.

It can be controlled via a web browser from anywhere in the world and has camera eyes to allow you to navigate your robot’s surroundings and see who you are talking to. A small LCD screen on the head means your colleagues can see you too.
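Under the hood, browser-based control of this kind reduces to sending small drive commands from the web client to the robot while video streams back. The snippet below sketches a purely hypothetical command format, not Anybots’ actual protocol; only the 6 km/h top speed comes from the article.

```python
import json

MAX_SPEED_MS = 6 * 1000 / 3600  # the QB's quoted top speed, ~1.67 m/s

def drive_command(forward: float, turn: float) -> str:
    """Build a JSON drive message from normalised joystick axes in [-1, 1].

    The message shape is a hypothetical example, not Anybots' protocol.
    """
    forward = max(-1.0, min(1.0, forward))
    turn = max(-1.0, min(1.0, turn))
    return json.dumps({
        "type": "drive",
        "linear_m_s": round(forward * MAX_SPEED_MS, 3),
        "angular_rad_s": round(turn * 1.0, 3),  # assumed turn-rate limit
    })

# Example: half speed ahead while turning gently right.
print(drive_command(0.5, -0.2))
```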

You could argue that if you were planning to talk to people in other offices you could just use a videoconferencing system rather than a $15,000 robot. But logging into a robot body allows people to move around in a relatively normal way, says Trevor Blackwell of Anybots.

“If you have a bunch of people who are all used to talking to each other wherever they want to, it is a bit of an imposition to say, ‘OK, from now on all conversations have to be in the videoconferencing room’.”

Talking to a robot colleague might feel strange at first, but people seem to get used to it quite quickly. “Someone recently came to the office asking for me, and a colleague told them they had just seen me,” says Blackwell. “But actually it was the robot they had seen. I was still at home.”

Source | New Scientist

Technology 25 Years Hence

Thursday, December 30th, 2010

“The law of accelerating returns is the only reliable method I know that allows us to forecast at least certain aspects of the future,” said Ray Kurzweil in “Why Do We Need Predictions?,” a New York Times special feature published Monday.

“A computer that fit inside a building when I was a student now fits in my pocket, and is a thousand times more powerful despite being a million times less expensive. In another quarter century, that capability will fit inside a red blood cell and will again be a billion times more powerful per dollar.”
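Kurzweil’s round numbers can be checked with simple arithmetic: a thousand-fold gain in power combined with a million-fold drop in price is a billion-fold improvement per dollar, and a billion-fold improvement over 25 years works out to roughly one doubling every ten months. The short calculation below just walks through those figures; they are Kurzweil’s illustrative numbers, not measurements.

```python
import math

# Kurzweil's round numbers: 1,000x more powerful and 1,000,000x cheaper.
performance_gain = 1_000
cost_drop = 1_000_000
per_dollar_gain = performance_gain * cost_drop          # 1e9, "a billion times"

years = 25
doublings = math.log2(per_dollar_gain)                  # ~29.9 doublings
print(f"{per_dollar_gain:.0e}x per dollar over {years} years")
print(f"= {doublings:.1f} doublings, one every {12 * years / doublings:.1f} months")
```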

Source | Kurzweil AI

Top 5 Human Enhancement Must Haves

Tuesday, December 7th, 2010

The World Health Organization estimates that 25 million people worldwide are affected by over 1,000 genetic conditions.  There are approximately 24.6 million people alive today who have been diagnosed with cancer within the last five years. In the United States alone, 101,000 people are currently waiting for an organ transplant, and the number grows by 300 people each month, according to the Mayo Clinic.

Human intelligence has not advanced at the speed of accelerating technologies.  What can we do to advance our own human physiology?  The following is a list of possible transhuman must-haves for the 21st century:

1.  Brain Enhancement: Metabrain prosthetic, which includes:
•  Observational feedback field
•  AGI decision assistant
•  Cognitive error correction task pane with auto-correct options
•  Multiple viewpoints window with drop-down elements

2.  Body Enhancement: Whole-body prosthetics, which includes:
•  In vivo fiber optic spine
•  Atmospheric physical stimuli sensors
•  Solar protective nanoskin
•  Regenerative organs
•  Exoskeleton mobility device
•  Replaceable genes

3. Behavior Enhancement: Psychology prosthetic, which includes:
•  Awareness Customization
•  Connectivity macros
•  Empathic ping layers
•  Finessed emotions helper
•  Persona multiplier and recorder
•  Seamless relay between platforms, avatars and syn-bios

4.  Style Enhancement:  Aesthetics prosthetic, which includes:
•  Wearable or in-system options
•  Transhuman haute couture clipboard
•  Radical style click and drag option
•  Day-to-night shape shifting
•  Customized tone, texture and hue change

5.  System Care:  Warranty prosthetic, which includes:
•  Additional 24 chromosomal pairing
•  Guarantee for genetic or code mutations or defects
•  Upgradable immune system and anti-virus system

Natasha Vita-More is a media artist/designer, Founder and Director of Transhumanist Arts & Culture, and Artistic Director of H+ Laboratory.

Source | H+ Magazine

A Step Towards Idoru?

Thursday, November 25th, 2010

Pop princess Hatsune Miku is storming the music scene.

With her long cerulean pigtails and her part-schoolgirl, part-spy outfit, she’s easy on the eyes. Yes, her voice sounds like it might have gone through a little – OK, a lot – of studio magic. Legions of screaming fans and the requisite fan sites? She’s got ’em.

And, like many of her hot young singer peers, Miku is extremely, proudly fake. Like, 3-D hologram fake.

Miku is a singing, digital avatar created by Crypton Future Media that customers can purchase and then program to perform any song on a computer.

Crypton uses voices recorded by actors and runs them through Yamaha Corp.’s Vocaloid software – marketed as “a singer in a box.” The result: A synthesized songstress that sounds far better than you ever have in your shower.

Crypton has even set up a record label called KarenT, with its own YouTube channel. The Vocaloidism blog has more details about the software.

A few months ago, a 3-D projection of Miku pranced around several stadium stages as part of a concert tour, where capacity crowds waved their glow sticks and sang along.  Here’s the starlet performing a jingle titled, appropriately, “World Is Mine.”




The Blu-ray and DVD recordings of those events were recently released, according to SingularityHub, which also has more videos.

The virtual diva’s albums have also topped the Japanese charts. She’s on Facebook. We’ve seen living, breathing musicians at the Hollywood Bowl get less love.

It all reminds us a bit of S1m0ne. Remember her? She’s the sultry actress who captivated adoring audiences in the eponymous 2002 film. She was also completely computer-generated by Al Pacino’s character.

Somewhere, we bet she’s a little bit jealous.

Source | New York Times

With Kinect Controller, Hackers Take Liberties

Thursday, November 25th, 2010

When Oliver Kreylos, a computer scientist, heard about the capabilities of Microsoft’s new Kinect gaming device, he couldn’t wait to get his hands on it. “I dropped everything, rode my bike to the closest game store and bought one,” he said.

But he had no interest in playing video games with the Kinect, which is meant to be plugged into an Xbox and allows players to control the action onscreen by moving their bodies.

Mr. Kreylos, who specializes in virtual reality and 3-D graphics, had just learned that he could download some software and use the device with his computer instead. He was soon using it to create “holographic” video images that can be rotated on a computer screen. A video he posted on YouTube last week caused jaws to drop and has been watched 1.3 million times.

Mr. Kreylos is part of a crowd of programmers, roboticists and tinkerers who are getting the Kinect to do things it was not really meant to do. The attraction of the device is that it is outfitted with cameras, sensors and software that let it detect movement, depth, and the shape and position of the human body.
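What makes such experiments possible is that, through the community-written open-source driver, the Kinect hands a computer a per-pixel depth image alongside its video. The sketch below shows roughly how one depth frame can be turned into the kind of 3-D point cloud behind Kreylos’s “holographic” video; it assumes the libfreenect Python bindings, and the depth-conversion constants and camera intrinsics are approximations rather than calibrated values.

```python
import numpy as np
import freenect  # Python bindings for the open-source libfreenect driver

# Grab one 640x480 depth frame (raw 11-bit values) from the first Kinect.
depth, _timestamp = freenect.sync_get_depth()

# Convert raw values to metres (approximate community formula; treat the
# constants as assumptions, not calibrated values).
z = 1.0 / (depth * -0.0030711016 + 3.3309495161)

# Back-project pixels to 3-D using a pinhole model with nominal intrinsics.
fx = fy = 594.0          # assumed focal length in pixels
cx, cy = 320.0, 240.0    # assumed principal point
v, u = np.indices(depth.shape)
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.dstack((x, y, z)).reshape(-1, 3)

# Keep only points within the sensor's useful range.
points = points[(points[:, 2] > 0.4) & (points[:, 2] < 5.0)]
print(points.shape, "3-D points in this frame")
```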

Companies respond to this kind of experimentation with their products in different ways — and Microsoft has had two very different responses since the Kinect was released on Nov. 4. It initially made vague threats about working with law enforcement to stop “product tampering.” But by last week, it was embracing the benevolent hackers.

“Anytime there is engagement and excitement around our technology, we see that as a good thing,” said Craig Davidson, senior director for Xbox Live at Microsoft. “It’s naïve to think that any new technology that comes out won’t have a group that tinkers with it.”

Microsoft and other companies would be wise to keep an eye on this kind of outside innovation and consider wrapping some of the creative advances into future products, said Loren Johnson, an analyst at Frost & Sullivan who follows digital media and consumer electronics.

“These adaptations could be a great benefit to their own bottom line,” he said. “It’s a trend that is undeniable, using public resources to improve on products, whether it be the Kinect or anything else.”

Microsoft invested hundreds of millions of dollars in Kinect in the hopes of wooing a broader audience of gamers, like those who enjoy using the motion-based controllers of the Nintendo Wii.

Word of the technical sophistication and low price of the device spread quickly in tech circles.

Building a device with the Kinect’s capabilities would require “thousands of dollars, multiple Ph.D.’s and dozens of months,” said Limor Fried, an engineer and founder of Adafruit Industries, a store in New York that sells supplies for experimental hardware projects. “You can just buy this at any game store for $150.”

On the day the Kinect went on sale, Ms. Fried and Phillip Torrone, a designer and senior editor of Make magazine, which features do-it-yourself technology projects, announced a $3,000 cash bounty for anyone who created and released free software allowing the Kinect to be used with a computer instead of an Xbox.

Microsoft quickly gave the contest a thumbs-down. In an interview with CNet News, a company representative said that it did not “condone the modification of its products” and that it would “work closely with law enforcement and product safety groups to keep Kinect tamper-resistant.”

That is not much different from the approach taken by Apple, which has released software upgrades for its iPhone operating system in an effort to block any unsanctioned hacks or software running on its devices.

But other companies whose products have been popular targets for tinkering have actively encouraged it. One example is iRobot, the company that makes the Roomba, a small robotic vacuum cleaner. That product was so popular with robotics enthusiasts that the company began selling the iRobot Create, a programmable machine with no dusting capabilities.

Mr. Davidson said Microsoft now had no concerns about the Kinect-hacking fan club, but he said the company would be monitoring developments. A modification that compromises the Xbox system, violates the company’s terms of service or “degrades the experience for everyone is not something we want,” he said.

Other creative uses of the Kinect involve drawing 3-D doodles in the air and then rotating them with a nudge of the hand, and manipulating colorful animated puppets on a computer screen. Most, if not all, of the prototypes were built using the open-source code released as a result of the contest sponsored by Ms. Fried and Mr. Torrone, which was won by Hector Martin, a 20-year-old engineering student in Spain.

The KinectBot, cobbled together in a weekend by Philipp Robbel, a Ph.D. candidate at the Massachusetts Institute of Technology, combines the Kinect and an iRobot Create. It uses the Kinect’s sensors to detect humans, respond to gesture and voice commands, and generate 3-D maps of what it is seeing as it rolls through a room.

Mr. Robbel said the KinectBot offered a small glimpse into the future of machines that could aid in the search for survivors after a natural disaster.

“This is only the tip of the iceberg,” he said of the wave of Kinect experimentation. “We are going to see an exponential number of videos and tests over the coming weeks and months as more people get their hands on this device.”

Toying around with the Kinect could go beyond being a weekend hobby. It could potentially lead to a job. In late 2007, Johnny Lee, then a graduate student at Carnegie Mellon, was so taken by the Wii that he rigged a system that would allow it to track his head movements and adjust the screen perspective accordingly.

A video of Mr. Lee demonstrating the technology was a hit on YouTube, as were his videos of other Wii-related projects. By June 2008, he had a job at Microsoft as part of the core team working on the Kinect software that distinguishes between players and parts of the body.

“The Wii videos made me much more visible to the products people at Xbox,” Mr. Lee said. “They were that much more interested in me because of the videos.”

Mr. Lee said he was “very happy” to see the response the Kinect was getting among people much like himself. “I’m glad they are inspired and that they like the technology,” he said. “I think they’ll be able to do really cool things with it.”

Source | New York Times

In Cybertherapy, Avatars Assist With Healing

Thursday, November 25th, 2010

Advances in artificial intelligence and computer modeling are allowing therapists to practice “cybertherapy” more effectively, using virtual environments to help people work through phobias, like a fear of heights or of public spaces.

Researchers are populating digital worlds with autonomous, virtual humans that can evoke the same tensions as in real-life encounters. People with social anxiety are struck dumb when asked questions by a virtual stranger. Heavy drinkers feel strong urges to order something from a virtual bartender, while gamblers are drawn to sit down and join a group playing on virtual slot machines.

In a recent study, researchers at USC found that a virtual confidant elicits from people the crucial first element in any therapy: self-disclosure. The researchers are incorporating the techniques learned from this research into a virtual agent being developed for the Army, called SimCoach. Guided by language-recognition software, SimCoach — there are several versions, male and female, young and older, white and black — appears on a computer screen and can conduct a rudimentary interview, gently probing for possible mental troubles.

And research at the University of Quebec suggests where virtual humans are headed: realistic three-dimensional forms that can be designed to resemble people in the real world.

Source | New York Times

Three-dimensional moving holograms breakthrough announced

Sunday, November 14th, 2010

A team led by University of Arizona (UA) optical sciences professor Nasser Peyghambarian has developed a new type of “holographic telepresence” that allows remote projection of a three-dimensional, moving image without the need for special eyewear such as 3D glasses or other auxiliary devices.

The technology is likely to take applications ranging from telemedicine, advertising, updatable 3D maps and entertainment to a new level.

The journal Nature chose the technology to feature on the cover of its Nov. 4 issue.

“Holographic telepresence means we can record a three-dimensional image in one location and show it in another location, in real-time, anywhere in the world,” said Peyghambarian, who led the research effort.

“Holographic stereography has been capable of providing excellent resolution and depth reproduction on large-scale 3D static images,” the authors wrote, “but has been missing dynamic updating capability until now.”

“At the heart of the system is a screen made from a novel photorefractive material, capable of refreshing holograms every two seconds, making it the first to achieve a speed that can be described as quasi-real-time,” said Pierre-Alexandre Blanche, an assistant research professor in the UA College of Optical Sciences and lead author of the Nature paper.

The prototype device uses a 10-inch screen, but Peyghambarian’s group is already successfully testing a much larger version with a 17-inch screen. The image is recorded using an array of regular cameras, each of which views the object from a different perspective. The more cameras that are used, the more refined the final holographic presentation will appear.

That information is then encoded onto a fast-pulsed laser beam, which interferes with another beam that serves as a reference. The resulting interference pattern is written into the photorefractive polymer, creating and storing the image. Each laser pulse records an individual “hogel” in the polymer. A hogel (holographic pixel) is the three-dimensional version of a pixel, the basic units that make up the picture.

The hologram fades away by natural dark decay within a couple of minutes, or even seconds, depending on experimental parameters. Or it can be erased by recording a new 3D image, creating a new diffraction structure and deleting the old pattern.

Peyghambarian explained: “Let’s say I want to give a presentation in New York. All I need is an array of cameras here in my Tucson office and a fast Internet connection. At the other end, in New York, there would be the 3D display using our laser system. Everything is fully automated and controlled by computer. As the image signals are transmitted, the lasers inscribe them into the screen and render them into a three-dimensional projection of me speaking.”

The overall recording setup is insensitive to vibration because of the short pulse duration and is therefore suited to industrial applications, without any special need for vibration, noise or temperature control.

One of the system’s major hallmarks, never achieved before, is what Peyghambarian’s group calls full parallax: “As you move your head left and right or up and down, you see different perspectives. This makes for a very life-like image. Humans are used to seeing things in 3D.”

The work is a result of a collaboration between the UA and Nitto Denko Technical, or NDT, a company in Oceanside, Calif. NDT provided the polymer sample and media preparation. “We have made major advances in photorefractive polymer film fabrication that allow for the very interesting 3D images obtained in our Nature article,” said Michiharu Yamamoto, vice president at NDT and co-author of the paper.

Potential applications of holographic telepresence include advertising, updatable 3D maps and entertainment. Telemedicine is another potential application: “Surgeons at different locations around the world can observe in 3D, in real time, and participate in the surgical procedure,” the authors wrote.

The system is a major advance over computer-generated holograms, which place high demands on computing power and take too long to be generated to be practical for any real-time applications.

Currently, the telepresence system can present in one color only, but Peyghambarian and his team have already demonstrated multi-color 3D display devices capable of writing images at a faster refresh rate, approaching the smooth transitions of images on a TV screen. These devices could be incorporated into a telepresence setup in the near future.




Source | University of Arizona

Augmented Reality Goggles

Sunday, November 14th, 2010

I held a black-and-white square of cardboard in my hand and watched as a dragon the size of a puppy appeared on top of it and roared at me. I watched a tiny Earth orbit around a real soda can, saw virtual balls fall through a digital gap in a table, and viewed a life-sized virtual human sitting in an empty chair.

What made these impressive special effects possible was a pair of augmented reality (AR) glasses—specifically, the Wrap 920AR glasses from Vuzix. Whereas virtual reality shows you only a digital landscape, augmented reality mixes virtual information, like text or images, into your view of the real world in real-time.

In the last few years, AR has started appearing on smart phones. In that context, software superimposes information on top of your view of the world as seen through the device’s screen. But AR eyewear, which provides a more immersive experience, has been confined to academic research and niche applications like medical and military training. That’s been largely because older AR hardware has been so bulky and has cost tens of thousands of dollars.

The Wrap 920AR from Vuzix, based in Rochester, New York, costs $1,995—about half the price of other AR goggles with similar image resolution. The company hopes that the glasses will appeal to gamers, animators, architects, and software developers, and it has developed software for building AR environments, which is included with the glasses.

Wearing the 920AR means looking at the world through a pair of LCD video displays. The 920AR is heavier than a regular pair of glasses but far lighter than other head-mounted virtual-reality displays I’ve tried. The displays are connected to two video cameras that sit outside of the glasses in front of the eyes. The screens show each eye a slightly different view of the world, mimicking natural human vision, which allows for depth perception. Accelerometers, gyro sensors, and magnetometers track the direction in which the wearer is looking. The glasses also come with ports that let users plug them into an iPhone for portable power and controls, such as loading a particular AR object or environment.

The Vuzix software can recognize and track visual markers (like the black-and-white piece of cardboard I held), or lock onto a certain object or color (like the soda can). Tracking works well as long as the pattern or object being tracked is visible to the cameras; tilting a tracking pattern too far will cause the virtual image to flicker. By tracking head movements, the software can make sure that virtual objects are perfectly positioned atop the real world.
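Marker tracking of this kind generally works by locating the square’s corners in each camera frame, solving for the camera’s pose relative to the marker, and projecting virtual geometry with that pose so it appears glued to the card. The sketch below shows the pose-and-projection step using OpenCV as a stand-in; it is an illustrative pipeline, not Vuzix’s software, and it assumes the four corner pixels have already been detected and that the camera calibration values are known (the numbers here are placeholders).

```python
import numpy as np
import cv2

# Assumed camera calibration (would come from a one-off calibration step).
camera_matrix = np.array([[700.0, 0.0, 320.0],
                          [0.0, 700.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# A 6 cm square marker, corners defined in its own coordinate frame (metres).
s = 0.06
marker_3d = np.array([[-s/2,  s/2, 0], [ s/2,  s/2, 0],
                      [ s/2, -s/2, 0], [-s/2, -s/2, 0]], dtype=np.float32)

# Corner pixel positions found by a detector in the current frame (example values).
marker_2d = np.array([[300, 180], [380, 185],
                      [375, 265], [295, 260]], dtype=np.float32)

# Solve for the marker's pose relative to the camera.
ok, rvec, tvec = cv2.solvePnP(marker_3d, marker_2d, camera_matrix, dist_coeffs)

# Project a virtual cube sitting on the marker into image coordinates;
# drawing those points over the video frame produces the AR overlay.
cube_3d = np.array([[x, y, z] for x in (-s/2, s/2)
                              for y in (-s/2, s/2)
                              for z in (0, s)], dtype=np.float32)
cube_2d, _ = cv2.projectPoints(cube_3d, rvec, tvec, camera_matrix, dist_coeffs)
print("cube corners in the image:\n", cube_2d.reshape(-1, 2))
```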

“There are other folks who make stereo, see-through eyewear, but there’s no one making anything near the price point of Vuzix’s,” says Steve Feiner, professor of computer science at Columbia University, and a lead AR researcher since the 1990s. Feiner says that the integration of cameras and motion sensors into the display makes the glasses less bulky.

Blair MacIntyre, an associate professor at the Georgia Institute of Technology who works on AR games, notes that most researchers and companies are focusing on smart phones. “Very few people have been making head-mounted displays [for consumers] since cell phones became powerful,” he says.

However, MacIntyre notes that AR glasses are still more practical than phones in many situations. “Anything tool-oriented—medical, military, maintenance repair—will require head-worn displays,” he says, because people’s hands need to be free to do such tasks. MacIntyre also points out that discovering information about the world using AR would require looking through a device constantly, which is too cumbersome to do with a phone.

For AR glasses to become really popular, MacIntyre says, they will need to get lighter and better looking, and there will need to be worthwhile applications. “No one’s going to pay even $100 if there’s no application,” he says. MacIntyre thinks gaming could be a killer app for AR, and he says business or social media applications may also be popular. The Vuzix glasses are “kind of an intermediate step,” he says. “There won’t be a million people buying them, but I do think it’s a lot closer to what we need than anything else has been.”

Ultimately, it may be practical to incorporate AR into glasses without a bulky display, by superimposing an image on a lens using optical components. “Clear glasses are a very old idea that go back to the earliest days of AR,” says Feiner. But it is more difficult to track the image that a person sees, and to accurately superimpose virtual objects on a clear display. Optical displays also have difficulty competing with ambient light.

MacIntyre believes even those who do not normally wear glasses may eventually find AR glasses appealing. “Ten years ago, if I told you that people would wear a big thing on their ear that blinks, no one would imagine that,” he says, referring to Bluetooth headsets. “The value outweighed the lack of aesthetics and the awkwardness.”

Source | Technology Review