Archive for the ‘Augmented Reality’ Category

Smart contact lenses for health and head-up displays

Friday, January 14th, 2011

Eye strain? Triggerfish will know

Lenses that monitor eye health are on the way, and in-eye 3D image displays are being developed too – welcome to the world of augmented vision

THE next time you gaze deep into someone’s eyes, you might be shocked at what you see: tiny circuits ringing their irises, their pupils dancing with pinpricks of light. These smart contact lenses aren’t intended to improve vision. Instead, they will monitor blood sugar levels in people with diabetes or look for signs of glaucoma.

The lenses could also map images directly onto the field of view, creating head-up displays for the ultimate augmented reality experience, without wearing glasses or a headset. To produce such lenses, researchers are merging transparent, eye-friendly materials with microelectronics.

In 2008, as a proof of concept, Babak Parviz at the University of Washington in Seattle created a prototype contact lens containing a single red LED. Using the same technology, he has now created a lens capable of monitoring glucose levels in people with diabetes.

It works because glucose levels in tear fluid correspond directly to those found in the blood, making continuous measurement possible without the need for thumb pricks, he says. Parviz’s design calls for the contact lens to send this information wirelessly to a portable device worn by diabetics, allowing them to manage their diet and medication more accurately.

Lenses that also contain arrays of tiny LEDs may allow this or other types of digital information to be displayed directly to the wearer through the lens. This kind of augmented reality has already taken off in cellphones, with countless software apps superimposing digital data onto images of our surroundings, effectively blending the physical and online worlds.

Making it work on a contact lens won’t be easy, but the technology has begun to take shape. Last September, Sensimed, a Swiss spin-off from the Swiss Federal Institute of Technology in Lausanne, launched the very first commercial smart contact lens, designed to improve treatment for people with glaucoma.

The disease puts pressure on the optic nerve through fluid build-up, and can irreversibly damage vision if not properly treated. Highly sensitive platinum strain gauges embedded in Sensimed’s Triggerfish lens record changes in the curvature of the cornea, which correspond directly to the pressure inside the eye, says CEO Jean-Marc Wismer. The lens transmits this information wirelessly at regular intervals to a portable recording device worn by the patient, he says.

Like an RFID tag or London’s Oyster travel cards, the lens gets its power from a nearby loop antenna – in this case taped to the patient’s face. The antenna transmits power to the contact lens, which uses it to interrogate the sensors, process the signals and transmit the readings back.

Each disposable contact lens is designed to be worn just once for 24 hours, and the patient repeats the process once or twice a year. This allows researchers to look for peaks in eye pressure which vary from patient to patient during the course of a day. This information is then used to schedule the timings of medication.
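
Turning a day of recordings into a dosing schedule is, at its core, a peak-finding exercise. The snippet below is purely illustrative (Sensimed's processing is proprietary and the numbers are invented): it takes hypothetical hourly intraocular-pressure estimates from one 24-hour wear and reports the hour in which pressure peaked.

```python
# Illustrative only: find the peak intraocular-pressure window in a 24-hour
# recording, the kind of analysis used to time glaucoma medication.
# The readings are hypothetical; Triggerfish's own processing is proprietary.

def peak_pressure_hour(readings):
    """readings: list of (hour_of_day, pressure_mmHg) tuples from one 24-hour wear."""
    hour, pressure = max(readings, key=lambda r: r[1])
    return hour, pressure

if __name__ == "__main__":
    # Fake data with a nocturnal pressure peak between 2 a.m. and 4 a.m.
    sample = [(h, 15 + 6 * (h in (2, 3, 4))) for h in range(24)]
    hour, pressure = peak_pressure_hour(sample)
    print(f"Pressure peaked around {hour:02d}:00 at ~{pressure} mmHg")
```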

“The timing of these drugs is important,” Wismer says.

Parviz, however, has taken a different approach. His glucose sensor uses sets of electrodes to run tiny currents through the tear fluid and measures them to detect very small quantities of dissolved sugar. These electrodes, along with a computer chip that contains a radio frequency antenna, are fabricated on a flat substrate made of polyethylene terephthalate (PET), a transparent polymer commonly found in plastic bottles. This is then moulded into the shape of a contact lens to fit the eye.
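
Amperometric sensing of this kind ultimately reduces to a calibration curve: the current through the electrodes grows with the amount of dissolved sugar, and stored calibration constants map the measured current back to a concentration. The sketch below is a minimal illustration with invented constants, not Parviz's actual readout code.

```python
# Minimal sketch of amperometric glucose readout. The linear calibration
# constants here are invented for illustration; a real sensor would be
# calibrated against known glucose solutions.

SLOPE_NA_PER_MG_DL = 0.12   # hypothetical: nanoamps of current per mg/dL of glucose
OFFSET_NA = 0.35            # hypothetical: baseline current with no glucose present

def current_to_glucose(current_na):
    """Convert a measured electrode current (nA) to an estimated glucose level (mg/dL)."""
    return max(0.0, (current_na - OFFSET_NA) / SLOPE_NA_PER_MG_DL)

print(current_to_glucose(12.4))  # roughly 100 mg/dL with these made-up constants
```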

Parviz plans to use a higher-powered antenna to get a better range, allowing patients to carry a single external device in their breast pocket or on their belt. Preliminary tests show that his sensors can accurately detect even very low glucose levels. Parviz is due to present his results later this month at the IEEE MEMS 2011 conference in Cancún, Mexico.

“There’s still a lot more testing we have to do,” says Parviz. In the meantime, his lab has made progress with contact lens displays. They have developed both red and blue miniature LEDs – leaving only green for full colour – and have separately built lenses with 3D optics that resemble the head-up visors used to view movies in 3D.

Parviz has yet to combine both the optics and the LEDs in the same contact lens, but he is confident that even images so close to the eye can be brought into focus. “You won’t necessarily have to shift your focus to see the image generated by the contact lens,” says Parviz. It will just appear in front of you, he says. The LEDs will be arranged in a grid pattern, and should not interfere with normal vision when the display is off.

For Sensimed, the circuitry is entirely around the edge of the lens (see photo). However, both have yet to address the fact that wearing these lenses might make you look like the robots in the Terminator movies. False irises could eventually solve this problem, says Parviz. “But that’s not something at the top of our priority list,” he says.

Source | New Scientist

Documentary Plug and Pray explores the promise, problems and ethics of robotics

Thursday, January 13th, 2011

Ray Kurzweil was interviewed in the recent documentary, Plug and Pray, exploring the promise and peril of advanced AI and the coming age of robotics.

The documentary deals with the ethics of robots. The creation of artificial intelligence was a fantastical idea that captured the minds of scientists (and science fiction writers) from the very start of the computer age.

But the breathtaking pace of technology has moved us ever closer to making it a reality. What is the future of AI, and where could it potentially lead us? Jens Schanze’s Plug & Pray is an engrossing journey into the preternatural, and sometimes grotesque, future of advanced computer life. We are invited into the laboratories and minds of technological experts from around the world as they make bold visions come true: the creation of machines that are equal to their human creators.

But then there is Joseph Weizenbaum, a pioneer of artificial intelligence and creator of the Eliza speech program. Playful and sardonic, he critically questions the scientific faith in technological supremacy and the notion of “plug and play.”

Plug & Pray explores not only the world in which computer science, robotics, neuroscience, and psychology merge, but also the philosophical questions that we must confront along the way. You plug it in and it all works, Weizenbaum notes. Or sometimes it doesn’t.

Source | Kurzweil AI

2011 preview: Enter the robot self

Monday, January 3rd, 2011

Your new colleague

This could be the year when we quit dragging ourselves to work and send remote-controlled robot avatars instead

Why drag yourself to work through rush-hour traffic when you can stay at home and send a remote-controlled robot instead?

Firms in the US and Japan are already selling robot avatars that allow office workers to be in two places at once. So 2011 could be the year when many of us find ourselves sitting across the desk from an electronic colleague.

Californian company Willow Garage is developing a so-called telepresence robot called Texai, while Anybots, also in California, recently launched the QB office bot.

The QB, which looks like a small Segway vehicle with a robot head on top, can travel at 6 kilometres per hour, using a laser scanner to avoid books and other office clutter.

It can be controlled via a web browser from anywhere in the world and has camera eyes to allow you to navigate your robot’s surroundings and see who you are talking to. A small LCD screen on the head means your colleagues can see you too.

You could argue that if you were planning to talk to people in other offices you could just use a videoconferencing system rather than a $15,000 robot. But logging into a robot body allows people to move around in a relatively normal way, says Trevor Blackwell of Anybots.

“If you have a bunch of people who are all used to talking to each other wherever they want to, it is a bit of an imposition to say, ‘OK, from now on all conversations have to be in the videoconferencing room’.”

Talking to a robot colleague might feel strange at first, but people seem to get used to it quite quickly. “Someone recently came to the office asking for me, and a colleague told them they had just seen me,” says Blackwell. “But actually it was the robot they had seen. I was still at home.”

Source | New Scientist

Technology 25 Years Hence

Thursday, December 30th, 2010

“The law of accelerating returns is the only reliable method I know that allows us to forecast at least certain aspects of the future,” said Ray Kurzweil in “Why Do We Need Predictions?,” a New York Times special feature published Monday.

“A computer that fit inside a building when I was a student now fits in my pocket, and is a thousand times more powerful despite being a million times less expensive. In another quarter century, that capability will fit inside a red blood cell and will again be a billion times more powerful per dollar.”
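
For readers who want to check the arithmetic behind that claim, a billion-fold gain in price-performance over a quarter century works out to roughly one doubling every ten months. The snippet below is just that back-of-envelope calculation, not anything from Kurzweil himself.

```python
import math

# Back-of-envelope arithmetic implied by the quote: a billion-fold gain in
# price-performance over 25 years corresponds to log2(1e9) ~ 29.9 doublings,
# i.e. one doubling roughly every 10 months.
gain = 1e9          # claimed improvement per dollar
years = 25          # time horizon from the quote
doublings = math.log2(gain)
doubling_time_months = years * 12 / doublings
print(f"{doublings:.1f} doublings -> one every {doubling_time_months:.1f} months")
```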

Source | Kurzweil AI

The Year in Enhancing Reality

Tuesday, December 28th, 2010

More than real: These glasses provide augmented reality for $1,995.

2010 saw an explosion of 3-D products for consumers and also the arrival of augmented reality as a mainstream technology. In both areas, however, only some commercial implementations proved ready for prime time.

3-D TVs, Cameras, and Camcorders Galore

3-D was a hot topic at the start of the year, partly because of the 3-D blockbuster movie Avatar, which came out last December. Many predicted that 3-D technology would move quickly from the movie theater into the home, and major electronics companies including Panasonic, Mitsubishi, Sony, Philips, and Toshiba announced plans to release 3-D televisions and Blu-ray players (“Home 3-D: Here, or Hype?” and “Here Come the High-Definition 3-D TVs“). But obstacles—particularly the need to wear 3-D glasses costing upwards of $100 per pair and the limited amount of 3-D content available to watch (a handful of DVDs and few TV transmissions)—have prevented 3-D TVs from becoming wildly popular, at least for now (“Will 3-D Make the Jump from Theater to Living Room?“).

In an effort to make the technology more enticing, some companies are developing glasses-free 3-D displays. Each lens in a pair of 3-D glasses filters a different image, which fools the brain into responding as if to a three-dimensional image. To ditch the glasses, the display has to produce alternating images very rapidly, and the user has to sit in just the right place relative to the screen. While most people would prefer not to have to wear 3-D glasses, few will be happy with this constraint. Fortunately, Microsoft has figured out a way around the problem—a screen that detects the viewer’s position and shows different images to each eye. Although it’s still in the research stages, the technology will allow one or two people to see a 3-D image on a screen, regardless of where in the room they are sitting (“3-D Without the Glasses“).
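
Conceptually, a head-tracked autostereoscopic display does something like the following toy sketch: it interleaves the left-eye and right-eye images in alternating columns and shifts which columns reach which eye as the tracked viewer moves. This is not Microsoft's design, and the optics of a real lenticular or barrier display are considerably more involved; the snippet is only meant to make the idea concrete.

```python
import numpy as np

def interleave_for_viewer(left_img, right_img, viewer_offset_px):
    """Toy model of a head-tracked, two-view autostereoscopic display.

    left_img, right_img: HxWx3 arrays. viewer_offset_px: how far the tracked
    viewer has moved from the display's sweet spot, in pixel-equivalent units
    (a hypothetical quantity produced by the head tracker). Columns alternate
    between the two views, and the interleave phase follows the viewer so
    each eye keeps seeing its intended image.
    """
    phase = viewer_offset_px % 2          # two-view display: phase is 0 or 1
    out = np.array(right_img, copy=True)
    out[:, phase::2] = left_img[:, phase::2]
    return out

# Hypothetical usage with random frames and a tracked head position:
h, w = 480, 640
left = np.random.randint(0, 255, (h, w, 3), dtype=np.uint8)
right = np.random.randint(0, 255, (h, w, 3), dtype=np.uint8)
frame = interleave_for_viewer(left, right, viewer_offset_px=37)
```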

Glasses-free 3-D technology may be more suitable for handheld devices, whose users typically view the screen from a particular position anyway. The first phone featuring this type of 3-D tech was released this year (“TR10: Mobile 3-D“). Nintendo sees a different market for it: last January, the company announced plans to release a glasses-free 3-D gaming system (“Nintendo Plans Glasses-Free 3D Console“), potentially as early as next year.

A few researchers are looking into the possible side effects of viewing 3-D (with glasses or without). A small number of moviegoers complain of headaches or eye strain after watching 3-D movies, and some scientists argue that viewing 3-D—which, after all, tricks the brain into seeing something that’s not there—is responsible. But more research is needed (“Is 3D Bad for You?“).

The Dawn of AR

Smart phones helped usher in another reality-altering technology this year, as many new augmented-reality games and mapping applications were released.

Companies including Layar, Wikitude, and Qualcomm released AR apps for smart phones. Users point a phone’s camera at something and see an image overlaid with floating information, such as directions, the names of buildings, historical photos, or restaurant reviews. One app even shows people’s latest Twitter and Facebook updates floating around their heads (“Augmented Identity“). Businesses are starting to get on board with AR apps for advertising (“Augmented Reality Lacks Bite for Marketers“).

Researchers have found other applications for AR (“Augmented-Reality Floor Tiling” and “Treating Cockroach Phobia With Augmented Reality“). Even big companies like GM have been experimenting with AR—for instance, to help improve car safety (“GM Develops Augmented Reality Windshield“). One collaborative AR experiment used sophisticated tracking to ensure that two people simultaneously saw the same virtual elements in physical space (“Collaborative Augmented Reality Makes Beautiful Music“).

Having to view the world through the screen of a smart phone is tiresome. So some companies are working to bring affordable, lightweight AR glasses to consumers. These could provide a more immersive AR experience (“Augmented Reality Goggles”) and help AR become more common, and more useful.

Source | Technology Review

Pentagon Wants to Give Troops Terminator Vision

Monday, December 27th, 2010

No more will soldiers’ vision be limited to the socket-embedded spheres that God intended. The Pentagon now wants troops to see dangers lurking behind them in real time, and be able to tell if an object a kilometer away is a walking stick or an AK-47.

In a solicitation released today, Darpa, the Pentagon’s far-out research branch, unveiled the Soldier Centric Imaging via Computational Cameras effort, or SCENICC. Imagine a suite of cameras that digitally capture a kilometer-wide, 360-degree sphere, rendering the image in 3-D (!) onto a wearable eyepiece.

You’d be able to see literally all around you, including behind yourself, and zoom in at will, creating a “stereoscopic/binocular system, simultaneously providing 10x zoom to both eyes.” And you would do this all hands-free, apparently by barking out or pre-programming a command (the solicitation leaves it up to a designer’s imagination) to adjust focus.

Then comes the Terminator-vision. Darpa wants the eyepiece to include “high-resolution computer-enhanced imagery as well as task-specific non-image data products such as mission data overlays, threat warnings/alerts, targeting assistance, etc.” Target identified: Sarah Connor… The “Full Sphere Awareness” tool will provide soldiers with “muzzle flash detection,” “projectile tracking” and “object recognition/labeling,” basically pointing key information out to them.

And an “integrated weapon sighting” function locks your gun on your target when acquired. That’s far beyond an app mounted on your rifle that keeps track of where your friendlies and enemies are.

The imaging wouldn’t just be limited to what any individual soldier sees. SCENICC envisions a “networked optical sensing capability” that fuses images taken from nodes worn by “collections of soldiers and/or unmanned vehicles.” The Warrior-Alpha drone overhead? Its full-motion video and still images would be sent into your eyepiece.

It also has to be ridiculously lightweight, weighing less than 700 grams for the entire system — including a battery powerful enough to “exceed 24 hours [usage] under normal conditions.” That’s about a pound and a half, maximum. The Army’s experimental ensemble of wearable gadgets weighs about eight pounds. And it is to SCENICC what your Roomba is to the T-1000.

Here’s how far advanced SCENICC is compared to bleeding-edge imaging and networking capabilities that the Army is currently developing. Right now, the Army’s asking three different companies — Raytheon, Rockwell Collins and General Dynamics — to build a wearable platform of digital maps, computers and radios, networked with one another. Soldiers would have warzone maps beamed onto helmet-mounted eyepieces.

The system, known as Nett Warrior, needs to weigh less than eight pounds, and it builds on a years-long and ultimately fruitless effort called Land Warrior. (One of the problems with Land Warrior was that it was heavy and cumbersome, owing in part to battery weight.) The Army hopes to choose one of the Nett Warrior designs by March.

By the time the Army actually rolls out Nett Warrior after testing, production and deployment — a few years, optimistically — SCENICC will already be hard at work on its replacement. Darpa wants a hands-free zooming function within two years of work on the contract. By year three, the computer-enhanced vision tool needs to be ready. Year four is for 360-degree vision. Then it’s on to development.

The Army is generally hot for combat-ready smartphones to keep soldiers linked up with each other. And the buzz-generating tool for the soldier of the near future is mapping technology, delivered onto a smartphone or some other handheld mobile device, at least judging from this year’s Association of the U.S. Army confab.

But all of these representation tools are two-dimensional, and require soldiers to look away from their patrols in order to use them. Textron’s SoldierEyes Common Operating Picture, for instance, lets soldiers see icons on a tablet-mounted map telling them where their friends, enemies and neutrals are. It can’t put those icons onto a 3-D picture sent to a soldier’s eyes, let alone allow a 10x zoom for a kilo-wide 360-degree field of vision. Why would anyone use a map on a smartphone when they could have SCENICC?

Even with all the advances in digital imaging, it’ll be a tall order to put together 360-degree vision and 10x zoom and mapping software and integration with weapons systems and lightweight miniaturization and network connectivity.

Darpa doesn’t really address how the system’s networked optics would work in low-bandwidth areas like, say, eastern Afghanistan (though maybe drone-borne cell towers can help).

Indeed, judging from the solicitation, while SCENICC is supposed to be networked, it doesn’t seem to have any communications requirements for soldiers to talk through what their optics are sharing with each other. Maybe there’s a role for those new soldier smartphones after all.

Source | Wired

‘Wearable robot’ arm improves performance of brain-controlled device

Saturday, December 18th, 2010

Aided by a robotic exoskeleton, a monkey can hit the target faster and more directly (Hatsopoulos, et al. The Journal of Neuroscience)

The performance of a brain-machine interface designed to help paralyzed subjects move objects with their thoughts is improved with the addition of a robotic arm that provides sensory feedback, a new study from the University of Chicago finds.

Devices that translate brain activity into the movement of a computer cursor or an external robotic arm have already proven successful in humans. But in these early systems, vision was the only tool a subject could use to help control the motion.

Adding a robot arm that provided kinesthetic information about movement and position in space improved the performance of monkeys using a brain-machine interface in a study published today in The Journal of Neuroscience. Incorporating this sense may improve the design of “wearable robots” to help patients with spinal cord injuries, researchers said.

“A lot of patients that are motor-disabled might have partial sensory feedback,” said Nicholas Hatsopoulos, PhD, Associate Professor and Chair of Computational Neuroscience at the University of Chicago. “That got us thinking that maybe we could use this natural form of feedback with wearable robots to provide that kind of feedback.”

In the experiments, monkeys controlled a cursor without actively moving their arm, via a device that translated activity in the primary motor cortex of their brain into cursor motion. When the monkeys also wore a sleeve-like robotic exoskeleton that moved their arm in tandem with the cursor, their control of the cursor improved: they hit targets faster and via straighter paths than without the exoskeleton.
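
The decoding step itself can be surprisingly simple. Many labs in this era fit a linear mapping from binned motor-cortex firing rates to cursor velocity by least squares; the Chicago study used its own decoding pipeline, so the sketch below (with synthetic data) is only a generic illustration of that idea.

```python
import numpy as np

# Generic linear-decoder sketch (not the Chicago group's actual pipeline):
# learn a mapping from binned firing rates of N neurons to 2-D cursor velocity.

rng = np.random.default_rng(0)
n_samples, n_neurons = 2000, 40
true_w = rng.normal(size=(n_neurons, 2))

rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)  # spike counts per bin
velocity = rates @ true_w + rng.normal(scale=2.0, size=(n_samples, 2))   # synthetic training data

w, *_ = np.linalg.lstsq(rates, velocity, rcond=None)   # least-squares fit of the decoder

predicted = rates @ w                                   # decoded cursor velocities
print("mean decoding error:", np.mean(np.linalg.norm(predicted - velocity, axis=1)))
```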

“We saw a 40 percent improvement in cursor control when the robotic exoskeleton passively moved the monkeys’ arm,” Hatsopoulos said. “This could be quite significant for daily activities being performed by a paralyzed patient that was equipped with such a system.”

When a person moves their arm or hand, they use sensory feedback called proprioception to control that motion. For example, if one reaches out to grab a coffee mug, sensory neurons in the arm and hand send information back to the brain about where one’s limbs are positioned and moving. Proprioception tells a person where their arm is positioned, even if their eyes are closed.

But in patients with conditions where sensory neurons die out, executing basic motor tasks such as buttoning a shirt or even walking becomes exceptionally difficult. Paraplegic subjects in the early clinical trials of brain-machine interfaces faced similar difficulty in attempting to move a computer cursor or robot arm using only visual cues. Those troubles helped researchers realize the importance of proprioception feedback, Hatsopoulos said.

“In the early days when we were doing this, we didn’t even consider sensory feedback as an important component of the system,” Hatsopoulos said. “We really thought it was just one-way: signals were coming from the brain, and then out to control the limb. It’s only more recently that the community has really realized that there is this loop with feedback coming back.”

Reflecting this loop, the researchers on the new study also observed changes in the brain activity recorded from the monkeys when sensory feedback was added to the set-up. With proprioception feedback, the cell firing patterns of the primary motor cortex carried more information than in trials with only visual feedback, Hatsopoulos said, reflecting an improved signal-to-noise ratio.

Wearable robots

The improvement seen from adding proprioception feedback may inform the next generation of brain-machine interface devices, Hatsopoulos said. Already, scientists are developing different types of “wearable robots” to augment a person’s natural abilities. Combining a decoder of cortical activity with a robotic exoskeleton for the arm or hand can serve a dual purpose: allowing a paralyzed subject to move the limb, while also providing sensory feedback.

To benefit from this solution, a paralyzed patient must have retained some residual sensory information from the limbs despite the loss of motor function – a common occurrence, Hatsopoulos said, particularly in patients with ALS, locked-in syndrome, or incomplete spinal cord injury. For patients without both motor and sensory function, direct stimulation of sensory cortex may be able to simulate the sensation of limb movement. Further research in that direction is currently underway, Hatsopoulos said.

“I think all the components are there; there’s nothing here that’s holding us back conceptually,” Hatsopoulos said. “I think using these wearable robots and controlling them with the brain is, in my opinion, probably the most promising approach to take in helping paralyzed individuals regain the ability to move.”

The paper, “Incorporating feedback from multiple sensory modalities enhances brain-machine interface control,” appears in the Dec. 15 issue of The Journal of Neuroscience. Authors on the paper are Aaron J. Suminski, Dennis C. Tkach, and Hatsopoulos of the University of Chicago, and Andrew H. Fagg of the University of Oklahoma.

Funding for the research was provided by the National Institute of Neurological Disorders and Stroke and the Paralyzed Veterans of America Research Foundation.

Source | University of Chicago

Word Lens Translates Words Inside of Images

Saturday, December 18th, 2010

Ever been confused at a restaurant in a foreign country and wish you could just scan your menu with your iPhone and get an instant translation? Well as of today you are one step closer thanks to Word Lens from QuestVisual.

The iPhone app, which hit iTunes last night,  is the culmination of 2 1/2 years of work from founders Otavio Good and John DeWeese. The paid app, which currently offers only English to Spanish and Spanish to English translation for $4.99, uses Optical Character Recognition technology to execute something which might as well be magic. This is what the future, literally, looks like.

Founder Good explains the app’s process simply, “It tries to find out what the letters are and then looks in the dictionary. Then it draws the words back on the screen in translation.” Right now the app is mostly word for word translation, useful if you’re looking to get the gist of something like a dish on a menu or what a road sign says.
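
That pipeline can be roughly approximated with off-the-shelf parts: recognize the letters, look each word up in a dictionary, then draw the translation back over the image. Word Lens's OCR and rendering are proprietary and run entirely on the phone, so the sketch below is only an illustration of the idea, assuming the open-source Tesseract engine (via pytesseract, with Spanish language data installed) and a toy Spanish-to-English word list.

```python
# Illustrative sketch of the recognize-then-translate pipeline, not QuestVisual's code.
# Assumes the open-source Tesseract OCR engine via pytesseract and Pillow.
import pytesseract
from PIL import Image

TOY_DICTIONARY = {"salida": "exit", "peligro": "danger", "menu": "menu", "pollo": "chicken"}

def translate_sign(image_path):
    # Step 1: find out what the letters are.
    text = pytesseract.image_to_string(Image.open(image_path), lang="spa")
    # Step 2: word-for-word dictionary lookup, much like the app's "gist" translation.
    words = text.lower().split()
    return " ".join(TOY_DICTIONARY.get(w, f"[{w}]") for w in words)

print(translate_sign("sign.jpg"))  # hypothetical input image
```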

At the moment the only existing services even remotely like this are Pleco, a Chinese-learning app, and a feature of Google Goggles that lets you snap a still shot and send it in for translation. Word Lens is currently self-funded.

Good says the obvious next step for Word Lens is to add more languages. He plans to incorporate the major European languages and is also thinking about other potential uses, including a reader for the blind: “I wouldn’t be surprised if we did French next, Italian and since my mom is Brazilian, Portuguese.”

Says Good, modestly, “The translation isn’t perfect, but it gets the point across.” You can try it out for yourself here.

Source | TechCrunch

Top 5 Human Enhancement Must Haves

Tuesday, December 7th, 2010

The World Health Organization estimates that 25 million people worldwide are affected by over 1,000 genetic conditions. There are approximately 24.6 million people alive today who have been diagnosed with cancer within the last five years. In the United States alone, 101,000 people are currently waiting for an organ transplant, and the number grows by 300 people each month, according to the Mayo Clinic.

Human intelligence has not advanced at the speed of accelerating technologies.  What can we do to advance our own human physiology?  The following is a list of possible transhuman must haves for the 21st century:

1.  Brain Enhancement: Metabrain prosthetic, which includes:
•  an observational feedback field
•  AGI decision assistant
•  cognitive error correction task pane with auto-correct options
•  multiple viewpoints window with drop down elements

2.  Body Enhancement: Whole-body prosthetics, which includes:
•  In vivo fiber optic spine
•  Atmospheric physical stimuli sensors
•  Solar protective nanoskin
•  Regenerative organs
•  Exoskeleton mobility device
•  Replaceable genes

3. Behavior Enhancement: Psychology prosthetic, which includes:
•  Awareness Customization
•  Connectivity macros
•  Empathic ping layers
•  Finessed emotions helper
•  Persona multiplier and recorder
•  Seamless relay between platforms, avatars and syn-bios

4.  Style Enhancement:  Aesthetics prosthetic, which includes:
•  Wearable or in-system options
•  Transhuman haute couture clipboard
•  Radical style click and drag option
•  Day-to-night shape shifting
•  Customized tone, texture and hue change

5.  System Care:  Warranty prosthetic, which includes:
•  Additional 24 chromosomal pairing
•  Guarantee for genetic or code mutations or defects
•  Upgradable immune system and anti-virus system

Natasha Vita-More is a media artist/designer, Founder and Director of Transhumanist Arts & Culture, and Artistic Director of H+ Laboratory.

Source | H+ Magazine

A Step Towards Idoru?

Thursday, November 25th, 2010

Pop princess Hatsune Miku is storming the music scene.

With her long cerulean pigtails and her part-schoolgirl, part-spy outfit, she’s easy on the eyes. Yes, her voice sounds like it might have gone through a little –- OK, a lot –- of studio magic. Legions of screaming fans and the requisite fan sites? She’s got ‘em.

And, like many of her hot young singer peers, Miku is extremely, proudly fake. Like, 3-D hologram fake.

Miku is a singing, digital avatar created by Crypton Future Media that customers can purchase and then program to perform any song on a computer.

Crypton uses voices recorded by actors and runs them through Yamaha Corp.’s Vocaloid software – marketed as “a singer in a box.” The result: A synthesized songstress that sounds far better than you ever have in your shower.

Crypton has even set up a record label called KarenT, with its own YouTube channel. The Vocaloidism blog has more details about the software.

A few months ago, a 3-D projection of Miku pranced around several stadium stages as part of a concert tour, where capacity crowds waved their glow sticks and sang along.  Here’s the starlet performing a jingle titled, appropriately, “World Is Mine.”

The Blu-ray and DVD recordings of those events were recently released, according to SingularityHub, which also has more videos.

The virtual diva’s albums have also topped the Japanese charts. She’s on Facebook. We’ve seen living, breathing musicians at the Hollywood Bowl get less love.

It all reminds us a bit of S1m0ne. Remember her? She’s the sultry actress who captivated adoring audiences in the eponymous 2002 film. She was also completely computer-generated by Al Pacino’s character.

Somewhere, we bet she’s a little bit jealous.

Source | New York Times

With Kinect Controller, Hackers Take Liberties

Thursday, November 25th, 2010

When Oliver Kreylos, a computer scientist, heard about the capabilities of Microsoft’s new Kinect gaming device, he couldn’t wait to get his hands on it. “I dropped everything, rode my bike to the closest game store and bought one,” he said.

But he had no interest in playing video games with the Kinect, which is meant to be plugged into an Xbox and allows players to control the action onscreen by moving their bodies.

Mr. Kreylos, who specializes in virtual reality and 3-D graphics, had just learned that he could download some software and use the device with his computer instead. He was soon using it to create “holographic” video images that can be rotated on a computer screen. A video he posted on YouTube last week caused jaws to drop and has been watched 1.3 million times.
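
The “holographic video” trick starts with nothing more exotic than reading the Kinect’s depth and color streams on a PC. A minimal sketch, assuming the open-source libfreenect driver and its Python bindings (Kreylos’s own software is different), might look like the following; turning such frames into a rotatable 3-D point cloud is then a matter of projecting each depth pixel through the camera geometry.

```python
# Minimal sketch using the open-source libfreenect Python bindings (an assumption;
# Kreylos's own software differs). Grabs one depth frame and one RGB frame from
# a Kinect plugged into a PC instead of an Xbox.
import freenect
import numpy as np

depth, _ = freenect.sync_get_depth()   # 480x640 array of raw 11-bit depth values
rgb, _ = freenect.sync_get_video()     # 480x640x3 array of color pixels

# Crude mask of whatever is closest to the camera (raw values grow with distance).
near = depth < 600
print(f"{near.sum()} of {depth.size} pixels are in the near field")
```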

Mr. Kreylos is part of a crowd of programmers, roboticists and tinkerers who are getting the Kinect to do things it was not really meant to do. The attraction of the device is that it is outfitted with cameras, sensors and software that let it detect movement, depth, and the shape and position of the human body.

Companies respond to this kind of experimentation with their products in different ways — and Microsoft has had two very different responses since the Kinect was released on Nov. 4. It initially made vague threats about working with law enforcement to stop “product tampering.” But by last week, it was embracing the benevolent hackers.

“Anytime there is engagement and excitement around our technology, we see that as a good thing,” said Craig Davidson, senior director for Xbox Live at Microsoft. “It’s naïve to think that any new technology that comes out won’t have a group that tinkers with it.”

Microsoft and other companies would be wise to keep an eye on this kind of outside innovation and consider wrapping some of the creative advances into future products, said Loren Johnson, an analyst at Frost & Sullivan who follows digital media and consumer electronics.

“These adaptations could be a great benefit to their own bottom line,” he said. “It’s a trend that is undeniable, using public resources to improve on products, whether it be the Kinect or anything else.”

Microsoft invested hundreds of millions of dollars in Kinect in the hopes of wooing a broader audience of gamers, like those who enjoy using the motion-based controllers of the Nintendo Wii.

Word of the technical sophistication and low price of the device spread quickly in tech circles.

Building a device with the Kinect’s capabilities would require “thousands of dollars, multiple Ph.D.’s and dozens of months,” said Limor Fried, an engineer and founder of Adafruit Industries, a store in New York that sells supplies for experimental hardware projects. “You can just buy this at any game store for $150.”

On the day the Kinect went on sale, Ms. Fried and Phillip Torrone, a designer and senior editor of Make magazine, which features do-it-yourself technology projects, announced a $3,000 cash bounty for anyone who created and released free software allowing the Kinect to be used with a computer instead of an Xbox.

Microsoft quickly gave the contest a thumbs-down. In an interview with CNet News, a company representative said that it did not “condone the modification of its products” and that it would “work closely with law enforcement and product safety groups to keep Kinect tamper-resistant.”

That is not much different from the approach taken by Apple, which has released software upgrades for its iPhone operating system in an effort to block any unsanctioned hacks or software running on its devices.

But other companies whose products have been popular targets for tinkering have actively encouraged it. One example is iRobot, the company that makes the Roomba, a small robotic vacuum cleaner. That product was so popular with robotics enthusiasts that the company began selling the iRobot Create, a programmable machine with no dusting capabilities.

Mr. Davidson said Microsoft now had no concerns about the Kinect-hacking fan club, but he said the company would be monitoring developments. A modification that compromises the Xbox system, violates the company’s terms of service or “degrades the experience for everyone is not something we want,” he said.

Other creative uses of the Kinect involve drawing 3-D doodles in the air and then rotating them with a nudge of the hand, and manipulating colorful animated puppets on a computer screen. Most, if not all, of the prototypes were built using the open-source code released as a result of the contest sponsored by Ms. Fried and Mr. Torrone, which was won by Hector Martin, a 20-year-old engineering student in Spain.

The KinectBot, cobbled together in a weekend by Philipp Robbel, a Ph.D. candidate at the Massachusetts Institute of Technology, combines the Kinect and an iRobot Create. It uses the Kinect’s sensors to detect humans, respond to gesture and voice commands, and generate 3-D maps of what it is seeing as it rolls through a room.

Mr. Robbel said the KinectBot offered a small glimpse into the future of machines that could aid in the search for survivors after a natural disaster.

“This is only the tip of the iceberg,” he said of the wave of Kinect experimentation. “We are going to see an exponential number of videos and tests over the coming weeks and months as more people get their hands on this device.”

Toying around with the Kinect could go beyond being a weekend hobby. It could potentially lead to a job. In late 2007, Johnny Lee, then a graduate student at Carnegie Mellon, was so taken by the Wii that he rigged a system that would allow it to track his head movements and adjust the screen perspective accordingly.

A video of Mr. Lee demonstrating the technology was a hit on YouTube, as were his videos of other Wii-related projects. By June 2008, he had a job at Microsoft as part of the core team working on the Kinect software that distinguishes between players and parts of the body.

“The Wii videos made me much more visible to the products people at Xbox,” Mr. Lee said. “They were that much more interested in me because of the videos.”

Mr. Lee said he was “very happy” to see the response the Kinect was getting among people much like himself. “I’m glad they are inspired and that they like the technology,” he said. “I think they’ll be able to do really cool things with it.”

Source | New York Times

In Cybertherapy, Avatars Assist With Healing

Thursday, November 25th, 2010

Advances in artificial intelligence and computer modeling are allowing therapists to practice “cybertherapy” more effectively, using virtual environments to help people work through phobias, like a fear of heights or of public spaces.

Researchers are populating digital worlds with autonomous, virtual humans that can evoke the same tensions as in real-life encounters. People with social anxiety are struck dumb when asked questions by a virtual stranger. Heavy drinkers feel strong urges to order something from a virtual bartender, while gamblers are drawn to sit down and join a group playing on virtual slot machines.

In a recent study, researchers at USC found that a virtual confidant elicits from people the crucial first element in any therapy: self-disclosure. The researchers are incorporating the techniques learned from this research into a virtual agent being developed for the Army, called SimCoach. Guided by language-recognition software, SimCoach — there are several versions, male and female, young and older, white and black — appears on a computer screen and can conduct a rudimentary interview, gently probing for possible mental troubles.

And research at the University of Quebec suggests where virtual humans are headed: realistic three-dimensional forms that can be designed to resemble people in the real world.

Source | New York Times

Growing Up Digital, Wired for Distraction

Wednesday, November 24th, 2010

By all rights, Vishal, a bright 17-year-old, should already have finished the book, Kurt Vonnegut’s “Cat’s Cradle,” his summer reading assignment. But he has managed 43 pages in two months.

He typically favors Facebook, YouTube and making digital videos. That is the case this August afternoon. Bypassing Vonnegut, he clicks over to YouTube, meaning that tomorrow he will enter his senior year of high school hoping to see an improvement in his grades, but without having completed his only summer homework.

On YouTube, “you can get a whole story in six minutes,” he explains. “A book takes so long. I prefer the immediate gratification.”

Students have always faced distractions and time-wasters. But computers and cellphones, and the constant stream of stimuli they offer, pose a profound new challenge to focusing and learning.

Researchers say the lure of these technologies, while it affects adults too, is particularly powerful for young people. The risk, they say, is that developing brains can become more easily habituated than adult brains to constantly switching tasks — and less able to sustain attention.

“Their brains are rewarded not for staying on task but for jumping to the next thing,” said Michael Rich, an associate professor at Harvard Medical School and executive director of the Center on Media and Child Health in Boston. And the effects could linger: “The worry is we’re raising a generation of kids in front of screens whose brains are going to be wired differently.”

But even as some parents and educators express unease about students’ digital diets, they are intensifying efforts to use technology in the classroom, seeing it as a way to connect with students and give them essential skills. Across the country, schools are equipping themselves with computers, Internet access and mobile devices so they can teach on the students’ technological territory.

It is a tension on vivid display at Vishal’s school, Woodside High School, on a sprawling campus set against the forested hills of Silicon Valley. Here, as elsewhere, it is not uncommon for students to send hundreds of text messages a day or spend hours playing video games, and virtually everyone is on Facebook.

The principal, David Reilly, 37, a former musician who says he sympathizes when young people feel disenfranchised, is determined to engage these 21st-century students. He has asked teachers to build Web sites to communicate with students, introduced popular classes on using digital tools to record music, secured funding for iPads to teach Mandarin and obtained $3 million in grants for a multimedia center.

He pushed first period back an hour, to 9 a.m., because students were showing up bleary-eyed, at least in part because they were up late on their computers. Unchecked use of digital devices, he says, can create a culture in which students are addicted to the virtual world and lost in it.

“I am trying to take back their attention from their BlackBerrys and video games,” he says. “To a degree, I’m using technology to do it.”

The same tension surfaces in Vishal, whose ability to be distracted by computers is rivaled by his proficiency with them. At the beginning of his junior year, he discovered a passion for filmmaking and made a name for himself among friends and teachers with his storytelling in videos made with digital cameras and editing software.

He acts as his family’s tech-support expert, helping his father, Satendra, a lab manager, retrieve lost documents on the computer, and his mother, Indra, a security manager at the San Francisco airport, build her own Web site.

But he also plays video games 10 hours a week. He regularly sends Facebook status updates at 2 a.m., even on school nights, and has such a reputation for distributing links to videos that his best friend calls him a “YouTube bully.”

Several teachers call Vishal one of their brightest students, and they wonder why things are not adding up. Last semester, his grade point average was 2.3 after a D-plus in English and an F in Algebra II. He got an A in film critique.

“He’s a kid caught between two worlds,” said Mr. Reilly — one that is virtual and one with real-life demands.

Vishal, like his mother, says he lacks the self-control to favor schoolwork over the computer. She sat him down a few weeks before school started and told him that, while she respected his passion for film and his technical skills, he had to use them productively.

“This is the year,” she says she told him. “This is your senior year and you can’t afford not to focus.”

It was not always this way. As a child, Vishal had a tendency to procrastinate, but nothing like this. Something changed him.

Growing Up With Gadgets

When he was 3, Vishal moved with his parents and older brother to their current home, a three-bedroom house in the working-class section of Redwood City, a suburb in Silicon Valley that is more diverse than some of its elite neighbors.

Thin and quiet with a shy smile, Vishal passed the admissions test for a prestigious public elementary and middle school. Until sixth grade, he focused on homework, regularly going to the house of a good friend to study with him.

But Vishal and his family say two things changed around the seventh grade: his mother went back to work, and he got a computer. He became increasingly engrossed in games and surfing the Internet, finding an easy outlet for what he describes as an inclination to procrastinate.

“I realized there were choices,” Vishal recalls. “Homework wasn’t the only option.”

Several recent studies show that young people tend to use home computers for entertainment, not learning, and that this can hurt school performance, particularly in low-income families. Jacob L. Vigdor, an economics professor at Duke University who led some of the research, said that when adults were not supervising computer use, children “are left to their own devices, and the impetus isn’t to do homework but play around.”

Research also shows that students often juggle homework and entertainment. The Kaiser Family Foundation found earlier this year that half of students from 8 to 18 are using the Internet, watching TV or using some other form of media either “most” (31 percent) or “some” (25 percent) of the time that they are doing homework.

At Woodside, as elsewhere, students’ use of technology is not uniform. Mr. Reilly, the principal, says their choices tend to reflect their personalities. Social butterflies tend to be heavy texters and Facebook users. Students who are less social might escape into games, while drifters or those prone to procrastination, like Vishal, might surf the Web or watch videos.

The technology has created on campuses a new set of social types — not the thespian and the jock but the texter and gamer, Facebook addict and YouTube potato.

“The technology amplifies whoever you are,” Mr. Reilly says.

For some, the amplification is intense. Allison Miller, 14, sends and receives 27,000 texts in a month, her fingers clicking at a blistering pace as she carries on as many as seven text conversations at a time. She texts between classes, at the moment soccer practice ends, while being driven to and from school and, often, while studying.

Most of the exchanges are little more than quick greetings, but they can get more in-depth, like “if someone tells you about a drama going on with someone,” Allison said. “I can text one person while talking on the phone to someone else.”

But this proficiency comes at a cost: she blames multitasking for the three B’s on her recent progress report.

“I’ll be reading a book for homework and I’ll get a text message and pause my reading and put down the book, pick up the phone to reply to the text message, and then 20 minutes later realize, ‘Oh, I forgot to do my homework.’ ”

Some shyer students do not socialize through technology — they recede into it. Ramon Ochoa-Lopez, 14, an introvert, plays six hours of video games on weekdays and more on weekends, leaving homework to be done in the bathroom before school.

Escaping into games can also salve teenagers’ age-old desire for some control in their chaotic lives. “It’s a way for me to separate myself,” Ramon says. “If there’s an argument between my mom and one of my brothers, I’ll just go to my room and start playing video games and escape.”

With powerful new cellphones, the interactive experience can go everywhere. Between classes at Woodside or at lunch, when use of personal devices is permitted, students gather in clusters, sometimes chatting face to face, sometimes half-involved in a conversation while texting someone across the teeming quad. Others sit alone, watching a video, listening to music or updating Facebook.

Students say that their parents, worried about the distractions, try to police computer time, but that monitoring the use of cellphones is difficult. Parents may also want to be able to call their children at any time, so taking the phone away is not always an option.

Other parents wholly embrace computer use, even when it has no obvious educational benefit.

“If you’re not on top of technology, you’re not going to be on top of the world,” said John McMullen, 56, a retired criminal investigator whose son, Sean, is one of five friends in the group Vishal joins for lunch each day.

Sean’s favorite medium is video games; he plays for four hours after school and twice that on weekends. He was playing more but found his habit pulling his grade point average below 3.2, the point at which he felt comfortable. He says he sometimes wishes that his parents would force him to quit playing and study, because he finds it hard to quit when given the choice. Still, he says, video games are not responsible for his lack of focus, asserting that in another era he would have been distracted by TV or something else.

“Video games don’t make the hole; they fill it,” says Sean, sitting at a picnic table in the quad, where he is surrounded by a multimillion-dollar view: on the nearby hills are the evergreens that tower above the affluent neighborhoods populated by Internet tycoons. Sean, a senior, concedes that video games take a physical toll: “I haven’t done exercise since my sophomore year. But that doesn’t seem like a big deal. I still look the same.”

Sam Crocker, Vishal’s closest friend, who has straight A’s but lower SAT scores than he would like, blames the Internet’s distractions for his inability to finish either of his two summer reading books.

“I know I can read a book, but then I’m up and checking Facebook,” he says, adding: “Facebook is amazing because it feels like you’re doing something and you’re not doing anything. It’s the absence of doing something, but you feel gratified anyway.”

He concludes: “My attention span is getting worse.”

The Lure of Distraction

Some neuroscientists have been studying people like Sam and Vishal. They have begun to understand what happens to the brains of young people who are constantly online and in touch.

In an experiment at the German Sport University in Cologne in 2007, boys from 12 to 14 spent an hour each night playing video games after they finished homework.

On alternate nights, the boys spent an hour watching an exciting movie, like “Harry Potter” or “Star Trek,” rather than playing video games. That allowed the researchers to compare the effect of video games and TV.

The researchers looked at how the use of these media affected the boys’ brainwave patterns while sleeping and their ability to remember their homework in the subsequent days. They found that playing video games led to markedly lower sleep quality than watching TV, and also led to a “significant decline” in the boys’ ability to remember vocabulary words. The findings were published in the journal Pediatrics.

Markus Dworak, a researcher who led the study and is now a neuroscientist at Harvard, said it was not clear whether the boys’ learning suffered because sleep was disrupted or, as he speculates, also because the intensity of the game experience overrode the brain’s recording of the vocabulary.

“When you look at vocabulary and look at huge stimulus after that, your brain has to decide which information to store,” he said. “Your brain might favor the emotionally stimulating information over the vocabulary.”

At the University of California, San Francisco, scientists have found that when rats have a new experience, like exploring an unfamiliar area, their brains show new patterns of activity. But only when the rats take a break from their exploration do they process those patterns in a way that seems to create a persistent memory.

In that vein, recent imaging studies of people have found that major cross sections of the brain become surprisingly active during downtime. These brain studies suggest to researchers that periods of rest are critical in allowing the brain to synthesize information, make connections between ideas and even develop the sense of self.

Researchers say these studies have particular implications for young people, whose brains have more trouble focusing and setting priorities.

“Downtime is to the brain what sleep is to the body,” said Dr. Rich of Harvard Medical School. “But kids are in a constant mode of stimulation.”

“The headline is: bring back boredom,” added Dr. Rich, who last month gave a speech to the American Academy of Pediatrics entitled, “Finding Huck Finn: Reclaiming Childhood from the River of Electronic Screens.”

Dr. Rich said in an interview that he was not suggesting young people should toss out their devices, but rather that they embrace a more balanced approach to what he said were powerful tools necessary to compete and succeed in modern life.

The heavy use of devices also worries Daniel Anderson, a professor of psychology at the University of Massachusetts at Amherst, who is known for research showing that children are not as harmed by TV viewing as some researchers have suggested.

Multitasking using ubiquitous, interactive and highly stimulating computers and phones, Professor Anderson says, appears to have a more powerful effect than TV.

Like Dr. Rich, he says he believes that young, developing brains are becoming habituated to distraction and to switching tasks, not to focusing.

“If you’ve grown up processing multiple media, that’s exactly the mode you’re going to fall into when put in that environment — you develop a need for that stimulation,” he said.

Vishal can attest to that.

“I’m doing Facebook, YouTube, having a conversation or two with a friend, listening to music at the same time. I’m doing a million things at once, like a lot of people my age,” he says. “Sometimes I’ll say: I need to stop this and do my schoolwork, but I can’t.”

“If it weren’t for the Internet, I’d focus more on school and be doing better academically,” he says. But thanks to the Internet, he says, he has discovered and pursued his passion: filmmaking. Without the Internet, “I also wouldn’t know what I want to do with my life.”

Clicking Toward a Future

The woman sits in a cemetery at dusk, sobbing. Behind her, silhouetted and translucent, a man kneels, then fades away, a ghost.

This captivating image appears on Vishal’s computer screen. On this Thursday afternoon in late September, he is engrossed in scenes he shot the previous weekend for a music video he is making with his cousin.

The video is based on a song performed by the band Guns N’ Roses about a woman whose boyfriend dies. He wants it to be part of the package of work he submits to colleges that emphasize film study, along with a documentary he is making about home-schooled students.

Now comes the editing. Vishal taught himself to use sophisticated editing software in part by watching tutorials on YouTube. He does not leave his chair for more than two hours, sipping Pepsi, his face often inches from the screen, as he perfects the clip from the cemetery. The image of the crying woman was shot separately from the image of the kneeling man, and he is trying to fuse them.

“I’m spending two hours to get a few seconds just right,” he says.

He occasionally sends a text message or checks Facebook, but he is focused in a way he rarely is when doing homework. He says the chief difference is that filmmaking feels applicable to his chosen future, and he hopes colleges, like the University of Southern California or the California Institute of the Arts in Los Angeles, will be so impressed by his portfolio that they will overlook his school performance.

“This is going to compensate for the grades,” he says. On this day, his homework includes a worksheet for Latin, some reading for English class and an economics essay, but they can wait.

For Vishal, there’s another clear difference between filmmaking and homework: interactivity. As he edits, the windows on the screen come alive; every few seconds, he clicks the mouse to make tiny changes to the lighting and flow of the images, and the software gives him constant feedback.

“I click and something happens,” he says, explaining that, by comparison, reading a book or doing homework is less exciting. “I guess it goes back to the immediate gratification thing.”

The $2,000 computer Vishal is using is state of the art and only a week old. It represents a concession by his parents. They allowed him to buy it, despite their continuing concerns about his technology habits, because they wanted to support his filmmaking dream. “If we put roadblocks in his way, he’s just going to get depressed,” his mother says. Besides, she adds, “he’s been making an effort to do his homework.”

At this point in the semester, it seems she is right. The first schoolwide progress reports come out in late September, and Vishal has mostly A’s and B’s. He says he has been able to make headway by applying himself, but also by cutting back his workload. Unlike last year, he is not taking advanced placement classes, and he has chosen to retake Algebra II not in the classroom but in an online class that lets him work at his own pace.

His shift to easier classes might not please college admissions officers, according to Woodside’s college adviser, Zorina Matavulj. She says they want seniors to intensify their efforts. As it is, she says, even if Vishal improves his performance significantly, someone with his grades faces long odds in applying to the kinds of colleges he aspires to.

Still, Vishal’s passion for film reinforces for Mr. Reilly, the principal, that the way to reach these students is on their own terms.

Hands-On Technology

Big Macintosh monitors sit on every desk, and a man with hip glasses and an easygoing style stands at the front of the class. He is Geoff Diesel, 40, a favorite teacher here at Woodside who has taught English and film. Now he teaches one of Mr. Reilly’s new classes, audio production. He has a rapt audience of more than 20 students as he shows a video of the band Nirvana mixing their music, then holds up a music keyboard.

“Who knows how to use Pro Tools? We’ve got it. It’s the program used by the best music studios in the world,” he says.

In the back of the room, Mr. Reilly watches, thrilled. He introduced the audio course last year and enough students signed up to fill four classes. (He could barely pull together one class when he introduced Mandarin, even though he had secured iPads to help teach the language.)

“Some of these students are our most at-risk kids,” he says. He means that they are more likely to tune out school, skip class or not do their homework, and that they may not get healthful meals at home. They may also do their most enthusiastic writing not for class but in text messages and on Facebook. “They’re here, they’re in class, they’re listening.”

Despite Woodside High’s affluent setting, about 40 percent of its 1,800 students come from low-income families and receive a reduced-cost or free lunch. The school is 56 percent Latino, 38 percent white and 5 percent African-American, and it sends 93 percent of its students to four-year or community colleges.

Mr. Reilly says that the audio class provides solid vocational training and can get students interested in other subjects.

“Today mixing music, tomorrow sound waves and physics,” he says. And he thinks the key is that they love not just the music but getting their hands on the technology. “We’re meeting them on their turf.”

It does not mean he sees technology as a panacea. “I’ll always take one great teacher in a cave over a dozen Smart Boards,” he says, referring to the high-tech teaching displays used in many schools.

Teachers at Woodside commonly blame technology for students’ struggles to concentrate, but they are divided over whether embracing computers is the right solution.

“It’s a catastrophe,” said Alan Eaton, a charismatic Latin teacher. He says that technology has led to a “balkanization of their focus and duration of stamina,” and that schools make the problem worse when they adopt the technology.

“When rock ’n’ roll came about, we didn’t start using it in classrooms like we’re doing with technology,” he says. He personally feels the sting, since his advanced classes have one-third as many students as they had a decade ago.

Vishal remains a Latin student, one whom Mr. Eaton describes as particularly bright. But the teacher wonders if technology might be the reason Vishal seems to lose interest in academics the minute he leaves class.

Mr. Diesel, by contrast, does not think technology is behind the problems of Vishal and his schoolmates — in fact, he thinks it is the key to connecting with them, and an essential tool. “It’s in their DNA to look at screens,” he asserts. And he offers another analogy to explain his approach: “Frankenstein is in the room and I don’t want him to tear me apart. If I’m not using technology, I lose them completely.”

Mr. Diesel had Vishal as a student in cinema class and describes him as a “breath of fresh air” with a gift for filmmaking. Mr. Diesel says he wonders if Vishal is a bit like Woody Allen, talented but not interested in being part of the system.

But Mr. Diesel adds: “If Vishal’s going to be an independent filmmaker, he’s got to read Vonnegut. If you’re going to write scripts, you’ve got to read.”

Back to Reading Aloud

Vishal sits near the back of English IV. Marcia Blondel, a veteran teacher, asks the students to open the book they are studying, “The Things They Carried,” which is about the Vietnam War.

“Who wants to read starting in the middle of Page 137?” she asks. One student begins to read aloud, and the rest follow along.

To Ms. Blondel, the exercise in group reading represents a regression in American education and an indictment of technology. The reason she has to do it, she says, is that students now lack the attention span to read the assignments on their own.

“How can you have a discussion in class?” she complains, arguing that she has seen a considerable change in recent years. In some classes she can count on little more than one-third of the students to read a 30-page homework assignment.

She adds: “You can’t become a good writer by watching YouTube, texting and e-mailing a bunch of abbreviations.”

As the group-reading effort winds down, she says gently: “I hope this will motivate you to read on your own.”

It is a reminder of the choices that have followed the students through the semester: computer or homework? Immediate gratification or investing in the future?

Mr. Reilly hopes that the two can meet — that computers can be combined with education to better engage students and can give them technical skills without compromising deep analytical thought.

But in Vishal’s case, computers and schoolwork seem more and more to be mutually exclusive. Ms. Blondel says that Vishal, after a decent start to the school year, has fallen into bad habits. In October, for example, he turned in a short essay based on the first few chapters of “The Things They Carried” weeks late. His grade at that point, she says, hovers around a D.

For his part, Vishal says he is investing himself more in his filmmaking, accelerating work with his cousin on their music video project. But he is also using Facebook late at night and surfing for videos on YouTube. The evidence of the shift comes in a string of Facebook updates.

Saturday, 11:55 p.m.: “Editing, editing, editing”

Sunday, 3:55 p.m.: “8+ hours of shooting, 8+ hours of editing. All for just a three-minute scene. Mind = Dead.”

Sunday, 11:00 p.m.: “Fun day, finally got to spend a day relaxing… now about that homework…”

Source | New York Times

Storytelling 2.0: Open your books to augmented reality

Friday, November 19th, 2010

In today’s world, suffused with technology’s blue glow, books hearken back to a time when thoughts were more linear than they are in these hyperlinked days. A mere collection of bound pages may no longer suffice for entertainment in the information age. That is where augmented reality (AR) books come in. We are talking books, plus.

Plus what exactly? Most commonly, extra visuals. The standard visual method of augmentation is to use a webcam and custom software to make animations appear on a live screen image of a book. This year saw the launch of a few commercially available AR books, such as Fairyland Magic from Carlton Books and Tyrone the Clean’o’saurus from Salariya Publishing. These books, aimed at children, overlay pages with 3D images. An enthusiast for children’s books myself, I decided to try them out.

Fairyland Magic, like the other AR books in Carlton’s catalogue, was commissioned and written in the usual way, with the animations added on afterwards as a lure to encourage children into books. Though visually pleasing, the computer visualisations require a high-end computer. Even where that was available, the clunkiness of the graphics, the dextrous wielding of the book needed to make the animations appear and the fact that the book on screen is a mirror image, making the text appear backwards, meant that, while novel, they didn’t add much to my experience of the book.
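To make the mechanics concrete, here is a minimal sketch of the kind of webcam overlay these titles rely on. It is not the software behind any of the commercial books: it assumes a printed ArUco fiducial on the page, the opencv-contrib-python package, and a single local image standing in for an animation frame; the detection call is from the pre-4.7 cv2.aruco API (newer OpenCV releases expose the same thing through cv2.aruco.ArucoDetector). It also flips the camera feed so the on-screen text is not mirrored, the complaint raised above.

```python
# Minimal marker-based "AR book" overlay sketch (assumes opencv-contrib-python,
# a printed ArUco marker on the page, and an overlay image on disk).
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
overlay = cv2.imread("overlay.png")          # stand-in for one animation frame
cap = cv2.VideoCapture(0)                    # the webcam watching the book

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)               # un-mirror the feed so page text reads correctly
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)  # cv2.aruco.ArucoDetector in OpenCV >= 4.7
    if ids is not None:
        # Warp the overlay so it sits on the detected marker's quadrilateral.
        h, w = overlay.shape[:2]
        src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        dst = corners[0].reshape(4, 2).astype(np.float32)
        H, _ = cv2.findHomography(src, dst)
        warped = cv2.warpPerspective(overlay, H, (frame.shape[1], frame.shape[0]))
        mask = warped.sum(axis=2) > 0
        frame[mask] = warped[mask]            # paste the "animation" over the live page
    cv2.imshow("AR book", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```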

In contrast, the Salariya titles were built around the AR concept, with the technology central to them from the outset. The animations are less temperamental, lengthier and incorporate more movement, all of which serves to bring the characters to life and give them extra personality. Sadly, there is only one animation per book, on the final page. When we tested out Tyrone on the 7- and 9-year-old children available to our editorial department, we were told that the animation was “great” and “cool”. However, their efforts to make the virtual Tyrone fall off his virtual carpet resulted in the software crashing, and our young guinea pigs quickly got bored and wandered off.

For those of us a little old for these offerings, there are some less commercial AR books out there to fit the bill. Back in 2008, artist Camille Scherrer developed her book Souvenirs du Monde des Montagnes, along with software that broke the mould by using a webcam to recognise the content of the pages in order to correctly place the animated overlays. Scherrer’s book, like those of Salariya, was developed in conjunction with the augmentation. The book itself is a fairytale built around an archive of family photographs from the early part of last century. The animations dance across the pages and then off the book into the surroundings.
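Scherrer’s markerless approach, recognising the printed page itself rather than a fiducial, can be approximated with off-the-shelf feature matching. The sketch below illustrates that general idea and is not her software: it assumes OpenCV, a reference photograph of the page (page.jpg), a webcam frame saved to disk, and an animation frame (sprite.png) drawn in the reference image’s coordinate frame.

```python
# Toy markerless page recognition: match ORB features between a reference photo
# of the page and one camera frame, then place the overlay via a homography.
import cv2
import numpy as np

page = cv2.imread("page.jpg", cv2.IMREAD_GRAYSCALE)   # reference image of the printed page
frame = cv2.imread("camera_frame.jpg")                 # a frame grabbed from the webcam
sprite = cv2.imread("sprite.png")                      # animation frame, aligned to page.jpg

orb = cv2.ORB_create(1000)
kp_page, des_page = orb.detectAndCompute(page, None)
kp_frame, des_frame = orb.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
if des_page is not None and des_frame is not None:
    matches = sorted(matcher.match(des_page, des_frame), key=lambda m: m.distance)[:80]
    if len(matches) >= 10:
        src = np.float32([kp_page[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_frame[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is not None:
            warped = cv2.warpPerspective(sprite, H, (frame.shape[1], frame.shape[0]))
            mask = warped.sum(axis=2) > 0
            frame[mask] = warped[mask]     # the animation now sits on the recognised page

cv2.imwrite("augmented_frame.jpg", frame)
```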

At present, all the AR books suffer from the same issue: the animations indiscriminately overlay the webcam input. According to Scherrer, the technology has progressed and the animations can now interact with the readers’ hands. However, she is not sure that the audience is ready for that. “What is funny is that for the public, nobody is impressed by the animation going under the fingers because it seems natural. They are more impressed by the old one where the animation goes over the fingers. I think the technology is going too fast for the public,” she says. “Maybe in five years I will make a book where some animations go under the reader’s hands and some go over – to create layers.”

Audio of one kind or another forms part of each of these AR books, but in its own right audio augmentation can be a powerful tool, as artist Yuri Suzuki has demonstrated. His Barcode Book uses a simple scanner to read barcodes incorporated into the book’s artwork, triggering a related audio playback. In his work with Oscar Diaz, REC&PLAY, he uses old cassette technology to add sound to the very ink on the page. The recording pen, complete with microphone and cassette write head, lays down a layer of ferromagnetic ink – made from heat-treated ferric oxide, the same material used in cassette tape – all the while recording your voice onto the page. The message can then be read out by a second pen with a cassette read head and a speaker at the other end.
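The barcode idea is simple enough to sketch in a few lines: a USB barcode scanner typically behaves like a keyboard, typing the decoded digits followed by Enter, so a lookup table from codes to sound files is essentially all the software needs. The codes, file names and the simpleaudio package below are illustrative assumptions, not details of Suzuki’s piece.

```python
# Illustrative barcode-to-audio lookup (assumes a USB barcode scanner that acts
# as a keyboard, .wav files on disk, and the simpleaudio package for playback).
import simpleaudio as sa

SOUNDS = {
    "4006381333931": "page1_birdsong.wav",   # hypothetical code -> sound mapping
    "5010677015196": "page2_narration.wav",
    "8712100849084": "page3_rainstorm.wav",
}

def main():
    while True:
        code = input("Scan a barcode (or press Enter to quit): ").strip()
        if not code:
            break
        path = SOUNDS.get(code)
        if path is None:
            print("No sound linked to", code)
            continue
        # Load the clip and play it, blocking until it finishes.
        sa.WaveObject.from_wave_file(path).play().wait_done()

if __name__ == "__main__":
    main()
```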

Limiting augmentation to audio overcomes one of the bigger problems faced by visual AR book designers – that of reliance on a computer and screen. Scherrer has gone to some lengths to create as seamless a user interface as possible with Souvenirs, notably by hiding the webcam in a lamp by the light of which the book is read. Scherrer’s method perhaps achieves integration most successfully, but the ultimate AR experience is far from being realised. As Scherrer says, “I would like to make some projection onto the book – the screen for me is a barrier”.

The books available at present do indeed have an added “wow factor” that must not be underestimated, especially when it is used to good effect to enhance the book’s narrative, but in a set-up where the book and augmentation appear on a screen there is a fundamental, jarring discontinuity that detracts from the magic of the experience. Perhaps this will become less important as we become more used to mainstream AR – books or otherwise. At present, AR books and humanity are both still evolving, and in the future the twain shall successfully meet.

Source | New Scientist

Three-dimensional moving holograms breakthrough announced

Sunday, November 14th, 2010

A team led by University of Arizona (UA) optical sciences professor Nasser Peyghambarian has developed a new type of “holographic telepresence” that allows remote projection of a three-dimensional, moving image without the need for special eyewear such as 3D glasses or other auxiliary devices.

The technology is likely to take applications ranging from telemedicine, advertising, updatable 3D maps and entertainment to a new level.

The journal Nature chose the technology to feature on the cover of its Nov. 4 issue.

“Holographic telepresence means we can record a three-dimensional image in one location and show it in another location, in real-time, anywhere in the world,” said Peyghambarian, who led the research effort.

“Holographic stereography has been capable of providing excellent resolution and depth reproduction on large-scale 3D static images,” the authors wrote, “but has been missing dynamic updating capability until now.”

“At the heart of the system is a screen made from a novel photorefractive material, capable of refreshing holograms every two seconds, making it the first to achieve a speed that can be described as quasi-real-time,” said Pierre-Alexandre Blanche, an assistant research professor in the UA College of Optical Sciences and lead author of the Nature paper.

The prototype device uses a 10-inch screen, but Peyghambarian’s group is already successfully testing a much larger version with a 17-inch screen. The image is recorded using an array of regular cameras, each of which views the object from a different perspective. The more cameras that are used, the more refined the final holographic presentation will appear.

That information is then encoded onto a fast-pulsed laser beam, which interferes with another beam that serves as a reference. The resulting interference pattern is written into the photorefractive polymer, creating and storing the image. Each laser pulse records an individual “hogel” in the polymer. A hogel (holographic pixel) is the three-dimensional version of a pixel, the basic unit that makes up a picture.
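The link between the camera array and the hogels can be illustrated with the re-slicing step used in simple horizontal-parallax holographic stereography: each hogel gathers, for one position on the screen, what every camera saw from its own viewpoint, which is why more cameras yield a more refined image. The toy numpy sketch below is conceptual only, not the encoding described in the Nature paper, and the camera count and image size are arbitrary assumptions.

```python
# Toy hogel re-slicing for horizontal-parallax stereography: hogel column x
# collects column x from every perspective view, so each hogel stores one
# directional sample per camera. More cameras = more directions per hogel.
import numpy as np

num_cameras, height, width = 16, 120, 160           # arbitrary toy dimensions
# views[k] is the (grayscale) image captured by camera k.
views = np.random.rand(num_cameras, height, width)

# hogels[x, y, k] = views[k, y, x]: what camera k recorded at screen column x, row y.
hogels = np.transpose(views, (2, 1, 0))

print(hogels.shape)   # (160, 120, 16): 160 hogel columns, 16 view samples each per row
```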

The hologram fades away by natural dark decay after anywhere from a few seconds to a couple of minutes, depending on experimental parameters. Or it can be erased by recording a new 3D image, creating a new diffraction structure and deleting the old pattern.

Peyghambarian explained: “Let’s say I want to give a presentation in New York. All I need is an array of cameras here in my Tucson office and a fast Internet connection. At the other end, in New York, there would be the 3D display using our laser system. Everything is fully automated and controlled by computer. As the image signals are transmitted, the lasers inscribe them into the screen and render them into a three-dimensional projection of me speaking.”

Because of the short pulse duration, the recording setup is insensitive to vibration and is therefore suited to industrial environments without any special provision for vibration, noise or temperature control.

One of the system’s hallmarks, never achieved before, is what Peyghambarian’s group calls full parallax: “As you move your head left and right or up and down, you see different perspectives. This makes for a very life-like image. Humans are used to seeing things in 3D.”

The work is a result of a collaboration between the UA and Nitto Denko Technical, or NDT, a company in Oceanside, Calif. NDT provided the polymer sample and media preparation. “We have made major advances in photorefractive polymer film fabrication that allow for the very interesting 3D images obtained in our Nature article,” said Michiharu Yamamoto, vice president at NDT and co-author of the paper.

Potential applications of holographic telepresence include advertising, updatable 3D maps and entertainment. Telemedicine is another potential application: “Surgeons at different locations around the world can observe in 3D, in real time, and participate in the surgical procedure,” the authors wrote.

The system is a major advance over computer-generated holograms, which place high demands on computing power and take too long to generate to be practical for real-time applications.

Currently, the telepresence system can present in one color only, but Peyghambarian and his team have already demonstrated multi-color 3D display devices capable of writing images at a faster refresh rate, approaching the smooth transitions of images on a TV screen. These devices could be incorporated into a telepresence setup in the near future.

Source | University of Arizona