Archive for September, 2009

Interface Fantasy: A Lacanian Cyborg Ontology

Thursday, September 24th, 2009

Interface Fantasy
A Lacanian Cyborg Ontology
André Nusselder

Table of Contents and Sample Chapters

Cyberspace is first and foremost a mental space. Therefore we need to take a psychological approach to understand our experiences in it. In Interface Fantasy, André Nusselder uses the core psychoanalytic notion of fantasy to examine our relationship to computers and digital technology. Lacanian psychoanalysis considers fantasy to be an indispensable “screen” for our interaction with the outside world; Nusselder argues that, at the mental level, computer screens and other human-computer interfaces incorporate this function of fantasy: they mediate the real and the virtual.

Interface Fantasy illuminates our attachment to new media: why we love our devices; why we are fascinated by the images on their screens; and how it is possible that virtual images can provide physical pleasure. Nusselder puts such phenomena as avatars, role playing, cybersex, computer psychotherapy, and Internet addiction in the context of established psychoanalytic theory. The virtual identities we assume in virtual worlds, exemplified best by avatars consisting of both realistic and symbolic self-representations, illustrate the three orders that Lacan uses to analyze human reality: the imaginary, the symbolic, and the real.

Nusselder analyzes our most intimate involvement with information technology—the almost invisible, affective aspects of technology that have the greatest impact on our lives. Interface Fantasy lays the foundation for a new way of thinking that acknowledges the pivotal role of the screen in the current world of information. And it gives an intelligible overview of basic Lacanian principles (including fantasy, language, the virtual, the real, embodiment, and enjoyment) that shows their enormous relevance for understanding the current state of media technology.

Source | MIT Press

Within a generation there will probably be mass use of artificial wombs to grow babies

Thursday, September 10th, 2009

The end of pregnancy

Within a generation there will probably be mass use of artificial wombs to grow babies

Jeremy Rifkin

Thursday January 17, 2002

“The womb is a dark and dangerous place, a hazardous environment,” wrote the late Joseph Fletcher, professor of medical ethics at the University of Virginia School of Medicine.

These words have haunted me over the years and have come back to me again in recent weeks, with talk of the imminent prospect of cloning a human being and using embryonic stem cells to create specific body parts to cure diseases.

As shocking as these developments have been, there is still another biological bombshell waiting in the wings – and this one provides the context for all the others and changes forever our concept of human life.

Researchers are working to create a totally artificial womb. Several weeks ago, a team of scientists from Cornell University’s Weill Medical College announced that they had succeeded, for the first time, in creating an artificial womb lining. The scientific team, led by Dr Hung-Ching Liu of the Centre for Reproductive Medicine and Infertility, stimulated cells to grow into uterine lining, using a cocktail of drugs and hormones. The goal of the research is to help infertile couples by creating an entire womb which could be transplanted into a woman.

Yoshinori Kuwabara and his colleagues, working in a small research laboratory at Juntendou University in Tokyo, are developing the first operational artificial womb – a clear plastic tank the size of a bread basket, filled with amniotic fluid stabilised at body temperature. For the past several years, Kuwabara and his team have kept goat foetuses alive and growing for up to 10 days by connecting their umbilical cords to two machines that serve as a placenta, pumping in blood, oxygen and nutrients and disposing of waste products. While the plastic womb is still only a prototype, Kuwabara predicts that a fully functioning artificial womb capable of gestating a human foetus may be a reality in less than six years. Others are more sceptical, but say we will probably see the mass use of artificial wombs by the time today’s babies become parents.

Artificial wombs will most likely first be used as intensive care units for foetuses in cases where either the mother is ill and can no longer carry the child or where the foetus is ill and needs to be removed from the mother’s womb and cared for where it can be easily monitored. We can already keep foetuses alive in incubators during the last three months of gestation. And researchers routinely fertilise eggs and keep embryos alive in vitro for the first three to four days of their existence before implanting them in a womb. Scientists like Kuwabara are attempting to fill in the time between the beginning and end of the gestation process – the critical period where the foetus develops most of its organs.

Eventually, say many scientists working in the new field of foetal molecular biology, being able to grow a foetus in a totally artificial womb would make it easier to make genetic corrections and modifications – creating designer babies. The artificial womb may even become the preferred means of producing a child. Women could have their eggs removed and men their sperm taken in their teen years when they are most viable and kept frozen until they are ready to have a child. Mothers could spare themselves the rigours and inconveniences of pregnancy, retain their youthful figures and bring the baby home when “done”.

Far-fetched? Thousands of surrogate mothers’ wombs have already been used to gestate someone else’s fertilised embryos. The artificial womb seems the next logical step in a process that has increasingly removed reproduction from traditional maternity and made of it a laboratory process.

Of course, many women, when asked, say they would prefer to have the experience of being pregnant and having the baby in their own womb. But their expectations might represent the dying sensibilities of the old order. In Aldous Huxley’s Brave New World, the “normal” people were genetically designed, cloned and gestated in artificial wombs – a biological assembly line process churning out ideal genotypes. Only the savages living in the remote reservations still carried their own babies in their bodies and breastfed them after birth. The practice was considered disgusting and something only animals did.

In the Brave New World, erotic sexual activity is encouraged and freely practised but completely divorced from the process of reproduction. Huxley wrote his novel in 1932, before the contraceptive pill had arrived. By the 1970s, however, sex and reproduction had branched into two separate realms, thanks, in large part, to the pill. It is also interesting to note that the pill made its debut at about the same time that researchers first began to use artificial insemination on a wide scale. While the pill revolutionised sex, removing it from the process of reproduction, artificial insemination, then later in vitro fertilisation, egg donation, surrogacy and, soon, cloning further separate the components of reproduction from the biological act of mating. The artificial womb completes the process.

Yet it raises troubling questions. We know that a foetus responds to the mother’s heartbeat, as well as her emotions, moods and movements. A subtle and sophisticated choreographic bond exists between the two and plays a critical role in the development of the foetus. What kind of child will we produce from a liquid medium inside a plastic box? How will gestation in a chamber affect the child’s motor functions and emotional and cognitive development? We know that young infants deprived of human touch and bodily contact often are unable to develop the full range of human emotions and sometimes die soon after birth or become violent, sociopathic or withdrawn later in life.

How will the elimination of pregnancy affect the concept of parental responsibility? Will parents feel less attached to their offspring? Will it undermine the sense of generational continuity that is so essential for reproducing and maintaining historical continuity and civilised life?

How will the end of pregnancy affect the way we think about gender and the role of women? Some feminists argue that it will finally mean liberation. Years ago the feminist writer Shulamith Firestone wrote enthusiastically about the prospect of an artificial womb: “Pregnancy is the temporary deformation of the body of the individual for the sake of the species. Moreover, childbirth hurts and isn’t good for you. At the very least, development of an option should make possible an honest examination of the ancient value of motherhood.”

Other feminists view the artificial womb as the final marginalisation of women, robbing them of their primary role as progenitor of the species. The artificial womb, they argue, becomes the quintessential expression of male dominance, a way to create a mechanical substitute of the female womb. Armed with the artificial womb, asexual cloning technology and stem cells to produce all the extra body parts they need, men could free themselves, once and for all, from their dependency on women.

The artificial womb represents the completion of an even longer historic process that began nearly 400 years ago at the dawn of the scientific age. It was Francis Bacon, the father of modern science, who referred to nature as “a common harlot”. He urged future generations to “tame, squeeze, mould” and “shape” her so that “man could become her master and the undisputed sovereign of the physical world”. No doubt some will see the artificial womb as the final triumph of modern science. Others, the ultimate human folly.

Many people will likely say, why worry? Surely the artificial womb is far off on the horizon. Five years ago, we thought the same thing about human cloning and using stem cells to produce body parts.

Jeremy Rifkin is the author of The Biotech Century (Gollancz) and president of the Foundation on Economic Trends in Washington DC

Source | Guardian Unlimited

Postgenderism: Genetic Singularity [part 3]

Thursday, September 10th, 2009

Postgenderism: Genetic Singularity [part 2]

Thursday, September 10th, 2009

A Turing Test for Computer Game Bots

Thursday, September 10th, 2009

Can a computer fool expert gamers into believing it’s one of them? That was the question posed at the second annual BotPrize, a three-month contest that concluded today at the IEEE Symposium on Computational Intelligence and Games in Milan.

The contest challenges programmers to create a software “bot” to control a game character that can pass for human, as judged by a panel of experts. The goal is not only to improve AI in entertainment, but also to fuel advances in non-gaming applications of AI. The BotPrize challenge is a variant of the Turing test, devised by Alan Turing, which challenges a machine to convince a panel of judges that it is a human in a text-only conversation.

“The BotPrize is important for AI in gaming because it aims to show how AI can make games more fun to play, by providing more interesting opponents for game players,” says Philip Hingston, associate professor in the School of Computer and Information Science at Edith Cowan University in Perth, Australia, and an overseer of the competition. “It is also important for AI in general because it highlights a central question in AI: How is human intelligence related to computer intelligence?”

This year’s BotPrize drew 15 entrants from Japan, the United Kingdom, the United States, Italy, Spain, Brazil, and Canada. Entrants created bots for Unreal Tournament 2004, a first-person shoot-’em-up in which gamers compete against each other for the most virtual kills. For the contest, in-game chatting was disabled so that bots could be evaluated for their so-called “humanness” by “physical” behavior alone. And, to elicit more spontaneity, contestants were given weapons that behaved differently from the ones ordinarily used in the game.

Each expert judge on the prize panel took turns shooting against two unidentified opponents: one human-controlled, the other a bot created by a contestant. After 10 to 15 minutes, the judge tried to identify the AI. To win the big prize, worth $6,000, a bot had to fool at least 80% of the judges. As in last year’s competition, however, none of the participants was able to pull off this feat. A minor award worth $1,700, for the most “human-like” bot, was awarded to Jeremy Cathran, from the University of Southern California, for his entry, called sqlitebot.

Artificial intelligence has long been crucial to creating convincing and compelling computer games, whether a player is competing against drivers in Mario Kart on the Nintendo Wii or alien invaders in Halo 3 for Microsoft’s Xbox 360 games console. And, as competition increases in the $21 billion game industry, developers are striving to make game AI even more convincing. But creating a good bot presents a formidable challenge, says Steve Polge, lead programmer of Epic Games, the company that created Unreal Tournament. “You don’t always want your AI to perform just like a human,” he says. “Humans can be pretty annoying and obnoxious opponents.” Instead, Polge says, developers often strive for “AI that can make unexpected plans and present emergent and surprising challenges to the player, which will definitely lead to better games.”

Risto Miikkulainen, a professor of computer science and neuroscience at the University of Texas at Austin, was among the BotPrize participants who tried to concoct just the right mix of human and machine. When coding a bot for this year’s contest, Miikkulainen and his team designed the bot to learn quickly. “When humans play games, they adapt very quickly,” he says, “so in creating a bot, you can’t aim to be 100% accurate, because adaptation is inexact.”

The BotPrize is an attempt not only to improve game technology, but also to foster innovations outside the industry, from AI used in emergency training simulations today to the companion robots of the future. “You need some way to measure milestones in AI research,” says Robert Epstein, creator and former director of the annual Loebner Prize Competition in Artificial Intelligence, which involves a conventional Turing test. “So when you arrange contests like the BotPrize, you have a way of knowing whether we reached a milestone.”

Will Wright, creator of best-selling simulation games such as The Sims and Spore, hopes the BotPrize encourages AI researchers to pursue the most elusively human quality of all: emotion. “Machine interactions are becoming a ubiquitous part of our environment, but they’re not necessarily the most satisfying,” Wright says, “so acknowledging our emotional dimension is an interesting task to go for in AI.”

This means developing bots that not only fool people but also move them emotionally. “You want to build an emotional model for the agent you’re competing with,” Wright says. “It’s not just about having an accurate aim. It’s about creating a bot that simulates a victory dance above your dead corpse.”

Source | Technology Review

Memories Exist Even When Forgotten, Study Suggests

Thursday, September 10th, 2009

ScienceDaily (Sep. 10, 2009) — A woman looks familiar, but you can’t remember her name or where you met her. New research by UC Irvine neuroscientists suggests the memory exists – you simply can’t retrieve it.

Using advanced brain imaging techniques, the scientists discovered that a person’s brain activity while remembering an event is very similar to when it was first experienced, even if specifics can’t be recalled.

“If the details are still there, hopefully we can find a way to access them,” said Jeff Johnson, postdoctoral researcher at UCI’s Center for the Neurobiology of Learning & Memory and lead author of the study, appearing Sept. 10 in the journal Neuron.

“By understanding how this works in young, healthy adults, we can potentially gain insight into situations where our memories fail more noticeably, such as when we get older,” he said. “It also might shed light on the fate of vivid memories of traumatic events that we may want to forget.”

In collaboration with scientists at Princeton University, Johnson and colleague Michael Rugg, CNLM director, used functional magnetic resonance imaging to study the brain activity of students.

Inside an fMRI scanner, the students were shown words and asked to perform various tasks: imagine how an artist would draw the object named by the word, think about how the object is used, or pronounce the word backward in their minds. The scanner captured images of their brain activity during these exercises.

About 20 minutes later, the students viewed the words a second time and were asked to remember any details linked to them. Again, brain activity was recorded.

Utilizing a mathematical method called pattern analysis, the scientists associated the different tasks with distinct patterns of brain activity. When a student had a strong recollection of a word from a particular task, the pattern was very similar to the one generated during the task. When recollection was weak or nonexistent, the pattern was not as prominent but still recognizable as belonging to that particular task.

“The pattern analyzer could accurately identify tasks based on the patterns generated, regardless of whether the subject remembered specific details,” Johnson said. “This tells us the brain knew something about what had occurred, even though the subject was not aware of the information.”
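The pattern-analysis idea can be illustrated with a minimal sketch: classify an activity pattern by correlating it against a template pattern for each task and picking the best match. Everything here (the voxel count, the random templates, the noise level) is invented for illustration; the study itself used trained classifiers on real fMRI data, not this toy procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "voxel" activity templates, one per encoding task,
# standing in for patterns recorded while subjects performed each task.
n_voxels = 200
templates = {
    "artist": rng.normal(size=n_voxels),
    "function": rng.normal(size=n_voxels),
    "backward": rng.normal(size=n_voxels),
}

def classify(pattern, templates):
    """Assign a pattern to the task whose template it correlates with most."""
    scores = {task: np.corrcoef(pattern, tmpl)[0, 1]
              for task, tmpl in templates.items()}
    return max(scores, key=scores.get)

# A "weak recollection": the original task pattern buried in noise.
# The pattern is degraded but should still be recognisable as its task.
weak = 0.5 * templates["artist"] + rng.normal(size=n_voxels)
print(classify(weak, templates))
```

The point the sketch makes is the one in the quote: even a heavily degraded pattern can still carry enough task-specific structure to be identified, whether or not the details are consciously retrievable.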

In addition to Johnson and Rugg, Susan McDuff and Kenneth Norman of Princeton worked on the study, funded by the National Institutes of Health.

Source | Science Daily

Postgenderism: Genetic Singularity [part 1]

Thursday, September 10th, 2009

Futuris – Robots designed to roam city streets

Tuesday, September 1st, 2009

A new breed of robots is being designed for life on our city streets. To meet this challenge they must be able to navigate with ease and interact freely with humans. We meet the research prototypes face to face.

What is Human?

Tuesday, September 1st, 2009

World Science Festival 2008: What It Means To Be Human. Part 1 of 5 from World Science Festival on Vimeo.

The world will become our display, Map/Territory: Augmented Reality

Tuesday, September 1st, 2009

Map/Territory from timo on Vimeo.

The 30-second video above shows a woman interacting with a map on the ground. She navigates and zooms the plaza-sized subway map with simple hand gestures. Like all good concept pieces it leaves the “how” for a developer to be inspired by and leaves the “what” for us to marvel at.

New Orleans Arcology Habitat

Tuesday, September 1st, 2009

Tangram 3DS, a firm specializing in visualization and computer animation, announced its collaboration with E. Kevin Schopfer AIA, RIBA. Together, the companies have designed and presented a bold new urban platform. New Orleans Arcology Habitat (NOAH) is a proposed urban Arcology (architecture and ecology), whose philosophic underpinnings rest in combining large scale sustainability with concentrated urban structures, and in this case a floating city.

Life on Mars: An Interview with Pete Worden

Tuesday, September 1st, 2009

h+: What have we learned recently about Mars and the possibility of life there?
PETE WORDEN: Well, from what we’ve seen and the missions we’ve had, Mars is obviously an environment that can support large-scale human activity. It has substantial quantities of water and other “volatiles” — carbon compounds and so forth — so it clearly can support life. In fact, it may already be supporting life, and that’s one of the main things we need to find out before we do anything, because there may be microbial life below the surface of the planet.

We announced recently — and this is actually from Earth-based observations — that there is evidence of variable methane on the planet. This could mean that there is some sort of geologic activity going on underground, with its own source of heat that would melt water and allow flows underground. This would be exciting in its own right, for we have long thought that Mars was a geologically inactive, “cold” planet, like the Moon. However, since life can also cause methane to be produced, our first objective is to find out if there is already life there.

What is your opinion about the existence of life on Mars?

Read Full Article | H+ Magazine

Why AI is a dangerous dream

Tuesday, September 1st, 2009

Robotics expert Noel Sharkey used to be a believer in artificial intelligence. So why does he now think that AI is a dangerous myth that could lead to a dystopian future of unintelligent, unfeeling robot carers and soldiers? Nic Fleming finds out

What do you mean when you talk about artificial intelligence?

I like AI pioneer Marvin Minsky’s definition of AI as the science of making machines do things that would require intelligence if done by humans. However, some very smart human things can be done in dumb ways by machines. Humans have a very limited memory, and so for us, chess is a difficult pattern-recognition problem that requires intelligence. A computer like Deep Blue wins by brute force, searching quickly through the outcomes of millions of moves. It is like arm-wrestling with a mechanical digger. I would rework Minsky’s definition as the science of making machines do things that lead us to believe they are intelligent.
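The brute-force approach Sharkey describes can be sketched in a few lines: an exhaustive minimax search that evaluates every leaf of a game tree and backs the scores up. The toy two-ply tree and its leaf scores below are invented for illustration; real chess engines like Deep Blue add pruning, heuristics and enormous hardware, but the underlying idea is this exhaustive search rather than human-style pattern recognition.

```python
def minimax(node, maximizing=True):
    """Exhaustively search a game tree given as nested lists of leaf scores."""
    if isinstance(node, (int, float)):   # leaf: the score of a final position
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A toy two-ply game: the maximizer picks a branch, then the minimizer
# picks the worst leaf in it. Best play yields the branch [3, 5] -> 3.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))
```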

Are machines capable of intelligence?

If we are talking intelligence in the animal sense, from the developments to date, I would have to say no. For me AI is a field of outstanding engineering achievements that helps us to model living systems but not replace them. It is the person who designs the algorithms and programs the machine who is intelligent, not the machine itself.

Are we close to building a machine that can meaningfully be described as sentient?

I’m an empirical kind of guy, and there is just no evidence of an artificial toehold in sentience. It is often forgotten that the idea of mind or brain as computational is merely an assumption, not a truth. When I point this out to “believers” in the computational theory of mind, some of their arguments are almost religious. They say, “What else could there be? Do you think mind is supernatural?” But accepting mind as a physical entity does not tell us what kind of physical entity it is. It could be a physical system that cannot be recreated by a computer.

The mind could be a type of physical system that cannot be recreated by computer

So why are predictions about robots taking over the world so common?

There has always been fear of new technologies based upon people’s difficulties in understanding rapid developments. I love science fiction and find it inspirational, but I treat it as fiction. Technological artefacts do not have a will or a desire, so why would they “want” to take over? Isaac Asimov said that when he started writing about robots, the idea that robots were going to take over the world was the only story in town. Nobody wants to hear otherwise. I used to find when newspaper reporters called me and I said I didn’t believe AI or robots would take over the world, they would say thank you very much, hang up and never report my comments.

You describe AI as the science of illusion.

It is my contention that AI, and particularly robotics, exploits natural human zoomorphism. We want robots to appear like humans or animals, and this is assisted by cultural myths about AI and a willing suspension of disbelief. The old automata makers, going back as far as Hero of Alexandria, who made the first programmable robot in AD 60, saw their work as part of natural magic – the use of trick and illusion to make us believe their machines were alive. Modern robotics preserves this tradition with machines that can recognise emotion and manipulate silicone faces to show empathy. There are AI language programs that search databases to find conversationally appropriate sentences. If AI workers would accept the trickster role and be honest about it, we might progress a lot quicker.

These views are in stark contrast to those of many of your peers in the robotics field.

Yes. Roboticist Hans Moravec says that computer processing speed will eventually overtake that of the human brain and make machines our superiors. The inventor Ray Kurzweil says humans will merge with machines and live forever by 2045. To me these are just fairy tales. I don’t see any sign of it happening. These ideas are based on the assumption that intelligence is computational. It might be, and equally it might not be. My work is on immediate problems in AI, and there is no evidence that machines will ever overtake us or gain sentience.

And you believe that there are dangers if we fool ourselves into believing the AI myth…

It is likely to accelerate our progress towards a dystopian world in which wars, policing and care of the vulnerable are carried out by technological artefacts that have no possibility of empathy, compassion or understanding.

How would you feel about a robot carer looking after you in old age?

Eldercare robotics is being developed quite rapidly in Japan. Robots could be greatly beneficial in keeping us out of care homes in our old age, performing many dull duties for us and aiding in tasks that failing memories make difficult. But it is a trade-off. My big concern is that once the robots have been tried and tested, it may be tempting to leave us entirely in their care. Like all humans, the elderly need love and human contact, and this often only comes from visiting carers. A robot companion would not fulfil that need for me.

You also have concerns about military robots.

The many thousands of robots in the air and on the ground are producing great military advantages, which is why at least 43 countries have development programmes of their own. No one can deny the benefit of their use in bomb disposal and surveillance to protect soldiers’ lives. My concerns are with the use of armed robots. Drone attacks are often reliant on unreliable intelligence in the same way as in Vietnam, where the US ended up targeting people who were owed gambling debts by its informants. This over-reaching of the technology is killing many innocent people. Recent US planning documents show there is a drive towards developing autonomous killing machines. There is no way for any AI system to discriminate between a combatant and an innocent. Claims that such a system is coming soon are unsupportable and irresponsible.

Is this why you are calling for ethical guidelines and laws to govern the use of robots?

In the areas of robot ethics that I have written about – childcare, policing, military, eldercare and medical – I have spent a lot of time looking at current legislation around the world and found it wanting. I think there is a need for urgent discussions among the various professional bodies, the citizens and the policy makers to decide while there is still time. These developments could be upon us as fast as the internet was, and we are not prepared. My fear is that once the technological genie is out of the bottle it will be too late to put it back.

The organisers of the robot soccer competition RoboCup aim to develop an autonomous robot soccer team that can beat a human team by 2050. How do you rate their chances?

Football requires a certain kind of intelligence. Someone like David Beckham can look at the movement of the players, predict where the ball is likely to go and put himself in the right place. Soccer robots can move quickly, punch the ball hard and get it accurately into the net, but they cannot look at the pattern of the game and guess where the ball is going to end up. I can’t see robots matching humans at football strategy. But in the 1960s everyone was pretty sure that AI would never succeed at championship chess, so who knows? Like chess programs, soccer robots may win by brute force – although I don’t think they will be very good at faking fouls.


Born in Belfast, UK, Noel Sharkey left school at 15, working as an apprentice electrician, railway worker, guitarist and chef, before studying psychology and getting his PhD at the University of Exeter. He has held positions at Yale, Stanford and Berkeley, and is now professor of artificial intelligence and robotics at the University of Sheffield. He hosts The Sound of Science radio show.

Source | New Scientist

‘Plasmobot’: Scientists To Design First Robot Using Mould

Tuesday, September 1st, 2009

Scientists at the University of the West of England are to design the first ever biological robot using mould.

Researchers have received a Leverhulme Trust grant worth £228,000 to develop the amorphous non-silicon biological robot, plasmobot, using plasmodium, the vegetative stage of the slime mould Physarum polycephalum, a commonly occurring mould which lives in forests, gardens and most damp places in the UK. The Leverhulme Trust funded research project aims to design the first ever fully biological (no silicon components) amorphous massively-parallel robot.

This project is at the forefront of research into unconventional computing. Professor Andy Adamatzky, who is leading the project, says their previous research has already proved the ability of the mould to have computational abilities.

Professor Adamatzky explains, “Most people’s idea of a computer is a piece of hardware with software designed to carry out specific tasks. This mould, or plasmodium, is a naturally occurring substance with its own embedded intelligence. It propagates and searches for sources of nutrients and when it finds such sources it branches out in a series of veins of protoplasm. The plasmodium is capable of solving complex computational tasks, such as the shortest path between points and other logical calculations. Through previous experiments we have already demonstrated the ability of this mould to transport objects. By feeding it oat flakes, it grows tubes which oscillate and make it move in a certain direction carrying objects with it. We can also use light or chemical stimuli to make it grow in a certain direction.

“This new plasmodium robot, called plasmobot, will sense objects, span them in the shortest and best way possible, and transport tiny objects along pre-programmed directions. The robots will have parallel inputs and outputs, a network of sensors and the number crunching power of super computers. The plasmobot will be controlled by spatial gradients of light, electro-magnetic fields and the characteristics of the substrate on which it is placed. It will be a fully controllable and programmable amorphous intelligent robot with an embedded massively parallel computer.”
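For contrast with the mould's analogue problem-solving, the shortest-path task Adamatzky mentions is the same one a conventional algorithm such as Dijkstra's computes digitally. The sketch below uses a made-up maze of junctions, not data from the project; the plasmodium finds such paths by growing and pruning protoplasmic tubes rather than by anything like this procedure.

```python
import heapq

def dijkstra(graph, start, goal):
    """Cheapest-path cost by Dijkstra's algorithm.

    graph maps each node to a dict of {neighbour: edge cost}.
    Returns None if the goal is unreachable.
    """
    pq, seen = [(0, start)], set()
    while pq:
        cost, node = heapq.heappop(pq)
        if node == goal:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt))
    return None

# Hypothetical maze of junctions between two oat-flake "food sources".
maze = {
    "A": {"B": 2, "C": 5},
    "B": {"C": 1, "D": 4},
    "C": {"D": 1},
}
print(dijkstra(maze, "A", "D"))  # 4, via A -> B -> C -> D
```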

This research will lay the groundwork for further investigations into the ways in which this mould can be harnessed for its powerful computational abilities.

Professor Adamatzky says that there are long term potential benefits from harnessing this power, “We are at the very early stages of our understanding of how the potential of the plasmodium can be applied, but in years to come we may be able to use the ability of the mould for example to deliver a small quantity of a chemical substance to a target, using light to help to propel it, or the movement could be used to help assemble micro-components of machines. In the very distant future we may be able to harness the power of plasmodia within the human body, for example to enable drugs to be delivered to certain parts of the human body. It might also be possible for thousands of tiny computers made of plasmodia to live on our skin and carry out routine tasks freeing up our brain for other things. Many scientists see this as a potential development of amorphous computing, but it is purely theoretical at the moment.”

Professor Adamatzky has recently edited and had published by Springer, ‘Artificial Life Models in Hardware’ aimed at students and researchers of robotics. The book focuses on the design and real-world implementation of artificial life robotic devices and covers a range of hopping, climbing, swimming robots, neural networks and slime mould and chemical brains.

Source | Science Daily

Your Cyborg Eye Will Talk to You

Tuesday, September 1st, 2009

Just as many of us are getting used to augmented reality applications for cellphones and digital cameras, Babak Amir Parviz and his University of Washington students are taking it one step further. The group is working on a human-machine interface in which LEDs are embedded into contact lenses in order to display information to the wearer. You heard right: in a few years your cyborg eye will talk to you. In an article with the IEEE Spectrum, Parviz relays the challenges of custom-building semi-transparent circuitry into a polymer lens roughly 1.2 millimeters in diameter.

Says Parviz, “We’re starting with a simple product, a contact lens with a single light source, and we aim to work up to more sophisticated lenses that can superimpose computer-generated high-resolution color graphics on a user’s real field of vision.”

For now, Parviz mentions that single pixel visual cues for gamers and the hearing impaired are already quite possible with the lens prototypes. The group has also experimented with non-invasive biomonitoring including checking glucose levels for diabetics.

Some of the obvious challenges of building an augmented reality contact lens include:

1. The Need for Custom Parts: Regular circuitry and LEDs are incompatible with regular contact lenses. Every piece of this project must be fabricated from scratch.

2. Physical Constraints: The group must attempt to fit transistors, radio chips, antennas, diffusion resistors, LEDs and photodetectors onto a minuscule polymer disc. Additionally, the team is required to control lens position and light intensity relative to the pupil. And finally, because the lens is so close to the corneal surface, the group must project images away from the cornea using either micro-lenses or lasers.

3. User Safety: In addition to protecting the eye against chemicals, heat and toxins, the lens components must be semi-transparent in order for the wearer to view their surroundings.

“We already see a future in which the humble contact lens becomes a real platform, like the iPhone is today, with lots of developers contributing their ideas and inventions. As far as we’re concerned, the possibilities extend as far as the eye can see.” And you thought the iPhone SDK was a tough nut to crack.

For Parviz’s complete seven page article, check out the IEEE Spectrum’s Biomedical page.

Source | NY Times