Archive for the ‘SENS’ Category
The Biological Canvas presents a group of hand-selected artists who articulate their concepts with the body as the primary vessel. Each artist uses the body uniquely, experimenting with it as medium: body as canvas, body as brush, and body as subject matter. Whatever the approach, it is clear that new explorations of the body as canvas are becoming commonplace in the 21st century.
There are reasons for this refocusing of the lens toward the body. Living today is an experience quite different from that of a century, generation, decade, or (with new versions emerging daily) even a year ago. The body truly is changing, both biologically and technologically, at an abrupt rate. Traditional understandings of what the body, or even the human, can be are coming under scrutiny. Transhuman, Posthuman, Cyborg, Robot, Singularity, Embodiment, Avatar, Brain Machine Interface, Nanotechnology… these are terms we run across in media today. They are the face of the future – the dictators of how we will come to understand our environment, biosphere, and selves. The artists in this exhibition respond to this paradigm shift with an interest in newfound control over our bodies, a moment of self-discovery, a realization that the body has extended beyond its biological beginnings, or perhaps a sense that the traditional body has become obsolete.
We see in the work of Orlan and Stelarc that the body becomes a malleable canvas. Here are some of the earliest executions of art by way of designer evolution, where the artist uses new tools to redesign the body as a statement of controlled evolution. In these works, direct changes to the body open the way to sculpting it to be better suited for today’s world, moving beyond an outmoded form. Stelarc, with his Ear on Arm project, specifically attacks shortcomings in the human body by presenting the augmented sense that his third ear brings. Acting as a cybernetic ear, it lets him move beyond subjective hearing and share that aural experience with listeners around the world. Commenting on the practicality of the traditional body living in a networked world, Stelarc takes the design of networked senses into his own hands. Orlan’s surgical art conceptualizes the practice Stelarc employs – declaring that the body has become a form that can be reconfigured, restructured, and adapted to suit the desires of the mind within it. Carnal Art, as Orlan terms it, allows the body to become a modifiable ready-made instead of a static object born of the Earth. Through the use of new technologies, human beings are now able to reform parts of their bodies as they deem necessary and appropriate for their own ventures.
Not far from the surgical work of Orlan and Stelarc we come to Natasha Vita-More’s Electro 2011, Human Enhancement of Life Expansion, a project that acts as a guide for advancing the biological self into a more fit machine. Integrating emerging technologies to build a more complete human, transhuman, and eventual posthuman body, Vita-More strives for a human-computer interface that includes neurophysiologic and cognitive enhancements to build longevity and performance. The enhancement plan includes such technologies as atmospheric sensors, solar protective nanoskin, metabrain error correction, and replaceable genes. Vita-More’s Primo Posthuman is the idealized application of what artists like Stelarc and Orlan are beginning to explore with their own reconstructive surgical enhancements.
The use of the body in Nandita Kumar’s Birth of Brain Fly and Suk Kyoung Choi + Mark Nazemi’s Corner Monster reflects on how embodiment and techno-saturation affect the human mind. In each work we travel into the imagined world of the mind, where the notions of self, identity, and sense of place struggle to hold on to fixed points of order. Kumar describes her neuroscape as continually morphing as it is placed in new, ever-changing conditions and environments. Beginning with an awareness of one’s own constant programming that leads to a new understanding of self through love, the film journeys through the depths of self, ego, and physical limitation. Kumar’s animations provide an eerie trip through the mind as viewed from the vantage of an artist’s creative eye, all the while postulating an internal neuroscape evolving in accordance with an external electroscape. Corner Monster examines the relationship between self and others in an embodied world. The installation presents an array of visual stimulation in a dark environment. As viewers engage with the world before them, they are hooked up simultaneously (two at a time) to biofeedback sensors, which measure an array of biodata used in the interactive production of the environment before their eyes. The project surveys the psychological self as it is engrossed by surrounding media, producing both occasional systems of organized feedback and scattered responses that are the convolutions of an overstimulated mind.
Marco Donnarumma also integrates a biofeedback system in his work, allowing participants to shape musical compositions with their limbs. Moving a particular body part triggers sounds, and the volume increases with the pace of that movement. Here the body acts as brush, literally painting the soundscape through its own creative motion. As performers experiment with each portion of their body, there is a slow realization that the sounds have become analogous to the neurological and biological yearnings of the body, each one seeking a particular upgrade that targets a specific need for that segment of the body. A move of the left arm, for instance, consistently produces a rich vibrato, reminding me of the sound of Vita-More’s solar protective nanoskin.
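The mapping described above – body part selects the sound, pace of movement drives the volume – can be sketched in a few lines. This is only an illustration of the idea; the limb names, sound textures, and threshold are hypothetical, not taken from Donnarumma’s actual system.

```python
# Minimal sketch of a biofeedback-to-sound mapping: a body part triggers a
# sound texture, and volume scales with the pace of movement. All names and
# values here are hypothetical.

def movement_to_sound(limb, speed, threshold=0.1):
    """Map a limb's movement speed to a (sound, volume) pair.

    Returns None when the movement is too slow to trigger anything.
    """
    # Hypothetical mapping of body parts to sound textures.
    sounds = {
        "left_arm": "vibrato",
        "right_arm": "drone",
        "left_leg": "pulse",
        "right_leg": "noise_wash",
    }
    if limb not in sounds or speed < threshold:
        return None
    # Volume grows with pace, clamped to the 0..1 range.
    volume = min(1.0, speed)
    return sounds[limb], round(volume, 2)

print(movement_to_sound("left_arm", 0.75))   # a rich vibrato at moderate volume
print(movement_to_sound("right_leg", 0.05))  # too slow: nothing is triggered
```

A real installation would stream accelerometer or muscle-sensor data through such a mapping continuously rather than calling it once per gesture.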
Our final three artists all use the body in their artwork as a component of the fabricated result, acting like paint in the traditional sense. Marie-Pier Malouin weaves strands of hair together to reference the genetic predisposition with which all living things enter the world. Malouin uses the medium to reference suicidal tendencies – looking once again at the fragility of the human mind, body and spirit as it exists in a traditional biological state. The hair, a dead mass of growth that we groom, straighten, smooth, and arrange, mirrors the obsession with which we analyze, evaluate, dissect and anatomize the nature of suicide. Stan Strembicki also engages with the fragility of the human body in his Body, Soul and Science. In his photographic imagery, Strembicki turns a keen eye on the medical industry and its development over time. As with all technology, Strembicki concludes, the medical industry is temporally corrective, making dramatic strides as nascent technologies emerge. Perhaps we can take Tracy Longley-Cook’s skinscapes, which she compares to the shifting landforms of geology, ecology and climatology, as an analogy for our changing understanding of skin, body and self. Can we begin to mold and sculpt the body much as we have the land we inhabit?
There is a tie between the conceptual and material strands of these last few works that we cannot overlook: memento mori. The shortcomings and frailties of our natural bodies are the very things that artists like Vita-More, Stelarc, and Orlan are beginning to treat as solvable through the mastery of human enhancement and advancement. In a world churning out new technologies and creative ideas, it is hard to look toward the future and dismiss the possibilities. Perhaps the worries of fragility and biological shortcoming will be both posed and answered by the scientific and artistic communities – an outcome that looks very likely, if not certain. As you browse the work of The Biological Canvas, I invite your imagination to engage. Look at your life, your culture, your world, and draw parallels with the artwork – open your imagination to what our future may bring, or, perhaps more properly stated, what we will bring to our future.
Source | VASA Project
Researchers at MIT have discovered a gene called NDT80 that can double yeast lifespan when turned on late in life.
The gene is activated when yeast cell rejuvenation occurs. When they turned on this gene in aged cells that were not reproducing, the cells lived twice as long as normal.
The MIT team found that the signs of cellular aging disappear at the very end of meiosis (which produces spores). “There’s a true rejuvenation going on,” said professor Angelika Amon.
In aged cells with activated NDT80, the nucleolar damage was the only age-related change that disappeared. That suggests that nucleolar changes are the primary force behind the aging process, Amon said.
If the human cell lifespan is controlled in a similar way, it could offer a new approach to rejuvenating human cells or creating pluripotent stem cells, Amon said.
Source | Kurzweil AI
The researchers used saliva samples contributed by 34 pairs of identical male twins between the ages of 21 and 55. They scoured the men’s genomes and identified 88 sites on the DNA that strongly correlated methylation to age. They replicated their findings in a general population of 31 men and 29 women between the ages of 18 and 70.
Next, the scientists built a predictive model using two of the three genes with the strongest age-related linkage to methylation. When they plugged in the data from the twins’ and the other group’s saliva samples, they were able to correctly predict a person’s age within five years — an unprecedented level of accuracy.
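The model described above is essentially a regression from methylation level to age. The sketch below illustrates the idea in miniature with a single methylation site and ordinary least squares; the data points are invented for illustration, and the real study used two genes and samples from dozens of donors.

```python
# Toy illustration of predicting age from DNA methylation: fit age as a
# linear function of the methylation fraction at one CpG site, then predict
# the age of a new sample. All numbers are made up for illustration.

def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical training data: methylation fraction at one site vs. donor age.
methylation = [0.20, 0.30, 0.40, 0.50, 0.60, 0.70]
ages = [22, 30, 38, 47, 55, 62]

a, b = fit_line(methylation, ages)
predicted = a + b * 0.45  # predict age for a new saliva sample
print(round(predicted, 1))
```

The "within five years" accuracy the researchers report would correspond to the residual error of a model like this one, evaluated on held-out samples.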
A newly patented test based on the research could offer crime-scene investigators a new forensic tool for pinpointing a suspect’s age.
Source | Kurzweil AI
A yet-unidentified component of coffee interacts with caffeine, a possible reason why daily coffee intake protects against Alzheimer’s disease, researchers at the University of South Florida have found.
One clue: they found that caffeinated coffee induces an increase in blood levels of a growth factor called GCSF (granulocyte colony stimulating factor) in mice. GCSF is greatly decreased in patients with Alzheimer’s disease and is demonstrated to improve memory in Alzheimer’s mice.
The researchers said this effect is not produced by other caffeine-containing drinks or by decaffeinated coffee.
An increasing body of scientific literature indicates that moderate consumption of coffee also decreases the risk of Alzheimer’s, Parkinson’s disease, Type II diabetes, and stroke. Recent studies have reported that drinking coffee in moderation may also significantly reduce the risk of breast and prostate cancers.
The researchers suggest that moderate daily coffee intake (4 to 5 cups a day) starting at least by middle age (30s–50s) is optimal for providing protection against Alzheimer’s disease, although from their studies, starting even in older age appears protective.
Source | Kurzweil AI
It seems the sci-fi industry has done it again. Predictions made in novels like Johnny Mnemonic and Neuromancer back in the 1980s of neural implants linking our brains to machines have become a reality.
Back then it seemed unthinkable that we’d ever have megabytes stashed in our brains, as Keanu Reeves’ character Johnny Mnemonic did in the movie based on William Gibson’s novel. Or that The Matrix character Neo could have martial arts abilities uploaded to his brain, making famous the line, “I know Kung Fu.” (Why Keanu Reeves became the poster boy of sci-fi movies, I’ll never know.) But today we have macaque monkeys that can control a robotic arm with thoughts alone. We have paraplegics given the ability to control computer cursors and wheelchairs with their brain waves. Of course, this is about the brain controlling a device. But what about the other direction, where a device might amplify the brain? While the cochlear implant may be the best-known device of this sort, scientists have been working on brain implants with the goal of enhancing memory. This sort of breakthrough could lead to a neural prosthesis to help stroke victims or those with Alzheimer’s. Or, at the extreme, think uploading Kung Fu talent into our brains.
Decade-long work led by Theodore Berger at the University of Southern California, in collaboration with teams from Wake Forest University, has provided a big step toward artificial working memory. Their study is finally published today in the Journal of Neural Engineering. A microchip implanted into a rat’s brain can take on the role of the hippocampus – the area responsible for long-term memories – encoding memory brain wave patterns and then sending that same electrical pattern of signals through the brain. Back in 2008, Berger told Scientific American that if the brain patterns for the sentence “See Spot Run,” or even an entire book, could be deciphered, then we might make uploading instructions to the brain a reality. “The kinds of examples [the U.S. Department of Defense] likes to typically use are coded information for flying an F-15,” Berger is quoted in the article as saying.
In the current study the scientists had rats learn a task: pressing one of two levers to receive a sip of water. The scientists inserted a microchip into each rat’s brain, with wires threaded into the hippocampus. The chip recorded electrical patterns from two specific areas, labeled CA1 and CA3, that work together to learn and store the new information of which lever to press to get water. The scientists then shut down CA1 with a drug, built an artificial hippocampal component that could duplicate the electrical patterns between CA3 and CA1, and inserted it into the rat’s brain. With this artificial part, rats whose CA1 had been pharmacologically blocked could still encode long-term memories. And in rats with a normally functioning CA1, the new implant extended the length of time a memory could be held.
The next step is to test the device in monkeys, and then in humans. Of course, at this early stage a breakthrough like this raises more questions than it answers. Memory is hugely complex, based on our individual experiences and perceptions. If we have the electrical pattern for the phrase “See Spot Run” mentioned above, would it mean the same thing for you as it does for me? How would such a device work within context? As writer Gary Stix asked in the Scientific American article, “Would ‘See Spot Run’ be misinterpreted as a laundry mishap instead of a trotting dog?” Or as the science journalist John Horgan once put it, you might hear your wedding song, but I hear a stale pop tune.
We are all provided with the same structural blueprint for our brains, but the circuitry is built from experience and genetics – a tapestry unique to each of us. Many scientists feel we will never be able to fully crack and decode it, let alone insert an experiential memory into it.
Source | Smart Planet
Ray Kurzweil and Barry Ptolemy appeared on the Charlie Rose show Friday night to discuss the movie Transcendent Man, directed by Barry Ptolemy. You can see the interview here. You can also watch “In Charlie’s Green Room with Ray Kurzweil,” recorded the same evening.
Transcendent Man by Barry Ptolemy focuses on the life and ideas of Ray Kurzweil. It is currently available on iTunes in the United States and Canada and on DVD. Tickets to the London and San Francisco screenings in April are available.
A new search tool developed by researchers at Microsoft indexes medical images of the human body, rather than the Web. On CT scans, it automatically finds organs and other structures, to help doctors navigate in and work with 3-D medical imagery.
CT scans use X-rays to capture many slices through the body that can be combined to create a 3-D representation. This is a powerful tool for diagnosis, but it’s far from easy to navigate, says Antonio Criminisi, who leads a group at Microsoft Research Cambridge, U.K., that is attempting to change that. “It is very difficult even for someone very trained to get to the place they need to be to examine the source of a problem,” he says.
When a scan is loaded into Criminisi’s software, the program indexes the data and lists the organs it finds at the side of the screen, creating a table of hyperlinks for the body. A user can click on, say, the word “heart” and be presented with a clear view of the organ without having to navigate through the imagery manually.
Once an organ of interest has been found, a 2-D and an enhanced 3-D view of structures in the area are shown to the user, who can navigate by touching the screen on which the images are shown. A new scan can also be automatically and precisely matched up alongside a past one from the same patient, making it easy to see how a condition has progressed or regressed.
Criminisi’s software uses the pattern of light and dark in the scan to identify particular structures; it was developed by training machine-learning algorithms to recognize features in hundreds of scans in which experts had marked the major organs. Indexing a new scan takes only a couple of seconds, says Criminisi. The system was developed in collaboration with doctors at Addenbrookes Hospital in Cambridge, U.K.
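The indexing idea – recognize structures from intensity patterns learned from labeled scans, then expose each organ as a clickable entry – can be illustrated with a drastically simplified sketch. Microsoft’s system uses trained machine-learning models on full 3-D CT volumes; the version below classifies pixels of a toy 2-D "scan" by nearest reference intensity, and every value and organ profile here is hypothetical.

```python
# Greatly simplified sketch of organ indexing: classify each pixel of a toy
# 2-D scan by the nearest per-organ mean intensity (hypothetical values),
# then build a "table of hyperlinks" mapping organ names to the rows where
# they appear.

ORGAN_INTENSITY = {"lung": 10, "heart": 60, "liver": 90}  # hypothetical

def classify(pixel):
    """Assign a pixel to the organ with the closest reference intensity."""
    return min(ORGAN_INTENSITY, key=lambda o: abs(ORGAN_INTENSITY[o] - pixel))

def index_scan(scan):
    """Index a scan: organ name -> set of rows containing that organ."""
    index = {}
    for row, pixels in enumerate(scan):
        for p in pixels:
            index.setdefault(classify(p), set()).add(row)
    return index

toy_scan = [
    [12, 9, 11],    # lung-like intensities
    [58, 62, 61],   # heart-like intensities
    [88, 91, 90],   # liver-like intensities
]
idx = index_scan(toy_scan)
print(sorted(idx["heart"]))  # rows where the heart was found -> [1]
```

Clicking "heart" in the real interface would then jump the viewer to the region this index points at, rather than requiring manual navigation through the slices.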
The Microsoft research group is exploring the use of gestures and voice to control the system. They can plug in the Kinect controller, ordinarily used by gamers to control an Xbox with body movements, so that surgeons can refer to imagery in mid-surgery without compromising their sterile gloves by touching a keyboard, mouse, or screen.
Kenji Suzuki, an assistant professor at the University of Chicago whose research group works on similar tools, says the Microsoft software has the potential to improve patient care, provided it really does make scans easier to navigate. “As medical imaging has advanced, so many images are produced that there is a kind of information overload,” he explains. “The workload has grown a lot.”
Suzuki says Microsoft’s approach is a good one, but that medical professionals might be more receptive to the design if it indexed signs of disease, not just organs. His own research group has developed software capable of recognizing potentially cancerous lung nodules; in trials, it made half as many mistakes as a human expert.
Criminisi sticks by the notion of using organs as a kind of navigation system but says that disease-spotting capability is also under development. He says, “We are working to train it to detect differences between different grades of glioma tumor”—a type of brain tumor.
The Microsoft group also intends the tool to be used at large scales. It could automatically index a collection of 3-D scans or other images, making possible new ways of tracking medical records, says Criminisi. Today, records are kept as text describing scans and other information. A search tool that finds the word “heart”, for example, cannot tell whether the heart appeared in a scan or was merely mentioned in another context. If a hospital’s computer system indexed new scans, the Microsoft software could automatically record in a person’s records what was imaged and when.
Source | Technology Review
Researchers at Queen’s University Belfast are taking the first step towards discovering the true effectiveness of brain training exercises with the release of their own app aimed at those over 50.
The Brain Jog application is available to download free for iPhone, iPod or iPad. It is the product of 18 months of work by researchers at Queen’s School of Music and Sonic Arts to find out what the over-50s are looking for in a brain training app.
Queen’s researchers are encouraging as many people as possible to download and use the application. During the process, users will be asked to give feedback on their experience of playing the game. Using this information to determine what makes a good puzzle experience, the research team will continuously improve and adapt the games to make them as user friendly as possible – thereby maximising the number of people who play on a regular, long-term basis.
In the next stage of the project, the researchers hope to track the experience and performance of these long-term players to help clarify the effects of regular brain training on ageing minds.
The research is led by Donal O’Brien, a PhD student at Queen’s Sonic Arts Research Centre. He said: “Brain Jog consists of four enjoyable mini games specifically designed to test and improve four areas – spatial ability, memory, mathematical ability and verbal fluency.
“This is achieved through problem solving, puzzles and reverse arithmetic, allowing users to be challenged in an engaging manner, and improve their performance with regular practice.
“Brain Jog is unique among similar apps in that it has come to fruition after extensive research and collaboration with the target audience to find out exactly what appeals to them.
“By downloading this app, you can help us create a fantastic game experience for those over 50 and bring us one step closer to finding out whether or not brain training can help prevent cognitive decline and dementia.
“To participate, simply download the application for free from iTunes, answer a few questions and then play the games. There are no obligations – you can play as often as you like and stop whenever you choose.
“Plans are in place for a future study on dementia prevention using the app; but before that can happen, people of all ages are encouraged to get downloading and have fun while providing vital information to our researchers and keeping their brain active.”
Source | Queen’s University Belfast
Researchers are developing a specialized skin “printing” system that could be used in the future to treat soldiers wounded on the battlefield.
Scientists at the Wake Forest Institute for Regenerative Medicine were inspired by standard inkjet printers found in many home offices.
“We started out by taking a typical desktop inkjet cartridge. Instead of ink we use cells, which are placed in the cartridge,” said Dr. Anthony Atala, director of the institute.
The device could be used to rebuild damaged or burned skin.
The project is in pre-clinical phases and may take another five years of development before it is ready to be used on human burn victims, he said.
Other universities, including Cornell University and the Medical University of South Carolina, Charleston, are working on similar projects and will speak on the topic on Sunday at the American Association for the Advancement of Science conference in Washington. These university researchers say organs — not just skin — could be printed using similar techniques.
Burn injuries account for 5% to 20% of combat-related injuries, according to the Armed Forces Institute of Regenerative Medicine. The skin printing project is one of several projects at Wake Forest largely funded by that institute, which is a branch of the U.S. Department of Defense.
Wake Forest will receive approximately $50 million from the Defense Department over the next five years to fund projects, including the skin-creating system.
Researchers developed the skin “bio-printer” by modifying a standard store-bought printer. One modification is the addition of a three-dimensional “elevator” that builds on damaged tissue with fresh layers of healthy skin.
The skin-printing process involves several steps. First, a small piece of skin, about half the size of a postage stamp, is taken from the patient, and the cells are extracted using a chemical solution.
Those cells are then separated and replicated on their own in a specialized environment that catalyzes this cell development.
“We expand the cells in large quantities. Once we make those new cells, the next step is to put the cells in the printer, on a cartridge, and print on the patient,” Atala said.
The printer is then placed over the wound at a distance so that it doesn’t touch the burn victim. “It’s like a flat-bed scanner that moves back and forth and put cells on you,” said Atala.
Once the new cells have been applied, they mature and form new skin.
Specially designed printer heads in the skin bio-printer use pressurized nozzles — unlike those found in traditional inkjet printers.
The pressure-based delivery system allows for a safe distance between the printer and the patient and can accommodate a variety of body types, according to a 2010 report from the Armed Forces Institute of Regenerative Medicine.
The device can fabricate healthy skin in anywhere from minutes to a few hours, depending on the size and type of burn, according to the report.
“You are building up the cells layer after layer after layer,” Atala said.
Acquiring an adequate sample can be a challenge in victims with extensive burns, since there is sometimes “not enough (skin) to go around with a patient with large burns,” Atala said.
The sample biopsy would be used to grow new cells, which are then placed in the printer cartridge, said Atala.
Researchers said it is difficult to speculate when the skin printer may be brought to the battlefield, because of the stringent regulatory steps for a project of this nature. Once the skin-printing device meets federal regulations, military officials are optimistic it will benefit the general population as well as soldiers.
“We’re not making anything military-unique,” said Terry Irgens, a program director at the U.S. Army Medical Materiel Development Activity.
“We hope it will benefit both soldier and civilian,” he said.
In the meantime, researchers said they’re pleased with results of preliminary laboratory testing with the skin printer.
Atala said the researchers already have been able to make “healthy skin.”
Source | CNN
The human brain operates as a highly interconnected small-world network, not as a collection of discrete regions as previously believed, with important implications for why many of us experience cognitive declines in old age, a new study shows.
Using graph theory, Australian researchers have mapped the brain’s neural networks and for the first time linked them with specific cognitive functions, such as information processing and language. Results from the study are published in the prestigious Journal of Neuroscience.
The researchers from the University of New South Wales are now examining what factors may influence the efficiency of these networks in the hope they can be manipulated to reduce age-related decline.
“While particular brain regions are important for specific functions, the capacity of information flow within and between regions is also crucial,” said study leader Scientia Professor Perminder Sachdev from UNSW’s School of Psychiatry.
“We all know what happens when road or phone networks get clogged or interrupted. It’s much the same in the brain.
“With age, the brain network deteriorates and this leads to slowing of the speed of information processing, which has the potential to impact on other cognitive functions.”
The advent of new MRI technology and increased computational power had allowed the development of the neural maps, resulting in a paradigm shift in the way scientists view the brain, Professor Sachdev said.
“In the past when people looked at the brain they focused on the grey matter in specific regions because they thought that was where the activity was. White matter was the poor cousin. But white matter is what connects one brain region to another and without the connections grey matter is useless,” he said.
In the study, the researchers performed magnetic resonance imaging (MRI) scans on 342 healthy individuals aged 72 to 92, using a new imaging technique called diffusion tensor imaging (DTI).
Using a mathematical technique called graph theory, they plotted and measured the properties of the neural connectivity they observed.
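Graph-theoretic measures like the network efficiency discussed below can be computed directly from a connectivity map: efficiency is typically defined as the average of 1/d(i,j) over all pairs of regions, where d is the shortest path length. Here is a self-contained sketch on a tiny invented network; real studies use weighted matrices derived from DTI tractography across many cortical regions.

```python
# Compute the global efficiency of a small unweighted "brain network":
# the mean of 1/d(i,j) over all ordered node pairs, where d is the
# shortest path length. The 4-node network below is invented.
from collections import deque

def shortest_paths(adj, src):
    """BFS shortest path lengths from src in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for nb in adj[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist

def global_efficiency(adj):
    """Mean of 1/d(i,j) over all pairs; higher = better integrated network."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for i in nodes:
        dist = shortest_paths(adj, i)
        for j in nodes:
            if j != i and j in dist:  # unreachable pairs contribute 0
                total += 1.0 / dist[j]
    return total / (n * (n - 1))

# Toy network: a 4-region ring plus one shortcut edge (hypothetical).
network = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(round(global_efficiency(network), 3))  # -> 0.917
```

Removing an edge from the toy network lowers the efficiency score, which is the intuition behind the finding that deteriorating connections slow information processing.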
“We found that the efficiency of the whole brain network of cortical fibre connections had an influence on processing speed, visuospatial function – the ability to navigate in space – and executive function,” said study first author Dr Wei Wen.
“In particular greater processing speed was significantly correlated with better connectivity of nearly all the cortical regions of the brain.”
Professor Sachdev said the findings help explain how cognitive functions are organised in the brain, and the more highly distributed nature of some functions over others.
“We are now examining the factors that affect age-related changes in brain network efficiency – whether they are genetic or environmental – with the hope that we can influence them to reduce age-related decline,” Professor Sachdev said.
“We know the brain is not immutable; that if we work on the plasticity in these networks we may be able to improve the efficiency of the connections and therefore cognitive functions.”
Source | University of New South Wales
Lenses that monitor eye health are on the way, and in-eye 3D image displays are being developed too – welcome to the world of augmented vision
The next time you gaze deep into someone’s eyes, you might be shocked at what you see: tiny circuits ringing their irises, their pupils dancing with pinpricks of light. These smart contact lenses aren’t intended to improve vision. Instead, they will monitor blood sugar levels in people with diabetes or look for signs of glaucoma.
The lenses could also map images directly onto the field of view, creating head-up displays for the ultimate augmented reality experience, without wearing glasses or a headset. To produce such lenses, researchers are merging transparent, eye-friendly materials with microelectronics.
In 2008, as a proof of concept, Babak Parviz at the University of Washington in Seattle created a prototype contact lens containing a single red LED. Using the same technology, he has now created a lens capable of monitoring glucose levels in people with diabetes.
It works because glucose levels in tear fluid correspond directly to those found in the blood, making continuous measurement possible without the need for thumb pricks, he says. Parviz’s design calls for the contact lens to send this information wirelessly to a portable device worn by diabetics, allowing them to manage their diet and medication more accurately.
Lenses that also contain arrays of tiny LEDs may allow this or other types of digital information to be displayed directly to the wearer through the lens. This kind of augmented reality has already taken off in cellphones, with countless software apps superimposing digital data onto images of our surroundings, effectively blending the physical and online worlds.
Making it work on a contact lens won’t be easy, but the technology has begun to take shape. Last September, Sensimed, a Swiss spin-off from the Swiss Federal Institute of Technology in Lausanne, launched the very first commercial smart contact lens, designed to improve treatment for people with glaucoma.
The disease puts pressure on the optic nerve through fluid build-up, and can irreversibly damage vision if not properly treated. Highly sensitive platinum strain gauges embedded in Sensimed’s Triggerfish lens record changes in the curvature of the cornea, which correspond directly to the pressure inside the eye, says CEO Jean-Marc Wismer. The lens transmits this information wirelessly at regular intervals to a portable recording device worn by the patient, he says.
Like an RFID tag or London’s Oyster travel cards, the lens draws its power from a nearby loop antenna – in this case taped to the patient’s face. The antenna transmits electricity to the contact lens, where it is used to interrogate the sensors, process the signals and transmit the readings back.
Each disposable contact lens is designed to be worn just once for 24 hours, and the patient repeats the process once or twice a year. This allows researchers to look for peaks in eye pressure which vary from patient to patient during the course of a day. This information is then used to schedule the timings of medication.
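The analysis step described above amounts to finding the peaks in a 24-hour pressure trace and reading off when they occur. A minimal sketch, with invented readings and a hypothetical three-hour sampling interval (the Triggerfish system records far more frequently):

```python
# Minimal sketch of scheduling glaucoma medication from an intraocular
# pressure trace: find local maxima in the recorded samples and report the
# times of day they occurred. Readings and interval are hypothetical.

def pressure_peaks(readings):
    """Return indices of local maxima in a series of pressure samples."""
    return [i for i in range(1, len(readings) - 1)
            if readings[i] > readings[i - 1] and readings[i] > readings[i + 1]]

# Hypothetical 24-hour trace, one sample every 3 hours (mmHg).
samples = [14, 16, 21, 17, 15, 19, 23, 18]
peaks = pressure_peaks(samples)
print([f"{3 * i:02d}:00" for i in peaks])  # times of the pressure peaks
```

A clinician could then time drug doses to precede the patient’s characteristic peaks, which is the per-patient scheduling the article describes.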
“The timing of these drugs is important,” Wismer says.
Parviz, however, has taken a different approach. His glucose sensor uses sets of electrodes to run tiny currents through the tear fluid and measures them to detect very small quantities of dissolved sugar. These electrodes, along with a computer chip that contains a radio frequency antenna, are fabricated on a flat substrate made of polyethylene terephthalate (PET), a transparent polymer commonly found in plastic bottles. This is then moulded into the shape of a contact lens to fit the eye.
Parviz plans to use a higher-powered antenna to get a better range, allowing patients to carry a single external device in a breast pocket or on a belt. Preliminary tests show that his sensors can accurately detect even very low glucose levels. Parviz is due to present his results later this month at the IEEE MEMS 2011 conference in Cancún, Mexico.
“There’s still a lot more testing we have to do,” says Parviz. In the meantime, his lab has made progress with contact lens displays. They have developed both red and blue miniature LEDs – leaving only green for full colour – and have separately built lenses with 3D optics that resemble the head-up visors used to view movies in 3D.
Parviz has yet to combine both the optics and the LEDs in the same contact lens, but he is confident that even images so close to the eye can be brought into focus. “You won’t necessarily have to shift your focus to see the image generated by the contact lens,” says Parviz. It will just appear in front of you, he says. The LEDs will be arranged in a grid pattern, and should not interfere with normal vision when the display is off.
For Sensimed, the circuitry is entirely around the edge of the lens (see photo). However, both groups have yet to address the fact that wearing these lenses might make you look like the robots in the Terminator movies. False irises could eventually solve this problem, says Parviz. “But that’s not something at the top of our priority list,” he says.
Source | New Scientist