Archive for the ‘Uncategorized’ Category

Formatting Gaia + Technological Symbiosis

Friday, December 2nd, 2011

Patrick Millard | Formatting Gaia + Technological Symbiosis from VASA on Vimeo.

The Biological Canvas

Tuesday, July 19th, 2011

Curatorial Statement

The Biological Canvas parades a group of hand-selected artists who articulate their concepts with the body as the primary vessel.  Each artist uses the body uniquely, experimenting with it as medium: body as canvas, body as brush, and body as subject matter.  Whatever the approach, it is clear that explorations of the body as canvas are becoming commonplace in the 21st century.

There are reasons for this refocusing of the lens toward the body.  Living today is an experience quite different from that of a century, generation, decade, or (with new versions emerging daily) even a year ago.  The body truly is changing, both biologically and technologically, at a rapid rate.  Traditional understandings of what the body, or even the human, can be are beginning to come under scrutiny.  Transhuman, Posthuman, Cyborg, Robot, Singularity, Embodiment, Avatar, Brain-Machine Interface, Nanotechnology … these are terms we run across in media today.  They are the face of the future, the dictators of how we will come to understand our environment, biosphere, and selves.  The artists in this exhibition respond to this paradigm shift with interest in a newfound control over the body, in moments of self-discovery, or in the realization that the body has extended beyond its biological beginnings, or perhaps that the traditional body has become obsolete.

In the work of Orlan and Stelarc the body becomes a malleable canvas.  Here we see some of the earliest executions of art by way of designer evolution, in which the artist uses new tools to redesign the body as a statement of controlled evolution.  In these works, direct changes to the body open the way to sculpting it to be better suited for today's world, moving beyond an outmoded form.  Stelarc, with his Ear on Arm project, specifically attacks shortcomings in the human body by presenting the augmented sense his third ear brings.  Acting as a cybernetic ear, it allows him to move beyond subjective hearing and share that aural experience with listeners around the world.  Commenting on the practicality of the traditional body in a networked world, Stelarc takes into his own hands the design of networked senses.  Orlan's surgical art conceptualizes the practice Stelarc is engaged in, asserting that the body has become a form that can be reconfigured, restructured, and applied to suit the desires of the mind within it.  Carnal Art, as Orlan terms it, allows the body to become a modifiable ready-made instead of a static object born out of the Earth.  Through the use of new technologies, human beings are now able to reform parts of their bodies as they deem necessary and appropriate for their own ventures.

Not far from the surgical work of Orlan and Stelarc we come to Natasha Vita-More's Electro 2011, Human Enhancement of Life Expansion, a project that acts as a guide for advancing the biological self into a more fit machine.  Integrating emerging technologies to build a more complete human, transhuman, and eventual posthuman body, Vita-More strives for a human-computer interface that includes neurophysiologic and cognitive enhancements built for longevity and performance.  The enhancement plan includes such technologies as atmospheric sensors, solar-protective nanoskin, metabrain error correction, and replaceable genes.  Vita-More's Primo Posthuman is the idealized application of what artists like Stelarc and Orlan are beginning to explore with their own reconstructive surgical enhancements.

The use of the body in Nandita Kumar's Birth of Brain Fly and Suk Kyoung Choi + Mark Nazemi's Corner Monster reflects on how embodiment and techno-saturation affect the human mind.  In each of these works we travel into the imagined world of the mind, where notions of self, identity, and sense of place struggle to hold on to fixed points of order.  Kumar describes her neuroscape as continually morphing as it is placed in new, ever-changing conditions and environments.  Beginning with an awareness of one's own constant programming that leads to a new understanding of self through love, the film journeys through the depths of self, ego, and physical limitation.  Kumar's animations provide an eerie journey through the mind as viewed through an artist's creative eye, all the while postulating an internal neuroscape evolving in accordance with an external electroscape. Corner Monster examines the relationship between self and others in an embodied world.  The installation includes an array of visual stimulation in a dark environment.  As viewers engage with the world before them, they are hooked up simultaneously (two at a time) to biofeedback sensors, which measure an array of biodata used in the interactive production of the environment before their eyes.  The project surveys the psychological self as it is engrossed by surrounding media, producing both occasional systems of organized feedback and scattered responses that are the convolutions of an overstimulated mind.

Marco Donnarumma also integrates a biofeedback system in his work, allowing participants to shape musical compositions with their limbs.  Moving a particular body part triggers sounds, with volume rising according to the pace of the movement.  Here the body acts as brush, literally painting the soundscape through its own creative motion.  As performers experiment with each part of the body, there is a slow realization that the sounds have become analogues of the body's neurological and biological yearnings, each one seeking a particular upgrade that targets a specific need for that segment of the body.  A movement of the left arm, for instance, produces a rich vibrato, reminding me of the sound of Vita-More's solar-protective nanoskin.

Our final three artists use the body as a component of the fabricated result, acting like paint in the traditional artistic sense.  Marie-Pier Malouin weaves strands of hair together to reference the genetic predispositions all living things are born with.  Here Malouin uses the medium to reference suicidal tendencies, looking once again toward the fragility of the human mind, body, and spirit in their traditional biological state.  The hair, a dead mass of growth, which we groom, straighten, smooth, and arrange, resembles the same obsession with which we analyze, evaluate, dissect, and anatomize the nature of suicide.  Stan Strembicki also engages with the fragility of the human body in his Body, Soul and Science. In his photographic imagery, Strembicki turns a keen eye on the medical industry and its development over time.  As with all technology, Strembicki concludes, the medical industry is one we can see as temporally corrective, making dramatic strides as nascent developments emerge.  Perhaps we can take Tracy Longley-Cook's skinscapes, which she compares to the changing landforms of geology, ecology, and climatology, as an analogy for our changing understanding of skin, body, and self.  Can we begin to mold and sculpt the body much as we have done with the land we inhabit?

There is a tie between the conceptual and material strands of these last few works that we cannot overlook: memento mori.  The shortcomings and frailties of our natural bodies are the very things artists like Vita-More, Stelarc, and Orlan are beginning to interpret as solvable through the mastery of human enhancement and advancement.  In a world churning out new technologies and creative ideas, it is hard to look toward the future and dismiss the possibilities.  Perhaps the worries of fragility and biological shortcoming will be both posed and answered by the scientific and artistic communities, something that is panning out to be very likely, if not certain.  As you browse the work of The Biological Canvas, I invite your imagination to engage.  Look at your life, your culture, your world, and draw parallels with the artwork; open your imagination to what our future may bring or, perhaps more properly stated, what we will bring to our future.

Patrick Millard

Source | VASA Project

Bionic glasses for poor vision

Monday, July 11th, 2011

Bionic glasses using video cameras, position detectors, facial recognition and tracking software, and depth sensors have been developed by Oxford University researchers.

“We want to be able to enhance vision in those who’ve lost it or who have little left or almost none,” explains Dr Stephen Hicks of the Department of Clinical Neurology at Oxford University. “The glasses should allow people to be more independent — finding their own directions and signposts, and spotting warning signals,” he says.

The glasses would be appropriate for common types of visual impairment, such as age-related macular degeneration and diabetic retinopathy. (NHS Choices estimates around 30% of people who are over 75 have early signs of age-related macular degeneration, and about 7% have more advanced forms.)

The researchers plan to use optical character recognition to convert newspaper headlines into audible words. Barcode and price tag readers could also be useful additions, the researchers said.

The bionic glasses are being exhibited at this year’s Royal Society Summer Science Exhibition.

Source | Kurzweil AI

Nanomagnet memory and logic could achieve ultimate energy efficiency

Monday, July 11th, 2011

Future computers may rely on magnetic microprocessors that consume the least amount of energy allowed by the laws of physics, researchers at the University of California, Berkeley, have determined.

The researchers used nanomagnets to build magnetic memory and logic devices about 100 nanometers wide and about 200 nanometers long.

Because each nanomagnet has north-south polarity like a bar magnet, the up-or-down orientation of its pole can be used to represent the 0 and 1 of binary computer memory.

When multiple nanomagnets are brought together, their north and south poles interact via dipole-dipole forces to exhibit transistor behavior, allowing simple logic operations.

The bright spots are nanomagnets with their north ends pointing down (represented by red bar below) and the dark spots are north-up nanomagnets (blue). The six nanomagnets form a majority logic gate transistor, where the output on the right of the center bar is determined by the majority of three inputs on the top, left and bottom.
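
The majority rule described in the caption is easy to state in code. The sketch below is an idealized logical model of my own, not the Berkeley group's device physics: the output orientation simply follows the majority of the three input orientations.

```python
# Idealized model (assumption: no thermal errors) of the three-input
# majority gate described above: the output pole orientation follows
# the majority of the three inputs (1 = north-up, 0 = north-down).
def majority_gate(a: int, b: int, c: int) -> int:
    """Return the majority value of three binary inputs."""
    return 1 if (a + b + c) >= 2 else 0

# A majority gate generalizes AND and OR: fixing one input to 0 gives
# AND of the other two; fixing it to 1 gives OR.
if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            assert majority_gate(a, b, 0) == (a & b)  # acts as AND
            assert majority_gate(a, b, 1) == (a | b)  # acts as OR
```

Because a fixed input turns the gate into either AND or OR, majority gates alone suffice to build general logic.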

Such devices would dissipate only 18 millielectron volts of energy per operation at room temperature, the minimum allowed by the second law of thermodynamics, the Landauer limit. That’s 1 million times less energy per operation than consumed by today’s computers, the researchers said.
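
The 18-millielectron-volt figure can be checked with back-of-the-envelope arithmetic: the Landauer limit is kT·ln(2) of energy per erased bit, which at room temperature lands right where the article says.

```python
# Back-of-the-envelope check of the figure quoted above: the Landauer
# limit is kT * ln(2) per bit operation. At room temperature (~300 K)
# this comes out near 18 meV, matching the article.
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
T = 300          # room temperature, kelvin

landauer_mev = K_B * T * math.log(2) * 1000  # convert eV -> meV
print(f"Landauer limit at {T} K: {landauer_mev:.1f} meV")  # ~17.9 meV
```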

Source | Kurzweil AI

How many objects can you hold in mind simultaneously?

Wednesday, July 6th, 2011

Neuroscientists at MIT’s Picower Institute for Learning and Memory have found that cognitive capacity limitations (the ability to hold about four things in our minds at once) reflect a dual model of working memory.

The researchers investigated the neural basis of this capacity limitation in two monkeys performing the same test used to explore working memory in humans. First, the researchers displayed an array of two to five colored squares, then a blank screen, and then the same array in which one of the squares changed color. The task was to detect this change and look at the changed square.

As the monkeys performed this task, the researchers recorded simultaneously from neurons in two brain areas related to encoding visual perceptions (the parietal cortex) and holding them in mind (the prefrontal cortex). As expected, the more squares in the array, the worse the performance.

They determined that monkeys (and by extension humans) do not have a single capacity of four objects, but a capacity of two in each hemisphere. If the object to be remembered appears on the right side of the visual field, it does not matter how many objects are on the left side; as long as the right side contains no more than two, the monkeys can easily remember an object there. But if the right side contains three objects and the left side only one, the capacity for the right side is exceeded and they may forget the key object.
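
The rule the researchers describe can be stated as a toy model. This is my own illustration of the finding, not the researchers' analysis: each hemifield has an independent limit of two objects.

```python
# Toy model of the per-hemisphere capacity finding described above
# (my own illustration, not the MIT group's code): each visual
# hemifield has its own slot limit, independent of the other side.
CAPACITY_PER_SIDE = 2

def can_remember(target_side: str, n_left: int, n_right: int) -> bool:
    """Whether an object on target_side fits within that side's capacity."""
    load = n_left if target_side == "left" else n_right
    return load <= CAPACITY_PER_SIDE

# Two objects on the right fit, no matter how loaded the left side is...
assert can_remember("right", n_left=3, n_right=2)
# ...but three on the right exceed capacity even if the left holds one.
assert not can_remember("right", n_left=1, n_right=3)
```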

The fact that we have different capacities in each hemisphere implies that we should present information (for example, in heads-up displays) in a way that does not overtax one hemisphere while undertaxing the other, the researchers said.

Source | Kurzweil AI

Mass-producing stem cells for diagnostic and therapeutic applications

Saturday, June 18th, 2011

Todd McDevitt at the Georgia Institute of Technology and colleagues have found that adding biomaterials such as gelatin into clumps of stem cells (called “embryoid bodies”) affected stem-cell differentiation without harming the cells.

By incorporating magnetic particles into the biomaterials, they could control the locations of the embryoid bodies and how they assemble with one another.

Compared to typical delivery methods, providing differentiation factors — retinoic acid, bone morphogenetic protein 4 (BMP4) and vascular endothelial growth factor (VEGF) — via microparticles induced changes in the gene and protein expression patterns of the aggregates.

In the future, these new methods could be used to develop manufacturing procedures for producing large quantities of stem cells for diagnostic and therapeutic applications.

Source | Kurzweil AI

Artificial hippocampal system restores long-term memory, enhances cognition

Saturday, June 18th, 2011

Theodore Berger and his team at the USC Viterbi School of Engineering’s Department of Biomedical Engineering have developed a neural prosthesis for rats that is able to restore their ability to form long-term memories after they had been pharmacologically blocked.

In a dramatic demonstration, Berger blocked the ability of rats to form long-term memories by using pharmacological agents to disrupt the neural circuitry that communicates between two subregions of the hippocampus, CA1 and CA3, which prior research has shown interact to create long-term memory.

The rats were unable to remember which lever to pull to gain a reward, or could only remember for 5–10 seconds, when previously they could remember for a long period of time.

The researchers then developed an artificial hippocampal system that could duplicate the pattern of CA3-CA1 interactions. Long-term memory capability returned to the pharmacologically blocked rats when the team activated the electronic device programmed to duplicate the memory-encoding function.

The researchers went on to show that if a prosthetic device and its associated electrodes were implanted in animals with a normal, functioning hippocampus, the device could actually strengthen the memory being generated internally in the brain and enhance the memory capability of normal rats.

“These integrated experimental modeling studies show for the first time that with sufficient information about the neural coding of memories, a neural prosthesis capable of real-time identification and manipulation of the encoding process can restore and even enhance cognitive mnemonic processes,” says the paper.

Next steps, according to Berger and Deadwyler, will be attempts to duplicate the rat results in primates (monkeys), with the aim of eventually creating prostheses that might help human victims of Alzheimer’s disease, stroke, or injury recover function.

Artificial hippocampal system restores long-term memory capability

Source | Kurzweil AI

The Robot Family Portrait Album

Monday, June 13th, 2011

Kismet, Daguerreotype. 2011

Patrick Millard has taken on the task of creating the first comprehensive robot portrait project. Using the first photographic process, the daguerreotype, he will create a Family Portrait Album so that robots can one day look back upon their ancestral history. With roboticist Heather Knight consulting on the project, an integral component of human-robot interaction will be stressed.

Google, Microsoft, and Yahoo Team Up to Advance Semantic Web

Friday, June 10th, 2011

Google, Microsoft, and Yahoo have teamed up to encourage Web page operators to make the meaning of their pages understandable to search engines.

The move may finally encourage widespread use of technology that makes online information as comprehensible to computers as it is to humans. If the effort works, the result will be not only better search results, but also a wave of other intelligent apps and services able to understand online information almost as well as we do.

The three big Web companies launched the initiative, known as Schema.org, last week. It defines an interconnected vocabulary of terms that can be added to the HTML markup of a Web page to communicate the meaning of concepts on the page. A location referred to in text could be defined as a courthouse, which Schema.org understands as being a specific type of government building. People and events can also be defined, as can attributes like distance, mass, or duration. This data will allow search engines to better understand how useful a page may be for a given search query—for example, by making it clear that a page is about the headquarters of the U.S. Department of Defense, not five-sided regular shapes.
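
In practice these terms are added as microdata attributes inside ordinary HTML. The sketch below uses a hypothetical page fragment of my own, marking up "The Pentagon" with the Schema.org GovernmentBuilding type, and pulls the declared type back out with Python's standard-library HTML parser.

```python
# Illustrative sketch: Schema.org vocabulary is embedded as microdata
# attributes (itemscope/itemtype/itemprop) in ordinary HTML. PAGE is a
# hypothetical fragment; the parser extracts the declared item types
# much as a search engine's crawler would.
from html.parser import HTMLParser

PAGE = """
<div itemscope itemtype="http://schema.org/GovernmentBuilding">
  <span itemprop="name">The Pentagon</span>
</div>
"""

class ItemTypeFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.itemtypes = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)  # boolean attributes like itemscope map to None
        if "itemscope" in attrs and "itemtype" in attrs:
            self.itemtypes.append(attrs["itemtype"])

finder = ItemTypeFinder()
finder.feed(PAGE)
print(finder.itemtypes)  # ['http://schema.org/GovernmentBuilding']
```

The explicit type is what lets a search engine know the page concerns a government building rather than five-sided regular shapes.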

The move represents a major advance in a campaign initiated in 2001 by Tim Berners-Lee, the inventor of the Web, to enable software to access the meaning of online content—a vision known as the “semantic Web.” Although the technology to do so exists, progress has been slow because there have been few reasons for Web page operators to add the extra markup.

Schema.org may change that, says Dennis McCleod, who works on semantic Web technology at the University of Southern California. By tagging information, Web page owners could improve the position of their site in search results, an important source of traffic. “This will motivate people to actually add semantic data to their pages,” says McCleod. “It’s always hard to predict what will be adopted, but generally, unless there’s something in it for people, they won’t do it. Google, Microsoft, and Yahoo have given people a strong reason.”

The Schema.org approach is modeled on one of the more straightforward methods of describing the meaning of a Web page’s contents. “The trouble with many of these techniques is, they are really hard to use,” says McCleod. “One of the encouraging things about Schema.org is that they are pursuing this at a level that is quite usable, so it is much easier to mark up your website.”

Web of words: This graph of linked phrases lets software understand the meaning of online content. The system is backed by Google, Microsoft and Yahoo.

Source | Technology Review

Is “Self-tracking” the Secret to Living Better?

Friday, June 10th, 2011

Do you know the number of miles you’ve driven over the last five years? Every meal you’ve eaten? The number of browser tabs you’ve had open during the day compared with the amount of sleep you had that night? That’s the kind of data collected by the new generation of self-trackers who descended on the Computer History Museum, in Mountain View, California, for the first annual Quantified Self conference over Memorial Day weekend.

About 400 hackers, programmers, entrepreneurs and health professionals came from across the globe, united by a desire to collect as much data as possible about themselves in order to make informed decisions regarding health, productivity and happiness. (One participant had logged X-rated information on the number of his sexual partners and duration of sexual activities. He went to a session on data visualization looking for an interesting way to illustrate that data.)

The self-tracking movement, which has sprung to life over just the last couple of years, is enabled in large part by both wireless sensing devices and smart phones. Many people already employ smart phone apps to track food intake and fitness, but a new generation of apps also tracks mood, meditation, migraines and other factors.

Beyond the smart phone, low-power wireless transmitters are transforming existing objects, such as scales and pedometers, making tracking both effortless and easy to share. A Wi-Fi enabled scale automatically tracks your weight and will even tweet the numbers, for those lucky few who really want to share.

Several commercial wearable monitors, such as Fitbit and BodyMedia, employ accelerometers to track the wearer’s movement, pairing that with specialized algorithms to calculate calories burned. Data is automatically uploaded to the internet, allowing users to track their progress and compete against each other for the most steps or highest activity levels.
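
The accelerometer-to-calories pipeline can be sketched in a few lines. This is a crude illustration of the general approach, not any vendor's proprietary algorithm, and the counts-to-METs mapping is hypothetical: raw movement counts are converted to an intensity estimate (METs), then to calories using body weight.

```python
# Crude illustration of the wearable-monitor pipeline described above --
# NOT any vendor's proprietary algorithm. Accelerometer "activity counts"
# are mapped to an intensity estimate (METs), then to kilocalories.
def calories_burned(counts_per_min: float, weight_kg: float,
                    minutes: float) -> float:
    """Rough kcal estimate from activity counts (hypothetical mapping)."""
    # Hypothetical linear counts->METs mapping, floored at rest (1 MET).
    mets = max(1.0, counts_per_min / 1000.0)
    # Standard conversion: 1 MET ~ 1 kcal per kg of body weight per hour.
    return mets * weight_kg * (minutes / 60.0)

# A 70 kg wearer at 3000 counts/min (~3 METs) for 30 minutes:
print(round(calories_burned(3000, 70, 30)))  # 105 kcal
```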

One of my favorite projects was a proposal from Kyle Machulis, robotics engineer and self-described hacker, to figure out what makes programmers write bad code. By tracking programmers as they code, monitoring their computers, chairs, keyboards and perhaps the programmer herself via computer cameras, “then you could look at what was happening when they wrote a bug and see if that happens with other bugs,” he says. Or you could chart the parts of a program that appear to be the least user-friendly, perhaps when users fidget, and see if there was some kind of predictive behavior on the programmer’s part.

While the self-tracking trend is still largely limited to early adopters—technophiles, elite athletes and patients monitoring chronic conditions—the diversity of attendees at the conference highlights just how fast it’s moving into the mainstream.

In one breakout session, a group earnestly discussed the best approaches to self-experimentation and the results of some rather odd experiments: standing on one leg for eight minutes a day leads to better sleep, and eating butter improves performance on a test of cognitive function.

On the other side of the museum, Ben Rubin, co-founder of Zeo, a start-up that sells a consumer sleep-monitoring device, led a discussion on the best business models for the field. And last but not least, healthcare providers and entrepreneurs discussed the best ways to try to bring these tools into medicine.

While the sessions focused on medicine were the smallest at the meeting, there are signs that self-tracking is catching the interest of mainstream healthcare. Humana, a major insurer, had several attendees, as did the Robert Wood Johnson Foundation, the largest healthcare-focused non-profit in the country. The latter gave a grant to help the Quantified Self organization compile an online guide to self-tracking tools, with the aim of helping the movement spread.

As of Thursday, the guide listed 432 tools. Here’s a smattering:

Equanimity, a meditation timer and tracker.

Quantter, a web site where you can track your daily activities using Twitter.

MoodScope, a web-based application for measuring, tracking and sharing your mood.

Withings Wifi Bodyscale, a digital wireless body fat monitor and scale.

Philips DirectLife, a set of activity programs aimed at increasing fitness.

DailyFeats, a web app designed to reward users for good “feats” with points, badges, and real-world savings.

Source | Technology Review

Born to be Viral: Pool-playing robot rivals humans

Friday, June 10th, 2011

Pool sharks, beware: a new robot will give you a run for your money. It’s definitely not the fastest player, but it completed 400 shots with an 80 per cent success rate.

The robot, designed by Thomas Nierhoff, Omiros Kourakos and Sandra Hirche at the Technical University of Munich, Germany, has two arms that can move in seven different ways. Cameras mounted above the table track the position of the balls and cue, and feed this information to the robot’s computers. It can then decide on the best move and calculate how the arms should be oriented to complete the stroke. To get into position, it rolls around the table using predetermined coordinates.
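
One standard piece of the "decide on the best move" step is aiming geometry. The sketch below shows the classic "ghost ball" calculation as an illustration of the kind of computation such a system needs; it is my own example, not the Munich team's published code.

```python
# "Ghost ball" aiming sketch (an illustration, not the Munich team's
# actual method): to pot an object ball, the cue ball's center must
# arrive one ball-diameter behind the object ball, along the line from
# the pocket through the object ball.
import math

BALL_DIAMETER = 0.057  # meters, roughly pool-ball sized

def ghost_ball(object_ball, pocket):
    """Target point for the cue ball's center at the moment of contact."""
    ox, oy = object_ball
    px, py = pocket
    dx, dy = px - ox, py - oy
    dist = math.hypot(dx, dy)
    # One ball diameter behind the object ball, away from the pocket.
    return (ox - dx / dist * BALL_DIAMETER,
            oy - dy / dist * BALL_DIAMETER)

# Object ball at (1, 1), pocket straight along +x at (2, 1): the cue
# ball must arrive one diameter short of the object ball on the x axis.
print(ghost_ball((1.0, 1.0), (2.0, 1.0)))  # ~ (0.943, 1.0)
```

Driving the cue ball through that point (ignoring spin and throw) sends the object ball toward the pocket.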

The system doesn’t only help the robot; it can also assist human players. The ceiling camera is hooked up to a projector that overlays information about how to hit each ball to successfully sink it.

Nierhoff’s robot isn’t the first to play pool and it may soon have to prove itself. Last year, Silicon Valley robotics firm Willow Garage released a video of its PR2 robot having a go. Perhaps it’s time for a competition?

The Munich machine was presented at the International Conference for Robotics and Automation last month. For other playful robots, check out these acrobatic flying jugglers and this ball-catching humanoid.

Source | NewScientist

NIST ‘catch and release’ program could improve nanoparticle safety assessment

Thursday, June 9th, 2011

Scientists at the National Institute of Standards and Technology (NIST) have found a way to manipulate nanoparticles to assess nanoparticle toxicity.

The scientists developed a method of attracting and capturing metal-based nanoparticles on a surface and releasing them at the desired moment.

The method uses a mild electric current to influence the particles’ behavior, which could allow scientists to expose cell cultures to nanoparticles so that any hazards they might pose to living cells can be assessed effectively.

The team used a gold surface covered by long, positively charged molecules, which stretch up from the gold. The nanoparticles, which are also made of gold, were coated with citrate molecules that have a slight negative charge, which draws them to the surface covering, an attraction that can be broken with a slight electric current.

The scientists said the method also has the advantage of collecting the particles in a layer only one particle thick, which allows them to be evenly dispersed into a fluid sample, thereby reducing clumping — a common problem that can mask the properties they exhibit when they encounter living tissue.

After gold nanoparticles are trapped on the brown collection surface (left), the NIST team can apply a mild electric field and release most of them (right) (credit: NIST)

Source | Kurzweil AI

Humanity+ @ Parsons The New School For Design, Transhumanism Meets Design

Tuesday, April 26th, 2011

Patrick Millard | Formatting Gaia + Embodiment

Tuesday, April 12th, 2011

Now I See You

Friday, November 19th, 2010

Weill Cornell Medical College researchers have built a new type of prosthetic retina that enabled blind mice to see nearly normal images. It could someday restore detailed sight to the millions of people who’ve lost their vision to retinal disease.

They used optogenetics, a recently developed technique that infuses neurons with light-sensitive proteins from blue-green algae, causing them to fire when exposed to light.

The researchers used mice that were genetically engineered to express one of these proteins, channelrhodopsin, in their ganglion cells. Then, they presented the mice with an image that had been translated into a grid of 6,000 pulsing lights. Each light communicated with a single ganglion cell, and each pulse of light caused its corresponding cell to fire, thus transmitting the encoded image along to the brain.
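
The image-to-light-grid step can be sketched simply. The code below is my own simplification of the idea, not the researchers' actual encoder: each grid cell averages the pixels beneath it, and a threshold decides whether that "light" pulses.

```python
# Illustrative sketch (my simplification, not the researchers' retinal
# encoder): reduce an image to a coarse grid of on/off "lights", one
# per ganglion cell, by thresholding each grid cell's mean brightness.
def encode_to_grid(image, grid_rows, grid_cols, threshold=0.5):
    """Map a 2-D list of pixel intensities (0..1) to a binary light grid."""
    rows, cols = len(image), len(image[0])
    rh, cw = rows // grid_rows, cols // grid_cols
    grid = []
    for gr in range(grid_rows):
        row = []
        for gc in range(grid_cols):
            block = [image[r][c]
                     for r in range(gr * rh, (gr + 1) * rh)
                     for c in range(gc * cw, (gc + 1) * cw)]
            row.append(1 if sum(block) / len(block) > threshold else 0)
        grid.append(row)
    return grid

# A 4x4 image with a bright right half maps to a 2x2 grid lit on the right.
image = [[0.0, 0.0, 1.0, 1.0]] * 4
print(encode_to_grid(image, 2, 2))  # [[0, 1], [0, 1]]
```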

In humans, such a setup would require a pair of high-tech spectacles, embedded in which would be a tiny camera, an encoder chip to translate images from the camera into the retinal code, and a miniature array of thousands of lights. When each light pulsed, it would trigger a channelrhodopsin-laden ganglion cell. Surgery would no longer be required to implant an electrode array deep into the eye, although some form of gene therapy would be required in order for patients to express channelrhodopsin in their retinas.

Source | Cornell Medical College