Archive for October, 2009

Robots of the Future

Saturday, October 31st, 2009

Artist envisions turning fake eye into bionic eye-cam

Saturday, October 31st, 2009

Three years after losing her left eye in a car accident, San Franciscan Tanya Vlach wants to make her artificial eye more useful: She’s planning to put a video camera in her eye socket with the goal of having a bionic eye.

Vlach, a 35-year-old artist and producer, is just getting started with her project and doesn’t yet have a technology developer. She’s actively seeking help with engineering, as well as funding.

Work is already under way in various places that could serve as a starting point for Vlach. For instance, researchers at the University of Washington in Seattle have created a contact lens that contains an electronic circuit and LEDs. Scientists at the University of Illinois and Northwestern University, meanwhile, have developed what could be a precursor to a bionic eye, though it’s unclear whether that eye has quite the Web functionality that Vlach is seeking. There’s also work being done in Boston on embedding chips behind the retina.

Full Story | CNET

Brain scanners can tell what you’re thinking about

Saturday, October 31st, 2009

WHAT are you thinking about? Which memory are you reliving right now? You may think that only you can answer, but by combining brain scans with pattern-detection software, neuroscientists are prying open a window into the human mind.

In the last few years, patterns in brain activity have been used to successfully predict what pictures people are looking at, their location in a virtual environment or a decision they are poised to make. The most recent results show that researchers can now recreate moving images that volunteers are viewing – and even make educated guesses at which event they are remembering.

Last week at the Society for Neuroscience meeting in Chicago, Jack Gallant, a leading “neural decoder” at the University of California, Berkeley, presented one of the field’s most impressive results yet. He and colleague Shinji Nishimoto showed that they could create a crude reproduction of a movie clip that someone was watching just by viewing their brain activity. Others at the same meeting claimed that such neural decoding could be used to read memories and future plans – and even to diagnose eating disorders.

Understandably, such developments are raising concerns about “mind reading” technologies, which might be exploited by advertisers or oppressive governments (see “The risks of open-mindedness”). Yet despite – or perhaps because of – the recent progress in the field, most researchers are wary of calling their work mind-reading. Emphasising its limitations, they call it neural decoding.

They are quick to add that it may lead to powerful benefits, however. These include gaining a better understanding of the brain and improved communication with people who can’t speak or write, such as stroke victims or people with neurodegenerative diseases. There is also excitement over the possibility of being able to visualise something highly graphical that someone healthy, perhaps an artist, is thinking.

So how does neural decoding work? Gallant’s team drew international attention last year by showing that brain imaging could predict which of a group of pictures someone was looking at, based on activity in their visual cortex. But decoding still images alone won’t do, says Nishimoto. “Our natural visual experience is more like movies.”

Nishimoto and Gallant started their most recent experiment by showing two lab members 2 hours of video clips culled from DVD trailers, while scanning their brains. A computer program then mapped different patterns of activity in the visual cortex to different visual aspects of the movies such as shape, colour and movement. The program was then fed over 200 days’ worth of YouTube clips, and used the mappings it had gathered from the DVD trailers to predict the brain activity that each YouTube clip would produce in the viewers.
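For the technically curious, a minimal sketch of such an encoding step might fit a regularised linear map from each second’s visual features to the recorded voxel activity, then reuse that map to predict the activity any new clip should evoke. The function names, array shapes and choice of ridge regression below are illustrative assumptions, not the researchers’ actual code.

    from sklearn.linear_model import Ridge

    def fit_encoding_model(features, activity, alpha=1.0):
        # features: (n_seconds, n_features) visual descriptors (shape, colour,
        #           motion) for each second of the training movies
        # activity: (n_seconds, n_voxels) fMRI responses recorded while watching
        model = Ridge(alpha=alpha)
        model.fit(features, activity)  # one regularised linear map, all voxels
        return model

    def predict_activity(model, clip_features):
        # Predict the activity a new (e.g. YouTube) clip should evoke;
        # returns (n_clip_seconds, n_voxels)
        return model.predict(clip_features)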

Finally, the same two lab members watched a third, fresh set of clips which were never seen by the computer program, while their brains were scanned. The computer program compared these newly captured brain scans with the patterns of predicted brain activity it had produced from the YouTube clips. For each second of brain scan, it chose the 100 YouTube clips it considered would produce the most similar brain activity – and then merged them. The result was continuous, very blurry footage, corresponding to a crude “brain read-out” of the clip that the person was watching.
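Again purely as a sketch, under the same assumed shapes as above, the matching-and-merging step might look like this: standardise the signals, score every library clip by correlation with the measured scan, and average the frames of the best hundred matches. Averaging so many loosely matching clips is exactly what yields continuous but very blurry footage.

    import numpy as np

    def decode_second(scan, predicted_activity, library_frames, k=100):
        # scan               : (n_voxels,) measured activity for one second
        # predicted_activity : (n_clips, n_voxels) activity predicted for each
        #                      library clip by the encoding model
        # library_frames     : (n_clips, h, w, 3) a representative frame per clip
        # Standardise, then score each clip by Pearson correlation with the scan.
        z_scan = (scan - scan.mean()) / scan.std()
        z_pred = predicted_activity - predicted_activity.mean(axis=1, keepdims=True)
        z_pred /= predicted_activity.std(axis=1, keepdims=True)
        similarity = z_pred @ z_scan / scan.size
        # Keep the k clips whose predicted activity best matches the scan,
        # then merge them by averaging: the blurry "brain read-out" frame.
        top_k = np.argsort(similarity)[-k:]
        return library_frames[top_k].mean(axis=0)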

In some cases, this was more successful than others. When one lab member was watching a clip of the actor Steve Martin in a white shirt, the computer program produced a clip that looked like a moving, human-shaped smudge with a white “torso”, but the blob bore little resemblance to Martin, with nothing corresponding to the moustache he was sporting.

Another clip revealed a quirk of Gallant and Nishimoto’s approach: a reconstruction of an aircraft flying directly towards the camera – and so barely seeming to move – with a city skyline in the background omitted the plane but produced something akin to a skyline. That’s because the algorithm is more adept at reading off brain patterns evoked by watching movement than those produced by watching apparently stationary objects.

“It’s going to get a lot better,” says Gallant. The pair plan to improve the reconstruction of movies by providing the program with additional information about the content of the videos.

Team member Thomas Naselaris demonstrated the power of this approach on still images at the conference. For every pixel in a set of images shown to a viewer and used to train the program, researchers indicated whether it was part of a human, an animal, an artificial object or a natural one. The software could then predict where in a new set of images these classes of objects were located, based on brain scans of the picture viewers.
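One plausible way to organise such a decoder – an illustrative guess, since the talk’s actual model isn’t described here – is to train a separate classifier per image region, each predicting from the brain scan which object class occupies that region.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_region_decoders(scans, region_labels):
        # scans         : (n_images, n_voxels) activity while viewing the images
        # region_labels : (n_images, n_regions) class per image region, coded
        #                 0..3 for human / animal / artificial / natural
        # (assumes each region sees more than one class during training)
        decoders = []
        for r in range(region_labels.shape[1]):
            clf = LogisticRegression(max_iter=1000)
            clf.fit(scans, region_labels[:, r])  # one classifier per region
            decoders.append(clf)
        return decoders

    def predict_layout(decoders, new_scan):
        # For an unseen image, guess which class occupies each region.
        return np.array([d.predict(new_scan.reshape(1, -1))[0] for d in decoders])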

Movies and pictures aren’t the only things that can be discerned from brain activity, however. A team led by Eleanor Maguire and Martin Chadwick at University College London presented results at the Chicago meeting showing that our memory isn’t beyond the reach of brain scanners.

Full Article | New Scientist

Generative Behaviors | Node Records | StillStream

Tuesday, October 20th, 2009


Tune into Node Radio on Monday, October 26th, 2009, 10-12am CST for a special show which will air Generative Behaviors in its entirety.

Node Radio streams on the ambient radio station StillStream 24/7.

StillStream Radio: http://www.stillstream.com/

Node Records: http://noderecords.com/

Generative Behaviors

Generative sound, by definition, is music that is ever-changing and created by a system. The notion that a system – in the case of these works, a laptop computer – can not only compute but also create a musical production is fascinating; the language of the computer [binary] could be seen as elaborate enough to produce creativity out of zeros and ones. If this is so, the possibilities of who and what will be considered the artist of the future enter new territory as we embark upon the 21st century. With technological discovery and creation accelerating, it will be increasingly important, as we move into the future, to embrace all such creations as valid pursuits. Out of a continuing emphasis among artists on digital modes of articulation, fundamental changes in the way we see and understand art and music are nearly inevitable.

-Patrick Millard
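
For readers wondering what “music created by a system” can look like in practice, here is a toy sketch, unrelated to the works on this record: a few lines of Python whose rules write an ever-changing melody to a WAV file, different on every run.

    import math, random, struct, wave

    # A random walk over a pentatonic scale: fixed rules, new music each run.
    RATE = 44100
    SCALE = [0, 2, 4, 7, 9]  # scale degrees, in semitones above the root

    def tone(freq, secs=0.25, amp=0.4):
        n = int(RATE * secs)
        return [amp * math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

    samples, degree = [], 0
    for _ in range(64):  # generate 64 notes
        degree = max(-5, min(9, degree + random.choice([-2, -1, 1, 2])))
        semitone = SCALE[degree % 5] + 12 * (degree // 5)
        samples += tone(220.0 * 2 ** (semitone / 12))

    with wave.open("generative.wav", "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))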

UAVM | Byte Scrapers

Tuesday, October 20th, 2009

byte_scrapers2.jpg

BYTE SCRAPERS
3 Oct. – 26 Dec. 09

Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts… A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding…
-William Gibson in Neuromancer (1984)

Cyberspace is a dimension inhabited by bodies made up of intangible digital ‘cells’ in the matrix form of hypertext; it can work like the body of an intelligent system that constantly readjusts itself according to the information supplied to it. The fluidity of architectural presence dissolves as building projects are cast onto these webs of digital network platforms, which mimic the architecture of human neural networks through nodes and links; these nodes store ‘digital cities’, their mappings linking gigabytes of information at the speed of light. Such ‘fragments’ of bytes may take the form of cathedrals, of cities never seen, of topographic records of the surfaces of distant planets, of revisitations of vanished sacred spaces, visited by surfers who plan their journeys as authentic ocean voyages into complexity…
-Hugo Ferrão, Architecture in Cyberspace and the ‘non-places’ inhabited by “men without qualities”.

Byte scrapers.
Digital skyscrapers. Speaking about the digital is speaking about “digital space”, about the systems and forms that build networks. Is the internet, with the whole of its networks, a huge universe of constellations, with real worlds, continents, oceans, cities and skyscrapers? How do artists see these worlds? As a mere tangle of wires and cables that physically connect computers by land and air, or as a series of highways where vehicles travel at the speed of light carrying packets of information? And what lies at the hubs of these roads? Simple transmit/receive stations, or virtual cities inhabited by beings?

That is the challenge proposed by the UAVM Museum: to represent the hidden world of the internet, a network that extends endlessly around the globe at an unstoppable speed.

UAVM | Byte Scrapers

Out of your head: Leaving the body behind

Wednesday, October 14th, 2009

THE young man woke feeling dizzy. He got up and turned around, only to see himself still lying in bed. He shouted at his sleeping body, shook it, and jumped on it. The next thing he knew he was lying down again, but now seeing himself standing by the bed and shaking his sleeping body. Stricken with fear, he jumped out of the window. His room was on the third floor. He was found later, badly injured.

What this 21-year-old had just experienced was an out-of-body experience, one of the most peculiar states of consciousness. It was probably triggered by his epilepsy (Journal of Neurology, Neurosurgery and Psychiatry, vol 57, p 838). “He didn’t want to commit suicide,” says Peter Brugger, the young man’s neuropsychologist at University Hospital Zurich in Switzerland. “He jumped to find a match between body and self. He must have been having a seizure.”

In the 15 years since that dramatic incident, Brugger and others have come a long way towards understanding out-of-body experiences. They have narrowed down the cause to malfunctions in a specific brain area and are now working out how these lead to the almost supernatural experience of leaving your own body and observing it from afar. They are also using out-of-body experiences to tackle a long-standing problem: how we create and maintain a sense of self.

Dramatised to great effect by such authors as Dostoevsky, Wilde, de Maupassant and Poe – some of whom wrote from first-hand knowledge – out-of-body experiences are usually associated with epilepsy, migraines, strokes, brain tumours, drug use and even near-death experiences. It is clear, though, that people with no obvious neurological disorders can have an out-of-body experience. By some estimates, about 5 per cent of healthy people have one at some point in their lives.

Full Article | New Scientist Life

Transhumanism Mainframe Computer

Sunday, October 11th, 2009

Transhumanist Problems?

Sunday, October 11th, 2009

Virtualization of the Universe

Sunday, October 11th, 2009