Archive for October, 2010

Web-Based Creativity: Can Working in Virtual Communities Be More Effective Than Face-to-Face Cooperation?

Thursday, October 28th, 2010

Common sense and experience would suggest that people are more creative when they work together in a face-to-face environment. But, as remote working and online interactions become more and more commonplace, there is growing evidence that working in virtual communities and using online tools together can be even more effective in some areas than face-to-face cooperation.

Piet Kommers of the University of Twente, in The Netherlands, is a specialist in advanced learning tools such as concept mapping, virtual reality and mobile learning, and has focused much of his research on trying to eradicate preconceptions about learning models and scepticism about how members of online networks interact. Writing in the International Journal of Web-based Communities, Kommers answers the question, “How can virtual participation contribute to creative solutions?” In answering it, he emphasises once again that in the so-called web 2.0 era, the only way for movements and even whole industries to survive is to make the user a co-creator.

Communications that are not face-to-face, whether they involve commenting on a web blog, using a chat client, debating on a forum, or even attempting to get help from a call centre, all involve some kind of transience. The people involved may be anonymous, they may be disguising their identity or simply not revealing their true location or intentions. However, virtual meetings have the unique opportunity of bringing together like-minded or even dissimilar people who would never normally meet in the “offline” world and so open up endless possibilities for collaboration, learning and creativity.

In his research, Kommers hopes to reveal how web 2.0 can emphasise such opportunities by linking people in new ways and creating larger-than-life social and working networks. As web etiquette evolves over the coming years it will, he suggests, move from precocious experimentation to fully fledged participation. Tools such as chat, built into well-known online networks like LinkedIn and Facebook as well as into web-link-sharing tools like Iosurf and Delicious, make it easier to extrapolate one’s web and email behaviour into establishing a social network.

“The emergence of web-based communities has revitalised us to consider social problems as issues for social participation and for social creativity,” Kommers says. He adds: “There are now real prospects for online communities to promote human values such as cooperation, altruism, open-mindedness and tolerance.”

Source | Science Daily

Kepler Spacecraft Takes Pulse of Distant Stars: ‘Starquakes’ Yield New Insights About the Size, Age and Evolution of Stars

Thursday, October 28th, 2010

An international cadre of scientists that used data from NASA’s Kepler spacecraft announced Tuesday the detection of stellar oscillations, or “starquakes,” that yield new insights about the size, age and evolution of stars.

The results were presented at a news conference at Aarhus University in Denmark by scientists representing the Kepler Asteroseismic Science Consortium (KASC). The team studied thousands of stars observed by Kepler, releasing what amounts to a roster of some of humanity’s most well-characterized stars.

Analysis of stellar oscillations is similar to how seismologists study earthquakes to probe the Earth’s interior. This branch of science, called asteroseismology, produces measurements of stars that the Kepler science team is eager to have.
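
The KASC pipelines are far more elaborate, but the flavour of the method can be sketched with the standard asteroseismic scaling relations. Below is a minimal illustration using approximate textbook solar reference values; it is not the consortium's actual code:

```python
# Minimal sketch of the standard asteroseismic scaling relations, not the
# KASC pipeline itself. Inputs: nu_max, the frequency of maximum oscillation
# power, and delta_nu, the large frequency separation (both in microhertz),
# plus an effective temperature in kelvin. The solar reference values are
# approximate textbook numbers.
NU_MAX_SUN = 3090.0    # microhertz
DELTA_NU_SUN = 135.1   # microhertz
TEFF_SUN = 5777.0      # kelvin

def radius_in_suns(nu_max, delta_nu, teff):
    return ((nu_max / NU_MAX_SUN)
            * (delta_nu / DELTA_NU_SUN) ** -2
            * (teff / TEFF_SUN) ** 0.5)

def mass_in_suns(nu_max, delta_nu, teff):
    return ((nu_max / NU_MAX_SUN) ** 3
            * (delta_nu / DELTA_NU_SUN) ** -4
            * (teff / TEFF_SUN) ** 1.5)

# A star oscillating at half the solar nu_max with a smaller large separation
# comes out bigger than the sun, as expected for an evolving subgiant.
print(radius_in_suns(1500.0, 75.0, 5600.0))  # ~1.55 solar radii
```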

“Using the unparalleled data provided by Kepler, KASC scientists are quite literally revolutionizing our understanding of stars and their structures,” said Douglas Hudgins, Kepler Program Scientist at NASA Headquarters in Washington. “What’s more, they are doing so at no cost to the American taxpayer. All the KASC scientists are supported by research funding from their home countries. It is a perfect illustration of the tremendous value that our international partners bring to NASA missions.”

In the results presented Tuesday, one oscillating star took center stage: KIC 11026764 has the most accurately known properties of any star in the Kepler field. In fact, few stars in the universe are known to similar accuracy. At an age of 5.94 billion years, it has grown to a little over twice the diameter of the sun and will continue to do so as it transforms into a red giant. The oscillations reveal that this star is powered by hydrogen fusion in a thin shell around a helium-rich core.

“We are just about to enter a new era in stellar astrophysics,” said Thomas Kallinger, lead author on a study of red giant stars and postdoctoral fellow at the Universities of British Columbia and Vienna. “Kepler provides us with data of such good quality that they will change our view of how stars work in detail.”

KASC scientists also reported on the star RR Lyrae. It has been studied for more than 100 years as the first member of an important class of stars used to measure cosmological distances. The brightness, or light wave amplitude, of the star oscillates with a well-known period of about 13.5 hours. Yet during that period, other small cyclic changes in amplitude occur — behavior known as the Blazhko effect. The effect has puzzled astronomers for decades, but thanks to Kepler data, scientists may have a clue as to its origin. Kepler observations revealed an additional oscillation period that had never previously been detected. The oscillation occurs on a time scale twice as long as the 13.5-hour period, and the Kepler data indicate that this period doubling is linked to the Blazhko effect.

“Kepler data ultimately will give us a better understanding of the future of our sun and the evolution of our galaxy as a whole,” said Daniel Huber, lead author on one of the KASC studies.

Launched in March 2009, Kepler was designed to discover Earth-size planets orbiting other stars. The spacecraft uses a huge digital camera, known as a photometer, to continuously monitor the brightness of more than 150,000 stars in its field of view as it orbits the sun. Kepler searches for distant worlds by looking for “transits,” when a planet passes in front of a star, briefly causing it to dim. The amount of dimming reveals the size of the planet compared to the size of the star.
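
That last relationship is simple enough to put in code. The sketch below is an illustration of the geometry only, not part of any Kepler pipeline, and it ignores real-world complications such as limb darkening and noise:

```python
import math

# During a transit the fractional dimming equals the ratio of the planet's
# and star's sky-projected areas, so depth = (Rp / Rs) ** 2.
def planet_star_radius_ratio(transit_depth):
    return math.sqrt(transit_depth)

# A 1% dip implies a planet one-tenth the star's radius -- roughly a
# Jupiter-sized planet crossing a Sun-like star.
print(planet_star_radius_ratio(0.01))  # -> 0.1
```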

Source | Science Daily

The Nature of The Identity – with Reference to Androids

Wednesday, October 27th, 2010
Written By: Alexander MacRae
Date Published: October 20, 2010

The nature of the identity is intimately related to information and information processing.  The importance and the real nature of information is only now being gradually realized.  But the history of the subject goes back a long way.

In ancient Greece, those who studied Nature — the predecessors of our scientists — considered that what they studied, material reality, had two aspects: form and substance.

Until recent times all the emphasis was on substance. Which substance(s) subjected to sufficient stress would transmute into gold; which substances in combination could be triggered into releasing vast amounts of energy.  Money and weapons — the usual Homo Sap stuff.

You take a block of marble — that is substance.  You have a sculptor create a beautiful statue from it — that is form.

The form consists of the shapes imposed by the sculptor; the shapes consist of information.  Now, if you were an unfeeling materialistic bastard you could describe the shapes in terms of equations.  And if you were an utterly depraved unfeeling materialistic bastard you could have a computer compare the sets of equations from many examples to find out what is considered to be beauty.

Dr Foxglove — the Great Maestro of Leipzig — is seated at the concert grand playing on a Steinway (of course) with great verve (as one would expect).  In front of him, under a low light, there is a sheet of paper with black marks — information of some kind — the music for Chopin’s Nocturne Op. 9, no. 2.

Aah!  Wonderful.  Sublime… But all is not as it seems.  Herr Doktor Foxglove thinks he is playing music.  A grand illusion my friend!  You see, the music — it is, how you say — all in the heads of the listeners.

What the Good Doktor is doing, and doing manfully, is operating a wooden acoustic wave generator — albeit very skillfully.  And not just any old wooden acoustic-wave generator,  but a Steinway wooden acoustic wave generator.

There is no music in the physical world.  The acoustic waves are not music.  They are just pressure waves in the atmosphere.  The pressure waves actuate the eardrum.  And that, in turn, actuates a part of the inner ear called the cochlea.  And that, in turn, causes streams of neural impulses to progress up into the higher brain.

Dr Foxglove hits a key on the piano corresponding to 440 acoustic waves per second.  This is replicated in a slightly different form within the inner ear, until it becomes a stream of neural impulses.

But what the listener hears is not 440 waves or 440 neural impulses or 440 anything.  What the listener hears is one thing — a single tone.

The tone is an exact derivative of the pattern of neural impulses.  There are no tones in physical reality.  Tones exist only in the experience of the listener — only in the experience of the observer.

And thanks to some fancy processing, not only will the listener get the illusion that 440 cycles per second is actually a “tone”, but a further illusion is perpetrated that the tone is coming from a particular direction, that what one is hearing is Dr. Foxglove at the Steinway, over there, under the lights… that is where the sound is.

But no, my friend…

What the listener is actually listening to is his eardrums.  He is listening to a derivative of a derivative … of his eardrums rattling.  His eardrums are rattling because someone is operating an acoustic wave generator in the vicinity.  But what he is hearing is pure information.

And as for the music…

A single note — a tone — is neither harmonious nor disharmonious in itself.  It is only harmonious or disharmonious in relation to another note.  Music is derived from ratios — a still further derivative — and ratios are pure information.

Take for example the ratio of 20 kg to 10 kg.  The ratio of 20 kg to 10 kg is not 2 kg. The ratio of 20 kg to 10 kg is 2 — just 2 — pure information. 20 kg/10 kg = 2.
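
To put the same point in musical terms (an illustration added here, not from the original essay), the intervals we hear as consonant are simple, dimensionless frequency ratios:

```python
# Musical intervals are frequency ratios: dimensionless, pure information.
# Just-intonation values are used for simplicity.
a4 = 440.0      # Dr Foxglove's A: 440 pressure waves per second
e5 = 660.0      # the E a perfect fifth above
a5 = 880.0      # the A an octave above

print(e5 / a4)  # 1.5 -> the 3:2 ratio heard as a perfect fifth
print(a5 / a4)  # 2.0 -> the 2:1 ratio heard as an octave
```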

Similarly, we can also show that there is no color in reality.  There are no shapes in reality.  Depth perception is a derivative; just as what one is listening to is the rattling of one’s eardrums, what one is watching is the inside of one’s eyeballs.  One is watching the shuddering impact of photons on one’s retina.

The sensations of sound, of light and color and shapes are all in one’s mind as decodings of neural messages, which, in turn, are derivatives of physical processes. The wonderful aroma coming from the barbecue is all in one’s head.  There are no aromas or tastes in reality.  All are conjurations of the mind.

Like the Old Guy said, all is maya, baby.

The only point that is being made here is that information is too important a subject to be so neglected.  What you are doing here is at the leading edge beyond the leading edge — and in that future information will be a significant factor.

What we called Information Technology (IT) way back in the dim, distant and bewildered early 21st century  will be seen as Computer Technology (CT), which is all it ever was… but there will be a real IT in the future.

Similarly, what has been referred to for too long as Information Science will be seen for what it is: Library Technology.

Now, down to work.

One of the options — the android — is to upload all stored data from a smelly old bio body to a cool Designer Body (DB).  This strategy is based on the unproven but popular belief that one’s identity is contained by one’s memory.

There are two critical points that need to be addressed.  The observer is the cameraman,  not the picture.  Unless you are looking in a mirror or at a film of yourself, then you are the one person who will not appear in your memory.

There will be memories of that favorite holiday place, of your favorite tunes, of the emotions that you felt when… but you will only “appear” in your memories as the point of observation.

You are the cameraman, not the picture.

So, we should view with skepticism the idea that uploaded memory will contain the identity.

If somebody loses their memory, they do not become someone else — hopping and skipping down the street.

‘Hi – I’m Tad Furlong, I’m new in town….’

If somebody loses their memory, they may well say, ‘I do not know my name…’

That doesn’t mean they have become someone else.  What they mean is ‘I cannot remember my name.’  The fact that this perplexes them indicates that it is still the same person.  It is someone who has lost their name.

If a person changes their name, they do not become someone else and nor do they become someone else if they can’t remember their name — or as it is more commonly, more dramatically, and more loosely put, “can’t remember who they are”.

So, what is the identity?

There is the observer — whatever that is — and there are observations.

There are different forms of information — visual, audible, tactile, olfactory — which together form the environment of the observer.  By “projection,” the environment is observed as being external.  The visual image from one eye is compared with that of the other eye to give depth perception.  The sound from one ear is compared with that from the other ear to give surround sound.  You are touched on the arm and immediately the tactile sensation, which actually occurs in the mind, is mapped as though coming from that exact spot on your arm.

You live and have your being in a world of sensation.

This is not to say that the external world does not exist, only that our world is the world “inside”, the place where we hear, see, feel, and taste.

And all those projections are like “vectors” leading out from a projection spot (a locus of projection, the 0,0 spot), the point that is me seeing and me tasting and me hearing and me scenting even though, through the magic of projection, I have the idea that the barbecue smells, that there is music in the piano, that the world is full of color, and that my feet feel cold.

This locus of projection is the “me”; it is the point of observation, the 0,0 reference point.  This, the observer and not the observation, is the identity, the me, the 0,0.

And that 0,0 may be a lot easier to shift than a ton and a half of squashed memories.  Memories of being sick, of being tired, of the garden, of your dog, of the sound of chalk on the blackboard, of the humorless assistant bank manager, of the 1982 Olympics, of Sadie Trenton, of Fred’s tow bar, and so on.

So, if memory ain’t the thing, how do we do it — upload the identity?

This is the first in a series of articles on The Nature of Identity — with Reference to Androids

Source | H+ Magazine

A $1.50 Lens-Free Microscope

Tuesday, October 26th, 2010

Using a $1.50 digital camera sensor, scientists at Caltech have created the simplest and cheapest lens-free microscope yet. Such a device could have many applications, including helping diagnose disease in the developing world, and enabling rapid screening of new drugs.

The best current way to diagnose malaria is for a skilled technician to examine blood samples using a conventional optical microscope. But this is impractical in parts of the world where malaria is common. A simple lens-free imaging device connected to a smart phone or a PDA could automatically diagnose disease. A lensless microscope could also be used for rapid cancer or drug screening, with dozens or hundreds of microscopes working simultaneously.

The Caltech device is remarkably simple. A system of microscopic channels, known as microfluidics, leads a sample across the light-sensing chip, which snaps images in rapid succession as the sample passes over it. Unlike previous iterations, there are no other parts. Earlier versions featured pinhole apertures and an electrokinetic drive that used an electric field to move cells in a fixed orientation. In the new device, this complexity is eliminated thanks to a clever design and more sophisticated software algorithms. Samples flow through the channel because of a tiny difference in pressure between one end of the chip and the other. The device’s makers call it a subpixel resolving optofluidic microscope, or SROFM.

“The advantage here is that it’s simpler than their previous approaches,” says David Erickson, a microfluidics expert at Cornell University.

Cells tend to roll end over end as they pass through a microfluidic channel. The new device uses this behavior to its advantage by capturing images and producing a video. By imaging a cell from every angle, a clinician can determine its volume, which can be useful when looking for cancer cells, for example. Changhuei Yang, who leads the lab where the microscope was developed, says this means samples, such as blood, do not have to be prepared on slides beforehand.

The current resolution of the SROFM is 0.75 microns, which is comparable to a light microscope at 20 times magnification, says Guoan Zheng, lead author of a recent paper on the work, published in the journal Lab on a Chip.

The sensor has pixels that are 3.2 microns on each side. A “super resolution” algorithm assembles multiple images (50 for each high-resolution image) to create an enhanced image — as if the sensor had pixels 0.32 microns in size. However, super-resolution techniques can only distinguish features that are separated by at least one pixel, meaning the final resolution must be at least twice the pixel size. This is why an effective pixel size of 0.32 microns yields a resolution of only about 0.75 microns.
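
For readers curious what “assembling” 50 frames involves, here is a minimal NumPy sketch of generic shift-and-add super-resolution. It assumes the subpixel offset of each frame is already known (in the SROFM, the cell's steady flow across the sensor supplies that information); the published algorithm is more sophisticated:

```python
import numpy as np

def shift_and_add(frames, shifts, factor=10):
    """Fuse low-res frames with known subpixel shifts onto a finer grid.

    frames : list of (h, w) arrays -- raw sensor images
    shifts : list of (dy, dx) offsets in low-res pixels, each in [0, 1)
    factor : upsampling factor; 10 maps 3.2-micron pixels to 0.32 microns
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor + factor, w * factor + factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        oy, ox = round(dy * factor), round(dx * factor)  # shift on fine grid
        up = np.kron(frame, np.ones((factor, factor)))   # block-upsample
        acc[oy:oy + h * factor, ox:ox + w * factor] += up
        cnt[oy:oy + h * factor, ox:ox + w * factor] += 1
    return acc / np.maximum(cnt, 1)  # average where frames overlap
```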

Zheng’s technique uses only a small portion of the chip, allowing him to capture cells at a relatively high frame rate of 300 frames per second. This yields a super-resolution “movie” of a cell at six frames per second (300 raw frames divided by the 50 needed for each reconstructed frame).

Using a higher-resolution CMOS sensor should allow an even better ultimate resolution, says Seung Ah Lee, another collaborator on the project. Lee wants to get the resolution up to the equivalent of 40x magnification, so that the technique can be used for diagnosis of malaria via automated recognition of abnormal blood cells.

Aydogan Ozcan, a professor at UCLA who is developing a competing approach, says that Zheng’s work is “a valuable advance for optofluidic microscopy,” in that this system is simpler, offers higher resolution, and is easier to use than previous microscopes. However, Ozcan says that the technique has limitations.

The microfluidic channel must be quite small, says Ozcan, which means the approach can’t be applied to particles that might vary greatly in size, and the channel must be built to accommodate the largest particle that might flow through it. Ozcan’s own lensless microscope does not use microfluidic channels, and instead captures a “hologram” of the sample by interpreting the interference pattern of an LED lamp shining through it. This method has no such limitations.

“From my perspective, these are complementary approaches,” says Ozcan, whose ultimate aim is cheap, cell-phone based medical diagnostic tools for the developing world.

Source | Technology Review

Nissan Solar Tree

Sunday, October 24th, 2010

At the CEATEC Japan 2010 trade show now being held in Chiba (Oct 5-9), Nissan is exhibiting a futuristic model of a solar-powered wireless charging station for electric vehicles.

The envisioned tree-shaped charging station — called the “Solar Tree” — stands 12 meters (39 ft) tall and has three translucent round solar panels that follow the sun across the sky. With an expected conversion efficiency of 30%, the three solar panels together can generate 20 kilowatts of power. At the base of each tree is a clover leaf-shaped wireless charging pad that can recharge batteries from a short distance, without the use of cables or plugs.

As part of the exhibition, Nissan showed off the latest version of its EPORO robot car, which has been outfitted with a wireless power system. In addition to recharging itself under a Solar Tree, the robot can also repower itself on the go by receiving electrical energy via charging lanes on the road.

Solar Trees can be used individually as small-scale charging stations in urban areas, or they can be grouped into forests to produce energy on the scale of power plants. According to Nissan’s design, a forest of 1,000 Solar Trees will be able to provide electricity for 7,000 households.
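
A quick sanity check on those figures (keeping in mind that 20 kilowatts is peak output, so the average delivered power will be lower once nights and weather are factored in):

```python
# Nissan's numbers: 1,000 trees at 20 kW each, powering 7,000 households.
trees = 1000
kw_per_tree = 20.0
households = 7000
print(trees * kw_per_tree / households)  # ~2.86 kW of peak capacity each
```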

In addition to providing power, Solar Trees can provide some relief from the heat in summer. The translucent solar panels offer protection from UV light, while fine mist emitted from the edges of the panels works to reduce the temperature in the immediate vicinity.

Source | Pink Tentacle

Dance of the HRP-4C Cybernetic Human

Sunday, October 24th, 2010

Visitors to the Digital Content Expo in Tokyo last weekend were treated to a choreographed dance routine featuring AIST’s feminine HRP-4C robot and four humans.

The performance, called “Dance Robot LIVE! – HRP-4C Cybernetic Human,” is the culmination of a year-long effort to teach the humanoid to dance. The routine was produced by renowned dancer/choreographer SAM-san (a member of the popular music group TRF who has worked with numerous well-known artists like SMAP and BoA), and the lip-synced song is a Vocaloid version of “Deatta Koro no Yō ni” by Kaori Mochida (Every Little Thing).

Source | Pink Tentacle

Hundred Year Starship: An Apollo-like Push to the Stars?

Sunday, October 24th, 2010

“We choose to go to the moon,” President Kennedy famously said in 1962. Today, in 2010, NASA Ames Director Simon “Pete” Worden says let’s go to the stars.

But, to get there, let’s go to the moons of Mars first, said Worden as he announced a DARPA-funded NASA Ames program to kick-start a “hundred year starship” effort with $1 million in seed money from DARPA and $100,000 from NASA. The announcement didn’t come from the White House, however. Worden revealed the new program at the Long Conversation in the San Francisco Bay Area, an “epic relay” of one-to-one conversations held in conjunction with a performance of the Zen-like Longplayer, a 1,000-year-long musical composition that has been playing since 1999.

While the initial investment is not likely to result in a new Apollo-like program to send a human to Alpha Centauri in ten (or even a hundred) years, it just might attract the attention of billionaire private investors such as Larry Page of Google to really kick-start it. KurzweilAI quotes Worden: “I think we’ll be on the moons of Mars by 2030 or so. Larry [Page] asked me a couple weeks ago how much it would cost to send people one way to Mars and I told him $10 billion, and his response was, ‘Can you get it down to 1 or 2 billion?’ So now we’re starting to get a little argument over the price.”

Today’s propulsion systems don’t quite provide the fictional Starship Enterprise’s warp drive capabilities – creating a subspace bubble to envelop the starship, distorting the local spacetime continuum, and moving the starship at velocities (warp factors) that exceed the speed of light. H+ talked with former NASA propulsion physicist Marc Millis earlier in the year to ask about the feasibility of such technologies. He says our current knowledge of gravity and faster-than-light physics is highly speculative and fraught with paradox. He also says, however, that there are numerous propulsion options for getting to Mars with today’s technology: “…depending upon how much you want to take with you… there’s either nuclear-thermal propulsion or variations where the nuclear reactor is part of the actual rocket engine.”

Nuclear propulsion, in fact, may be used in conjunction with solar and electric propulsion to build a hundred year starship. But if power can be beamed to the ship using microwave thermal propulsion, then “… you don’t have to carry all the fuel; and then you use that [microwave] energy… to heat a propellant,” says Worden. H+ has been in contact with several companies involved in developing electric power beaming using either microwaves or lasers. Dmitriy Tseliakhovich, founder of Escape Dynamics LLC, wants to go a step further. Collaborating with the research groups of Professor Harry Atwater at Caltech and Dr. Kevin Parkin of Carnegie Mellon University at NASA Ames Research Center, as well as with Singularity University, Autodesk, Microwave Sciences and other companies, Tseliakhovich wants to develop a prototype microwave thermal thruster using beamed propulsion. With an infusion of capital based on the prototype, they hope to drop the cost of space access by more than “an order of magnitude” and “finally open space for medium- and small-size businesses.”

If Stephen Hawking is right, then we humans must colonize space in order to survive.

It will likely take the will and vision of a John Kennedy to fully initiate the 21st-century equivalent of a “flags and footprints” Apollo program to the stars – not very likely given today’s political climate. But it may be possible that a NASA-led but venture-funded ship similar to Virgin Galactic’s SpaceShipTwo could be our ticket to a new home in the sun.

Source | H+ Magazine

DoCoMo Shows Prototype Augmented Reality Display

Sunday, October 24th, 2010

NTT DoCoMo has developed a tiny display that clips onto a pair of eyeglasses and provides navigation services or information about local shops.

The prototype system, called AR Walker, includes a gyro sensor that can detect which way the wearer is facing to provide directions. It connects wirelessly to a mobile phone, which runs the software and provides the GPS data.
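
In outline, the software's job is straightforward: show an arrow at the bearing of the destination relative to the wearer's heading. The sketch below is our own illustration of that geometry; the function names are hypothetical and nothing here is DoCoMo's code:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    # Initial great-circle bearing from point 1 to point 2, in degrees.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360.0

def arrow_angle(heading_deg, lat1, lon1, lat2, lon2):
    # Angle to render: 0 = straight ahead, 90 = turn right. The heading
    # would come from the glasses' gyro, the positions from the phone's GPS.
    return (bearing_deg(lat1, lon1, lat2, lon2) - heading_deg) % 360.0
```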

The research is being done with Olympus. The system is currently intended only for use while walking, and only for use in Japan, but Ori said it might one day be available overseas. A potential use would be at museums or other tourist attractions.

Source | PC World

Body organs can send status updates to your cellphone

Sunday, October 24th, 2010

For cardiac patients such as myself, too much excitement can be a shocking experience. If my heart rate gets too high the implanted defibrillator in my chest can think I’m having a heart attack and give me a friendly remedial shock. But such nasty surprises could soon become less of a concern for people like me – by giving our hearts their very own IP addresses.

Dutch research organisation IMEC, based in Eindhoven, this week demonstrated a new type of wireless body area network (BAN). Dubbed the Human++ BAN platform, the system converts IMEC’s ultra-low-power electrocardiogram sensors into wireless nodes in a short-range network, transmitting physiological data to a hub – the patient’s cellphone. From there, the readings can be forwarded to doctors via a Wi-Fi or 3G connection. They can also be displayed on the phone or sound an alarm when things are about to go wrong, giving patients like me a chance to try to slow our heart rates and avoid an unnecessary shock.

Julien Penders, who developed the system, says it can also work with other low-power medical sensors, such as electroencephalograms (EEGs) to monitor neurological conditions or electromyograms to detect neuromuscular diseases. Besides helping those already diagnosed with chronic conditions, BANs could be used by people at risk of developing medical problems – the so-called “worried well” – or by fitness enthusiasts and athletes who want to keep tabs on their physiological processes during training.

Tied to an Android

IMEC’s technology is not the first BAN, but integrates better than earlier versions with the gadgets that many people carry around with them. IMEC has created a dongle that plugs into the standard SD memory card interface of a cellphone to stream data from the sensors in real time and allow the phone to reconfigure the sampling frequency of sensors on the fly. The associated software runs on Google’s Android cellphone operating system.

However, IMEC has eschewed common short-range wireless standards such as Bluetooth in favour of the nRF24L01+ radio designed by Nordic Semiconductor of Trondheim, Norway. “The problem with Bluetooth is that it will increase the power consumption on the sensor side,” says Penders. Using the Nordic system, IMEC’s sensors can run continuously, transmitting every 100 milliseconds, for up to seven days between recharges – a Bluetooth system would barely last a day, Penders says.
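
A rough duty-cycle calculation shows why the radio's power draw dominates battery life. The figures below are invented for illustration; they are not IMEC or Nordic specifications:

```python
# Back-of-the-envelope battery life for a sensor whose radio wakes for a
# short burst every 100 ms and sleeps in between.
def battery_life_days(capacity_mah, sleep_ua, active_ma, burst_ms, period_ms):
    duty = burst_ms / period_ms                      # fraction of time on air
    avg_ma = active_ma * duty + (sleep_ua / 1000.0) * (1.0 - duty)
    return capacity_mah / avg_ma / 24.0

# A 100 mAh cell, 5 uA sleep current and 15 mA bursts of 4 ms every 100 ms
# land close to the seven days quoted above.
print(battery_life_days(100.0, 5.0, 15.0, 4.0, 100.0))  # ~6.9 days
```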

In the current design, the ECG electrodes are connected to a small necklace that contains the transmitter and battery. The next step will be to use an ultra-low-power radio transmitter, still in development at IMEC, to improve the stamina and portability of the sensors.

With around 18 million people in the UK living with chronic disease, “telehealth” monitoring like this is the way things are going, says Mike Knapton, associate medical director at the British Heart Foundation. Devices already exist that allow people with pacemakers and defibrillators to send telemetry from their implants via a landline to doctors. But using mobile phones would be the natural next step, he says.

Penders presented the work at the Wireless Health Conference in San Diego, California, this week.

Source | New Scientist

Exoskeleton helps the paralysed walk again

Sunday, October 24th, 2010

A new exoskeleton called eLEGS is being readied for clinical trials by Berkeley Bionics. It is designed specifically as a rehabilitation device to help restore walking function to people with spinal cord injuries, and to improve blood circulation and digestion.

The suit consists of a backpack-mounted controller connected to robotic legs. It is driven by four motors, one for each hip and knee. Berkeley Bionics claims eLEGS has the largest range of knee flexion of any exoskeleton, a feature they say offers a more natural gait than other exoskeletons.

Source | New Scientist

Google Cars Drive Themselves, in Traffic

Sunday, October 24th, 2010

Google has stated on its official blog that it has developed technology for cars that can drive themselves.

“Our automated cars, manned by trained operators, just drove from our Mountain View campus to our Santa Monica office and on to Hollywood Boulevard,” said Sebastian Thrun, Distinguished Software Engineer at Google and also Professor of Computer Science and director of the Artificial Intelligence Laboratory at Stanford University.

“We’ve driven down Lombard Street, crossed the Golden Gate bridge, navigated the Pacific Coast Highway, and even made it all the way around Lake Tahoe. All in all, our self-driving cars have logged over 140,000 miles. We think this is a first in robotics research.” (Yes, they also have a trained safety driver behind the wheel who can “take over as easily as one disengages cruise control,” and a trained software operator in the passenger seat to monitor the software.)

With help from the best engineers from the DARPA Challenges, the automated cars use video cameras, radar sensors and a laser range finder to “see” other traffic, as well as detailed maps (which they collect using manually driven vehicles) to navigate the road ahead. “This is all made possible by Google’s data centers, which can process the enormous amounts of information gathered by our cars when mapping their terrain.”


Microchip technology rapidly identifies compounds for nerve-cell regeneration

Saturday, October 23rd, 2010

Engineers at MIT have used a new microchip technology to rapidly test potential drugs on tiny worms called C. elegans, which are often used in studies of the nervous system. Using the new technology, associate professor Mehmet Fatih Yanik and his colleagues rapidly performed laser surgery, delivered drugs, and imaged the resulting neuron regrowth in thousands of live animals.

“Our technology helps researchers rapidly identify promising chemicals that can then be tested in mammals and perhaps even in humans,” says Yanik. Using this technique, the researchers have already identified one promising class of neuronal regenerators.

Scientists have long sought the ability to regenerate nerve cells, or neurons, which could offer a new way to treat spinal-cord damage as well as neurological diseases such as Alzheimer’s or Parkinson’s. Many chemicals can regenerate neurons grown in Petri dishes in the lab, but it’s difficult and time-consuming to identify those chemicals that work in live animals, which is critical for developing drugs for humans. The paper will appear in the online edition of the Proceedings of the National Academy of Sciences the week of Oct. 11.

C. elegans is a useful model organism for neuron regeneration because it is optically transparent, and its entire neural network is known. Yanik and colleagues had previously developed a femtosecond laser nanosurgery technique that allowed them to cut and observe regeneration of individual axons — long extensions of neurons that send signals to neighboring cells. Their femtosecond laser nanosurgery technique uses tightly-focused infrared laser pulses that are shorter than a billionth of a second. This allows the laser to penetrate deep into the animals without damaging the tissues on its way, until the laser beam hits its final target.

In the PNAS study, the researchers used their microchip technology to rapidly cut the axons of single neurons that sense touch. Moving single worms from their incubation well to an imaging microchip, immobilizing them and performing laser surgery takes only about 20 seconds, which allows thousands of surgeries to be performed in a short period of time.
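
Those 20 seconds per worm put the throughput claim in perspective; the following back-of-the-envelope calculation is ours, not a figure from the paper:

```python
# If handling one worm takes about 20 seconds, a chip running around the
# clock could in principle process thousands of animals per day, versus
# roughly 100 per day with classical manual handling.
seconds_per_worm = 20
worms_per_day = 24 * 60 * 60 // seconds_per_worm
print(worms_per_day)  # 4320
```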

After laser surgery, each worm is returned to its incubation well and treated with a different chemical compound. C. elegans neurons can partially regrow without help, which allowed Yanik’s team to look for drugs that can either enhance or inhibit this regrowth. After two or three days, the researchers imaged each worm to see if the drugs had any effect.

The MIT team found that a compound called staurosporine, which inhibits certain enzymes known as PKC kinases, had the strongest inhibitory effect. In a follow-up study, they tested some compounds that activate these kinases, and found that one of them stimulated regeneration of neurons significantly. Some of Yanik’s students are now testing those compounds on neurons derived from human embryonic stem cells.

The new technology represents a significant advance in the level of automation that can be achieved in C. elegans studies, says Michael Bastiani, professor of biology at the University of Utah. “Using ‘classical’ handling techniques you can cut and assay at most 100 animals per day,” he says. “Yanik’s automated system seems like it could increase throughput by at least 10-fold over that number.” He points out that one potential limitation of the system is that it might not pick up the effects of neural regenerators that can’t penetrate the worm’s cuticle, a thick outer layer that surrounds the skin.

However, chemicals can still be taken up through the worms’ digestive tract, which is an important test for checking whether chemicals would work on live animals, says Yanik.

This microchip technology can also be used to screen compounds for their effects on other diseases such as Alzheimer’s, Parkinson’s and ALS, says Yanik.

Source | MIT News

World’s First Robot Census

Saturday, October 23rd, 2010

When a Carnegie Mellon student decided to count all the robots on campus, she had no idea it was the spark that would start a conflagration.

According to the best estimate of Heather “Marilyn Monrobot” Knight, a graduate student at Carnegie Mellon University, there are almost certainly more robots associated with CMU than there are people working in the university’s robotics program. (One such robot is Keepon.)

“Which is insanity,” says Knight, referring not to the number of robots — CMU is one of the few universities in the U.S. to offer a degree in robotics — but to the number of people on campus whose entire working existence is devoted to creating them. “There are 599 people [in the robotics program], including 100 or 120 people working in a government lab that’s off campus,” she adds. “I’m not sure if they’re allowed to tell me about their robots — the estimate I’ve heard is between 100 and 300.” (The lab in question does not do classified work, so it’s likely to be included in future editions of the census.)

In total, Knight can officially confirm the existence of 547 robots on campus (that number doesn’t include the population in the secret government lab) and that’s just the beginning: as word spreads through the Maker community and the effort’s Twitter feed, she plans to take the world’s first robot census as far as it will go, eventually canvassing as wide a swath of the home-brew and university robotics efforts as possible.

Just by asking the keepers of the world’s automatons to submit their research subjects, Knight is posing a challenging question: What is a robot?

“Everyone agrees there are 3 minimum requirements,” says Knight. “These are the minimum, but not sufficient, requirements: they must act in the world, sense the world, and they need to have computation.”

The problem with this traditional definition of a robot is that it encompasses almost all forms of automation, including thermostats and washing machines. This has led some thinkers to add to the list of characteristics things like agency, autonomy and embodiment.
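
To see why the three-part definition sweeps in household automation, consider a thermostat written as a minimal sense-compute-act loop (a toy illustration, not anything from the census):

```python
import random

def sense():
    # Read a (simulated) temperature sensor, in degrees Celsius.
    return 18.0 + random.random() * 8.0

def compute(temp_c, setpoint=21.0):
    # Computation: decide whether the room needs heating.
    return temp_c < setpoint

def act(heater_on):
    # Act on the world through an actuator (here, just printed).
    print("heater", "on" if heater_on else "off")

for _ in range(3):  # the control loop: sense -> compute -> act
    act(compute(sense()))
```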

The robot census doesn’t answer these questions, but it does force its participants to engage with them.

“It’s kind of like, [what constitutes a robot] is not really an argument that’s worth having — but it is a discussion that’s worth having,” says Knight.

Knight’s inspiration for initiating the census is her own work in integrating robots into people’s everyday lives. Westerners, especially, tend to harbor negative associations with robots. “It’s hard to compete with the Terminator movie,” says Knight.

By showcasing the incredible variety and utility of the robots at Carnegie Mellon and elsewhere, Knight thinks she might be able to punch through people’s preconceived notions (and pave the way to getting robots into the home, where they can do the most damage once they’re activated by SkyNet’s central hive mind).

“Every time I give a talk I have people from age 8 to 50 that say they or their child wants to know how to get into robotics,” says Knight. “There’s so much interest and it’s about figuring out where those applications are, not just in theory but in real life.”

Source | Technology Review

New technologies confuse reality and fiction: Pope

Saturday, October 23rd, 2010

ROME – Pope Benedict XVI said on Thursday that the media’s increasing reliance on images, fuelled by the endless development of new technologies, risked confusing real life with virtual reality.

“New technologies and the progress they bring can make it impossible to distinguish truth from illusion and can lead to confusion between reality and virtual reality,” the pope said.

“The image can also become independent from reality, it can give birth to a virtual world, with various consequences — above all the risk of indifference towards real life,” he said.

He said the use of new technologies should set off “an alarm bell.”

Benedict’s comments came in a speech to participants at a world congress of Catholic media, organised by the Pontifical Council for Social Communications.

Computer beats human at Japanese chess for first time

Saturday, October 23rd, 2010

A computer has beaten a human at shogi, otherwise known as Japanese chess, for the first time. No big deal, you might think. After all, computers have been beating humans at western chess for years, and when IBM’s Deep Blue beat Garry Kasparov in 1997, it was greeted in some quarters as if computers were about to overthrow humanity.

That hasn’t happened yet, but then western chess is a relatively simple game, with only about 10^123 possible games that can be played out. Shogi is a bit more complex, offering about 10^224 possible games.
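
Taking the article's estimates at face value, the gap between the two game trees is itself astronomical:

```python
import math

chess = 10 ** 123   # rough count of possible western chess games
shogi = 10 ** 224   # rough count of possible shogi games
print(math.log10(shogi // chess))  # 101.0 -> shogi's tree is ~10^101 times larger
```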

The Mainichi Daily News reports that top women’s shogi player Ichiyo Shimizu took part in a match staged at the University of Tokyo, playing against a computer called Akara 2010. Akara is apparently a Buddhist term meaning 10^224, the newspaper reports, and the system beat Shimizu in six hours, over the course of 86 moves.

Japan’s national broadcaster, NHK, reported that Akara “aggressively pursued Shimizu from the beginning”. It’s the first time a computer has beaten a professional human player.

The Japan Shogi Association, incidentally, seems to have a deep fear of computers beating humans. In 2005, it introduced a ban on professional members playing computers without permission, and Shimizu’s defeat was the first since a simpler computer system was beaten by a (male) champion, Akira Watanabe, in 2007.

Perhaps the association doesn’t mind so much if a woman is beaten: NHK reports that the JSA will conduct an in-depth analysis of the match before it decides whether to allow the software to challenge a higher-ranking male professional player. Meanwhile, humans will have to face up to more flexible computers, capable of playing more than just one kind of game.

And IBM has now developed Watson, a computer designed to beat humans at the game show Jeopardy. Watson, says IBM, is “designed to rival the human mind’s ability to understand the actual meaning behind words, distinguish between relevant and irrelevant content, and ultimately, demonstrate confidence to deliver precise final answers”. IBM says it has improved artificial intelligence enough that Watson will be able to challenge Jeopardy champions, and it will put that boast to the test soon, says The New York Times.

I’ll leave you with these wise and telling words from the defeated Shimizu: “It made no eccentric moves, and from partway through it felt like I was playing against a human,” Shimizu told the Mainichi Daily News. “I hope humans and computers will become stronger in the future through friendly competition.”

Source | New Scientist