Archive for November, 2010

A Brief History of Moore’s Law and The Next Generation of Computer Chips & Semiconductors

Saturday, November 27th, 2010

Super-powerful desktop computers, video game systems, cars, iPads, iPods, tablet computers, cellular phones, microwave ovens, high-def television… Most of the luxuries we enjoy in our daily lives are the result of tremendous advances in computing power, made possible by the development of the transistor.

The first patent for the transistor was filed in Canada in 1925 by Julius Edgar Lilienfeld; this patent, however, did not include any information about devices that would actually be built using the technology. Later, in 1934, the German inventor Oskar Heil patented a similar device, but it wasn’t until 1947 that John Bardeen and Walter Brattain at Bell Telephone Labs produced the first point-contact transistor. During their initial testing phase, they produced a few of them and assembled an audio amplifier, which was later presented to various Bell Labs executives. What impressed the executives more than anything else was that the transistor didn’t need time to warm up, as its predecessor, the vacuum tube, did. People immediately saw the potential of the transistor for computing. The original computers of the late 1940s were gigantic, with some taking up entire rooms; they were assembled from more than 10,000 vacuum tubes and consumed a great deal of energy. Several years later, in 1954, Texas Instruments produced the first silicon transistor, and in 1956 Bardeen and Brattain won the Nobel Prize in Physics along with William Shockley, who had also done critically important work on the transistor.

Today, trillions of transistors are produced each year, and the transistor is considered one of the greatest technological achievements of the 20th century. The number of transistors on an integrated circuit has been doubling approximately every two years, a rate that has held for more than half a century. The trend was first described by Intel co-founder Gordon Moore in 1965 and came to be known as “Moore’s Law”; the semiconductor industry now uses it as a guide for long-term planning and for setting R&D targets. But it’s likely that our ability to keep doubling computing power this way will eventually break down.
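
As a rough illustration of what that doubling rate implies, here is a minimal sketch; the 1971 starting point (the roughly 2,300-transistor Intel 4004) and the idealized two-year doubling period are illustrative assumptions, not figures from this article.

```python
# Illustrative only: project a transistor count forward under an idealized
# two-year doubling period (Moore's Law). Starting figures are assumptions.

def projected_transistor_count(start_year, start_count, target_year, doubling_period=2):
    """Return the projected count after (target_year - start_year) years."""
    doublings = (target_year - start_year) / doubling_period
    return start_count * 2 ** doublings

# A chip with ~2,300 transistors in 1971, projected to 2010:
print(round(projected_transistor_count(1971, 2300, 2010)))  # on the order of a billion
```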

For years, we have been hearing announcements from chip makers stating that they have figured out new ways to shrink the size of transistors. But in truth we are simply running out of space to work with. The question is: how far can Moore’s Law go? We don’t know for sure. Microchips are currently patterned using ultraviolet light, and it’s this etching process that allows us to cram more and more transistors onto a chip. Once we start hitting layers and components that are only about five atoms thick, the Heisenberg Uncertainty Principle kicks in and we can no longer be sure where an electron is. Most likely, the electrons in such a small transistor would leak out, causing the circuit to short. There are also problems with heat, ultimately caused by the increased power density. Some have suggested we could use X-rays instead of ultraviolet light to etch the chip, but while X-rays can pattern smaller and smaller components, their energy is also proportionally larger, causing them to blast right through the silicon.

The other question is what steps we will take to find a suitable replacement for silicon when we hit that tipping point. We are of course looking at the development of quantum computers, molecular computers, protein computers, DNA computers, and even optical computers. If we are creating circuits the size of atoms, then why not compute with atoms themselves? This is now the goal, but there are enormous roadblocks to overcome. First of all, molecular computers are so small that you can’t even see them: how do you wire up something so small? The other problem is finding a viable way to mass-produce them. There is a great deal of talk about quantum computers right now, but there are still hurdles to overcome, including impurities, vibrations and decoherence. Every time we’ve looked at one of these exotic architectures to replace silicon, we’ve found a problem. That doesn’t mean we won’t make tremendous advances with these different computing architectures or figure out a way to extend Moore’s Law beyond 2020; we just don’t quite know how yet.

So let’s look at some of the things that large chip makers, labs and think tanks are currently working on as they try to find a suitable replacement for silicon and take computing to the next level.

Some 2% of the world’s total energy is consumed by building and running computer equipment, and a pioneering research effort could shrink the world’s most powerful supercomputer processors to the size of a sugar cube, IBM scientists say.

So I think the next decade of computing advancements is going to bring us gadgets and devices that today we only dream of. What technology will dominate the Post Silicon Era? What will replace Silicon Valley? No one knows. But nothing less than the wealth of nations and the future of civilization may rest on this question.

Source | Big Think

A Step Towards Idoru?

Thursday, November 25th, 2010

Pop princess Hatsune Miku is storming the music scene.

With her long cerulean pigtails and her part-schoolgirl, part-spy outfit, she’s easy on the eyes. Yes, her voice sounds like it might have gone through a little – OK, a lot – of studio magic. Legions of screaming fans and the requisite fan sites? She’s got ’em.

And, like many of her hot young singer peers, Miku is extremely, proudly fake. Like, 3-D hologram fake.

Miku is a singing, digital avatar created by Crypton Future Media that customers can purchase and then program to perform any song on a computer.

Crypton uses voices recorded by actors and runs them through Yamaha Corp.’s Vocaloid software – marketed as “a singer in a box.” The result: a synthesized songstress that sounds far better than you ever have in your shower.

Crypton has even set up a record label called KarenT, with its own YouTube channel. The Vocaloidism blog has more details about the software.

A few months ago, a 3-D projection of Miku pranced around several stadium stages as part of a concert tour, where capacity crowds waved their glow sticks and sang along.  Here’s the starlet performing a jingle titled, appropriately, “World Is Mine.”

The Blu-ray and DVD recordings of those events were recently released, according to SingularityHub, which also has more videos.

The virtual diva’s albums have also topped the Japanese charts. She’s on Facebook. We’ve seen living, breathing musicians at the Hollywood Bowl get less love.

It all reminds us a bit of S1m0ne. Remember her? She’s the sultry actress who captivated adoring audiences in the eponymous 2002 film. She was also completely computer-generated by Al Pacino’s character.

Somewhere, we bet she’s a little bit jealous.

Source | New York Times

With Kinect Controller, Hackers Take Liberties

Thursday, November 25th, 2010

When Oliver Kreylos, a computer scientist, heard about the capabilities of Microsoft’s new Kinect gaming device, he couldn’t wait to get his hands on it. “I dropped everything, rode my bike to the closest game store and bought one,” he said.

But he had no interest in playing video games with the Kinect, which is meant to be plugged into an Xbox and allows players to control the action onscreen by moving their bodies.

Mr. Kreylos, who specializes in virtual reality and 3-D graphics, had just learned that he could download some software and use the device with his computer instead. He was soon using it to create “holographic” video images that can be rotated on a computer screen. A video he posted on YouTube last week caused jaws to drop and has been watched 1.3 million times.

Mr. Kreylos is part of a crowd of programmers, roboticists and tinkerers who are getting the Kinect to do things it was not really meant to do. The attraction of the device is that it is outfitted with cameras, sensors and software that let it detect movement, depth, and the shape and position of the human body.

Companies respond to this kind of experimentation with their products in different ways — and Microsoft has had two very different responses since the Kinect was released on Nov. 4. It initially made vague threats about working with law enforcement to stop “product tampering.” But by last week, it was embracing the benevolent hackers.

“Anytime there is engagement and excitement around our technology, we see that as a good thing,” said Craig Davidson, senior director for Xbox Live at Microsoft. “It’s naïve to think that any new technology that comes out won’t have a group that tinkers with it.”

Microsoft and other companies would be wise to keep an eye on this kind of outside innovation and consider wrapping some of the creative advances into future products, said Loren Johnson, an analyst at Frost & Sullivan who follows digital media and consumer electronics.

“These adaptations could be a great benefit to their own bottom line,” he said. “It’s a trend that is undeniable, using public resources to improve on products, whether it be the Kinect or anything else.”

Microsoft invested hundreds of millions of dollars in Kinect in the hopes of wooing a broader audience of gamers, like those who enjoy using the motion-based controllers of the Nintendo Wii.

Word of the technical sophistication and low price of the device spread quickly in tech circles.

Building a device with the Kinect’s capabilities would require “thousands of dollars, multiple Ph.D.’s and dozens of months,” said Limor Fried, an engineer and founder of Adafruit Industries, a store in New York that sells supplies for experimental hardware projects. “You can just buy this at any game store for $150.”

On the day the Kinect went on sale, Ms. Fried and Phillip Torrone, a designer and senior editor of Make magazine, which features do-it-yourself technology projects, announced a $3,000 cash bounty for anyone who created and released free software allowing the Kinect to be used with a computer instead of an Xbox.

Microsoft quickly gave the contest a thumbs-down. In an interview with CNet News, a company representative said that it did not “condone the modification of its products” and that it would “work closely with law enforcement and product safety groups to keep Kinect tamper-resistant.”

That is not much different from the approach taken by Apple, which has released software upgrades for its iPhone operating system in an effort to block any unsanctioned hacks or software running on its devices.

But other companies whose products have been popular targets for tinkering have actively encouraged it. One example is iRobot, the company that makes the Roomba, a small robotic vacuum cleaner. That product was so popular with robotics enthusiasts that the company began selling the iRobot Create, a programmable machine with no dusting capabilities.

Mr. Davidson said Microsoft now had no concerns about the Kinect-hacking fan club, but he said the company would be monitoring developments. A modification that compromises the Xbox system, violates the company’s terms of service or “degrades the experience for everyone is not something we want,” he said.

Other creative uses of the Kinect involve drawing 3-D doodles in the air and then rotating them with a nudge of the hand, and manipulating colorful animated puppets on a computer screen. Most, if not all, of the prototypes were built using the open-source code released as a result of the contest sponsored by Ms. Fried and Mr. Torrone, which was won by Hector Martin, a 20-year-old engineering student in Spain.
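
To give a sense of how low the barrier to this kind of tinkering is, here is a minimal sketch that grabs a single depth frame from a Kinect on a PC. It assumes the open-source libfreenect driver (the OpenKinect project that grew out of the bounty-winning code) and its Python bindings are installed and a Kinect is plugged in; it is an illustration, not code from any of the projects described here.

```python
# Minimal sketch: read one depth frame and one color frame from a Kinect
# using the open-source libfreenect Python bindings (assumed installed).
import freenect
import numpy as np

depth, _ = freenect.sync_get_depth()   # 640x480 array of raw depth readings
rgb, _ = freenect.sync_get_video()     # matching 640x480x3 color frame

print(depth.shape, rgb.shape)
print("nearest raw depth value:", int(np.min(depth)))
```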

The KinectBot, cobbled together in a weekend by Philipp Robbel, a Ph.D. candidate at the Massachusetts Institute of Technology, combines the Kinect and an iRobot Create. It uses the Kinect’s sensors to detect humans, respond to gesture and voice commands, and generate 3-D maps of what it is seeing as it rolls through a room.

Mr. Robbel said the KinectBot offered a small glimpse into the future of machines that could aid in the search for survivors after a natural disaster.

“This is only the tip of the iceberg,” he said of the wave of Kinect experimentation. “We are going to see an exponential number of videos and tests over the coming weeks and months as more people get their hands on this device.”

Toying around with the Kinect could go beyond being a weekend hobby. It could potentially lead to a job. In late 2007, Johnny Lee, then a graduate student at Carnegie Mellon, was so taken by the Wii that he rigged a system that would allow it to track his head movements and adjust the screen perspective accordingly.

A video of Mr. Lee demonstrating the technology was a hit on YouTube, as were his videos of other Wii-related projects. By June 2008, he had a job at Microsoft as part of the core team working on the Kinect software that distinguishes between players and parts of the body.

“The Wii videos made me much more visible to the products people at Xbox,” Mr. Lee said. “They were that much more interested in me because of the videos.”

Mr. Lee said he was “very happy” to see the response the Kinect was getting among people much like himself. “I’m glad they are inspired and that they like the technology,” he said. “I think they’ll be able to do really cool things with it.”

Source | New York Times

How to Train Your Own Brain

Thursday, November 25th, 2010

Technology might not be advanced enough yet to let people read someone else’s mind, but researchers are at least inching closer to helping people to read and control their own. In a study presented last week at the Society for Neuroscience meeting in San Diego, scientists used a combination of brain-scanning and feedback techniques to train subjects to move a cursor up and down with their thoughts. The subjects could perform this task after just five minutes of training.

The scientists hope to use this information to help addicts learn to control their own brain states and, consequently, their cravings.

Scientists have previously shown that people can learn to consciously control their brain activity if they’re shown their brain activity data in real time—a technique called real-time functional magnetic resonance imaging (fMRI). Researchers have used this technology effectively to teach people to control chronic pain and depression. They’ve been pursuing similar feedback methods to help drug users kick their addictions.

But these efforts have been difficult to put into practice. Part of the problem is that scientists have had to choose which part of the brain to focus on, based on existing knowledge of neuroscience. But that approach may miss out on areas that are also important for the particular function under study.

In addition, focusing on a limited region adds extra noise to the system—much like looking too closely at just one swatch of a Pointillist painting, where the mix of odd colors doesn’t make sense until you step back and see how the dots fit together. Psychologist Anna Rose Childress, Jeremy Magland, and their colleagues at the University of Pennsylvania have overcome this issue by designing a new system of whole-brain imaging and pairing it with an algorithm that lets them determine which regions of the brain are most centrally involved in a certain thought process.

“I think it’s very exciting, and I think it’s likely to be just the tip of a large iceberg of possibilities,” says Christopher deCharms, a neuroscientist and founder of Omneuron, a company dedicated to using real-time fMRI to visualize brain function. “It’s a small case demonstration that you can do this and you can do it in real time.”

Childress asked 11 healthy controls and three cocaine addicts to watch a feedback screen while alternately envisioning two 30-second scenarios: Repeatedly swatting a tennis ball to someone, and navigating from room to room in a familiar place. By analyzing whole-brain activity, researchers found that a part of the brain called the supplementary motor area was most active during an imagined game of tennis. They then linked this pattern to an upward movement of a computer cursor. They did the same with the navigation task, linking it to downward movement of the cursor. After four cycles or fewer—less than five minutes of training—the subjects had learned to alternate between the two states of mind, as well as associate each one with its corresponding cursor position. From there onward, they could move the cursor up or down with their thoughts.
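
As a conceptual sketch of that feedback loop (an illustration under assumptions, not the Penn group's actual pipeline): compare each new whole-brain activity pattern against templates recorded during the short training cycles, and move the cursor toward whichever imagined task it resembles more.

```python
# Conceptual sketch of real-time neurofeedback: match the current whole-brain
# pattern to "imagined tennis" vs. "imagined navigation" templates and move a
# cursor accordingly. Illustrative only; not the study's actual method.
import numpy as np

def correlation(a, b):
    """Pearson correlation between two flattened activity patterns."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def cursor_step(scan, tennis_template, navigation_template, step=1.0):
    """+step (up) if the scan looks more like tennis imagery, -step (down) otherwise."""
    if correlation(scan, tennis_template) > correlation(scan, navigation_template):
        return step
    return -step

# Toy usage with random arrays standing in for fMRI volumes.
rng = np.random.default_rng(0)
tennis = rng.normal(size=(10, 10, 10))
navigation = rng.normal(size=(10, 10, 10))
scan = tennis + 0.5 * rng.normal(size=(10, 10, 10))   # a noisy "tennis-like" scan
print(cursor_step(scan, tennis, navigation))          # 1.0 -> cursor moves up
```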

“Conventional technology used up until now monitors a designated region of the brain, but the data tend to be noisy,” Childress says. As a result, it’s harder for researchers to determine what regions of the brain are important to control for feedback exercises. “But whole-brain information cancels out a lot of the noise.”

The researchers found that both addicts and healthy people could control their state of mind equally well, something Childress says is encouraging for future studies. “The patients who have trouble controlling their craving could still demonstrate control over this sort of non-emotional test,” she says. That confirms what earlier studies had suggested: Addicts’ cognitive control issues are not linked to more general thinking, but instead limited to more emotionally charged thoughts, like cravings.

However, Childress’s team will need to develop specialized tasks to figure out how to apply this to addiction and other disorders. For therapy, “You really need feedback from localized regions that have to do with their disease, and have people learn to control them,” says Rainer Goebel, a professor of psychology at the University of Maastricht in the Netherlands who has done similar work with depression patients.

The University of Pennsylvania researchers are now developing just such a training program. For example, researchers might show cocaine addicts stereotyped cocaine-related images or videos, identify the brain regions that respond, and then use brain training to teach people how to dampen the activity in that part of the brain.

Source | Technology Review

1,000 mph car to be built next year

Thursday, November 25th, 2010

Director of the project Richard Noble, who once held the World Land Speed record, said construction of the full-scale car will begin in January, and an attempt on the World Land Speed record will be made in 2012. The aim of the project is to promote science and engineering and to inspire young people; an extensive educational program reaching about 25,000 schools in the UK has always been part of the vision.

The car body will be made of a thin alloy, and the 90 cm, 97 kg wheels will be machined from a solid alloy. Research aimed at selecting the best alloy for the job is continuing, but the choice will be important because the wheels will be rotating faster than any wheel in history, reaching 170 rotations per second (about 10,200 rpm) and generating stresses at the rim of around 150 megapascals.
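
A quick back-of-the-envelope check of those figures (the rim-speed arithmetic below is ours, not from the article):

```python
# Sanity check on the quoted wheel figures: 90 cm diameter at 170 revolutions per second.
import math

diameter_m = 0.90
revs_per_second = 170

rpm = revs_per_second * 60                              # 10,200 rpm, as quoted
rim_speed_ms = math.pi * diameter_m * revs_per_second   # circumference x rotation rate
rim_speed_mph = rim_speed_ms / 0.44704

print(rpm)                                        # 10200
print(round(rim_speed_ms), round(rim_speed_mph))  # ~481 m/s, ~1075 mph: consistent with a 1,000 mph car
```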

The wheels will also be in contact with the lake bed, so some surface damage is inevitable, but the alloy must not allow cracks that could lead to the wheels’ destruction, especially during the second run of the record attempt. Research on the effects of grit impacts on various alloy samples is being carried out at the Cavendish Laboratory at the University of Cambridge.

The vehicle will be powered by a Falcon rocket and an EJ200 from a Eurofighter Typhoon military plane. The jet produces nine tons of thrust, while the rocket produces an additional 12 tons of thrust.

EUROJET EJ200. Approximately half the thrust of BLOODHOUND SSC is provided by a EUROJET EJ200, a highly sophisticated military turbofan normally found in the engine bay of a Eurofighter Typhoon.

The World Land Speed record attempt will be made at a dry lake bed called the Hakskeen Pan in Northern Cape Province, South Africa. The Bloodhound will need to make two successful runs within an hour over a measured mile in order to break the record; the record speed is the average of the two runs, not the faster run. The team plans to swap the rocket motor for a fully primed rocket after the first run, but hopes to avoid needing to change the wheels as well.

The 20 km long, 1.5 km track for the record attempt must be completely clear of all loose stones before the run, as an impact with a 1,000 mph stone could cause catastrophic damage to the wheels or car body. Around 300 local people are already working on sweeping the track clean, and Noble has advertised in the UK for helpers, offering “No wages, constant heat, tough work in beautiful but remote Hakskeen Pan.”

The current record is held by the Thrust SuperSonic Car, which achieved 763 mph (1,228 km/h) in its record attempt in 1997. Three people who worked on Thrust are working on the Bloodhound: Wing Commander Andy Green, who will drive the car; Ron Ayers, the chief aerodynamicist; and Richard Noble, the director.

The project is well-funded, with Mr Noble saying there are more companies wanting to sponsor the car than they can accept, and even though the venture is private and non-profit, it has also received support from the UK government in the form of two Typhoon jet engines. Other major supporters include aerospace companies Lockheed Martin and Hampson Industries, as well as Cosworth, the Formula One engine manufacturer.

Source | Physorg

In Cybertherapy, Avatars Assist With Healing

Thursday, November 25th, 2010

Advances in artificial intelligence and computer modeling are allowing therapists to practice “cybertherapy” more effectively, using virtual environments to help people work through phobias, like a fear of heights or of public spaces.

Researchers are populating digital worlds with autonomous, virtual humans that can evoke the same tensions as in real-life encounters. People with social anxiety are struck dumb when asked questions by a virtual stranger. Heavy drinkers feel strong urges to order something from a virtual bartender, while gamblers are drawn to sit down and join a group playing on virtual slot machines.

In a recent study, researchers at USC found that a virtual confidant elicits from people the crucial first element in any therapy: self-disclosure. The researchers are incorporating the techniques learned from this research into a virtual agent being developed for the Army, called SimCoach. Guided by language-recognition software, SimCoach — there are several versions, male and female, young and older, white and black — appears on a computer screen and can conduct a rudimentary interview, gently probing for possible mental troubles.

And research at the University of Quebec suggests where virtual humans are headed: realistic three-dimensional forms that can be designed to resemble people in the real world.

Source | New York Times

Growing Up Digital, Wired for Distraction

Wednesday, November 24th, 2010

By all rights, Vishal, a bright 17-year-old, should already have finished the book, Kurt Vonnegut’s “Cat’s Cradle,” his summer reading assignment. But he has managed 43 pages in two months.

He typically favors Facebook, YouTube and making digital videos. That is the case this August afternoon. Bypassing Vonnegut, he clicks over to YouTube, meaning that tomorrow he will enter his senior year of high school hoping to see an improvement in his grades, but without having completed his only summer homework.

On YouTube, “you can get a whole story in six minutes,” he explains. “A book takes so long. I prefer the immediate gratification.”

Students have always faced distractions and time-wasters. But computers and cellphones, and the constant stream of stimuli they offer, pose a profound new challenge to focusing and learning.

Researchers say the lure of these technologies, while it affects adults too, is particularly powerful for young people. The risk, they say, is that developing brains can become more easily habituated than adult brains to constantly switching tasks — and less able to sustain attention.

“Their brains are rewarded not for staying on task but for jumping to the next thing,” said Michael Rich, an associate professor at Harvard Medical School and executive director of the Center on Media and Child Health in Boston. And the effects could linger: “The worry is we’re raising a generation of kids in front of screens whose brains are going to be wired differently.”

But even as some parents and educators express unease about students’ digital diets, they are intensifying efforts to use technology in the classroom, seeing it as a way to connect with students and give them essential skills. Across the country, schools are equipping themselves with computers, Internet access and mobile devices so they can teach on the students’ technological territory.

It is a tension on vivid display at Vishal’s school, Woodside High School, on a sprawling campus set against the forested hills of Silicon Valley. Here, as elsewhere, it is not uncommon for students to send hundreds of text messages a day or spend hours playing video games, and virtually everyone is on Facebook.

The principal, David Reilly, 37, a former musician who says he sympathizes when young people feel disenfranchised, is determined to engage these 21st-century students. He has asked teachers to build Web sites to communicate with students, introduced popular classes on using digital tools to record music, secured funding for iPads to teach Mandarin and obtained $3 million in grants for a multimedia center.

He pushed first period back an hour, to 9 a.m., because students were showing up bleary-eyed, at least in part because they were up late on their computers. Unchecked use of digital devices, he says, can create a culture in which students are addicted to the virtual world and lost in it.

“I am trying to take back their attention from their BlackBerrys and video games,” he says. “To a degree, I’m using technology to do it.”

The same tension surfaces in Vishal, whose ability to be distracted by computers is rivaled by his proficiency with them. At the beginning of his junior year, he discovered a passion for filmmaking and made a name for himself among friends and teachers with his storytelling in videos made with digital cameras and editing software.

He acts as his family’s tech-support expert, helping his father, Satendra, a lab manager, retrieve lost documents on the computer, and his mother, Indra, a security manager at the San Francisco airport, build her own Web site.

But he also plays video games 10 hours a week. He regularly sends Facebook status updates at 2 a.m., even on school nights, and has such a reputation for distributing links to videos that his best friend calls him a “YouTube bully.”

Several teachers call Vishal one of their brightest students, and they wonder why things are not adding up. Last semester, his grade point average was 2.3 after a D-plus in English and an F in Algebra II. He got an A in film critique.

“He’s a kid caught between two worlds,” said Mr. Reilly — one that is virtual and one with real-life demands.

Vishal, like his mother, says he lacks the self-control to favor schoolwork over the computer. She sat him down a few weeks before school started and told him that, while she respected his passion for film and his technical skills, he had to use them productively.

“This is the year,” she says she told him. “This is your senior year and you can’t afford not to focus.”

It was not always this way. As a child, Vishal had a tendency to procrastinate, but nothing like this. Something changed him.

Growing Up With Gadgets

When he was 3, Vishal moved with his parents and older brother to their current home, a three-bedroom house in the working-class section of Redwood City, a suburb in Silicon Valley that is more diverse than some of its elite neighbors.

Thin and quiet with a shy smile, Vishal passed the admissions test for a prestigious public elementary and middle school. Until sixth grade, he focused on homework, regularly going to the house of a good friend to study with him.

But Vishal and his family say two things changed around the seventh grade: his mother went back to work, and he got a computer. He became increasingly engrossed in games and surfing the Internet, finding an easy outlet for what he describes as an inclination to procrastinate.

“I realized there were choices,” Vishal recalls. “Homework wasn’t the only option.”

Several recent studies show that young people tend to use home computers for entertainment, not learning, and that this can hurt school performance, particularly in low-income families. Jacob L. Vigdor, an economics professor at Duke University who led some of the research, said that when adults were not supervising computer use, children “are left to their own devices, and the impetus isn’t to do homework but play around.”

Research also shows that students often juggle homework and entertainment. The Kaiser Family Foundation found earlier this year that half of students from 8 to 18 are using the Internet, watching TV or using some other form of media either “most” (31 percent) or “some” (25 percent) of the time that they are doing homework.

At Woodside, as elsewhere, students’ use of technology is not uniform. Mr. Reilly, the principal, says their choices tend to reflect their personalities. Social butterflies tend to be heavy texters and Facebook users. Students who are less social might escape into games, while drifters or those prone to procrastination, like Vishal, might surf the Web or watch videos.

The technology has created on campuses a new set of social types — not the thespian and the jock but the texter and gamer, Facebook addict and YouTube potato.

“The technology amplifies whoever you are,” Mr. Reilly says.

For some, the amplification is intense. Allison Miller, 14, sends and receives 27,000 texts in a month, her fingers clicking at a blistering pace as she carries on as many as seven text conversations at a time. She texts between classes, at the moment soccer practice ends, while being driven to and from school and, often, while studying.

Most of the exchanges are little more than quick greetings, but they can get more in-depth, like “if someone tells you about a drama going on with someone,” Allison said. “I can text one person while talking on the phone to someone else.”

But this proficiency comes at a cost: she blames multitasking for the three B’s on her recent progress report.

“I’ll be reading a book for homework and I’ll get a text message and pause my reading and put down the book, pick up the phone to reply to the text message, and then 20 minutes later realize, ‘Oh, I forgot to do my homework.’ ”

Some shyer students do not socialize through technology — they recede into it. Ramon Ochoa-Lopez, 14, an introvert, plays six hours of video games on weekdays and more on weekends, leaving homework to be done in the bathroom before school.

Escaping into games can also salve teenagers’ age-old desire for some control in their chaotic lives. “It’s a way for me to separate myself,” Ramon says. “If there’s an argument between my mom and one of my brothers, I’ll just go to my room and start playing video games and escape.”

With powerful new cellphones, the interactive experience can go everywhere. Between classes at Woodside or at lunch, when use of personal devices is permitted, students gather in clusters, sometimes chatting face to face, sometimes half-involved in a conversation while texting someone across the teeming quad. Others sit alone, watching a video, listening to music or updating Facebook.

Students say that their parents, worried about the distractions, try to police computer time, but that monitoring the use of cellphones is difficult. Parents may also want to be able to call their children at any time, so taking the phone away is not always an option.

Other parents wholly embrace computer use, even when it has no obvious educational benefit.

“If you’re not on top of technology, you’re not going to be on top of the world,” said John McMullen, 56, a retired criminal investigator whose son, Sean, is one of five friends in the group Vishal joins for lunch each day.

Sean’s favorite medium is video games; he plays for four hours after school and twice that on weekends. He was playing more but found his habit pulling his grade point average below 3.2, the point at which he felt comfortable. He says he sometimes wishes that his parents would force him to quit playing and study, because he finds it hard to quit when given the choice. Still, he says, video games are not responsible for his lack of focus, asserting that in another era he would have been distracted by TV or something else.

“Video games don’t make the hole; they fill it,” says Sean, sitting at a picnic table in the quad, where he is surrounded by a multimillion-dollar view: on the nearby hills are the evergreens that tower above the affluent neighborhoods populated by Internet tycoons. Sean, a senior, concedes that video games take a physical toll: “I haven’t done exercise since my sophomore year. But that doesn’t seem like a big deal. I still look the same.”

Sam Crocker, Vishal’s closest friend, who has straight A’s but lower SAT scores than he would like, blames the Internet’s distractions for his inability to finish either of his two summer reading books.

“I know I can read a book, but then I’m up and checking Facebook,” he says, adding: “Facebook is amazing because it feels like you’re doing something and you’re not doing anything. It’s the absence of doing something, but you feel gratified anyway.”

He concludes: “My attention span is getting worse.”

The Lure of Distraction

Some neuroscientists have been studying people like Sam and Vishal. They have begun to understand what happens to the brains of young people who are constantly online and in touch.

In an experiment at the German Sport University in Cologne in 2007, boys from 12 to 14 spent an hour each night playing video games after they finished homework.

On alternate nights, the boys spent an hour watching an exciting movie, like “Harry Potter” or “Star Trek,” rather than playing video games. That allowed the researchers to compare the effect of video games and TV.

The researchers looked at how the use of these media affected the boys’ brainwave patterns while sleeping and their ability to remember their homework in the subsequent days. They found that playing video games led to markedly lower sleep quality than watching TV, and also led to a “significant decline” in the boys’ ability to remember vocabulary words. The findings were published in the journal Pediatrics.

Markus Dworak, a researcher who led the study and is now a neuroscientist at Harvard, said it was not clear whether the boys’ learning suffered because sleep was disrupted or, as he speculates, also because the intensity of the game experience overrode the brain’s recording of the vocabulary.

“When you look at vocabulary and look at huge stimulus after that, your brain has to decide which information to store,” he said. “Your brain might favor the emotionally stimulating information over the vocabulary.”

At the University of California, San Francisco, scientists have found that when rats have a new experience, like exploring an unfamiliar area, their brains show new patterns of activity. But only when the rats take a break from their exploration do they process those patterns in a way that seems to create a persistent memory.

In that vein, recent imaging studies of people have found that major cross sections of the brain become surprisingly active during downtime. These brain studies suggest to researchers that periods of rest are critical in allowing the brain to synthesize information, make connections between ideas and even develop the sense of self.

Researchers say these studies have particular implications for young people, whose brains have more trouble focusing and setting priorities.

“Downtime is to the brain what sleep is to the body,” said Dr. Rich of Harvard Medical School. “But kids are in a constant mode of stimulation.”

“The headline is: bring back boredom,” added Dr. Rich, who last month gave a speech to the American Academy of Pediatrics entitled, “Finding Huck Finn: Reclaiming Childhood from the River of Electronic Screens.”

Dr. Rich said in an interview that he was not suggesting young people should toss out their devices, but rather that they embrace a more balanced approach to what he said were powerful tools necessary to compete and succeed in modern life.

The heavy use of devices also worries Daniel Anderson, a professor of psychology at the University of Massachusetts at Amherst, who is known for research showing that children are not as harmed by TV viewing as some researchers have suggested.

Multitasking using ubiquitous, interactive and highly stimulating computers and phones, Professor Anderson says, appears to have a more powerful effect than TV.

Like Dr. Rich, he says he believes that young, developing brains are becoming habituated to distraction and to switching tasks, not to focus.

“If you’ve grown up processing multiple media, that’s exactly the mode you’re going to fall into when put in that environment — you develop a need for that stimulation,” he said.

Vishal can attest to that.

“I’m doing Facebook, YouTube, having a conversation or two with a friend, listening to music at the same time. I’m doing a million things at once, like a lot of people my age,” he says. “Sometimes I’ll say: I need to stop this and do my schoolwork, but I can’t.”

“If it weren’t for the Internet, I’d focus more on school and be doing better academically,” he says. But thanks to the Internet, he says, he has discovered and pursued his passion: filmmaking. Without the Internet, “I also wouldn’t know what I want to do with my life.”

Clicking Toward a Future

The woman sits in a cemetery at dusk, sobbing. Behind her, silhouetted and translucent, a man kneels, then fades away, a ghost.

This captivating image appears on Vishal’s computer screen. On this Thursday afternoon in late September, he is engrossed in scenes he shot the previous weekend for a music video he is making with his cousin.

The video is based on a song performed by the band Guns N’ Roses about a woman whose boyfriend dies. He wants it to be part of the package of work he submits to colleges that emphasize film study, along with a documentary he is making about home-schooled students.

Now comes the editing. Vishal taught himself to use sophisticated editing software in part by watching tutorials on YouTube. He does not leave his chair for more than two hours, sipping Pepsi, his face often inches from the screen, as he perfects the clip from the cemetery. The image of the crying woman was shot separately from the image of the kneeling man, and he is trying to fuse them.

“I’m spending two hours to get a few seconds just right,” he says.

He occasionally sends a text message or checks Facebook, but he is focused in a way he rarely is when doing homework. He says the chief difference is that filmmaking feels applicable to his chosen future, and he hopes colleges, like the University of Southern California or the California Institute of the Arts in Los Angeles, will be so impressed by his portfolio that they will overlook his school performance.

“This is going to compensate for the grades,” he says. On this day, his homework includes a worksheet for Latin, some reading for English class and an economics essay, but they can wait.

For Vishal, there’s another clear difference between filmmaking and homework: interactivity. As he edits, the windows on the screen come alive; every few seconds, he clicks the mouse to make tiny changes to the lighting and flow of the images, and the software gives him constant feedback.

“I click and something happens,” he says, explaining that, by comparison, reading a book or doing homework is less exciting. “I guess it goes back to the immediate gratification thing.”

The $2,000 computer Vishal is using is state of the art and only a week old. It represents a concession by his parents. They allowed him to buy it, despite their continuing concerns about his technology habits, because they wanted to support his filmmaking dream. “If we put roadblocks in his way, he’s just going to get depressed,” his mother says. Besides, she adds, “he’s been making an effort to do his homework.”

At this point in the semester, it seems she is right. The first schoolwide progress reports come out in late September, and Vishal has mostly A’s and B’s. He says he has been able to make headway by applying himself, but also by cutting back his workload. Unlike last year, he is not taking advanced placement classes, and he has chosen to retake Algebra II not in the classroom but in an online class that lets him work at his own pace.

His shift to easier classes might not please college admissions officers, according to Woodside’s college adviser, Zorina Matavulj. She says they want seniors to intensify their efforts. As it is, she says, even if Vishal improves his performance significantly, someone with his grades faces long odds in applying to the kinds of colleges he aspires to.

Still, Vishal’s passion for film reinforces for Mr. Reilly, the principal, that the way to reach these students is on their own terms.

Hands-On Technology

Big Macintosh monitors sit on every desk, and a man with hip glasses and an easygoing style stands at the front of the class. He is Geoff Diesel, 40, a favorite teacher here at Woodside who has taught English and film. Now he teaches one of Mr. Reilly’s new classes, audio production. He has a rapt audience of more than 20 students as he shows a video of the band Nirvana mixing their music, then holds up a music keyboard.

“Who knows how to use Pro Tools? We’ve got it. It’s the program used by the best music studios in the world,” he says.

In the back of the room, Mr. Reilly watches, thrilled. He introduced the audio course last year and enough students signed up to fill four classes. (He could barely pull together one class when he introduced Mandarin, even though he had secured iPads to help teach the language.)

“Some of these students are our most at-risk kids,” he says. He means that they are more likely to tune out school, skip class or not do their homework, and that they may not get healthful meals at home. They may also do their most enthusiastic writing not for class but in text messages and on Facebook. “They’re here, they’re in class, they’re listening.”

Despite Woodside High’s affluent setting, about 40 percent of its 1,800 students come from low-income families and receive a reduced-cost or free lunch. The school is 56 percent Latino, 38 percent white and 5 percent African-American, and it sends 93 percent of its students to four-year or community colleges.

Mr. Reilly says that the audio class provides solid vocational training and can get students interested in other subjects.

“Today mixing music, tomorrow sound waves and physics,” he says. And he thinks the key is that they love not just the music but getting their hands on the technology. “We’re meeting them on their turf.”

It does not mean he sees technology as a panacea. “I’ll always take one great teacher in a cave over a dozen Smart Boards,” he says, referring to the high-tech teaching displays used in many schools.

Teachers at Woodside commonly blame technology for students’ struggles to concentrate, but they are divided over whether embracing computers is the right solution.

“It’s a catastrophe,” said Alan Eaton, a charismatic Latin teacher. He says that technology has led to a “balkanization of their focus and duration of stamina,” and that schools make the problem worse when they adopt the technology.

“When rock ’n’ roll came about, we didn’t start using it in classrooms like we’re doing with technology,” he says. He personally feels the sting, since his advanced classes have one-third as many students as they had a decade ago.

Vishal remains a Latin student, one whom Mr. Eaton describes as particularly bright. But the teacher wonders if technology might be the reason Vishal seems to lose interest in academics the minute he leaves class.

Mr. Diesel, by contrast, does not think technology is behind the problems of Vishal and his schoolmates — in fact, he thinks it is the key to connecting with them, and an essential tool. “It’s in their DNA to look at screens,” he asserts. And he offers another analogy to explain his approach: “Frankenstein is in the room and I don’t want him to tear me apart. If I’m not using technology, I lose them completely.”

Mr. Diesel had Vishal as a student in cinema class and describes him as a “breath of fresh air” with a gift for filmmaking. Mr. Diesel says he wonders if Vishal is a bit like Woody Allen, talented but not interested in being part of the system.

But Mr. Diesel adds: “If Vishal’s going to be an independent filmmaker, he’s got to read Vonnegut. If you’re going to write scripts, you’ve got to read.”

Back to Reading Aloud

Vishal sits near the back of English IV. Marcia Blondel, a veteran teacher, asks the students to open the book they are studying, “The Things They Carried,” which is about the Vietnam War.

“Who wants to read starting in the middle of Page 137?” she asks. One student begins to read aloud, and the rest follow along.

To Ms. Blondel, the exercise in group reading represents a regression in American education and an indictment of technology. The reason she has to do it, she says, is that students now lack the attention span to read the assignments on their own.

“How can you have a discussion in class?” she complains, arguing that she has seen a considerable change in recent years. In some classes she can count on little more than one-third of the students to read a 30-page homework assignment.

She adds: “You can’t become a good writer by watching YouTube, texting and e-mailing a bunch of abbreviations.”

As the group-reading effort winds down, she says gently: “I hope this will motivate you to read on your own.”

It is a reminder of the choices that have followed the students through the semester: computer or homework? Immediate gratification or investing in the future?

Mr. Reilly hopes that the two can meet — that computers can be combined with education to better engage students and can give them technical skills without compromising deep analytical thought.

But in Vishal’s case, computers and schoolwork seem more and more to be mutually exclusive. Ms. Blondel says that Vishal, after a decent start to the school year, has fallen into bad habits. In October, he turned in weeks late, for example, a short essay based on the first few chapters of “The Things They Carried.” His grade at that point, she says, tracks around a D.

For his part, Vishal says he is investing himself more in his filmmaking, accelerating work with his cousin on their music video project. But he is also using Facebook late at night and surfing for videos on YouTube. The evidence of the shift comes in a string of Facebook updates.

Saturday, 11:55 p.m.: “Editing, editing, editing”

Sunday, 3:55 p.m.: “8+ hours of shooting, 8+ hours of editing. All for just a three-minute scene. Mind = Dead.”

Sunday, 11:00 p.m.: “Fun day, finally got to spend a day relaxing… now about that homework…”

Source | New York Times

Why Life Is Physics, Not Chemistry

Wednesday, November 24th, 2010

In the history of science, there are many examples of simple changes in perspective that lead to profound insights into the nature of the cosmos. The invention of the telescope is perhaps one example. Another is the realisation that chemical energy, thermodynamic energy, kinetic energy and the like are all manifestations of the same stuff. You can surely supply your own favourite instances here.

One of the more important examples in 20th century science is that biology is the result of evolution, not the other way round. By that way of thinking, evolution is a process, an algorithm even, albeit one with unimaginable power. Exploit evolution and there is little you cannot achieve.

In recent years, computer scientists have begun to exploit evolution’s amazing power. One thing they have experienced time and time again is evolution’s blind progress. Put a genetic algorithm to work and it will explore the evolutionary landscape, looking for local minima. When it finds one, there is no knowing whether it is the best possible solution or whether it sits within touching distance of an evolutionary abyss that represents a solution of an entirely different order of magnitude.
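
A toy sketch of that kind of blind search is below (illustrative only, not drawn from the paper; it maximizes a fitness score, which is equivalent to minimizing a cost). The procedure reliably climbs toward a good solution, but nothing in it reveals whether a radically better optimum lies elsewhere in the landscape.

```python
# Toy genetic algorithm: evolve 20-bit genomes toward a hidden target.
# Illustrative only - it finds *a* good solution without any way of knowing
# whether a far better one exists elsewhere in the search landscape.
import random

TARGET = [1] * 20                                  # the "ideal" genome

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=100):
    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]                                      # selection
        population = parents + [mutate(random.choice(parents)) for _ in parents]   # reproduction with mutation
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "out of", len(TARGET))
```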

That hints at the possibility that life as it has evolved on Earth is but a local minimum in a vast landscape of evolutionary possibilities. If that’s the case, biologists are studying a pitifully small fraction of something bigger. Much bigger.

Today, we get an important insight into this state of affairs thanks to a fascinating paper by Nigel Goldenfeld and Carl Woese at the University of Illinois. Goldenfeld is a physicist by training, while Woese, also a physicist, is one of the great revolutionary figures in biology. In the 1970s, he defined a new kingdom of life, the Archaea, and developed a theory of the origin of life called the RNA world hypothesis, which has gained much fame or notoriety depending on your viewpoint.

Together they suggest that biologists need to think about their field in a radical new way: as a branch of condensed matter physics. Their basic conjecture is that life is an emergent phenomenon that occurs in systems that are far out of equilibrium. If you accept this premise, then two questions immediately arise: what laws describe such systems, and how are we to get at them?

Goldenfeld and Woese say that biologists’ closed way of thinking on this topic is embodied by the phrase: all life is chemistry. Nothing could be further from the truth, they say.

They have an interesting analogy to help press their case: the example of superconductivity. It would be easy to look at superconductivity and imagine that it can be fully explained by the properties of electrons as they transfer in and out of the outer atomic orbitals. You might go further and say that superconductivity is all atoms and chemistry.

And yet the real explanation is much more interesting and profound. It turns out that many of the problems of superconductivity are explained by a theory which describes the relationship between electromagnetic fields and long range order. When the symmetry in this relationship breaks down, the result is superconductivity.

And it doesn’t just happen in materials on Earth. This kind of symmetry breaking emerges in other exotic places such as the cores of quark stars. Superconductivity is an emergent phenomenon and has little to do with the behaviour of atoms. A chemist would be flabbergasted.

According to Goldenfeld and Woese, life is like superconductivity. It is an emergent phenomenon and we need to understand the fundamental laws of physics that govern its behaviour. Consequently, only a discipline akin to physics can reveal such laws and biology as it is practised today does not fall into this category.

That’s a brave and provocative idea that may not come as a complete surprise to the latest generation of biophysicists. For the others, it should be a call to arms.

We’ll be watching the results with interest.

Source | Technology Review

Supercomputers ‘will fit in a sugar cube’, IBM says

Friday, November 19th, 2010

A pioneering research effort could shrink the world’s most powerful supercomputer processors to the size of a sugar cube, IBM scientists say.

The approach will see many computer processors stacked on top of one another, cooling them with water flowing between each one.

The aim is to reduce computers’ energy use, rather than just to shrink them.

Some 2% of the world’s total energy is consumed by building and running computer equipment.

Speaking at IBM’s Zurich labs, Dr Bruno Michel said future computer costs would hinge on green credentials rather than speed.

Dr Michel and his team have already built a prototype to demonstrate the water-cooling principle. Called Aquasar, it occupies a rack larger than a refrigerator.

IBM estimates that Aquasar is almost 50% more energy-efficient than the world’s leading supercomputers.

“In the past, computers were dominated by hardware costs – 50 years ago you could hold one transistor and it cost a dollar, or a franc,” Dr Michel told BBC News.

Now when the sums are done, he said, the cost of a transistor works out to 1/100th of the price of printing a single letter on a page.

Now the cost of building the next generation of supercomputers is not the problem, IBM says. The cost of running the machines is what concerns engineers.

“In the future, computers will be dominated by energy costs – to run a data centre will cost more than to build it,” said Dr Michel.

The overwhelming cause of those energy costs is in cooling, because computing power generates heat as a side product.

Cube route

“In the past, the Top 500 list (of fastest supercomputers worldwide) was the important one; computers were listed according to their performance.

“In the future, the ‘Green 500’ will be the important list, where computers are listed according to their efficiency.”

Until recently, the supercomputer at the top of that list could do about 770 million computational operations per second at a cost of one watt of power.

The Aquasar prototype clocked up nearly half again as much, at 1.1 billion operations per second per watt. Now the task is to shrink it.
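
The arithmetic behind that comparison, using the two figures quoted above:

```python
# Comparing the quoted efficiency figures (operations per second per watt).
greenest_ops_per_watt = 770e6     # ~770 million, the most efficient machine on the list
aquasar_ops_per_watt = 1.1e9      # ~1.1 billion, the Aquasar prototype

improvement = aquasar_ops_per_watt / greenest_ops_per_watt - 1
print(f"~{improvement:.0%} more work per watt")   # ~43%, i.e. "nearly half again as much"
```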

“We currently have built this Aquasar system that’s one rack full of processors. We plan that 10 to 15 years from now, we can collapse such a system into one sugar cube – we’re going to have a supercomputer in a sugar cube.”

Mark Stromberg, principal research analyst at Gartner, said that the approach was a promising one.

But he said that tackling the finer details of cooling – to remove heat from just the right parts of the chip stacks – would take significant effort.

Third dimension

It takes about 1,000 times more energy to move a data byte around than it does to do a computation with it once it arrives. What is more, the time taken to complete a computation is currently limited by how long it takes to do the moving.

Air cooling can go some way to removing this heat, which is why many desktop computers have fans inside. But a given volume of water can hold 4,000 times more waste heat than air.
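
A rough check of that figure, using standard room-temperature properties of water and air (the values below are textbook reference numbers, not from the article):

```python
# Volumetric heat capacity of water vs. air at roughly room temperature.
water_j_per_m3_per_k = 1000 * 4186    # density (kg/m^3) x specific heat (J/kg/K)
air_j_per_m3_per_k = 1.2 * 1005

print(round(water_j_per_m3_per_k / air_j_per_m3_per_k))   # ~3,500: the same order as the quoted 4,000x
```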

However, it adds a great deal of bulk. With current technology, a standard chip – comprising a milligram of transistors – needs 1kg of equipment to cool it, according to Dr Michel.

Part of the solution he and his colleagues propose – and that the large Aquasar rack demonstrates – is water cooling based on a slimmed-down, more efficient circulation of water that borrows ideas from the human body’s branched circulatory system.

However, the engineers are exploring the third dimension first.

They want to stack processors one on top of another, envisioning vast stacks, each separated by water cooling channels not much more than a hair’s breadth in thickness.

Because distance between processors both slows down and heats up the computing process, moving chips closer together in this way tackles issues of speed, size, and running costs, all at once.

In an effort to prove the principle the team has built stacks four processors high. But Dr Michel concedes that much work is still to be done.

The major technical challenge will be to engineer the connections between the different chips, which must work as conductors and be waterproof.

“Clearly the use of 3D processes will be a major advancement in semiconductor technology and will allow the industry to maintain its course,” Gartner’s Mark Stromberg told the BBC.

“But several challenges remain before this technology can be implemented – issues concerning thermal dissipation are among the most critical engineering challenges facing 3D semiconductor technology.”

Source | BBC News

This Professor Is Surgically Implanting a Camera in the Back of His Head

Friday, November 19th, 2010

When Wafaa Bilal turns around for the next year, a camera surgically embedded in the back of his head will stare back at you.

The camera, which will be attached to the NYU photo professor’s head via a “piercing-like attachment” when he undergoes surgery in the next couple of weeks, according to the WSJ, will take photos every 60 seconds and beam them to monitors at the Mathaf: Arab Museum of Modern Art in Qatar.

The artwork’s called “The 3rd I,” and it’ll be ongoing for an entire year. While it’s mostly intended as a comment on memory and experience—something that’s changed immensely with the advent of digital storage and the possibilities of limitless memory—interestingly, it’s mostly sparking a debate about privacy: When Bilal’s on campus at NYU, where he’ll be actively teaching, he’s going to keep the camera covered with a lens cap. The thing is, it’s not so far from the realm of possibility that we’ll all be recording nearly every moment of our lives in the not-too-distant future—Microsoft’s already got a camera that tries to.

Source | Gizmodo

Organs Made from Scratch

Friday, November 19th, 2010

Growing living tissue and organs in the lab would be a life-saving trick. But replicating the complexity of an organ, by growing different types of cells in precisely the right arrangement—muscle held together with connective tissue and threaded with blood vessels, for example—is currently impossible. Researchers at MIT have taken a step toward this goal by coming up with a way to make “building blocks” containing different kinds of tissue that can be put together.

Embryonic stem cells can turn into virtually any type of cell in the body. But controlling this process, known as differentiation, is tricky. If embryonic stem cells are left to grow in a tissue-culture dish, they will differentiate more or less at random, into a mixture of different types of cells.

The MIT group, led by Ali Khademhosseini, an assistant professor in the Harvard-MIT division of Health Sciences and Technology and a recipient of a TR35 award in 2007, put embryonic stem cells into “building blocks” containing gel that encouraged the cells to turn into certain types of cell. These building blocks can then be put together, using techniques developed previously by Khademhosseini, to make more complex structures. The gel degrades and disappears as the tissue grows. Eventually, the group hopes to make cardiac tissue by stacking blocks containing cells that have turned into muscle next to blocks containing blood vessels, and so forth.

The researchers expose clusters of stem cells called embryoid bodies to a physical environment that mimics some of the cues the cells experience during embryonic development. “In an attempt to recreate that polarity, we applied microfabrication technologies to stem-cell engineering,” says Khademhosseini.

The team first puts embryoid bodies into microscale wells, which causes the cells to clump together to form spheres. Next they pour a light-sensitive hydrogel solution over the top of the cells. When this solution is exposed to light, it hardens, leaving behind a sphere of cells, half naked, half encased in a cube of gel. The process is repeated to encase the other half in a second type of gel. The result is a hydrogel block, half gelatin, half polyethylene glycol, with a sphere of embryonic stem cells inside.

Khademhosseini’s group found that within an individual embryoid body, cells on the squishier, gelatin side took a different path from cells on the polyethylene glycol side. The gelatin is easier for the cells to push into, and this affects how they grow, directing them to become blood vessels. “They completely remodel the side that’s gelatin, digging through the gel, elongating, and forming blood-vessel-like sprouts,” says Khademhosseini. These cells also express chemical markers typical of blood-vessel precursor cells, called endothelial cells. The cells on the other side differentiated in a more chaotic manner. The researchers also watched what happened when they varied the molds to create gel blocks that contained more or less gelatin.

Khademhosseini hopes to further test the effects of different hydrogels. He also plans to embed different development-stimulating chemicals within the gels. Using chemical signals to influence stem-cell differentiation is a common approach, but controlling which parts of a group of cells are exposed to which chemical signals has been difficult. Other groups have used microfluidics devices to feed different chemicals to cells. Khademhosseini believes using the hydrogel will be easier.

“This is a creative new way to guide stem cell behavior using patterned hydrogels,” says Sarah Heilshorn, assistant professor of materials science and engineering at Stanford University. She says the most innovative aspect of the work is the ability to quickly make large numbers of the cell constructs. “This approach could be applied to a broad range of other biomaterials and cell types.”

Khademhosseini’s ultimate goal is to build cardiac tissue from the bottom up. “We’d like to seed cells to pattern branching vasculature through cardiac tissue,” he says. The multimaterial gel structures, he adds, “can be the modules of our self-assembling cellular structures.”

Source | Technology Review

ARMAR-III, the robot that learns via touch (w/ Video)

Friday, November 19th, 2010

Researchers in Europe have created a robot that uses its body to learn how to think. It is able to learn how to interact with objects by touching them without needing to rely on a massive database of instructions for every object it might encounter.

The robot is a product of the Europe-wide PACO-PLUS research project and operates on the principle of “embodied cognition,” which relies on two-way communication between the robot’s sensors in its hands and “eyes” and its processor. Embodied cognition enables ARMAR to solve problems that were unforeseen by its programmers, so when faced with a new task it investigates ways of moving or looking at things until the processor makes the required connections.

ARMAR has learned to recognize common kitchen objects, such as cups of various colors, plates, and boxes of cereal, and it responds to commands to interact with these objects by fetching them or placing them in a dishwasher, for example. One task ARMAR has learned to carry out is setting a table, and it can do this even if a cup is placed in its path. The robot worked out that the cup was in the way, was movable, and would be knocked over if left there, so it moved the cup aside before continuing with its task.

Project coordinator Tamim Asfour, from the Karlsruhe Institute of Technology in Germany, said the robot’s tasks can be broken down into three components: understanding verbal commands, creating representations of objects and actions, and using these to work out how to carry out the command. Asfour said having the robot learn all three by trial and error would have taken too long, so they provided one of the components and the robot worked out the rest. They reduced the trial-and-error time by giving ARMAR hints via programming, by naming objects, and via demonstrations from a human.
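
The kind of plan repair seen in the cup example can be sketched in a few lines of Python. The object names and the single “move it out of the way” rule below are invented for illustration and are vastly simpler than anything in the PACO-PLUS system itself.

    # Toy plan repair in the spirit of the cup example; the objects and rule are hypothetical.
    def plan(task, obstacles):
        """Return a list of actions, clearing known obstacles before the main task."""
        steps = [f"move {obstacle} out of the way" for obstacle in obstacles]
        steps.append(task)
        return steps

    print(plan("set the table", obstacles=["cup"]))
    # ['move cup out of the way', 'set the table']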

Asfour said the main scientific achievement of the research was to build a system capable of forming representations of objects that worked at the sensory level and combining that with planning and two-way verbal communication.

The type of thinking demonstrated by ARMAR mimics the way humans perceive their environment in terms that depend on their ability to interact with it physically, and it is similar to the way babies and young children learn by exploring the world around them and interacting with objects under guidance.

The four-year PACO-PLUS project was funded by the European Commission’s Cognition Unit with the aim of developing increasingly advanced robots able to operate in the real world and communicate with humans.

Source | Physorg

New imaging method developed at Stanford reveals stunning details of brain connections

Friday, November 19th, 2010

Researchers at the Stanford University School of Medicine, applying a state-of-the-art imaging system to brain-tissue samples from mice, have been able to quickly and accurately locate and count the myriad connections between nerve cells in unprecedented detail, as well as to capture and catalog those connections’ surprising variety.

A typical healthy human brain contains about 200 billion neurons, linked to one another via hundreds of trillions of synapses. One neuron may make as many as tens of thousands of synaptic contacts with other neurons, said Stephen Smith, PhD, professor of molecular and cellular physiology and senior author of a paper describing the study, published Nov. 18 in Neuron.

Because synapses are so minute and packed so closely together, it has been hard to get a handle on the complex neuronal circuits that underlie our thinking, feeling and movement. The new method works by combining high-resolution photography with specialized fluorescent molecules that bind to different proteins and glow in different colors. Massive computing power captures this information and converts it into imagery.

Examined up close, a synapse — less than a thousandth of a millimeter in diameter — is a specialized interface consisting of the edges of two neurons, separated by a tiny gap. Chemicals squirted out of the edge of one neuron diffuse across the gap, triggering electrical activity in the next and thus relaying a nervous signal. There are perhaps a dozen known types of synapses, categorized according to the kind of chemical employed in them. Different synaptic types differ correspondingly in the local proteins, on one abutting neuron or the other, that are associated with the packing, secretion and uptake of the different chemicals.

Synapse numbers in the brain vary over time. Periods of massive proliferation in fetal development, infancy and adolescence give way to equally massive bursts of “pruning” during which underused synapses are eliminated, and eventually to a steady, gradual decline with increasing age. The number and strength of synaptic connections in various brain circuits also fluctuate with waking and sleeping cycles, as well as with learning. Many neurodegenerative disorders are marked by pronounced depletion of specific types of synapses in key brain regions.

In particular, the cerebral cortex — a thin layer of tissue on the brain’s surface — is a thicket of prolifically branching neurons. “In a human, there are more than 125 trillion synapses just in the cerebral cortex alone,” said Smith. That’s roughly equal to the number of stars in 1,500 Milky Way galaxies, he noted.

Synapses in the brain are crowded in so close together that they cannot be reliably resolved by even the best of traditional light microscopes, Smith said. “Now we can actually count them and, in the bargain, catalog each of them according to its type.”

Array tomography, an imaging method co-invented by Smith and Kristina Micheva, PhD, a senior staff scientist in Smith’s lab, was used in this study as follows: A slab of tissue — in this case, from a mouse’s cerebral cortex — was carefully sliced into sections only 70 nanometers thick. These ultrathin sections were stained with antibodies designed to match 17 different synapse-associated proteins; the antibodies had been conjugated to molecules that respond to light by glowing in different colors.

The antibodies were applied in groups of three to the brain sections. After each application, huge numbers of extremely high-resolution photographs were automatically generated to record the locations of different fluorescing colors associated with antibodies to different synaptic proteins. The antibodies were then chemically rinsed away and the procedure was repeated with the next set of three antibodies, and so forth. Each individual synapse thus acquired its own protein-composition “signature,” enabling the compilation of a very fine-grained catalog of the brain’s varied synaptic types.
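
Conceptually, the result of those repeated rounds is a per-synapse table of protein measurements. The Python sketch below shows one way such signatures could be accumulated; the protein names are typical synaptic markers rather than necessarily the study’s own set, and the rounds and intensity values are invented, so this bears no relation to the team’s real image-processing pipeline.

    # Invented illustration of accumulating per-synapse protein "signatures"
    # across staining rounds; all data here is made up for the sketch.
    signatures = {}   # synapse id -> {protein name: fluorescence intensity}

    def record_round(measurements):
        """Fold one three-antibody imaging round into the running signatures."""
        for synapse_id, protein, intensity in measurements:
            signatures.setdefault(synapse_id, {})[protein] = intensity

    # Two imaginary rounds of three antibodies each (the real study used 17 proteins):
    record_round([(1, "synapsin", 0.9), (1, "PSD95", 0.7), (2, "gephyrin", 0.8)])
    record_round([(1, "VGluT1", 0.6), (2, "VGAT", 0.9), (2, "bassoon", 0.4)])

    print(signatures[1])   # protein-composition signature for synapse 1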

All the information captured in the photos was recorded and processed by novel computational software, most of it designed by study co-author Brad Busse, a graduate student in Smith’s lab. It virtually stitched together all the slices in the original slab into a three-dimensional image that can be rotated, penetrated and navigated by the researchers.

The Stanford team used brain samples from a mouse that had been bioengineered so that particularly large neurons that abound in the cerebral cortex express a fluorescent protein, normally found in jellyfish, that glows yellowish-green. This let them visualize synapses against the background of the neurons they linked.

The researchers were able to “travel” through the resulting 3-D mosaic and observe different colors corresponding to different synaptic types just as a voyager might transit outer space and note the different hues of the stars dotting the infinite blackness. A movie was also created by this software.

This level of detailed visualization has never been achieved before, Smith said. “The entire anatomical context of the synapses is preserved. You know right where each one is, and what kind it is,” he said.

Observed in this manner, the brain’s overall complexity is almost beyond belief, said Smith. “One synapse, by itself, is more like a microprocessor — with both memory-storage and information-processing elements — than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth,” he said.

In the course of the study, whose primary purpose was to showcase the new technique’s application to neuroscience, Smith and his colleagues discovered some novel, fine distinctions within a class of synapses previously assumed to be identical. His group is now focused on using array tomography to tease out more such distinctions, which should accelerate neuroscientists’ progress in, for example, identifying how many of which subtypes are gained or lost during the learning process, after an experience such as traumatic pain, or in neurodegenerative disorders such as Alzheimer’s. With support from the National Institutes of Health, Smith’s lab is using array tomography to examine tissue samples from Alzheimer’s brains obtained from Stanford and the University of Pennsylvania.

“I anticipate that within a few years, array tomography will have become an important mainline clinical pathology technique, and a drug-research tool,” Smith said. He and Micheva are founding a company that is now gathering investor funding for further work along these lines. Stanford’s Office of Technology Licensing has obtained one U.S. patent on array tomography and filed for a second.

The Neuron study was funded by the NIH, the Gatsby Charitable Trust, the Howard Hughes Medical Institute, Stanford’s Bio-X program and a gift from Lubert Stryer, MD, the emeritus Mrs. George A. Winzer Professor of Cell Biology in the medical school’s Department of Neurobiology. Other Stanford co-authors of the paper were neuroscience graduate student Nicholas Weiler and senior research scientist Nancy O’Rourke, PhD.

Information about the school’s Departments of Neurosurgery and of Neurology and Neurological Science, which also supported the work, is available at http://med.stanford.edu/neurosurgery and http://neurology.stanford.edu.

Source | Stanford University Medical Center

Now I See You

Friday, November 19th, 2010

Weill Cornell Medical College researchers have built a new type of prosthetic retina that enabled blind mice to see nearly normal images. It could someday restore detailed sight to the millions of people who’ve lost their vision to retinal disease.

They used optogenetics, a recently developed technique that infuses neurons with light-sensitive proteins from green algae, causing them to fire when exposed to light.

The researchers used mice that were genetically engineered to express one of these proteins, channelrhodopsin, in their ganglion cells. Then, they presented the mice with an image that had been translated into a grid of 6,000 pulsing lights. Each light communicated with a single ganglion cell, and each pulse of light caused its corresponding cell to fire, thus transmitting the encoded image along to the brain.

In humans, such a setup would require a pair of high-tech spectacles, embedded in which would be a tiny camera, an encoder chip to translate images from the camera into the retinal code, and a miniature array of thousands of lights. When each light pulsed, it would trigger a channelrhodopsin-laden ganglion cell. Surgery would no longer be required to implant an electrode array deep into the eye, although some form of gene therapy would be needed for patients to express channelrhodopsin in their retinas.
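
The encoding step can be sketched in software: reduce each camera frame to a coarse grid and map each cell’s brightness to a pulse rate for the corresponding light. The grid shape, pulse-rate range and averaging scheme in the Python sketch below are assumptions made for illustration, not the Cornell team’s actual retinal code.

    import numpy as np

    # Illustrative encoder: average a camera frame down to an 80 x 75 grid (6,000
    # cells) and map each cell's brightness to a pulse rate for one light.
    def encode(image, grid=(80, 75), max_rate_hz=60.0):
        rows = np.array_split(image, grid[0], axis=0)
        brightness = np.array([[block.mean()
                                for block in np.array_split(row, grid[1], axis=1)]
                               for row in rows])
        return brightness / 255.0 * max_rate_hz   # pulse rate per light, in Hz

    frame = np.random.randint(0, 256, size=(480, 640))   # stand-in for a camera image
    rates = encode(frame)
    print(rates.shape)   # (80, 75): one pulse rate per light in the array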

Source | Cornell Medical College

Storytelling 2.0: Open your books to augmented reality

Friday, November 19th, 2010

In today’s world, suffused with technology’s blue glow, books hearken back to a time when thoughts were more linear than they are in these hyperlinked days. A mere collection of bound pages may no longer suffice for entertainment in the information age. That is where augmented reality (AR) books come in. We are talking books, plus.

Plus what exactly? Most commonly, extra visuals. The standard visual method of augmentation is to use a webcam and custom software to make animations appear on a live screen image of a book. This year saw the launch of a few commercially available AR books, such as Fairyland Magic from Carlton Books and Tyrone the Clean’o’saurus from Salariya Publishing. These books, aimed at children, overlay pages with 3D images. An enthusiast for children’s books myself, I decided to try them out.

Fairyland Magic, like the other AR books in Carlton’s catalogue, was commissioned and written in the usual way, with the animations added on afterwards as a lure to encourage children into books. Though visually pleasing, the computer visualisations require a high-end computer. Even when accessible, the clunkiness of the graphics, the dextrous wielding of the book required to make them appear, and the fact that the book on screen is a mirror image, making the text appear backwards, meant for me that, while novel, they didn’t add much to my experience of the book.
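
For the curious, the basic webcam-overlay mechanism is straightforward to sketch with OpenCV: find the page in each frame and paint a graphic on top of it. The file names and the crude template-matching approach below are illustrative assumptions; commercial AR books rely on far more robust marker tracking.

    import cv2

    # Minimal webcam-overlay sketch. "page_template.png" and "sprite.png" are assumed
    # to exist on disk; this is a toy stand-in for real AR book software.
    template = cv2.imread("page_template.png", cv2.IMREAD_GRAYSCALE)
    sprite = cv2.imread("sprite.png")
    sh, sw = sprite.shape[:2]

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.flip(frame, 1)   # flip horizontally if the feed is mirrored, so page text reads correctly
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, (x, y) = cv2.minMaxLoc(result)
        if score > 0.6 and y + sh <= frame.shape[0] and x + sw <= frame.shape[1]:
            frame[y:y + sh, x:x + sw] = sprite   # paint the "animation" over the found page
        cv2.imshow("AR book", frame)
        if cv2.waitKey(1) == 27:                 # press Esc to quit
            break
    cap.release()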

In contrast, the Salariya titles were built around the AR concept, with the technology central to them from the outset. The animations are less temperamental, lengthier and incorporate more movement, all of which serves to bring the characters to life and give them extra personality. Sadly, there is only one animation per book, on the final page. When we tested out Tyrone on the 7- and 9-year-old children available to our editorial department, we were told that the animation was “great” and “cool”. However, their efforts to make the virtual Tyrone fall off his virtual carpet resulted in the software crashing, and our young guinea-pigs quickly got bored and wandered off.

For those of us a little old for these offerings, there are some less commercial AR books out there to fit the bill. Back in 2008, artist Camille Scherrer developed her book Souvenirs du Monde des Montagnes, along with software that broke the mould by using a webcam to recognise the content of the pages in order to correctly place the animated overlays. Scherrer’s book, like those of Salariya, was developed in conjunction with the augmentation. The book itself is a fairytale built around an archive of family photographs from the early part of last century. The animations dance across the pages and then off the book into the surroundings.

At present, all the AR books suffer from the same issue: the animations indiscriminately overlay the webcam input. According to Scherrer, the technology has progressed and the animations can now interact with the readers’ hands. However, she is not sure that the audience is ready for that. “What is funny is that for the public, nobody is impressed by the animation going under the fingers because it seems natural. They are more impressed by the old one where the animation goes over the fingers. I think the technology is going too fast for the public,” she says. “Maybe in five years I will make a book where some animations go under the reader’s hands and some go over – to create layers.”

Additional audio is more or less a part of each of these AR books. However, in its own right audio augmentation can be a powerful tool, as artist Yuri Suzuki has demonstrated. His Barcode Book uses a simple scanner to read barcodes incorporated into the book’s artwork, triggering a related audio playback. In his work with Oscar Diaz, REC&PLAY, he uses old cassette technology to add sound to the very ink on the page. The recording pen, complete with microphone and cassette write head, lays down a layer of ferromagnetic ink – made from heat-treated ferrous oxide, the same material used in cassettes – all the while recording your voice onto the page. The message can then be read out by a second pen with a cassette read head, and a speaker at the other end.
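
The barcode idea is simple enough to sketch in a few lines: most USB barcode scanners behave like keyboards, typing the code followed by Enter, so a lookup table and an audio player are all that is needed. The codes, file names and choice of player below are assumptions for illustration, not details of Suzuki’s piece.

    import subprocess

    # Bare-bones barcode-to-audio lookup; the codes, files and 'aplay' player are assumed.
    SOUNDS = {
        "9780000000017": "page1_ambience.wav",
        "9780000000024": "page2_narration.wav",
    }

    while True:
        code = input("Scan a barcode (blank line to quit): ").strip()
        if not code:
            break
        clip = SOUNDS.get(code)
        if clip:
            subprocess.run(["aplay", clip])   # swap in afplay or another player per platform
        else:
            print("No audio linked to", code)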

Limiting augmentation to audio overcomes one of the bigger problems faced by visual AR book designers – that of reliance on a computer and screen. Scherrer has gone to some lengths to create as seamless a user interface as possible with Souvenirs, notably by hiding the webcam in a lamp by the light of which the book is read. Scherrer’s method perhaps achieves integration most successfully, but the ultimate AR experience is far from being realised. As Scherrer says, “I would like to make some projection onto the book – the screen for me is a barrier”.

The books available at present do indeed have an added “wow factor” that must not be underestimated, especially when it is used to good effect to enhance the book’s narrative, but in a set-up where the book and augmentation appear on a screen there is a fundamental, jarring discontinuity that detracts from the magic of the experience. Perhaps this will become less important as we become more used to mainstream AR – books or otherwise. At present, AR books and humanity are both still evolving, and in the future the twain shall successfully meet.

Source | New Scientist