Archive for September, 2010

Five Ways Machines Could Fix Themselves

Wednesday, September 29th, 2010


As I see cooling fans die and chips fry… as I see half the machines in a laundry room decay into despondent malfunctioning relics… as my car invents new threats every day along the theme of catastrophic failure, and as I hear the horrific clunk of a “smart” phone diving into the sidewalk with a wonderful chance of breakage, I wonder why we put up with it.  And why can’t this junk fix itself?

Design guru and psychologist Donald A. Norman has pointed out how most modern machines hide their internal workings from users.  Any natural indicators, such as mechanical sounds, and certainly the view of mechanical parts, are muffled and covered.  As much machinery as possible has been replaced by electronics that are silent except for the sound of fans whirring.  And electronics are even more mysterious to most users than mechanical systems.

Our interfaces with machines are primarily composed of various kinds of transducers (like buttons), LEDs (those little glowing lights), and display screens.  We are, at the very least, one — if not a dozen — degrees removed from the implementation model.  As someone who listens to user feedback, I can assure you that a user’s imagining of how a system works is often radically different than how it really works.

Yet with all this hiding away of the dirty reality of machinery, we have not had a proportional increase in machine self-support.

Argument: Software, in some cases, does fix itself.  Specifically, I am thinking of automatic or pushed software updates.  And because that software runs on a box, it is also fixing a machine.  For instance, console game platforms like the Xbox 360 and PlayStation 3 receive numerous updates for bug fixes, enhancements, and game-specific content.  Likewise, with some manual effort from the user, smartphones and even cars can have their firmware updated to get bug fixes and new features (or third-party hacks).

Counterargument: Most machines don’t update their software anywhere close to “automatically.”  And none of those software updates actually fix physical problems.  Software updates also require a minimal subset of the system to be operational, which isn’t always the case.  The famous Red Ring of Death on early Xbox 360 units could not be fixed except by replacing hardware.  You might be able to flash your car’s engine control unit with new software, but that won’t fix mechanical parts that are already broken.  And so on.

Another argument: Many programs and machines can “fail gracefully.”  This phrase comforts a user about as much as the phrase “controlled flight into terrain” comforts the passenger of an airplane.  Still, it’s certainly the minimum bar that our contraptions should aim for.  For example, if the software fails in your car, it should not default to maximum throttle; preferably it would be able to limp to the nearest garage in case your cell phone is dead.  Another example: I expect my laptop to warn me, and then shut down, if the internal temperature gets too hot, as opposed to igniting the battery into a fireball.
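The pattern behind both examples is the same: on any fault, fall back to a safe default rather than an undefined state.  A minimal sketch in Python — all names and thresholds here are invented for illustration, not any real car or laptop firmware:

```python
# Illustrative sketch of "failing gracefully": on a fault, fall back to a
# safe default instead of an undefined (possibly dangerous) state.
SAFE_THROTTLE = 0.15    # just enough to limp to a garage (hypothetical value)
MAX_SAFE_TEMP_C = 95    # hypothetical laptop shutdown threshold

def throttle_command(requested, sensors_ok):
    """Return the throttle to apply; never default to maximum on failure."""
    if not sensors_ok:
        return SAFE_THROTTLE                  # degraded "limp home" mode
    return min(max(requested, 0.0), 1.0)      # clamp to the legal range

def thermal_policy(temp_c):
    """Warn, then shut down -- as opposed to igniting the battery."""
    if temp_c >= MAX_SAFE_TEMP_C:
        return "shutdown"
    if temp_c >= MAX_SAFE_TEMP_C - 10:
        return "warn"
    return "ok"
```

The point is not the specific numbers but the shape of the logic: every failure path leads to a state that is boring rather than catastrophic.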

The extreme solution to our modern mechatronic woes is to turn everything into software.  If we made our machines out of programmable matter or nanobots that might be possible.  Or we could all move into virtual realities, in which we have hooks for the meta — so a software update would actually update the code and data used to generate the representation of a machine (or any object) in our virtual world.

However, even if those technologies become mature, there won’t necessarily be one that is a monopoly or ubiquitous.  A solution that is closer and could be integrated into current culture would be a drop-in replacement that utilizes existing infrastructures.

Some ideas that come close:

1. The device fixes itself without any external help.  This has the shortcoming that it might be too broken to fix itself — or it might not realize it’s broken.  In some cases, we already have this in the form of redundant systems as used in aircraft, the Segway, etc.

2. Software updating (via the Internet) combined with 3D printing machines. The 3D printers would produce replacement parts.  The printer, of course, needs raw material but that could be as easy as putting paper in a printer.  Perhaps, in the future, that raw printer material will become some kind of basic utility, like water and Internet access.

3. Telepresence combined with built-in repair arms (aka “waldoes”).  Many companies are currently trying to productize office-compatible telepresence robots.  Doctors already use teleoperated robots like the da Vinci system to perform remote, minimally invasive surgery.  Why not operate on machines?  How to embed such a system into a room and/or within a machine is another — quite major — problem.  Fortunately, with the miniaturization of electronics, there might be room for new repair devices embedded in some products.  And certainly not all products need general-purpose manipulator arms.  They could be machine-specific devices, designed to repair the highest-probability failures.

4. Autonomous telepresence combined with built-in repair arms. A remote server connects to the local machine via the Internet using the built-in repair arms or device-specific repair mechanism.  However, we also might need an automatic meta-repair mechanism.  In other words, the fixer itself might break, or the remote server might crash.  Now we enter endless recursions.  However, this need not go on infinitely.  It’s just a matter of having enough self-repair capacity to achieve some threshold of reliability.

5. Nothing is ever repaired, just installed.  A FedEx robot appears within fifteen minutes with a replacement device and for an extra fee will set it up for you.
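The recursion worry in idea #4 can be made concrete with a back-of-the-envelope calculation (the numbers below are invented for illustration): if a fault would go unfixed with probability f, and each additional layer of self-repair independently catches a fault with probability r, the residual failure rate after n layers is f·(1−r)^n.  A couple of layers is usually enough to cross any reasonable reliability threshold:

```python
# Back-of-the-envelope: how many layers of self-repair are "enough"?
# Assumes each repair layer independently fixes a fault with probability r.
def residual_failure(f, r, layers):
    """Probability that a fault survives every repair layer."""
    return f * (1 - r) ** layers

f = 0.10   # 10% of faults would otherwise go unfixed (illustrative)
r = 0.90   # each repair layer catches 90% of what reaches it (illustrative)
for n in range(4):
    print(n, residual_failure(f, r, n))
# With these numbers, residual failure drops by 10x per layer
```

So the recursion need not go on infinitely: each layer multiplies the failure rate down, and you stop adding layers once the residual rate is below your threshold.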

Source | H+ Magazine 

Google’s Schmidt: Computers ‘Augmenting’ Humanity

Wednesday, September 29th, 2010

SAN FRANCISCO – More and more, computers will serve to “augment humanity” by filtering and directing relevant information to users, Google chief executive Eric Schmidt said Tuesday.

In a speech at the TechCrunch Disrupt conference here, Schmidt reiterated several points that he made last Friday night in an appearance on the “Charlie Rose” talk show: that computers can assist humans, and that cloud computing is “the magic” that allows mobile devices to perform much more powerfully than normal.

“One way to think about this is we’re trying to make people better people, literally give them better ideas, [and] augmenting their experience,” he told Rose. “Think of it as augmented humanity – think of it as trying to get the computers to help us at the things we’re not very good at and have us help the computers do the things they’re not very good at. Computers, of course, remember everything, and so now it’s so overwhelming you need a search engine to keep track.”

On Tuesday, Schmidt called the smartphone the “defining iconic device of its time,” noting that the expected leap to LTE, a next-generation cellular standard that will allow multi-megabit throughput, will allow more data to be passed back and forth.

Schmidt also said that search traffic tripled during the first half of 2010, and he highlighted Google Goggles and Google Translate as two services that can use the smartphone as a sensor, passing information up to services hosted in the cloud.

“To me, this is just the stuff of science fiction,” Schmidt said Tuesday.

Schmidt didn’t announce any new products or initiatives during his speech, and declined to comment on “Google Me,” the name of Google’s “rumored” social service, in Schmidt’s words.

In his interview with Rose, Schmidt said the company is “building social information into all of our products. So it won’t be a social network the way people think of Facebook, but rather social information about who your friends are, people you interact with and we have various ways we’ll be collecting that information.”

“It’s the very early stages of our social strategy,” said Bradley Horowitz, the vice president of product at Google, in a subsequent panel.

Schmidt did say that Google continues to work on an “infrastructure for health questions where you [users] give us health information,” which might have been a reference to the Google Health project that Google launched in 2008. Between 3 and 5 percent of Google’s queries are health-related questions, and Google brought in a team of doctors to prioritize its search responses. The need to integrate with XML-based legacy hospital IT systems is “just torturous,” Schmidt said.

As he did with Rose, Schmidt reiterated that Google can provide important context and recommendations, but only if customers sign in and agree to provide information. They can always opt out, he said. He also claimed that Google champions the “openness of the Web,” while Apple promotes “closedness.”

Source | PC Magazine

Nanostructuring Technology Creates Energy Efficient and Ultra-Small Displays

Wednesday, September 29th, 2010


University of Michigan scientists have created the smallest pixels available that will enable LED, projected, and wearable displays to be more energy-efficient with more light manipulation possible, all on a display that may eventually be as small as a postage stamp.

This latest nanostructuring technology for the Air Force includes a new color filter made of nano-thin metal-dielectric-metal stacks, which have precisely shaped slits that act as resonators. They trap and transmit light and transform the pixels into effective color-filtering elements.

The pixels created from this technology are ten times smaller than those now on a computer monitor and eight times smaller than those on a smartphone. They use existing light more effectively and make it unnecessary to use polarizing layers for liquid crystal displays (LCDs). They also enable LED backlighting to be used more efficiently.

Prior to this technology, LCDs had two polarizing layers, a color filter sheet, two layers of electrode-laced glass and a liquid crystal layer, but only about five percent of the backlighting reached the viewer.

The research, funded by the Air Force Office of Scientific Research (AFOSR), was led by Dr. Jay Guo, associate professor in the Department of Electrical Engineering and Computer Science at the University of Michigan, with his graduate student researchers Ting Xu and Yi-Kuei Wu and collaborator Dr. Xiangang Luo.

The research exploits nanophotonic devices using plasmonic structures. “Most of the applications of the new technology suffer from the absorption loss by the presence of metal structure that is an integral part of the plasmonic devices,” said Guo.

However, the loss in the structure can be managed to produce useful devices, and the Air Force is considering using the technology as part of virtual displays integrated into pilots’ windshields.

The scientists expect to begin making the next generation of color filters using nanoimprint lithography in the near future.

“We hope to show that the fabrication of these structures can be scaled up to large areas and can be very cost effective,” said Guo.

According to Dr. Gernot Pomrenke, the AFOSR program manager overseeing Guo’s research, many defense and aerospace applications require unique imaging techniques and compact systems. He noted that over the last several years plasmonics has become a significant research area to explore new capabilities for such systems.

“This research group has been able to harness light more effectively through their approach and bring benefits to sensing application,” Pomrenke said. “Prof. Guo has also been a leader in nanoimprint lithography to more rapidly create the smaller patterns and structures for more cost-effective system manufacturing and integration.”

Source | Air Force Office of Scientific Research

Raytheon Unveils 2nd Generation Exoskeleton Robotic Suit

Wednesday, September 29th, 2010

Raytheon has unveiled its second-generation exoskeleton (XOS 2), essentially a wearable robotics suit. XOS 2 is lighter, stronger and faster than its predecessor, yet it uses 50 percent less power, and its new design makes it more resistant to the environment.

The wearable robotics suit is being designed to help with the many logistics challenges faced by the military both in and out of theater. Repetitive heavy lifting can lead to injuries, orthopedic injuries in particular. The XOS 2 does the lifting for its operator, reducing both strain and exertion.

It also does the work faster. One operator in an exoskeleton suit can do the work of two to three soldiers. Deploying exoskeletons would allow military personnel to be reassigned to more strategic tasks. The suit is built from a combination of structures, sensors, actuators and controllers, and it is powered by high pressure hydraulics.

“With the popularity of the Iron Man movies, people wonder if I feel like Iron Man when I suit up,” said Rex Jameson, Raytheon Sarcos Test Engineer for XOS 2. “I usually tell them that I can’t speak for Tony Stark, but when I’m in the suit I feel like me, only a faster, stronger version of me.”

Representatives from Paramount Home Entertainment, including the actor Clark Gregg (aka Agent Phil Coulson of the Marvel movie franchise), were in attendance to capture footage of XOS 2 for a video being produced to support the release of Iron Man 2 on DVD and Blu-ray.





Source | Raytheon News

Revolutionary horizontal space launcher proposed by NASA

Wednesday, September 15th, 2010

NASA is considering a revolutionary new horizontal rail launcher concept.

An early proposal calls for a wedge-shaped aircraft with scramjets to be launched horizontally on an electrified (magnetic levitation, or maglev) track or gas-powered sled. The aircraft would fly up to Mach 10, using the scramjets and wings to lift it to the upper reaches of the atmosphere, where a small payload canister or capsule similar to a rocket’s second stage would fire off the back of the aircraft and into orbit.

Engineers also contend the system, with its advanced technologies, will benefit the nation’s high-tech industry by perfecting technologies that would make more efficient commuter rail systems, better batteries for cars and trucks, and numerous other spinoffs.

NASA’s Stan Starr, branch chief of the Applied Physics Laboratory at Kennedy, points out that nothing in the design calls for brand-new technology to be developed. However, the system counts on a number of existing technologies to be pushed forward. “All of these are technology components that have already been developed or studied,” Starr said. “We’re just proposing to mature these technologies to a useful level, well past the level they’ve already been taken.”

For example, electric tracks catapult rollercoaster riders daily at theme parks. But those tracks call for speeds of a relatively modest 60 mph — enough to thrill riders, but not nearly fast enough to launch something into space. The launcher would need to reach at least 10 times that speed over the course of two miles in Starr’s proposal.
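Those figures imply a surprisingly gentle ride. Assuming constant acceleration from rest to roughly 600 mph over two miles (the assumption is ours; the article gives only the endpoints), the kinematic relation v² = 2ad gives about 11 m/s², a bit over 1 g:

```python
# Sanity check on the proposed track: reach ~600 mph over ~2 miles.
# Assumes constant acceleration from rest, so v^2 = 2 * a * d.
MPH_TO_MS = 0.44704
MILE_TO_M = 1609.344

v = 600 * MPH_TO_MS          # ~268 m/s at the end of the track
d = 2 * MILE_TO_M            # ~3219 m of track
a = v ** 2 / (2 * d)         # required constant acceleration
print(round(a, 1), "m/s^2")  # ~11.2 m/s^2
print(round(a / 9.81, 2), "g")  # ~1.14 g
```

At just over 1 g, the launch run itself would be survivable for crew, let alone the drones proposed for the early flights.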

The studies and development program could also be used as a basis for a commercial launch program if a company decides to take advantage of the basic research NASA performs along the way. Starr said NASA’s fundamental research has long spurred aerospace industry advancement, a trend that the advanced space launch system could continue.

For now, the team proposed a 10-year plan that would start with launching a drone like those the Air Force uses. More advanced models would follow until they are ready to build one that can launch a small satellite into orbit.

Early designs envision a 2-mile-long track at Kennedy Space Center.

“It would be far better and more efficient to place the mag-lev track at much higher altitude and run it through a vacuum tunnel inside a mountain to eliminate air drag,” Dr. Eric W. Davis, Senior Research Physicist at the Institute for Advanced Studies at Austin, told KurzweilAI.

“Launching from higher altitude equals far less fuel to be carried by the second stage booster that rockets the hypersonic space plane into orbit.  You could probably drop 20% to 30% of the fuel requirement.” Davis is co-author of Frontiers of Propulsion Science, published by the American Institute of Aeronautics and Astronautics.

Source | NASA News

Emotiv EPOC EEG Headset Hacked

Wednesday, September 15th, 2010

An Interview with Cody Brocious

Cody Brocious has created the Emokit project, an open source library for reading data directly from the Emotiv EPOC EEG headset.  The Emotiv headset is a consumer EEG headset.  In plain terms, it’s a brain-computer interface.  When you buy an Emotiv headset, you are told to use only Emotiv software with the device.  Now, Emokit shakes up the status quo.

H+: So, why did you get into Emotiv hardware in the first place?

CODY BROCIOUS: A consumer-grade EEG headset is a game changer.  In a lot of ways, the Emotiv EPOC is really novel.  We have projects like OpenEEG where people can build these, but you have to invest $750 or more just to get something functional.  At that point, there’s not much you can do unless you are into research.  The average consumer isn’t going to be processing raw brain data.  Since the Emotiv is at the right price point, what it does — and this is what’s big about it — makes it so that consumers can pick it up and start using cool apps that developers make up.  That’s quite new in the brain-computer interface space.  And in an open development environment, there are some really cool apps — some that we can’t imagine — and I think this is going to lead to something that we’ve never seen before.  We’ve never had access to this equipment at the consumer price point before, and now with Emokit and OpenViBE, there are a lot of possibilities for apps, from controlling the music on the Apple iPod in your pocket to robotics research.  We’ve never really had this before.  You can even imagine this starting to work with smartphones like the Apple iPhone or Android phones.

H+:  How does Emokit work?

CB: Emokit — there’s not much to it.  The library itself is dirt simple.  Emokit proper — the actual library — talks to the EEG device using pywinusb or python-hid, depending on whether you are on Windows, OS X or Linux.  Physically, the setup involves the Emotiv device, which transmits data over Bluetooth, and the Bluetooth dongle.  Emokit gets a connection to the Bluetooth-enabled device using a standard HID interface, and once it has a connection, it gets 32-byte reports from the device that are encrypted.  It decrypts them using standard AES, and once they’re decrypted, it parses out the gyro data, a counter that seems to be for timing, and the actual sensor data.  Then it just sends everything to a queue that you can read from whatever you want, like a rendering interface.  At the moment I am rendering with pygame, which beginning Python programmers use to write video games.
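The pipeline Brocious describes — receive an encrypted 32-byte HID report, decrypt, parse, enqueue — can be sketched roughly as follows.  The field offsets and function names here are invented for illustration (the real layout lives in the Emokit source), and the AES step is stubbed out since key handling is device-specific:

```python
# Rough sketch of the Emokit pipeline described above: encrypted 32-byte
# HID reports are decrypted, parsed into counter/gyro/sensor fields, and
# pushed onto a queue for consumers (e.g. a pygame renderer).
# All field offsets are hypothetical, chosen only to illustrate the shape.
from queue import Queue

REPORT_SIZE = 32

def decrypt_report(report, key=None):
    """Stand-in for the AES decrypt step (real key handling is device-specific)."""
    return bytes(report)  # pretend the report is already decrypted

def parse_report(decrypted):
    """Split a decrypted 32-byte report into its (hypothetical) fields."""
    assert len(decrypted) == REPORT_SIZE
    return {
        "counter": decrypted[0],                 # seems to be for timing
        "gyro": (decrypted[29], decrypted[30]),  # hypothetical offsets
        "sensors": decrypted[1:29],              # raw electrode samples
    }

def pump(raw_reports, out_queue):
    """Decrypt, parse, and enqueue each report, as Emokit's reader loop does."""
    for report in raw_reports:
        out_queue.put(parse_report(decrypt_report(report)))

q = Queue()
pump([bytes(range(32))], q)   # feed one synthetic report through the pipeline
frame = q.get()
print(frame["counter"], frame["gyro"])
```

The queue at the end is what decouples the reader loop from whatever consumes the data, which is why Brocious can swap in a pygame renderer or anything else.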

H+: How would people use it?

CB:  You instantiate the Emotiv object from Emokit, set the ID for the headset, then there’s a function that gives you a generator.  You iterate over the generator to pop off data.
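That pattern — instantiate, then iterate a generator to pop off data — might look something like this.  The class and method names are paraphrased from the interview, not guaranteed to match the actual Emokit API, and the device is mocked:

```python
# Mock of the usage pattern Brocious describes: instantiate the headset
# object with an ID, then iterate a generator to pop off data frames.
# Names are paraphrased from the interview, not the real Emokit API.
class Emotiv:
    def __init__(self, headset_id):
        self.headset_id = headset_id

    def frames(self):
        """Generator yielding data frames as they arrive (mocked here)."""
        for counter in range(3):          # a real headset streams indefinitely
            yield {"counter": counter, "sensors": [0] * 14}

headset = Emotiv(headset_id="SN-0001")    # hypothetical ID format
for frame in headset.frames():
    print(frame["counter"])
```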

H+: That sounds simple.

CB: Yeah.

H+: What has Emotiv’s response been so far?

CB: I posted the announcement copy on their forums.  They took it down within two hours, give or take a few minutes.  There have been a lot of hits.  The thread on the Emotiv forum had 30 views by the time it was taken down.  And then it was deleted.  I figured these people most likely read other news sites and would hear about it that way, so it’s not a big deal.  Overall, their forum moderator’s response is fairly typical.

H+: How do you think Emotiv will respond?

CB:  As for how Emotiv is going to respond to any of this, there are a few different possibilities.  They will probably just ignore Emokit.  There is the possibility that they decide that fighting it in any way (like by ignoring it, which is a passive method of fighting) is not the way to go.  As a result, they might relax the reins on their software, or they might even put themselves behind this in some way, perhaps by linking to it or by getting involved in development.  I find that scenario doubtful.  There’s also the possibility that they will try to litigate or strong-arm me into taking it down, which won’t happen.  They may well try, but it won’t be going down.  They can do whatever they feel like, of course.

It’s clear that Emotiv is a company in the consumer brain-computer interface market.  Basically, here’s what I would tell Emotiv.  From a technical standpoint, the combination of Emokit and the OpenViBE toolkit is pretty ideal.  OpenViBE is well known.  It’s well respected and it’s quite established.  From a business standpoint, Emotiv is spending a lot of money on software and they are limiting themselves as to who is going to buy it right now with their price point.  There’s no way that open source developers will spend $750 for a toolkit that they can’t talk about.  The lower price point is far better from a business standpoint because a lot more people are going to buy it if people can hack it and play around with it.  A lot of people have been telling me that Emokit is exactly what they wanted, and they are now going to go buy an Emotiv.  They’ve been holding out on buying Emotiv units.

Overall, the entire Emotiv EPOC system is very basic.  Another company can come along and replace Emotiv pretty easily.  They are missing out on a huge opportunity here, and time and again we see companies locking stuff down instead of harnessing developers.  People like us are pushing this out to people, saying, “Hey go buy this, it’s cool, look at what you can do with it.”  And the companies are missing out entirely on all of the benefits of social media and collaboration.  Their current business plan is poor, to put it nicely.  I think they are totally missing out.

H+: Who should buy an Emotiv?

CB:  That’s a good question.  At this point, if the Emokit project goes the way I want it to in the next week or two — then anyone who is interested in trying something new, interested in experimenting with something new, should pick it up.  This includes budding transhumanists.  Once the OpenViBE stuff is in place, it’s going to be wide open for hacking, by anyone.

H+: Who would be interested in using Emokit?

CB: Emokit is really simple.  The target programmer is just about the average programmer.  Your average programmer can understand the internals of Emokit if they wanted to.  I think that people are going to use the OpenViBE module to talk to the Emotiv, and once that module is in place, you can use the drag-and-drop GUI to do things like raising and dropping a TIE-fighter.  So that’s cool.  The whole point of using OpenViBE though is so that you can pipe the Emotiv into any other software trivially, like games.  Basically it’s like adding in new keyboard shortcuts or hotkeys.  Once you have the Emotiv module for OpenViBE, there’s a ton that you can do with drag-and-drop programming and tying it into other code using a simple protocol.  I bet that most people are going to use it that way.

Source | H+ Magazine

‘Solar funnel’ concentrates solar energy 100 times

Wednesday, September 15th, 2010

Using carbon nanotubes, MIT chemical engineers have found a way to concentrate solar energy to 100 times the intensity found in a regular photovoltaic cell. Such nanotubes could form antennas that capture and focus light energy, potentially allowing much smaller and more powerful solar arrays.

Solar cells are usually grouped in large arrays, often on rooftops, because each cell can generate only a limited amount of power. However, not every building has enough space for a huge expanse of solar panels.

“Instead of having your whole roof be a photovoltaic cell, you could have little spots that were tiny photovoltaic cells, with antennas that would drive photons into them,” says Michael Strano, the Charles and Hilda Roddey Associate Professor of Chemical Engineering and leader of the research team.

Strano and his students describe their new carbon nanotube antenna, or “solar funnel,” in the Sept. 12 online edition of the journal Nature Materials. Lead authors of the paper are postdoctoral associate Jae-Hee Han and graduate student Geraldine Paulus.

Their new antennas might also be useful for any other application that requires light to be concentrated, such as night-vision goggles or telescopes. The work was funded by a National Science Foundation Career Award, a Sloan Fellowship, the MIT-Dupont Alliance and the Korea Research Foundation.

How the nanotube antenna works

Solar panels generate electricity by converting photons (packets of light energy) into an electric current. Strano’s nanotube antenna boosts the number of photons that can be captured and transforms the light into energy that can be funneled into a solar cell.

The antenna consists of a fibrous rope about 10 micrometers (millionths of a meter) long and four micrometers thick, containing about 30 million carbon nanotubes. Strano’s team built, for the first time, a fiber made of two layers of nanotubes with different electrical properties — specifically, different bandgaps.

In any material, electrons can exist at different energy levels. When a photon strikes the surface, it excites an electron to a higher energy level, which is specific to the material. The interaction between the energized electron and the hole it leaves behind is called an exciton, and the difference in energy levels between the hole and the electron is known as the bandgap.

The inner layer of the antenna contains nanotubes with a small bandgap, and nanotubes in the outer layer have a higher bandgap. That’s important because excitons like to flow from high to low energy. In this case, that means the excitons in the outer layer flow to the inner layer, where they can exist in a lower (but still excited) energy state.

Therefore, when light energy strikes the material, all of the excitons flow to the center of the fiber, where they are concentrated. Strano and his team have not yet built a photovoltaic device using the antenna, but they plan to. In such a device, the antenna would concentrate photons before the photovoltaic cell converts them to an electrical current. This could be done by constructing the antenna around a core of semiconducting material.

The interface between the semiconductor and the nanotubes would separate the electron from the hole, with electrons being collected at one electrode touching the inner semiconductor, and holes collected at an electrode touching the nanotubes. This system would then generate electric current. The efficiency of such a solar cell would depend on the materials used for the electrode, according to the researchers.

Strano’s team is the first to construct nanotube fibers in which they can control the properties of different layers, an achievement made possible by recent advances in separating nanotubes with different properties. “It shows how far the field has really come over the last decade,” says Michael Arnold, professor of materials science and engineering at the University of Wisconsin at Madison.

Solar cells that incorporate carbon nanotubes could become a good lower-cost alternative to traditional silicon solar cells, says Arnold. “What needs to be shown next is whether the excitons in the inner shell can be harvested and converted to electrical energy,” he says.

While the cost of carbon nanotubes was once prohibitive, it has been coming down in recent years as chemical companies build up their manufacturing capacity. “At some point in the near future, carbon nanotubes will likely be sold for pennies per pound, as polymers are sold,” says Strano. “With this cost, the addition to a solar cell might be negligible compared to the fabrication and raw material cost of the cell itself, just as coatings and polymer components are small parts of the cost of a photovoltaic cell.”

Strano’s team is now working on ways to minimize the energy lost as excitons flow through the fiber, and on ways to generate more than one exciton per photon. The nanotube bundles described in the Nature Materials paper lose about 13 percent of the energy they absorb, but the team is working on new antennas that would lose only 1 percent.

Source | MIT News