Archive for June, 2011
It seems the sci-fi industry has done it again. Predictions made back in the 1980s in works like Johnny Mnemonic and Neuromancer, of neural implants linking our brains to machines, have become a reality.
Back then it seemed unthinkable that we’d ever have megabytes stashed in our brains, as Keanu Reeves’ character did in the movie Johnny Mnemonic, based on William Gibson’s short story. Or that The Matrix character Neo could have martial arts abilities uploaded to his brain, making famous the line, “I know Kung Fu.” (Why Keanu Reeves became the poster boy of sci-fi movies, I’ll never know.) But today we have macaque monkeys that can control a robotic arm with thoughts alone. We have paraplegics who can control computer cursors and wheelchairs with their brain waves. Of course, this is all about the brain controlling a device. But what about the other direction, where a device might amplify the brain? While the cochlear implant may be the best-known device of this sort, scientists have been working on brain implants with the goal of enhancing memory. A breakthrough of this sort could lead to a neural prosthesis to help stroke victims or those with Alzheimer’s. Or, at the extreme, think uploading Kung Fu talent into our brains.
Decades-long work led by Theodore Berger at the University of Southern California, in collaboration with teams from Wake Forest University, has provided a big step in the direction of artificial working memory. The study was finally published today in the Journal of Neural Engineering. A microchip implanted into a rat’s brain can take on the role of the hippocampus—the area responsible for forming long-term memories—encoding memory brain wave patterns and then sending that same electrical pattern of signals through the brain. Back in 2008, Berger told Scientific American that if the brain patterns for the sentence “See Spot Run,” or even an entire book, could be deciphered, then we might make uploading instructions to the brain a reality. “The kinds of examples [the U.S. Department of Defense] likes to typically use are coded information for flying an F-15,” Berger is quoted in the article as saying.
In the current study the scientists had rats learn a task: pressing one of two levers to receive a sip of water. The scientists inserted a microchip into each rat’s brain, with wires threaded into the hippocampus. There the chip recorded electrical patterns from two specific areas, labeled CA1 and CA3, that work together to learn and store the new information of which lever to press to get water. The scientists then shut down CA1 with a drug, built an artificial hippocampal component that could duplicate the electrical patterns passing between CA3 and CA1, and inserted it into the rat’s brain. With this artificial part, rats whose CA1 had been pharmacologically blocked could still encode long-term memories. And in rats with a normally functioning CA1, the implant extended the length of time a memory could be held.
The next step is to test the device in monkeys, and then in humans. Of course, at this early stage a breakthrough like this raises more questions than it answers. Memory is hugely complex, based on our individual experiences and perceptions. If we had the electrical pattern for the phrase “See Spot Run,” mentioned above, would it mean the same thing for you as it does for me? How would such a device work within context? As writer Gary Stix asked in the Scientific American article, would “See Spot Run” be misinterpreted as a laundry mishap instead of a trotting dog? Or, as the science journalist John Horgan once put it, you might hear your wedding song, but I hear a stale pop tune.
We are all provided with the same structural blueprint for our brains, but each brain’s circuitry is built from experience and genetics, a tapestry unique to each of us. Many scientists feel we’ll never be able to fully crack and decode that circuitry, let alone insert an experiential memory into it.
Source | Smart Planet
Mental imagery is related to our perception of the external world, according to a new study of how the brain processes images.
Joel Pearson of the University of New South Wales and colleagues asked participants to imagine a green circle with vertical lines or a red circle with horizontal lines, and rate how vivid the mental image was and how difficult it was to conjure. They then presented the subjects with a binocular rivalry display, where the left and right eyes each see a different pattern, and asked them to report which pattern their brain settled on as dominant.
The researchers found that the pattern that participants reported as being most vivid when imagined was the same as the one that dominated in binocular rivalry. They suggest that this supports the idea that internal mental images are closely related to how brains perceive the external world.
Source | Kurzweilai
Todd McDevitt at the Georgia Institute of Technology and colleagues have found that adding biomaterials such as gelatin into clumps of stem cells (called “embryoid bodies”) affected stem-cell differentiation without harming the cells.
By incorporating magnetic particles into the biomaterials, they could control the locations of the embryoid bodies and how they assemble with one another.
Compared to typical delivery methods, providing differentiation factors — retinoic acid, bone morphogenetic protein 4 (BMP4) and vascular endothelial growth factor (VEGF) — via microparticles induced changes in the gene and protein expression patterns of the aggregates.
In the future, these new methods could be used to develop manufacturing procedures for producing large quantities of stem cells for diagnostic and therapeutic applications.
Source | Kurzweilai
Theodore Berger and his team at the USC Viterbi School of Engineering’s Department of Biomedical Engineering have developed a neural prosthesis for rats that is able to restore their ability to form long-term memories after they had been pharmacologically blocked.
In a dramatic demonstration, Berger blocked the rats’ ability to form long-term memories by using pharmacological agents to disrupt the neural circuitry that communicates between two subregions of the hippocampus, CA1 and CA3, which prior research has shown interact to create long-term memory.
The rats were unable to remember which lever to pull to gain a reward, or could only remember for 5–10 seconds, when previously they could remember for a long period of time.
The researchers then developed an artificial hippocampal system that could duplicate the pattern of CA3-CA1 interactions. Long-term memory capability returned to the pharmacologically blocked rats when the team activated the electronic device programmed to duplicate the memory-encoding function.
The researchers went on to show that if a prosthetic device and its associated electrodes were implanted in animals with a normal, functioning hippocampus, the device could actually strengthen the memory being generated internally in the brain and enhance the memory capability of normal rats.
“These integrated experimental modeling studies show for the first time that with sufficient information about the neural coding of memories, a neural prosthesis capable of real-time identification and manipulation of the encoding process can restore and even enhance cognitive mnemonic processes,” says the paper.
Next steps, according to Berger and Deadwyler, will be attempts to duplicate the rat results in primates (monkeys), with the aim of eventually creating prostheses that might help human victims of Alzheimer’s disease, stroke, or injury recover function.
Source | Kurzweilai
Computers that run on chips made from tiny magnets may be as energy-efficient as physics permits.
According to new calculations, if nanomagnetic computers used any less energy, they’d break the second law of thermodynamics. Such computers are still semi-theoretical, but they could someday be used in the deep oceans or even deep space, where energy is at a premium.
If nothing else, nanomagnetic laptops wouldn’t overheat.
“They’re actually maximally efficient, in the sense that they use up only the energy that is theoretically required to carry out a computation,” said electrical engineer Brian Lambson of the University of California at Berkeley. The results will be published in Physical Review Letters.
Conventional computers process information by shuttling electrons around circuits. But though electrons have minuscule mass, it takes a surprising amount of energy to move them. Even the most advanced computers use far more energy than they theoretically need.
That theoretical energy limit was set by IBM physicist Rolf Landauer, who argued in 1961 that altering a single bit of information will always produce a tiny amount of heat. No matter how the computer is built, Landauer claimed, no change can occur without an accompanying transfer of energy. Most computers devour up to a million times more energy than this “Landauer limit” every time they do a calculation.
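Landauer’s bound works out to k·T·ln 2 joules per erased bit, and a quick back-of-the-envelope calculation (assuming room temperature of roughly 300 K, my own illustrative choice) shows just how tiny that is next to what conventional chips dissipate:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, in joules per kelvin
T = 300.0           # assumed room temperature, in kelvin

# Landauer limit: minimum heat dissipated to erase one bit of information
landauer_limit = K_B * T * math.log(2)   # roughly 2.87e-21 J

# The article notes most computers use up to a million times more than this
conventional_per_bit = 1e6 * landauer_limit  # still only ~2.9e-15 J per bit

print(f"Landauer limit at 300 K: {landauer_limit:.2e} J per bit")
```

Even a million times the limit is a few femtojoules, which is why the waste only becomes pressing as devices shrink and bit counts explode.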
Nanomagnetic chips are made from material similar to refrigerator magnets, etched with rows of rectangles. Each rectangle measures about 100 nanometers on a side and has magnetic poles. Information is stored in how they point: One configuration is 1, the other is 0. Because the magnets are so small, they can be packed close enough for their magnetic fields to interact. Information passes without any physical changes to the chip.
“Magnetic systems are unique in that they have no moving parts,” Lambson said. “Moving parts are really what dissipate a lot of energy in physical systems, whether it’s moving electrons or physical material.”
Nanomagnetic chip design is still in its infancy, far from optimally efficient. But to see how little energy the chips might consume, Lambson’s team estimated how magnetic fields would change during computation, then calculated the energy required to make those changes.
The results were close to Landauer’s limit. “We were surprised to see that they were almost exactly the same,” Lambson said.
Using magnets to build ultra-efficient computers is a powerful idea, said nanomagnetic logic pioneer Wolfgang Porod of the University of Notre Dame, who was not involved in the new work.
The Landauer limit, however, may not actually represent the limits of efficiency. “These arguments still are somewhat controversial,” Porod said. “The argument used to be more academic. But with devices getting smaller and smaller, these arguments are hitting closer to home.”
Source | Wired Science
Nobel laureate Barry Marshall plans to become the first Australian to post his own full genetic code, or genome, on the internet, even though it reveals unsettling insights.
His nearly completed six-billion-piece code shows that his lifetime risk of macular degeneration is nearly three times the average, and that his risk of testicular cancer and of Alzheimer’s disease is double.
“If I develop Alzheimer’s disease, that’s bad luck, but it’s not going to worry me,” says Professor Marshall.
The power of the genome to reveal each individual’s biological strengths and weaknesses will guide diagnosis and identify effective drugs for individual patients in a revolution about to sweep world medicine, he says.
“It is not going to be long before every Australian will be carrying their genome on a smart card.
“This is going to be an enormous and unprecedented help to their health,” says the doctor, who swallowed a laboratory culture to prove that bacteria caused stomach ulcers.
It was an idea that confounded the medical orthodoxy but ultimately won him and Dr Robin Warren the Nobel prize.
At the National Press Club yesterday, Professor Marshall predicted that in a decade we would have our genome on our smart phones and be able to routinely gain access to those of prospective boyfriends or girlfriends.
People would get used to the swings and roundabouts of knowing their genetic make-up as the benefits to their health became clear and treatment got better-targeted.
He told of his wife’s concern about her own mother’s macular degeneration, which was allayed when a genome scan found she did not have her mother’s gene for the blinding condition.
Treatment of conditions like high cholesterol would continue to improve as doctors took advantage of a routinely updated understanding of genetic influences.
“Australians currently seem too paranoid to truly embrace genomics. Yet there will soon be thousands of human genomes publicly available,” he says, pointing to the publication of their genomes by gene-map pioneer Dr Craig Venter and South African Archbishop Desmond Tutu.
His comments come as Australian health authorities grapple with how to authorise new drugs that depend on prior genetic testing. He believes the growing demand for personal genomes – already available in preliminary form for as little as $200 – will require a huge increase in the number of experts able to interpret the lengthy sequences of letters that make up human DNA.
Professor Marshall says Australia, like the US, should legislate against discriminatory practices like higher life insurance premiums on the basis of genetic tests.
Ronald Trent, professor of medical molecular genetics at Sydney University, says that any data individuals publish that might be interpreted as having an adverse health risk could potentially be used by life insurance companies, but not health funds, to determine policies.
But Professor Trent said the Australian and US systems were not comparable, given Australian measures like the Disability Discrimination Act, which prohibits employment discrimination on genetic grounds, and the availability of universal health insurance.
Source | The Age
Lasers, the key to optical communications, data storage, and a host of other modern technology, are usually made from inanimate solids, liquids, or gases. Now, a pair of scientists have developed what could be the world’s first biological laser. Built into a single cell, the laser might one day be used for light-based therapeutics, perhaps killing cancer cells deep inside the body.
Invented just over 50 years ago, the laser is essentially a light amplifier. It works by “pumping” atoms or molecules in a gas, liquid, or solid into a more energetic state, usually electrically, chemically, or with another laser. Once pumped, one of the “excited” atoms will eventually decay and emit a photon, and this photon will begin tipping the other atoms from their excited states, releasing a torrent of new photons in the process. These photons amplify their numbers further by bouncing back and forth between two mirrors, one of which is only partially silvered, which lets some of the light out in a characteristically focused beam.
Physicists Malte Gather and Seok-Hyun Yun of Harvard Medical School in Boston have now figured out how to replicate this process in a living cell. “At the beginning of our work, the motivation to look at biolasers was mostly scientific curiosity,” Gather says. “It was the time [last year] when the laser celebrated its 50th anniversary. We realized that although people had looked at many different types of materials for lasers, biological substances had not played a major role.”
The key to Gather and Yun’s biolaser is green fluorescent protein (GFP), a molecule that has proved endlessly useful to biologists since its discovery in the jellyfish Aequorea victoria in the early 1960s, partly because living cells can be so easily programmed to produce it. Gather and Yun did this with cells derived from a human kidney, adding the DNA that codes for GFP. The researchers then placed some of the cells producing GFP between two mirrors just one cell’s width apart.
To lase, the GFP in the cells needed to be pumped with another laser, one that sends pulses of blue light at a low energy of about 1 nanojoule. Normally, blue light would simply make the GFP in the cells fluoresce—that is, emit light randomly in all directions. But inside the tight cavity, the light bounced back and forth, amplifying the emission from the GFP to a coherent green beam, the researchers report online today in Nature Photonics.
Qingdong Zheng, a materials scientist at Johns Hopkins University in Baltimore, Maryland, suggests that such biolasers could find uses in new types of sensors or in light-based therapeutics, in which light is used, for example, to kill cancer cells by triggering already-administered drugs into action. “It’s a nice piece of work,” he says.
Gather and Yun are also interested in the therapeutic possibilities of their device. And although the biolaser is still in its earliest stages of development, they speculate that in the long term it might also help the backbone of optical communications shift from inanimate electronic devices to biotechnology. This, Gather says, would make it easier to develop direct human-to-machine interfaces, in which a brain’s neurons signal their operation with flashes of laser light, to be captured by an exterior device. Such an advance might enable disabled people to use computers without a mouse or keyboard, for example.
But perhaps the most intriguing aspect of the biolaser comes from its intrinsically living nature. In some types of conventional laser, the lasing medium degrades over time until it stops working properly. With biolasers, however, cells can continually make new GFP. “We might be able to make self-healing lasers,” Gather says.
Source | Science Magazine
That virtual yellow first-down line superimposed on an actual football field is one of the more visible examples of a technology that is still not well known. But augmented reality is quickly emerging from obscurity and could soon dramatically reshape how we shop, learn, play and discover what is around us.
In simple terms, augmented reality is a visual layer of information — tied to your location — that appears on top of whatever reality you’re seeing. Augmented reality (AR) apps have been increasingly popping up on smartphones and camera-equipped tablets such as the iPad 2. Versions of AR also work in conjunction with webcams, special eyewear and game consoles such as Microsoft’s Xbox 360 via Kinect or the Nintendo 3DS handheld that went on sale recently.
“Extraordinary possibilities are right around the corner,” says Microsoft computer scientist Jaron Lanier. “We’re closing in on it.” Among the possibilities:
•Pointing your phone at a famous landmark and almost instantly receiving relevant historic or current information about your surroundings.
•Fixing a paper jam in a copy machine by pointing a device at the copier and, directed by the virtual arrows that appear, pressing in sequence the right buttons and levers.
•Visualizing what you’ll look like in a wedding dress without trying it on.
Today, luminaries of the field are gathering at the ARE 2011 (Augmented Reality Event) conference kicking off in Santa Clara, Calif., to discuss AR’s future in e-commerce, mobile, real-time search and storytelling, among other areas.
In one form or another, AR technology dates back at least 30 years, says Ramesh Raskar of the Massachusetts Institute of Technology’s Media Lab, where some of the pioneering work was done. Now, a confluence of ever-improving technologies — cellphone cameras, more powerful processors, graphics chips, touch-screens, compasses, GPS and location-based technologies — is helping drive AR forward. GeoVector, Layar, Metaio, Quest Visual, Shotzoom Software, Viewdle, Total Immersion and even Google Goggles are weighing in with AR-based smartphone browsers or apps.
A recent report from Juniper Research in the United Kingdom found that an increasing number of leading brands, retailers and mobile vendors are investing in mobile augmented reality applications and services. Global revenue is expected to approach $1.5 billion by 2015, up from less than $2 million in 2010. And Juniper found that the installed base of AR-capable smartphones had increased from 8 million in 2009 to more than 100 million in 2010.
Steven Feiner, a professor of computer science at Columbia University, and one of the gurus of the field, says augmented reality can exploit all the senses, including touch and hearing. For example, imagine a virtual character following you around and whispering relevant information in your ear.
Augmented reality already has real-world applications:
Games: For some consumers, their first encounter with AR is likely to be at play. The NBA’s Dallas Mavericks recently teamed with Qualcomm and Big PlayAR on a promotion that turns a ticket into an interactive basketball game when viewed through an Android phone. The game is the first commercial application to take advantage of a mobile augmented reality platform launched recently for Android developers by Qualcomm.
Nintendo 3DS offers an archery game that also takes advantage of AR. Aim the handheld’s camera at an innocuous-looking AR card placed on a coffee table, and watch fire-breathing three-dimensional dragons appear to rise from the surface.
Gaming publisher Ogmento’s Paranormal Activity: Sanctuary is a location-based multiplayer iPhone game that lets you project ghosts and other supernatural effects onto a real world scene.
Shopping: The Swivel Virtual Dressing Room, under development at FaceCake Marketing and headed for retail stores and perhaps your own bedroom closet, promises to let you try on virtual duds and accessories in real time. Swivel was demonstrated recently at the Demo high-tech conference. Among the scenarios CEO Linda Smith talks about: taking consumers virtually from a store floor in Atlanta to the streets of Paris to envision what they’d look like wearing the latest spring dress in front of the Eiffel Tower. A shopper might watch rain bounce off a virtual umbrella.
EBay Classifieds takes the shopping experience in a different direction. It worked through Metaio’s mobile Junaio augmented reality browser to deliver an Android and iPhone experience that lets you point a smartphone at houses along your block and see pop-ups of any items your neighbors have put up for sale.
EBay also has an AR app that lets you try on virtual sunglasses before choosing which, if any, to buy.
EBay Mobile Vice President Steve Yankovich says the goal was to make utility 80% to 90% of the experience, and the wow or gee-whiz factor 10% to 20%. If it were the other way around, he asks, “What is the point?”
Frank Cooper, chief consumer engagement officer for PepsiCo Americas Beverages, concurs: “The most powerful form of AR may not be the flashy examples,” but rather “the ones that serve basic needs of people: information, entertainment, social connections.”
Still, Pepsi has shown off flash. In one early-stage example, the company worked with Rihanna on an augmented reality promotion in which you could hold a webcam in front of a code on a bag of Doritos and project an image of the singer performing a new track. Might there be similar efforts? “That’s one area we’re exploring aggressively,” Cooper says.
Still a learning curve
Still, for all of AR’s promise, its future success is by no means a slam dunk. Some of the early AR apps on smartphones are clumsy to use and unnatural. Eyewear for consumer use hasn’t been perfected. “The optics and display trickery to get the thing right — that’s not easy,” says Microsoft’s Lanier.
“For better or worse, a lot of what has been perceived as mobile AR is gimmicky,” says Jay Wright, director of business development at Qualcomm. “The challenge with AR is to find uses that solve a real problem and enable something fundamentally new, useful or uniquely entertaining.”
Bruno Uzzan, CEO of Total Immersion, the company whose technology is behind the eBay Fashion sunglasses app, says AR stops being a gimmick “when my client says I’m making more sales with AR than without it.” One such client is Hallmark Cards, which produces AR cards that come alive with animations when you hold them up to a webcam.
AR adoption won’t come easily. “In the first case, the hurdle is education — not just for consumers but for brands, developers and services providers,” says Windsor Holden, a U.K.-based analyst for Juniper Research. “There is still a pretty widespread lack of awareness as to what AR is.”
Forrester Research analyst Thomas Husson also says mobile AR is not yet delivering on its promise. But “in the years to come, this will be disruptive technology that changes the way consumers interact with their environment.”
The disruptions are likely to evolve in many different ways. At the MIT Media Lab, Raskar is working on 3-D motion-tracking Second Skin technology, in which tiny sensors and a microcontroller are bound to the body through a lightweight wearable suit and used to augment and teach motor skills. Say you’re learning to dance or to juggle. The system can track your movement and provide tactile feedback that corrects your position as you go.
“Think of Second Skin as your real-time assistant,” Raskar says. “I call it an experience overlay. I’m not playing a TV game where I’m learning how to juggle. I’m doing real juggling.”
Raskar says the technology could cost as little as $1,000 and be on the market within a year. It could have broad reach into health and education; for example, teaching someone to perform surgery.
At Columbia, one of Feiner’s areas of focus is maintenance and repair. “I’d like to be within the task itself. If you had AR with proper (virtual) documentation, you could look at a machine, and it would show you first do this, then do that, with a little bit of extra highlighting to walk you through.”
Gazing further out, Microsoft’s Lanier says he’d like to see the road he’s driving on augmented with signs of where there’ve been accidents and traffic jams. He’d love to be able to walk into a neighborhood and see what it was like back in time — San Francisco during the Gold Rush, say.
Lanier also expects, within 15 years or so, a new futuristic outdoor national sport to materialize with virtual game elements that don’t necessarily resemble any of our current pastimes.
And he predicts that, way out in the future, you’ll be able to experience AR versions of a physical product you might want to buy (a chair, for example). When you find one you like, you’ll make a payment, a machine will chug, materials will somehow be piped in, and the new chair will be in your house.
For now, it seems like a pipe dream, fodder for a Jetsonian age. But consumer product strategists are already paying attention to AR.
As Cooper of PepsiCo warns his peers: “Ignore AR at your own peril.”
Source | USA Today
Until now, mobile augmented reality has been all about smartphones, with the creation of AR content restricted to developers with specific skills. Announcements today from startups Metaio and Layar show how both companies are keen to move beyond this.
Metaio thinks that tablets will become increasingly important devices for AR, describing them as “the perfect enabler for augmented reality” as it published a video showcasing its Junaio AR technology running on slate devices.
Metaio’s bullishness is about more than just the iPad: the company thinks the new wave of tablets running Google’s Android 3.0 operating system – starting with the Motorola Xoom – will create new opportunities for innovative AR applications.
“The extreme light weight, the multiple sensors such as compass or GPS, the large screen and perfectly positioned twin cameras of the new tablets make them fascinating machines,” says Metaio in what’s a cross between a press release and a manifesto.
It also cites dual-core processors as a key factor enabling tablets to be used for AR applications including instructional guides; product information; e-commerce; entertainment and gaming.
“If you want to display for example rich media content triggered by printed material like newspapers or magazines, you need to recognize the object, process the image and render the content into the video stream tightly connected to the original image. By capturing the object on one core and by handling tracking (recognition and initialisation) on the other core, performance and user experience will be so much better.”
Metaio’s view is that AR is “more than a marketing gimmick or hype, it’s actually an interface revolution”. However, there are currently relatively few companies able to take part in this revolution, since creating AR content remains the preserve of developers willing and able to get to grips with the tools.
That’s something Metaio’s rival Layar is hoping to change with its own announcement today of an initiative called Layar Connect: a set of tools, built with the help of external companies, to help more people create content and services around Layar’s AR platform.
“We’re focused on the democratisation of augmented reality and want to make it easier to create and publish AR content for all,” says Maarten Lens-FitzGerald, co-founder and general manager of Layar, in a statement.
“With Layar Connect, we are the first in the industry to move management and publication of AR content to third parties. This creates opportunities for Layar partners to add increased value to their business – a big step in the professionalisation of the AR industry.”
Augmented reality itself is hardly a young technology by web standards, but the buzz around mobile augmented reality is a more recent phenomenon, thanks to the growing popularity of smartphones (and yes, now tablets) with the grunt to handle AR – not to mention the faster connectivity and GPS sensors.
Companies including BuildAR, Poistr, Visar and Poiz – the AR space is thoroughly Web 2.0 in its startup naming conventions – are already using Layar Connect, with more to come.
Metaio’s point about augmented reality being a new interface with many uses rather than a specific type of app is key, though. Layar’s decision to open up the creation of AR content to a wider audience can only reinforce that.
Metaio is building its own network of developers and brands using its own technology. The competition between the two, along with Qualcomm, Google and other companies training their sights on augmented reality, should fuel a host of innovative ideas in the months to come.
Source | The Guardian
The past few weeks have seen two developments that show we’re on the verge of home 3-D printing really breaking out into the mainstream. The first is this: researchers at the Vienna University of Technology have unveiled the smallest 3-D printer produced to date. The prototype can print in a synthetic resin at high resolution — the individual layers are just a twentieth of a millimeter thick. It is the size of a milk carton and costs about 1,200 euros (about $1,750) – very inexpensive for a machine of this type.
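A twentieth of a millimeter per layer means even a small object takes a great many passes; a rough calculation (the object height here is my own illustrative choice, not from the VUT announcement) gives a feel for the numbers:

```python
# Layer thickness as reported for the VUT prototype
layer_thickness_mm = 1 / 20      # 0.05 mm per layer

# Hypothetical print: a 5 cm tall object
object_height_mm = 50

# Number of resin layers the printer would have to cure
layers = round(object_height_mm / layer_thickness_mm)
print(layers)  # 1000
```

A thousand layers for a 5 cm print helps explain why resolution and print speed pull against each other in these machines.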
The second is Autodesk’s release of its new 3-D modeling software, Autodesk 123D. The application is free for Windows machines, and is specifically geared towards the design of 3-D projects “for makers.” If VUT’s prototype 3-D printer, or something like it, ends up taking off, then Autodesk is rather nicely positioned to become popular software for that type of printer. Autodesk is also in the process of working out collaborative deals with other companies, such as Ponoko, so that the software can be used to create personalized products that can then be shipped.
3-D Printing has a very strong potential to revolutionize manufacturing, and cheap printing that’s available in every home opens the door to some rather staggering possibilities, especially for artists and crafters. The next few years are going to be pretty exciting.
Source | Forbes
Aldebaran Robotics, which produces the Nao Humanoid Robot, has announced that it has raised another $13 million in venture capital. The bulk of funds are being provided by Intel Capital, the hardware giant’s investment arm.
From the press release:
The new funds will play a key role in allowing the business to develop its product offering into additional vertical sectors such as health and social care. The investment will also help Aldebaran streamline its production operations and increase its research and development capabilities.
“Working with Intel Capital is a step we believe will propel the business and help the technology we have developed reach its full potential. Our products have the flexibility to provide solutions across a range of applications, and this investment will play a huge role in helping drive manufacturing efficiencies and further our research capabilities to help the business’ expansion into new markets. Intel products are ideally suited for the processing demands required by robotics. This investment from Intel Capital enables Aldebaran to become a key player in this nascent industry,” said Bruno Maisonnier, founder and CEO of Aldebaran Robotics. “For us, it is above all a fantastic mark of recognition and trust from a group that has always favored innovation and has risen in recent years to the top of the global computer market.”
This is a big step for Aldebaran and its Nao line of humanoid robots. If it pays off for Intel, I’d expect to see a lot more investment in the area of personal robotics going forward. In particular, it’ll be interesting to see if Intel starts going into the business itself – at least on the processor end.
Source | Forbes
Google, Microsoft, and Yahoo have teamed up to encourage Web page operators to make the meaning of their pages understandable to search engines.
The move may finally encourage widespread use of technology that makes online information as comprehensible to computers as it is to humans. If the effort works, the result will be not only better search results, but also a wave of other intelligent apps and services able to understand online information almost as well as we do.
The three big Web companies launched the initiative, known as Schema.org, last week. It defines an interconnected vocabulary of terms that can be added to the HTML markup of a Web page to communicate the meaning of concepts on the page. A location referred to in text could be defined as a courthouse, which Schema.org understands as being a specific type of government building. People and events can also be defined, as can attributes like distance, mass, or duration. This data will allow search engines to better understand how useful a page may be for a given search query—for example, by making it clear that a page is about the headquarters of the U.S. Department of Defense, not five-sided regular shapes.
The move represents a major advance in a campaign initiated in 2001 by Tim Berners-Lee, the inventor of the Web, to enable software to access the meaning of online content—a vision known as the “semantic Web.” Although the technology to do so exists, progress has been slow because there have been few reasons for Web page operators to add the extra markup.
Schema.org may change that, says Dennis McLeod, who works on semantic Web technology at the University of Southern California. By tagging information, Web page owners could improve the position of their site in search results—an important source of traffic. “This will motivate people to actually add semantic data to their pages,” says McLeod. “It’s always hard to predict what will be adopted, but generally, unless there’s something in it for people, they won’t do it. Google, Microsoft, and Yahoo have given people a strong reason.”
The Schema.org approach is modeled on one of the more straightforward methods of describing the meaning of a Web page’s contents. “The trouble with many of these techniques is, they are really hard to use,” says McLeod. “One of the encouraging things about Schema.org is that they are pursuing this at a level that is quite usable, so it is much easier to mark up your website.”
Source | Technology Review
Do you know the number of miles you’ve driven over the last five years? Every meal you’ve eaten? The number of browser tabs you’ve had open during the day compared with the amount of sleep you had that night? That’s the kind of data collected by the new generation of self-trackers who descended on the Computer History Museum, in Mountain View, California, for the first annual Quantified Self conference over Memorial Day weekend.
About 400 hackers, programmers, entrepreneurs and health professionals came from across the globe, united by a desire to collect as much data as possible about themselves in order to make informed decisions regarding health, productivity and happiness. (One participant had logged X-rated information on the number of his sexual partners and duration of sexual activities. He went to a session on data visualization looking for an interesting way to illustrate that data.)
The self-tracking movement, which has sprung to life over just the last couple of years, is enabled in large part by both wireless sensing devices and smart phones. Many people already employ smart phone apps to track food intake and fitness, but a new generation of apps also tracks mood, meditation, migraines and other factors.
Beyond the smart phone, low-power wireless transmitters are transforming existing objects, such as scales and pedometers, making tracking both effortless and easy to share. A Wi-Fi enabled scale automatically tracks your weight and will even tweet the numbers—for those lucky few who really want to share.
Several commercial wearable monitors, such as Fitbit and BodyMedia, employ accelerometers to track the wearer’s movement, pairing that data with specialized algorithms to calculate calories burned. The data is automatically uploaded to the Internet, allowing users to track their progress and compete against each other for the most steps or highest activity levels.
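To illustrate the idea, here is a toy sketch of how a wearable might turn accelerometer activity counts into a calorie estimate using the standard MET (metabolic equivalent) formula. The thresholds and MET values below are invented for demonstration; actual devices such as Fitbit use proprietary, far more sophisticated algorithms.

```python
def calories_burned(counts_per_min, weight_kg, minutes):
    """Crudely estimate calories from accelerometer activity counts.

    Maps an activity-count rate to a MET level, then applies the
    standard formula: kcal = MET * weight(kg) * hours.
    """
    if counts_per_min < 100:        # sedentary
        met = 1.0
    elif counts_per_min < 2000:     # light activity
        met = 2.5
    elif counts_per_min < 6000:     # moderate (e.g., brisk walking)
        met = 4.0
    else:                           # vigorous
        met = 7.0
    return met * weight_kg * (minutes / 60.0)

# An hour of moderate activity for a 70 kg wearer:
print(calories_burned(3000, 70, 60))  # 4.0 * 70 * 1 = 280.0
```

A real device would also account for heart rate, skin temperature, or step cadence, which is where the "specialized algorithms" come in.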
One of my favorite projects was a proposal from Kyle Machulis, robotics engineer and self-described hacker, to figure out what makes programmers write bad code. By tracking programmers as they code, monitoring their computers, chairs, keyboards and perhaps the programmer herself via computer cameras, “then you could look at what was happening when they wrote a bug and see if that happens with other bugs,” he says. Or you could chart the parts of a program that appear to be the least user-friendly, perhaps when users fidget, and see if there was some kind of predictive behavior on the programmer’s part.
While the self-tracking trend is still largely limited to early adopters—technophiles, elite athletes and patients monitoring chronic conditions—the diversity of attendees at the conference highlights just how fast it’s moving into the mainstream.
In one breakout session, a group earnestly discussed the best approaches to self-experimentation and the results of some rather odd experiments: standing on one leg for eight minutes a day leads to better sleep, and eating butter improves performance on a test of cognitive function.
On the other side of the museum, Ben Rubin, co-founder of Zeo, a start-up that sells a consumer sleep-monitoring device, led a discussion on the best business models for the field. And last but not least, healthcare providers and entrepreneurs discussed the best ways to try to bring these tools into medicine.
While the sessions focused on medicine were the smallest at the meeting, there are signs that self-tracking is catching the interest of mainstream healthcare. Humana, a major insurer, had several attendees, as did the Robert Wood Johnson Foundation, the largest healthcare-centered non-profit in the country. The latter gave a grant to help the Quantified Self organization compile an online guide to self-tracking tools, with the aim of helping the movement spread.
As of Thursday, the guide listed 432 tools. Here’s a smattering:
Equanimity, a meditation timer and tracker.
Quantter, a web site where you can track your daily activities using Twitter.
MoodScope, a web-based application for measuring, tracking and sharing your mood.
Withings WiFi Body Scale, a wireless digital scale and body-fat monitor.
Philips DirectLife, a set of activity programs aimed at increasing fitness.
DailyFeats, a web app designed to reward users for good “feats” by awarding points, badges, and real-world savings.
Source | Technology Review