Archive for August, 2011

Brain-Computer Interface for Disabled People to Control Second Life With Thought Available Commercially Next Year

Friday, August 12th, 2011

This is an awesome use of a brain-computer interface developed to let disabled people navigate the 3D virtual world of Second Life, using a simple interface controlled by the user’s thoughts:

Developed by an Austrian medical engineering firm called G.Tec, the prototype in the video above was released last year, but since New Scientist wrote about the project recently, and since it’s one of the few real-world applications of Second Life that’s already showing tangible, scalable, incredibly important social results, I checked with the company for an update:

“The technology is already on the market for spelling,” G.Tec’s Christoph Guger tells me, pointing to a company called Intendix. “The SL control will be on the market in about one year.” I imagine there are many disabled people in SL right now who would benefit from this, and many more not in SL who could, once it’s on the market. (A Japanese academic created a similar brain-to-SL interface in 2007, but to my knowledge, there are no commercial plans for it as yet.)

Guger shared some insights on how the technology works, and the disabled volunteers who helped them develop it:

G.Tec test volunteers and interface, courtesy Christoph Guger

Above is a pic of the main G.Tec interface with all the basic SL commands. There are other UIs for chatting (with 55 commands) and searching (with 40 commands).

Not surprisingly, Guger tells me their disabled volunteers enjoyed flying in Second Life most. “It is of course slower than with the keyboard/mouse,” Guger allows, “but the big advantage is that you appear as a normal user in SL, even if you are paralyzed.”

This brain-to-SL interface literally gives housebound disabled people a world to explore, and a means to meet and interact with as many people there as live in San Francisco; that in itself is an absolute good. But beyond that, Guger sees other medical applications: “First of all you can use it for monitoring, if the patient is still engaged and as a tool to measure his performance. Beside that, it gives access to many other people, which would not be possible otherwise. New games are also developed for ADHD children for example.”

Source | New World Notes

After 30 years, IBM says PC going way of vacuum tube and typewriter

Friday, August 12th, 2011

Thirty years ago, IBM created the first personal computer running Microsoft’s MS-DOS. Today, IBM and Microsoft seem to have very different views on the future of the PC.

IBM CTO Mark Dean of the company’s Middle East and Africa division, one of a dozen IBM engineers who designed that first machine unveiled Aug. 12, 1981, says PCs are “going the way of the vacuum tube, typewriter, vinyl records, CRT and incandescent light bulbs.”

IBM, of course, sold its PC division to Lenovo in 2005. Dean, in a blog post, writes that “I, personally, have moved beyond the PC as well. My primary computer now is a tablet. When I helped design the PC, I didn’t think I’d live long enough to witness its decline. But, while PCs will continue to be much-used devices, they’re no longer at the leading edge of computing.”

Dean’s remarks continue a debate over whether we are now in a so-called “post-PC” era, in which smartphones and tablets are replacing desktops and laptops. Not surprisingly, Microsoft — seller of 400 million Windows 7 licenses — isn’t a fan of that term.

“I prefer to think of it as the PC-plus era,” Microsoft corporate communications VP Frank Shaw writes in a blog post of his own.

In Microsoft’s vision, it’s the PC plus Bing, Windows Live, Windows phones, Office 365, Xbox, Skype and more.

“Our software lights up Windows PCs, Windows Phones and Xbox-connected entertainment systems, and a whole raft of other devices with embedded processors from gasoline pumps to ATMs to the latest soda vending machines, to name just a few,” Shaw writes. “In some cases we build our own hardware (Xbox, Kinect), while in most other cases we work with hardware partners on PCs, phones and other devices to ensure a great end-to-end experience that optimizes the combination of hardware and software.”

Shaw notes that the Apple II, Commodore PET and other devices preceded the first IBM 5150 PC running MS-DOS but says it was the IBM and Microsoft partnership that “was a defining moment for our industry” and fulfilled “the dream of a PC on every desk and in every home.”

The first IBM PC even predates the Macintosh and Windows, which launched in 1984 and 1985, respectively. Shaw says he still owns his first computer, the IBM Personal Portable booting MS-DOS version 5.1.

Although Microsoft’s role in the daily lives of personal computer users could be diminished by the rise of iPhones, Android phones and iPads, IBM’s Dean says it’s not simply a new type of device that is replacing the PC as “the center of computing.”

“PCs are being replaced at the center of computing not by another type of device — though there’s plenty of excitement about smartphones and tablets — but by new ideas about the role that computing can play in progress,” Dean writes. “These days, it’s becoming clear that innovation flourishes best not on devices but in the social spaces between them, where people and ideas meet and interact. It is there that computing can have the most powerful impact on economy, society and people’s lives.”

While that sounds pretty vague, Dean notes that IBM has boosted its profit margins since selling off its PC division with a strategy of exiting commodity businesses and “expanding in higher-value markets.” One example: IBM’s Watson, newly crowned Jeopardy champion.

“We conduct fundamental scientific research, design some of the world’s most advanced chips and computers, provide software that companies and governments run on, and offer business consulting, IT services and solutions that enable our clients to transform themselves continuously, just like we do,” Dean writes.

For all the debate over whether this is a “post-PC” era, it’s clear more people today own Windows computers and Macs than smartphones and tablets, and our new mobile devices are complementing desktops and laptops rather than replacing them.

It’s hard to beat the convenience of an easy-to-use, Internet-connected device in one’s pocket, but many tasks are cumbersome without a full physical keyboard. Even social media, which at first glance seems as “post-PC” as it gets, requires a lot of typing.

Some people envision a future where a smartphone is the hub of all your computing needs, and simply hooks into a dock for those rare times you want a bigger screen, mouse and keyboard. Others talk about a future where any surface, whether a wall or table, is transformed into a touch-screen computer with a snap of one’s fingers.

For now, though, most people making these proclamations are typing their blog posts on PCs.

Source | Kurzweil AI

Scientists copy the ways viruses deliver genes

Friday, August 12th, 2011

National Physical Laboratory (NPL) scientists have mimicked the ways viruses infect human cells and deliver their genetic material, hoping to apply the approach to gene therapy to correct defective genes such as those that cause cancer.

The researchers used the GeT (gene transporter) model peptide sequence to transfer a synthetic gene encoding a green fluorescent protein — a protein whose fluorescence in cells can be seen and monitored using fluorescence microscopy. GeT wraps around genes, transports them through cell membranes, and helps them escape from intracellular degradation traps. The process mimics the mechanisms viruses use to infect human cells.

GeT was designed to undergo differential membrane-induced folding — a process whereby the peptide changes its structure in response to only one type of membrane. This enables the peptide, like a virus, to carry genes into the cell. GeT is antibacterial and capable of gene transfer even in bacteria-challenged environments.

The gene transporter design can serve as a potential template for non-viral delivery systems and specialist treatments of genetic disorders, the researchers said.

Source | Kurzweil AI

Making biofuels 10 times faster

Friday, August 12th, 2011

Engineering researchers at Rice University have developed a new method for rapidly converting simple glucose and mineral salts into biofuels and petrochemical substitutes.

The team has reversed the beta oxidation cycle to engineer bacteria that produce the biofuel butanol about 10 times faster than any previously reported organism.

The team reversed the beta oxidation cycle by selectively manipulating about a dozen genes in the bacterium Escherichia coli. They also showed that selective manipulations of particular genes could be used to produce fatty acids of particular lengths, including long-chain molecules like stearic acid and palmitic acid, which have chains of more than a dozen carbon atoms.

This process can make many kinds of specialized molecules for many different markets — using almost any organism (algae or yeast, for example), the researchers said.

Source | Kurzweil AI

Genetically engineered spider silk improves gene therapy

Friday, August 12th, 2011

Genetically engineered spider silk could help overcome a major barrier to the use of gene therapy in everyday medicine — the lack of safe and efficient carriers or “vectors,” Tufts University scientists have found.

Human breast cancer cells growing in lab mice. The fluorescent signals indicate that a gene has successfully reached its target

The lack of good gene delivery systems is a main reason why there are no FDA-approved gene therapies, despite almost 1,500 clinical trials since 1989.

The researchers modified spider-silk proteins so that the proteins could attach to cancer cells, and tested them in mice bearing human breast-cancer cells. The spider-silk proteins attached to the cancer cells and transported the DNA material into the cells — without harming the mice.

To provide a visual signal that the gene reached its intended target, they also engineered the spider silk to contain a gene that codes for the protein that makes fireflies glow.

The results suggest that the genetically engineered spider-silk proteins represent “a versatile and useful new platform polymer for nonviral gene delivery,” the researchers said.

Source | Kurzweil AI

How Computational Complexity Will Revolutionize Philosophy

Wednesday, August 10th, 2011

Since the 1930s, the theory of computation has profoundly influenced philosophical thinking about topics such as the theory of the mind, the nature of mathematical knowledge and the prospect of machine intelligence. In fact, it’s hard to think of an idea that has had a bigger impact on philosophy.

And yet there is an even bigger philosophical revolution waiting in the wings. The theory of computing is a philosophical minnow compared to the potential of another theory that is currently dominating thinking about computation.

At least, this is the view of Scott Aaronson, a computer scientist at the Massachusetts Institute of Technology. Today, he puts forward a persuasive argument that computational complexity theory will transform philosophical thinking about a range of topics such as the nature of mathematical knowledge, the foundations of quantum mechanics and the problem of artificial intelligence.

Computational complexity theory is concerned with the question of how the resources needed to solve a problem scale with some measure of the problem size, call it n. There are essentially two answers. Either the problem scales reasonably slowly, like n, n^2 or some other polynomial function of n. Or it scales unreasonably quickly, like 2^n, 10000^n or some other exponential function of n.
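
To make the distinction concrete, here is a small Python sketch (purely illustrative) that tabulates the two kinds of growth. By n = 50, the polynomial cost is still trivial while the exponential cost has passed 10^15 steps:

```python
# Polynomial vs. exponential growth in the problem size n.
# The gap between the two columns is the whole point of
# computational complexity theory.

def polynomial_cost(n: int) -> int:
    return n ** 2          # "reasonably slow" growth

def exponential_cost(n: int) -> int:
    return 2 ** n          # "unreasonably fast" growth

for n in (10, 20, 30, 40, 50):
    print(f"n={n:3d}  n^2={polynomial_cost(n):>8,}  2^n={exponential_cost(n):>20,}")
```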

So while the theory of computing can tell us whether something is computable or not, computational complexity theory tells us whether it can be achieved in a few seconds or whether it’ll take longer than the lifetime of the Universe.

That’s hugely significant. As Aaronson puts it: “Think, for example, of the difference between reading a 400-page book and reading every possible such book, or between writing down a thousand-digit number and counting to that number.”

He goes on to say that it’s easy to imagine that once we know whether something is computable or not, the problem of how long it takes is merely one of engineering rather than philosophy. But he then goes on to show how the ideas behind computational complexity can extend philosophical thinking in many areas.

Take the problem of artificial intelligence and the question of whether computers can ever think like humans. Roger Penrose famously argues that they can’t in his book The Emperor’s New Mind. He says that whatever a computer can do using fixed formal rules, it will never be able to ‘see’ the consistency of its own rules. Humans, on the other hand, can see this consistency.

One way to measure the difference between a human and computer is with a Turing test. The idea is that if we cannot tell the difference between the responses given by a computer and a human, then there is no measurable difference.

But imagine a computer that records all conversations it hears between humans. Over time, this computer will build up a considerable database that it can use to make conversation. If it is asked a question, it looks up the question in its database and reproduces the answer given by a real human.

In this way, a computer with a big enough lookup table can always have a conversation that is essentially indistinguishable from one that humans would have.
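
As a toy illustration of the point (hypothetical data, not Aaronson’s construction), a lookup-table “chatbot” just replays recorded human answers keyed on the entire conversation so far; the catch, which the argument picks up below, is how fast such a table must grow:

```python
# Toy lookup-table conversationalist: reply by matching the whole
# conversation history against recorded human exchanges.
# With k possible utterances per turn, covering every conversation of
# length L needs on the order of k**L entries -- exponential growth.

lookup_table = {
    ("Hello",): "Hi there!",
    ("Hello", "Hi there!", "How are you?"): "Fine, thanks. You?",
    # ...one entry for EVERY possible conversation prefix...
}

def reply(history: tuple) -> str:
    # Reproduce the answer a real human gave to this exact history.
    return lookup_table.get(history, "I have no recorded answer.")

print(reply(("Hello",)))                              # Hi there!
print(reply(("Hello", "Hi there!", "How are you?")))  # Fine, thanks. You?
```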

“So if there is a fundamental obstacle to computers passing the Turing Test, then it is not to be found in computability theory,” says Aaronson.

Instead, a more fruitful way forward is to think about the computational complexity of the problem. He points out that while the database (or lookup table) approach “works,” it requires computational resources that grow exponentially with the length of the conversation.

Aaronson points out that this leads to a powerful new way to think about the problem of AI. He says that Penrose could argue that even though the lookup-table approach is possible in principle, it is effectively impractical because of the huge computational resources it requires.

By this argument, the difference between humans and machines is essentially one of computational complexity.

That’s an interesting new line of thought and just one of many that Aaronson explores in detail in this essay.

Of course, he acknowledges the limitations of computational complexity theory. Many of the fundamental tenets of the theory, such as P ≠ NP, are unproven; and many of the ideas only apply to serial, deterministic Turing machines, rather than the messier kind of computing that occurs in nature.

But he says these criticisms do not allow philosophers (or anybody else) to arbitrarily dismiss the arguments of complexity theory. Indeed, many of these criticisms raise interesting philosophical questions in themselves.

Computational complexity theory is a relatively new discipline which builds on advances made in the 70s, 80s and 90s. And that’s why its biggest impacts are yet to come.

Aaronson points us in the direction of some of them in an essay that is thought-provoking, entertaining and highly readable. If you have an hour or two to spare, it’s worth a read.

Source | Technology Review

Hybrid solar system makes rooftop hydrogen

Wednesday, August 10th, 2011

Duke University engineer Nico Hotz has proposed a hybrid solar system in which sunlight heats a combination of water and methanol in a maze of tubes on a rooftop to produce hydrogen.

The device is a series of copper tubes coated with a thin layer of aluminum and aluminum oxide and partly filled with catalytic nanoparticles. A combination of water and methanol flows through the tubes, which are sealed in a vacuum.

Once the evaporated liquid reaches high temperatures, tiny amounts of a catalyst are added; this combination of high temperature and added catalyst produces hydrogen very efficiently, Hotz said. The resulting hydrogen can then be immediately directed to a fuel cell to provide electricity to a building during the day, or compressed and stored in a tank to provide power later.

After two catalytic reactions, the system produced hydrogen much more efficiently than current technology without significant impurities, Hotz said. The resulting hydrogen can be stored and used on demand in fuel cells.

“This set-up allows up to 95 percent of the sunlight to be absorbed with very little being lost as heat to the surroundings,” he said. “This is crucial because it permits us to achieve temperatures of well over 200 degrees Celsius within the tubes. By comparison, a standard solar collector can only heat water between 60 and 70 degrees Celsius.”

Hotz performed a cost analysis, comparing a standard photovoltaic cell, a photocatalytic system, and the hybrid solar-methanol system. He found that the hybrid system is the least expensive solution, with a total installation cost of $7,900 if designed to fulfill the requirements in summer.

Source | Kurzweil AI

First direct biological evidence found for genetic contribution to intelligence

Wednesday, August 10th, 2011

Scientists at The University of Edinburgh, U.K., have found the first direct biological evidence for a genetic contribution to people’s intelligence.

The team studied two types of intelligence in more than 3,500 people from Scotland, England and Norway. They found that 40 to 50 percent of people’s differences in knowledge and problem solving skills could be traced to their genes. The study examined more than half a million genetic markers on every person’s DNA.

Previous studies on twins and adopted people suggested that there is a substantial genetic contribution to thinking skills. However, the new study is the first to test people’s DNA for genetic variations linked to intelligence.

Technical details of the study

The researchers conducted a genome-wide association study looking at over 500,000 common single nucleotide polymorphisms (SNPs), which are DNA sequence variations that occur when a single nucleotide (A, T, C, or G) in the genome sequence is altered. They correlated participants’ genetic variation with their performance on two types of general intelligence: knowledge and problem-solving skills.

The researchers found that up to half of individual differences in intelligence are due to genetic variants in linkage disequilibrium with SNPs. (Individuals often inherit rather long haplotypes (chunks) of DNA from one parent or the other, and some haplotypes themselves may also be inherited as a group. This is called linkage disequilibrium.)

The researchers found that a large proportion of the heritability estimate of intelligence in adulthood can be traced to genetic variants linked with common SNPs, confirming that at least 40–50% of individual differences in human intelligence are due to genetic variation.
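
As a toy numerical sketch of the underlying idea (simulated data; nothing like the study’s actual analysis), one can build a phenotype from many small additive SNP effects plus noise, and check how much of the variance a linear model over the markers recovers:

```python
# Toy simulation of "variance explained" by common SNPs.
# Illustrative only -- NOT the Edinburgh study's method.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 2000, 500

# Genotypes coded 0/1/2: copies of the minor allele at each SNP.
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)

# Phenotype = standardized additive genetic score + noise, mixed so
# that the true genetic share of the variance is 50%.
genetic_score = genotypes @ rng.normal(0, 1, n_snps)
genetic_score = (genetic_score - genetic_score.mean()) / genetic_score.std()
phenotype = np.sqrt(0.5) * genetic_score + np.sqrt(0.5) * rng.normal(0, 1, n_people)

# Fraction of phenotypic variance captured by a least-squares fit on
# all SNPs (inflated a bit by overfitting in this in-sample estimate).
centered = phenotype - phenotype.mean()
coef, *_ = np.linalg.lstsq(genotypes, centered, rcond=None)
r2 = 1 - np.var(centered - genotypes @ coef) / np.var(centered)
print(f"variance explained by SNPs: {r2:.2f}")  # roughly 0.5 or above
```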

The findings were made possible using a new type of analysis invented by Professor Peter Visscher and colleagues in the Queensland Institute of Medical Research, Brisbane.

Source | Kurzweil AI

Wearable cameras allow for motion capture anywhere

Wednesday, August 10th, 2011

A wearable camera system makes it possible for motion capture to occur almost anywhere — in natural environments, over large areas, and outdoors, scientists at Disney Research, Pittsburgh (DRP), and Carnegie Mellon University (CMU) have shown.

The camera system reconstructs the relative and global motions of an actor, using a process called structure from motion (SfM) to estimate the pose of the cameras on the person.

Motion capture can occur almost anywhere

SfM is also used to estimate rough position and orientation of limbs as the actor moves through an environment and to collect sparse 3-D information about the environment that can provide context for the captured motion. This serves as an initial guess for a refinement step that optimizes the configuration of the body and its location in the environment, resulting in the final motion capture result.
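
The pose-estimation step at the heart of SfM can be sketched with OpenCV (illustrative values only, not the researchers’ pipeline): given known 3D points and their 2D projections in a camera image, solvePnP recovers the camera’s rotation and translation:

```python
# Minimal camera-pose estimation from 3D-2D correspondences -- the core
# computation in structure from motion. All values here are made up.
import numpy as np
import cv2

# Known 3D points in the world (e.g., markers on a reference structure).
object_points = np.array([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0], [0.5, 0.5, 1.0], [0.2, 0.8, 0.5],
])

# Pixel locations where those points appear in one camera's image.
image_points = np.array([
    [320.0, 240.0], [420.0, 242.0], [418.0, 340.0],
    [318.0, 338.0], [370.0, 290.0], [340.0, 310.0],
])

# Simple pinhole intrinsics: focal length 800 px, principal point (320, 240).
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, None)
print("rotation (Rodrigues vector):", rvec.ravel())
print("translation:", tvec.ravel())
```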

The researchers used Velcro to mount 20 lightweight cameras on the limbs and trunk of each subject. Each camera was calibrated with respect to a reference structure. Each person then performed a range-of-motion exercise that allowed the system to automatically build a digital skeleton and estimate positions of cameras with respect to that skeleton.

The quality of motion capture from body-mounted cameras does not yet match the fidelity of traditional motion capture, but will improve as the resolution of small video cameras continues to improve, the researchers said.

Source | Kurzweil AI

NASA research shows DNA building blocks can be made in space

Tuesday, August 9th, 2011

Scientists at NASA’s Goddard Space Flight Center have found trace amounts of three molecules related to the DNA nucleobases adenine and guanine in samples of 12 carbon-rich meteorites, nine of which were recovered from Antarctica.

These nucleobase-related molecules, called nucleobase analogs, provide the first evidence that the compounds in the meteorites came from space and not terrestrial contamination.

The team analyzed an eight-kilogram (17.6-pound) sample of ice from Antarctica, where most of the meteorites in the study were found. The amounts of nucleobases found in the ice were much lower than in the meteorites.

NASA-funded researchers have found more evidence meteorites can carry DNA components created in space

More significantly, none of the nucleobase analogs were detected in the ice sample. The team also analyzed a soil sample collected near the fall site of one of the non-Antarctic meteorites. As with the ice sample, the soil sample had none of the nucleobase analog molecules present in the meteorite.

Source | Kurzweil AI

New optical disc claims to store data forever

Tuesday, August 9th, 2011

Millenniata's M-Disc is made of a stone-like substance that the company claims does not degrade over time.

Start-up Millenniata and Hitachi-LG Data Storage plan to release a new optical disc and read/write player that will store movies, photos or any other data forever. The data can be accessed using any current DVD or Blu-ray player.

Millenniata calls the product the M-Disc, and the company claims you can dip it in liquid nitrogen and then boiling water without harming it. It also has a U.S. Department of Defense (DoD) study backing up the resiliency of its product compared to other leading optical disc competitors.

Millenniata CEO Scott Shumway would not disclose what material is used to produce the optical discs, referring to it only as a “natural” substance that is “stone-like.”

Like DVDs and Blu-ray discs, the M-Disc platters are made up of multiple layers of material. But unlike the former, there is no reflective or dye layer. Instead, during the recording process a laser “etches” pits onto the substrate material.

“Once the mark is made, it’s permanent,” Shumway said. “It can be read on any machine that can read a DVD. And it’s backward compatible, so it doesn’t require a special machine to read it – just a special machine to write it.”

While Millenniata has partnered with Hitachi-LG Data Storage for the initial launch of an M-Disc read-write player in early October, Shumway said any DVD player maker will be able to produce M-Disc machines by simply upgrading their product’s firmware.

Millenniata said it has also proven it can produce Blu-ray format discs with its technology – a product it plans to release in future iterations. For now, the platters store the same amount of data as a DVD: 4.7GB. However, the discs write at only 4x or 5.28MB/sec, half the speed of today’s DVD players.

“We feel if we can move to the 8X, that’d be great, but we can live with the four for now,” Shumway said, adding that his engineers are working on upping the speed of recording.

Millenniata is also targeting the long-term data archive market, saying archivists will no longer have to worry about controlling the temperature or humidity of a storage room. “Data rot happens with any type of disc you have. Right now, the most permanent technology out there for storing information is a paper and pencil — until now,” Shumway said.

In 2009, the Defense Department’s Naval Air Warfare Center Weapons Division at China Lake, Calif., was interested in digitizing and permanently storing information. So it tested Millenniata’s M-Disc against five other optical disc vendors: Delkin Devices, Mitsubishi, JVC, Verbatim and MAM-A.

“None of the Millenniata media suffered any data degradation at all. Every other brand tested showed large increases in data errors after the stress period. Many of the discs were so damaged that they could not be recognized as DVDs by the disc analyzer,” the department’s report states.

Recordable optical media such as CDs, DVDs and Blu-ray discs are made of layers of polycarbonate glued together. One layer of the disc contains a reflective material, and a layer just above it incorporates an organic transparent dye. During recording, a laser hits the dye layer and burns it, changing the dye from transparent to opaque and creating bits of data. A low-power laser can then read those bits by either passing through the transparent dye layer to the reflective layer or being absorbed by the opaque pits.

Over long periods of time, DVDs are subject to de-lamination problems, where the layers of polycarbonate separate, leading to oxidation and read problems. The dye layer, because it’s organic, can also break down over time, a process hastened by high temperatures and humidity.

While the DVD industry claims DVDs should last from 50 to 100 years, according to the National Institute of Standards and Technology (NIST), DVDs can break down in “several years” in normal environments. Additionally, NIST suggests DVDs should be stored in spaces where relative humidity is between 20% and 50%, and where temperatures do not rise above 68 degrees Fahrenheit.

Gene Ruth, a research director at Gartner, said he generally has not heard of problems with DVD longevity. And while he admits that a DVD on a car dashboard could be in trouble, the medium has had a good track record.

But Ruth said he can see a market in long-term archiving for a product such as the M-Disc because some industries, such as aircraft engineering, healthcare and financial services, store data for a lifetime and beyond.

Millenniata partnered with Hitachi-LG Data Storage to provide M-Ready technology in most of its DVD and Blu-ray drives. Shumway said the products will begin shipping next month and should be in stores in the beginning of October.

“We felt it was important that we first produce this with a major drive manufacturer, someone that already had models and firmware out there,” Shumway said.

Hitachi-LG Data Storage's M-Disc read-write player.

Unlike DVDs, which come in 10-, 25-, 50- or 100-disc packs, M-Discs will be available one at a time, or in groups of two or three for just under $3 per disc. Millenniata is also courting system manufacturers in the corporate archive world.

“We’re working with some very large channels as we train their distribution networks to launch this,” he said. “At the same time, we’re launching this at Fry’s [Electronics] so consumers can see it and be introduced to this technology.”

Source | Computerworld

How the brain remembers what happens and when

Tuesday, August 9th, 2011

Neuroscientists at New York University have identified the parts of the brain we use to remember the timing of events within an episode.

The researchers ran animal subjects through a temporal-order memory task in which a sequence of two visual objects was presented and the subjects were required to retrieve that same sequence after a delay. To perform the task correctly, the subjects needed to remember both the individual visual items (“what”) and their temporal order (“when”). During the experiment, the researchers monitored the activity of individual brain cells in the subjects’ medial temporal lobe (MTL).

Their results showed that two main areas of the MTL are involved in integrating “what” and “when”: the hippocampus and the perirhinal cortex. The hippocampus, which is known to have an important role in a variety of memory tasks, provides an incremental timing signal between key events, giving information about the passage of time from the last event as well as the estimated time toward the next event. The perirhinal cortex appeared to integrate information about what and when by signaling whether a particular item was shown first or second in the series.

Their findings provide insight into the specific patterns of brain activity that enable us to remember both the key events that make up our lives and the specific order in which they happened, the researchers said.

Source | Kurzweil AI

Human skin cells converted directly into functional neurons

Tuesday, August 9th, 2011

Researchers at Columbia University Medical Center have directly converted human skin cells into functional forebrain neurons, without the need for stem cells of any kind.

Schematic of the conversion from adult skin fibroblasts to human-induced neuronal cells. Top panels show phase contrast images of human skin fibroblast (left) or hiN cell (right) cultures.

The researchers used a combination of transcription regulators, plus several neuronal support factors, to convert human skin cells into forebrain neurons. This bypassed the need for induced pluripotent stem (iPS) cells. The induced neurons appear to be the same as ordinary neurons, judging from electrophysiological testing and gene expression profiling. The researchers also showed that the neurons are able to send and receive signals in laboratory culture and when transplanted into the central nervous system of mice.

The researchers compared neurons made from skin cells of healthy individuals with neurons made from patients with early-onset Alzheimer’s disease. The latter cells exhibited altered processing and localization of amyloid precursor protein (APP) and increased concentration of amyloid beta, a protein fragment cleaved from APP (Alzheimer’s is thought to develop when abnormal amounts of amyloid beta accumulate in the brain, eventually killing neurons). APP was found to collect in the cells’ endosomes, cellular compartments that sort molecules for degradation or recycling. These findings suggest that this form of Alzheimer’s is caused, at least in part, by abnormal endosomal function, the researchers said.

The findings offer a new and potentially more direct way to produce replacement cell therapies for Alzheimer’s and other neurodegenerative diseases.

Source | Kurzweil AI

Beyond Cell Phone Wallets, Biometrics Promise Truly Wallet-Free Future

Tuesday, August 9th, 2011

Fujitsu's PalmSecure system can be used for security — and identification

Ever since Google announced that its Android phones would be equipped with a “digital wallet” that allows users to pay for things simply by touching their phone to a pad, interest in our wallet-free future has taken off. Long in use in Asia, and especially Japan, the enabling technology, Near Field Communication, has allowed users to more or less completely replace credit cards with phones — yet the technology has languished in the U.S.

That delay has dragged on so long that at least one competing, not to mention superior, technology has reached maturity. Manufactured by Fujitsu under the trade name PalmSecure, it’s a system that requires no hardware on the user side. If you’ve got hands and you can wave them in front of a detector, you can use it to make purchases.

PalmSecure is a kind of identification/security scheme that falls under the umbrella of biometrics. Other biometric identifiers include your fingerprint, voice, iris, face, even the shape of your earlobe. Unlike those other measures, PalmSecure is uniquely unobtrusive. It’s literally the same gesture required to use an NFC phone wallet or to swipe a credit card, only you don’t have to have anything on your person to make it work.

A patient's hand imaged with near-infrared light

The technology is affordable enough that one Florida school district is already deploying it in its cafeterias to allow students to make purchases. It’s also being used to identify patients in New York University’s Langone Medical Center, where 250 scanners have been deployed at a total system cost of $200,000.

The technology is remarkably straightforward: near-infrared light shines up from a detector, allowing it to image the unique pattern of veins in a person’s hand. This pattern is stored as a unique identifier, not an image.
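
A minimal sketch of that last point (the feature extractor below is a crude stand-in, not Fujitsu’s algorithm): enrollment reduces the near-infrared scan to a numeric template, and later scans are verified by comparing templates rather than images:

```python
# Toy vein-template matching: store a compact numeric template derived
# from the near-infrared image, never the image itself.
import numpy as np

def extract_template(ir_image: np.ndarray) -> np.ndarray:
    # Crude stand-in features: a coarse downsampling of the scan,
    # flattened and normalized to a unit vector.
    coarse = ir_image[::16, ::16].astype(float).ravel()
    return coarse / np.linalg.norm(coarse)

def matches(stored: np.ndarray, probe: np.ndarray, threshold: float = 0.99) -> bool:
    # Cosine similarity between unit-norm templates.
    return float(stored @ probe) >= threshold

rng = np.random.default_rng(1)
enrollment_scan = rng.integers(0, 256, size=(240, 320))              # "your hand"
same_hand = np.clip(enrollment_scan + rng.integers(-5, 6, size=(240, 320)), 0, 255)
other_hand = rng.integers(0, 256, size=(240, 320))

stored = extract_template(enrollment_scan)
print(matches(stored, extract_template(same_hand)))    # True
print(matches(stored, extract_template(other_hand)))   # False
```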

All that’s required to turn this system into a reliable payment mechanism is a service provider willing to link that unique identifier to a bank account or credit card. That’s not trivial, but if the rise of payment system pioneers like iPhone-based Square tells us anything, it’s that it’s at least possible.

Unlike phone wallets, which aren’t obviously superior to existing solutions like credit cards, a biometric-based payment system is not only more secure than existing cash alternatives, it actually has the potential to make the ones we already use more secure.

The unique pattern of veins in your hand can’t be stolen, for example, and neither can your credit card if you choose to leave it at home. A biometric marker could also be used as a second authentication factor for existing payment systems, virtually eliminating credit card fraud at physical stores.

If the slow rollout of NFC holds any lessons, it’s that breaking the monopoly of the existing payment system is difficult, especially when merchants bear the cost. But a biometric identification system could be a differentiator that justifies its additional expense for some vendors. If you think waving your phone to pay for something is convenient enough to convince you to go to one coffee shop versus another, imagine how thrilled people will be to simply raise their hand.

Source | Technology Review

Evolutionary computation offers flexibility, insight

Tuesday, August 9th, 2011

Esmail Bonakdarian, Ph.D., an assistant professor of Computing Sciences and Mathematics at Franklin University, has developed an evolutionary computation approach that allows researchers to search for models that can best explain experimental data derived from many types of applications, including economics.

Optimization of a search over subsets of a maximum model proceeds initially at a quick rate and then slowly continues to improve over time until it converges. The top curve (red) shows the optimum value found so far, while the lower, jagged line (green) shows the current average fitness value for the population in each generation

Bonakdarian employed his evolutionary computation approach to analyze data from two well-known, classical “public goods” problems from economics: When goods are provided to a larger community without required individual contributions, it often results in “free-riding”; but people also tend to show a willingness to cooperate and sacrifice for the good of the group.

He cautioned that if the number of independent variables is large, and there is no intuitive sense about the possible relationship between these variables and the dependent variable, “the experimenter may have to go on an automated ‘fishing expedition’ to discover the important and relevant independent variables.”

As an alternative, Bonakdarian suggests using an evolutionary algorithm as a way to “evolve” the best minimal subset with the largest explanatory value.

“This approach offers more flexibility as the user can specify the exact search criteria on which to optimize the model,” he said. “The user can then examine a ranking of the top models found by the system. In addition to these measures, the algorithm can also be tuned to limit the number of variables in the final model. We believe that this ability to direct the search provides flexibility to the analyst and results in models that provide additional insights.”
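
A compact Python sketch of what such an evolutionary search can look like (a generic genetic algorithm over variable subsets, not Bonakdarian’s implementation or tuning): each individual is a bit mask over the candidate independent variables, fitness is a penalized goodness-of-fit, and survival plus mutation evolve small models with high explanatory value:

```python
# Generic genetic algorithm for variable-subset selection in a linear
# model -- a sketch of the idea, not the paper's actual algorithm.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: 12 candidate variables, only three truly matter.
n, p = 200, 12
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 3 * X[:, 4] + 1.5 * X[:, 9] + rng.normal(size=n)

def fitness(mask: np.ndarray) -> float:
    # Penalized fit: log residual variance plus an AIC-like cost per
    # included variable, so smaller models are favored.
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return -(n * np.log(resid @ resid / n) + 2 * mask.sum())

population = rng.random((30, p)) < 0.5        # 30 random bit masks
for generation in range(60):
    scores = np.array([fitness(m) for m in population])
    survivors = population[np.argsort(scores)[-15:]]   # keep best half
    mutants = survivors.copy()
    flips = rng.random(mutants.shape) < 0.05           # 5% bit-flip mutation
    mutants[flips] = ~mutants[flips]
    population = np.vstack([survivors, mutants])

best = max(population, key=fitness)
print("selected variables:", np.flatnonzero(best))     # ideally [0 4 9]
```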

The Glenn IBM 1350 Opteron cluster at the Ohio Supercomputer Center (OSC) was used for the project.

Source | Kurzweil AI