Archive for the ‘Technology’ Category
A robot that can control both its own arm and a person’s arm to manipulate objects in a collaborative manner has been developed by Montpellier Laboratory of Informatics, Robotics, and Microelectronics (LIRMM) researchers, IEEE Spectrum Automation reports.
The robot controls the human limb by sending small electrical currents to electrodes taped to the person’s forearm and biceps, which allows the robot to command the elbow and hand to move. In the experiment, the person holds a ball, and the robot holds a hoop; the robot, a small humanoid, has to coordinate the movement of both human and robot arms to successfully drop the ball through the hoop.
The researchers say their goal is to develop robotic technologies that can help people suffering from paralysis and other disabilities to regain some of their motor skills.
Source | Kurzweil AI
Four Wave Gliders — self-propelled robots, each about the size of a dolphin — left San Francisco on Nov. 17 for a 60,000-kilometer journey, IEEE Spectrum Automation reports.
Built by Liquid Robotics, the robots will use waves to power their propulsion systems and the Sun to power their sensors, as a capability demonstration. They will measure water salinity, temperature, clarity, and oxygen content; collect weather data; and gather information on wave features and currents.
The data from the fleet of robots is being streamed via the Iridium satellite network and made freely available on Google Earth’s Ocean Showcase.
Source | Kurzweil AI
“Someday in the near future, quadriplegic patients will take advantage of this technology not only to move their arms and hands and to walk again, but also to sense the texture of objects placed in their hands, or experience the nuances of the terrain on which they stroll with the help of a wearable robotic exoskeleton,” said study leader Miguel Nicolelis, MD, PhD, professor of neurobiology at Duke University Medical Center and co-director of the Duke Center for Neuroengineering.
Sensing textures of virtual objects
Without moving any part of their real bodies, the monkeys used their electrical brain activity to direct the virtual hands of an avatar to the surface of virtual objects and differentiate their textures. Although the virtual objects employed in this study were visually identical, they were designed to have different artificial textures that could only be detected if the animals explored them with virtual hands controlled directly by their brain’s electrical activity.
The texture of the virtual objects was expressed as a pattern of electrical signals transmitted to the monkeys’ brains. Three different electrical patterns corresponded to each of three different object textures.
Because no part of the animal’s real body was involved in the operation of this brain-machine-brain interface, these experiments suggest that in the future, patients who were severely paralyzed due to a spinal cord lesion may take advantage of this technology to regain mobility and also to have their sense of touch restored, said Nicolelis.
First bidirectional link between brain and virtual body
“This is the first demonstration of a brain-machine-brain interface (BMBI) that establishes a direct, bidirectional link between a brain and a virtual body,” Nicolelis said.
“In this BMBI, the virtual body is controlled directly by the animal’s brain activity, while its virtual hand generates tactile feedback information that is signaled via direct electrical microstimulation of another region of the animal’s cortex. We hope that in the next few years this technology could help to restore a more autonomous life to many patients who are currently locked in without being able to move or experience any tactile sensation of the surrounding world,” Nicolelis said.
“This is also the first time we’ve observed a brain controlling a virtual arm that explores objects while the brain simultaneously receives electrical feedback signals that describe the fine texture of objects ‘touched’ by the monkey’s newly acquired virtual hand.
“Such an interaction between the brain and a virtual avatar was totally independent of the animal’s real body, because the animals did not move their real arms and hands, nor did they use their real skin to touch the objects and identify their texture. It’s almost like creating a new sensory channel through which the brain can resume processing information that cannot reach it anymore through the real body and peripheral nerves.”
The combined electrical activity of populations of 50 to 200 neurons in the monkey’s motor cortex controlled the steering of the avatar arm, while thousands of neurons in the primary tactile cortex were simultaneously receiving continuous electrical feedback from the virtual hand’s palm that let the monkey discriminate between objects, based on their texture alone.
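The decoding step described above — mapping the combined firing of many motor-cortex neurons onto arm movement — is commonly implemented as a linear readout. The sketch below is a hypothetical illustration with simulated data, not the study's actual decoder: it fits weights that translate the firing rates of 100 model neurons into 2-D velocities.

```python
import numpy as np

# Hypothetical illustration of a linear population decoder:
# firing rates of many neurons -> 2-D cursor/arm velocity.
# All data here are simulated; this is not the study's decoder.
rng = np.random.default_rng(0)

n_neurons, n_samples = 100, 2000
true_tuning = rng.normal(size=(n_neurons, 2))   # each neuron's preferred direction
velocity = rng.normal(size=(n_samples, 2))      # "intended" velocities
rates = velocity @ true_tuning.T + 0.5 * rng.normal(size=(n_samples, n_neurons))

# Fit decoding weights by regularized least squares (ridge regression).
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_neurons), rates.T @ velocity)

decoded = rates @ W
corr = np.corrcoef(decoded[:, 0], velocity[:, 0])[0, 1]
print(round(corr, 2))
```

Averaging over many noisy neurons is what makes the readout reliable: each cell is only weakly tuned, but the population as a whole pins down the intended movement.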
Robotic exoskeleton for paralyzed patients
“The remarkable success with non-human primates is what makes us believe that humans could accomplish the same task much more easily in the near future,” Nicolelis said.
The findings provide further evidence that it may be possible to create a robotic exoskeleton that severely paralyzed patients could wear in order to explore and receive feedback from the outside world, Nicolelis said. The exoskeleton would be directly controlled by the patient’s voluntary brain activity to allow the patient to move autonomously. Simultaneously, sensors distributed across the exoskeleton would generate the type of tactile feedback needed for the patient’s brain to identify the texture, shape and temperature of objects, as well as many features of the surface upon which they walk.
This overall therapeutic approach is the one chosen by the Walk Again Project, an international, non-profit consortium, established by a team of Brazilian, American, Swiss, and German scientists, which aims at restoring full-body mobility to quadriplegic patients through a brain-machine-brain interface implemented in conjunction with a full-body robotic exoskeleton.
The international scientific team recently proposed to carry out its first public demonstration of such an autonomous exoskeleton during the opening game of the 2014 FIFA Soccer World Cup that will be held in Brazil.
Ref.: Joseph E. O’Doherty, Mikhail A. Lebedev, Peter J. Ifft, Katie Z. Zhuang, Solaiman Shokur, Hannes Bleuler, and Miguel A. L. Nicolelis, Active tactile exploration using a brain–machine–brain interface, Nature, October 2011 [doi:10.1038/nature10489]
Source | KurzweilAI
When it happened, emotions flashed like lightning.
The nearby robotic hand that Tim Hemmes was controlling with his mind touched his girlfriend Katie Schaffer’s outstretched hand.
One small touch for Mr. Hemmes; one giant reach for people with disabilities.
Tears of joy that flowed in an Oakland laboratory that day continued later when Mr. Hemmes toasted his and University of Pittsburgh researchers’ success at a local restaurant with two daiquiris.
A simple act for most people proved to be a major advance in two decades of research that has proven to be the stuff of science fiction.
Mr. Hemmes’ success in putting the robotic hand in the waiting hand of Ms. Schaffer, 27, of Philadelphia, represented the first time a person with quadriplegia has used his mind to control a robotic arm so masterfully.
The 30-year-old man from Connoquenessing Township, Butler County, hadn’t moved his arms, hands or legs since a motorcycle accident seven years earlier. But Mr. Hemmes had practiced six hours a day, six days a week for nearly a month to move the arm with his mind.
That successful act increases hope for people with paralysis or loss of limbs that they can feed and dress themselves and open doors, among other tasks, with a mind-controlled robotic arm. It’s also improved the prospects of wiring around spinal cord injuries to allow motionless arms and legs to function once again.
“I think the potential here is incredible,” said Dr. Michael Boninger, director of UPMC’s Rehabilitation Institute and a principal investigator in the project. “This is a breakthrough for us.”
Mr. Hemmes? They say he’s a rock star.
In a project led by Andrew Schwartz, Ph.D., a University of Pittsburgh professor of neurobiology, researchers taught a monkey to mentally control a robotic arm to feed itself marshmallows. Electrodes had been shallowly implanted in its brain to read signals from neurons known to control arm motion.
Electrocorticography or ECoG — in which an electronic grid is surgically placed against the brain without penetration — less intrusively captures brain signals.
ECoG has been used to locate sites of seizures and do other experiments in patients with epilepsy. Those experiments were prelude to seeking a candidate with quadriplegia to test ECoG’s capability to control a robotic arm developed by Johns Hopkins University.
The still unanswered question was whether the brains of people with long-term paralysis still produced signals to move their limbs.
ECoG picks up an array of brain signals, almost like a secret code or new language, that a computer algorithm can interpret and then move a robotic arm based on the person’s intentions. It’s a simple explanation for complex science.
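One common step in turning ECoG recordings into control signals is extracting band power per electrode (for example, in the high-gamma range often associated with movement), which a trained model then maps to intended motion. The sketch below is a hypothetical illustration on a synthetic signal; the frequency band, sample rate, and data are all invented for demonstration.

```python
import numpy as np

# Hypothetical sketch of one ECoG feature-extraction step: measure
# high-gamma (70-110 Hz) band power on a channel. Signal is synthetic.
fs = 1000                       # sample rate in Hz (assumed)
t = np.arange(fs) / fs          # one second of data
rng = np.random.default_rng(1)

# Synthetic channel: a 90 Hz oscillation (movement-related) plus noise.
channel = np.sin(2 * np.pi * 90 * t) + 0.3 * rng.normal(size=fs)

spectrum = np.fft.rfft(channel)
freqs = np.fft.rfftfreq(fs, d=1 / fs)
band = (freqs >= 70) & (freqs <= 110)

high_gamma_power = np.sum(np.abs(spectrum[band]) ** 2) / fs
out_of_band_power = np.sum(np.abs(spectrum[~band]) ** 2) / fs

# A real decoder would feed such per-channel features into a trained
# model (e.g., a linear regression) to produce arm or cursor commands.
print(high_gamma_power > out_of_band_power)
```

The "secret code" the article describes is, in this framing, just which electrodes light up in which bands when the person imagines a particular movement.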
Mr. Hemmes’ name cropped up so many times as a potential candidate that the team called him to gauge his interest.
He said no.
He already was involved in a research project in Cleveland and feared this project would interfere. But knowing they had the ideal candidate, the team called back. This time he agreed, as long as it would not limit his participation in future phases of research.
Mr. Hemmes became quadriplegic July 11, 2004, apparently after a deer darted onto the roadway, causing him to swerve his motorcycle onto gravel where his shoulder hit a mailbox, sending him flying headfirst into a guardrail. The top of his helmet became impaled on a guardrail I-beam, rendering his head motionless while his body continued flying, snapping his neck at the fourth cervical vertebra.
A passer-by found him with blue lips and no signs of breathing. Mr. Hemmes was flown by rescue helicopter to UPMC Mercy and diagnosed with quadriplegia — a condition in which he had lost use of his limbs and his body below the neck or shoulders. He had to learn how to breathe on his own. His doctor told him it was the worst accident he’d ever seen in which the person survived.
But after the process of adapting psychologically to quadriplegia, Mr. Hemmes chose to pursue a full life, especially after he got a device to operate a computer and another to operate a wheelchair with head motions.
Since January, he has operated the website — www.Pittsburghpitbullrescue.com — to rescue homeless pit bulls and find them new owners.
The former hockey player’s competitive spirit and willingness to face risk were key attributes. Elizabeth Tyler-Kabara, the UPMC neurosurgeon who would install the ECoG in Mr. Hemmes’ brain, said he had strong motivation and a vision that paralysis could be cured.
Ever since his accident, Mr. Hemmes said, he’s had the goal of hugging his daughter Jaylei, now 8. This could be the first step.
“It’s an honor that they picked me, and I feel humbled,” Mr. Hemmes said.
Mr. Hemmes underwent several hours of surgery to install the ECoG grid at a precise location against the brain. Wires running under the skin down to a port near his collarbone — where they can connect to the robotic arm — left him with a stiff neck for a few days.
Two days after surgery, he began exhaustive training in mentally maneuvering a computer cursor in various directions to reach targets and make them disappear. Next he learned to move the cursor diagonally before working for hours to capture targets in a three-dimensional computer environment.
The U.S. Food and Drug Administration allowed the trial to last only 28 days, after which the ECoG grid was removed. The project, initially funded by UPMC, has received more than $6 million in funding from the Department of Veterans Affairs, the National Institutes of Health, and the U.S. Department of Defense’s Defense Advanced Research Projects Agency, known as DARPA.
Initially Mr. Hemmes tried thinking about flexing his arm to move the cursor. But he had better success visually grabbing the ball-shaped cursor to throw it toward a target on the screen. The “mental eye-grabbing” worked best when he was relaxed.
Soon he was capturing 15 of 16 targets and sometimes all 16 during timed sessions. The next challenge was moving the robotic arm with his mind.
The same mental processes worked, but the arm moved more slowly and in real space. But time was ticking away as the experiment approached its final days last month. With Mr. Hemmes finally moving the arm in all directions, Wei Wang — assistant professor of physical medicine and rehabilitation at Pitt’s School of Medicine, who also has worked on the signaling system — stood in front of him and raised his hand.
The robotic arm that Mr. Hemmes was controlling moved with fits and starts but in time reached Dr. Wang’s upheld hand. Mr. Hemmes gave him a high five.
The big moment arrived.
Katie Schaffer stood before her boyfriend with her hand extended. “Baby,” she said, encouraging him, “touch my hand.”
It took several minutes, but he raised the robotic hand and pushed it toward Ms. Schaffer until its palm finally touched hers. Tears flowed.
“It’s the first time I’ve reached out to anybody in over seven years,” Mr. Hemmes said. “I wanted to touch Katie. I never got to do that before.”
“I have tattoos, and I’m a big, strong guy,” he said in retrospect. “So if I’m going to cry, I’m going to bawl my eyes out. It was pure emotion.”
Mr. Hemmes said his accomplishments represent a first step toward “a cure for paralysis.” The research team is cautious about such statements without denying the possibility. They prefer identifying the goal of restoring function in people with disabilities.
“This was way beyond what we expected,” Dr. Tyler-Kabara said. “We really hit a home run, and I’m thrilled.”
The next phase will include up to six people tested in another 30-day trial with ECoG. A year-long trial will test the electrode array that shallowly penetrates the brain. Goals during these phases include expanding the degrees of arm motions to allow people to “pick up a grape or grasp and turn a door knob,” Dr. Tyler-Kabara said.
Anyone interested in participating should call 1-800-533-8762.
Mr. Hemmes says he will participate in future research.
“This is something big, but I’m not done yet,” he said. “I want to hug my daughter.”
Researchers at Queen Mary, University of London have delivered a common chemotherapy drug to cancer cells inside tiny microparticles. In an animal model, the drug reduced ovarian cancer tumors 65 times more effectively than the standard delivery method. This approach is now being developed for clinical use.
The lead researcher, Dr. Ateh, and colleagues found that by coating tiny microparticles of around 0.5 μm diameter with a protein called CD95, they could trigger cancer cells into ingesting these particles, delivering a dose of the common chemotherapy drug paclitaxel.
The key to their success is that CD95 attaches to another protein called CD95L, which is found much more commonly on the surface of cancer cells than it is on normal healthy cells. Once attached, the cancer cells ingest CD95 and the microparticle with it. Inside the cell, the microparticle can unload its chemotherapy cargo, which kills the cell to reduce the size of the tumor.
The researchers are now advancing these studies and the start-up company BioMoti, which will develop the technology for clinical use, is planning to partner with established pharmaceutical companies for the clinical development of new treatments in specific types of cancer.
Ref.: Davidson D. Ateh et al., The intracellular uptake of CD95 modified paclitaxel-loaded poly(lactic-co-glycolic acid) microparticles, Biomaterials, August 2011
Source | KurzweilAI
Brain-Computer Interface for Disabled People to Control Second Life With Thought Available Commercially Next Year
Friday, August 12th, 2011
This is an awesome use of a brain-computer interface developed for disabled people to navigate in the 3D virtual world of Second Life, using a simple interface controlled by the user’s thought:
Developed by an Austrian medical engineering firm called G.Tec, the prototype in the video above was released last year. But New Scientist wrote about the project recently, and since it’s one of the few real-world applications of Second Life that is already showing tangible, scalable, incredibly important social results, I checked with the company for an update:
“The technology is already on the market for spelling,” G.Tec’s Christoph Guger tells me, pointing to a company called Intendix. “The SL control will be on the market in about one year.” I imagine there are many disabled people in SL right now who would benefit from this, and many more not in SL who could, once it’s on the market. (A Japanese academic created a similar brain-to-SL interface in 2007, but to my knowledge, there are no commercial plans for it as yet.)
Guger shared some insights on how the technology works, and the disabled volunteers who helped them develop it:
Above is a pic of the main G.Tec interface with all the basic SL commands. There are other UIs for chatting (with 55 commands) and searching (with 40 commands).
Not surprisingly, Guger tells me their disabled volunteers enjoyed flying in Second Life most. “It is of course slower than with the keyboard/mouse,” Guger allows, “but the big advantage is that you appear as a normal user in SL, even if you are paralyzed.”
This brain-to-SL interface literally gives housebound disabled people a world to explore, and a means to meet and interact with as many people there as live in San Francisco; that in itself is an absolute good. But beyond that, Guger sees other medical applications: “First of all you can use it for monitoring, if the patient is still engaged and as a tool to measure his performance. Beside that, it gives access to many other people, which would not be possible otherwise. New games are also developed for ADHD children for example.”
Source | New World Notes
Thirty years ago, IBM created the first personal computer running Microsoft’s MS-DOS. Today, IBM and Microsoft seem to have very different views on the future of the PC.
IBM CTO Mark Dean of the company’s Middle East and Africa division, one of a dozen IBM engineers who designed that first machine unveiled Aug. 12, 1981, says PCs are “going the way of the vacuum tube, typewriter, vinyl records, CRT and incandescent light bulbs.”
IBM, of course, sold its PC division to Lenovo in 2005. Dean, in a blog post, writes that “I, personally, have moved beyond the PC as well. My primary computer now is a tablet. When I helped design the PC, I didn’t think I’d live long enough to witness its decline. But, while PCs will continue to be much-used devices, they’re no longer at the leading edge of computing.”
Dean’s remarks continue a debate over whether we are now in a so-called “post-PC” era, in which smartphones and tablets are replacing desktops and laptops. Not surprisingly, Microsoft — seller of 400 million Windows 7 licenses — isn’t a fan of that term.
“I prefer to think of it as the PC-plus era,” Microsoft corporate communications VP Frank Shaw writes in a blog post of his own.
In Microsoft’s vision, it’s the PC plus Bing, Windows Live, Windows phones, Office 365, Xbox, Skype and more.
“Our software lights up Windows PCs, Windows Phones and Xbox-connected entertainment systems, and a whole raft of other devices with embedded processors from gasoline pumps to ATMs to the latest soda vending machines, to name just a few,” Shaw writes. “In some cases we build our own hardware (Xbox, Kinect), while in most other cases we work with hardware partners on PCs, phones and other devices to ensure a great end-to-end experience that optimizes the combination of hardware and software.”
Shaw notes that the Apple II, Commodore PET and other devices preceded the first IBM 5150 PC running MS-DOS but says it was the IBM and Microsoft partnership that “was a defining moment for our industry” and fulfilled “the dream of a PC on every desk and in every home.”
The first IBM PC even predates the Macintosh and Windows, which launched in 1984 and 1985, respectively. Shaw says he still owns his first computer, the IBM Personal Portable booting MS-DOS version 5.1.
Although Microsoft’s role in the daily lives of personal computer users could be diminished by the rise of iPhones, Android phones and iPads, IBM’s Dean says it’s not simply a new type of device that is replacing the PC as “the center of computing.”
“PCs are being replaced at the center of computing not by another type of device — though there’s plenty of excitement about smartphones and tablets — but by new ideas about the role that computing can play in progress,” Dean writes. “These days, it’s becoming clear that innovation flourishes best not on devices but in the social spaces between them, where people and ideas meet and interact. It is there that computing can have the most powerful impact on economy, society and people’s lives.”
While that sounds pretty vague, Dean notes that IBM has boosted its profit margins since selling off its PC division with a strategy of exiting commodity businesses and “expanding in higher-value markets.” One example: IBM’s Watson, newly crowned Jeopardy champion.
“We conduct fundamental scientific research, design some of the world’s most advanced chips and computers, provide software that companies and governments run on, and offer business consulting, IT services and solutions that enable our clients to transform themselves continuously, just like we do,” Dean writes.
For all the debate over whether this is a “post-PC” era, it’s clear more people today own Windows computers and Macs than smartphones and tablets, and our new mobile devices are complementing desktops and laptops rather than replacing them.
It’s hard to beat the convenience of an easy-to-use, Internet-connected device in one’s pocket, but many tasks are cumbersome without a full, physical keyboard. Even social media, which seems as “post-PC” as it gets upon first glance, requires a lot of typing.
Some people envision a future where a smartphone is the hub of all your computing needs, and simply hooks into a dock for those rare times you want a bigger screen, mouse and keyboard. Others talk about a future where any surface, whether a wall or table, is transformed into a touch-screen computer with a snap of one’s fingers.
For now, though, most people making these proclamations are typing their blog posts on PCs.
Source | Kurzweil AI
Since the 1930s, the theory of computation has profoundly influenced philosophical thinking about topics such as the theory of the mind, the nature of mathematical knowledge and the prospect of machine intelligence. In fact, it’s hard to think of an idea that has had a bigger impact on philosophy.
And yet there is an even bigger philosophical revolution waiting in the wings. The theory of computing is a philosophical minnow compared to the potential of another theory that is currently dominating thinking about computation.
At least, this is the view of Scott Aaronson, a computer scientist at the Massachusetts Institute of Technology. Today, he puts forward a persuasive argument that computational complexity theory will transform philosophical thinking about a range of topics such as the nature of mathematical knowledge, the foundations of quantum mechanics and the problem of artificial intelligence.
Computational complexity theory is concerned with the question of how the resources needed to solve a problem scale with some measure of the problem size, call it n. There are essentially two answers. Either the problem scales reasonably slowly, like n, n^2 or some other polynomial function of n. Or it scales unreasonably quickly, like 2^n, 10000^n or some other exponential function of n.
So while the theory of computing can tell us whether something is computable or not, computational complexity theory tells us whether it can be achieved in a few seconds or whether it’ll take longer than the lifetime of the Universe.
That’s hugely significant. As Aaronson puts it: “Think, for example, of the difference between reading a 400-page book and reading every possible such book, or between writing down a thousand-digit number and counting to that number.”
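The polynomial-versus-exponential gap Aaronson points to is easy to make concrete in code; this minimal sketch (function names invented for illustration) counts steps for a quadratic algorithm versus an exponential one:

```python
# Polynomial versus exponential scaling of work with problem size n.
def poly_steps(n):
    return n ** 2     # e.g. comparing all pairs of n items

def exp_steps(n):
    return 2 ** n     # e.g. trying every subset of n items

for n in (10, 50, 100):
    print(n, poly_steps(n), exp_steps(n))

# At n = 100, n**2 is just 10,000 steps, while 2**100 is about
# 1.27e30 steps -- far beyond any realistic computing budget.
```

Both functions are "computable" in the classical sense; complexity theory is what distinguishes the one you can actually run from the one you cannot.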
He acknowledges that it’s easy to imagine that, once we know whether something is computable, the question of how long it takes is merely one of engineering rather than philosophy. He then shows how the ideas behind computational complexity can extend philosophical thinking in many areas.
Take the problem of artificial intelligence and the question of whether computers can ever think like humans. In his book The Emperor’s New Mind, Roger Penrose famously argues that they can’t. He says that whatever a computer can do using fixed formal rules, it will never be able to ‘see’ the consistency of its own rules. Humans, on the other hand, can see this consistency.
One way to measure the difference between a human and computer is with a Turing test. The idea is that if we cannot tell the difference between the responses given by a computer and a human, then there is no measurable difference.
But imagine a computer that records all conversations it hears between humans. Over time, this computer will build up a considerable database that it can use to make conversation. If it is asked a question, it looks up the question in its database and reproduces the answer given by a real human.
In this way, a computer with a big enough lookup table can always have a conversation that is essentially indistinguishable from one that humans would have.
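Aaronson's thought experiment can be caricatured in a few lines of code. The dialogue entries below are invented for illustration; the point is the scaling comment, not the toy table:

```python
# A toy version of the giant-lookup-table "chatbot" thought experiment:
# replies come only from previously recorded human conversations.
recorded = {
    "hello": "hi there",
    "how are you?": "fine, thanks",
}

def lookup_reply(question, table):
    # A complete table would need an entry for every possible
    # conversation history: over a vocabulary of V words, there are
    # V**L histories of length L -- exponential in L.
    return table.get(question.lower(), "no recorded answer")

print(lookup_reply("Hello", recorded))
```

The program "works" in the computability sense, which is exactly why, as Aaronson argues, the interesting obstacle to passing the Turing Test lies in complexity, not computability.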
“So if there is a fundamental obstacle to computers passing the Turing Test, then it is not to be found in computability theory,” says Aaronson.
Instead, a more fruitful way forward is to think about the computational complexity of the problem. He points out that while the lookup-table approach “works,” it requires computational resources that grow exponentially with the length of the conversation.
Aaronson points out that this leads to a powerful new way to think about the problem of AI. He says that Penrose could say that even though the look up table approach is possible in principle, it is effectively impractical because of the huge computational resources it requires.
By this argument, the difference between humans and machines is essentially one of computational complexity.
That’s an interesting new line of thought and just one of many that Aaronson explores in detail in this essay.
Of course, he acknowledges the limitations of computational complexity theory. Many of the fundamental tenets of the theory, such as P ≠ NP, are unproven; and many of the ideas only apply to serial, deterministic Turing machines, rather than the messier kind of computing that occurs in nature.
But he says these criticisms do not allow philosophers (or anybody else) to arbitrarily dismiss the arguments of complexity theory. Indeed, many of these criticisms raise interesting philosophical questions in themselves.
Computational complexity theory is a relatively new discipline that builds on advances made in the ’70s, ’80s and ’90s. And that’s why its biggest impacts are yet to come.
Aaronson points us in the direction of some of them in an essay that is thought provoking, entertaining and highly readable. If you have an hour or two to spare, it’s worth a read.
Source | Technology Review
Duke University engineer Nico Hotz has proposed a hybrid solar system in which sunlight heats a combination of water and methanol in a maze of tubes on a rooftop to produce hydrogen.
The device is a series of copper tubes coated with a thin layer of aluminum and aluminum oxide and partly filled with catalytic nanoparticles. A combination of water and methanol flows through the tubes, which are sealed in a vacuum.
Once the evaporated liquid reaches a high enough temperature, tiny amounts of a catalyst are added, producing hydrogen. This combination of high temperature and added catalyst makes the process very efficient, Hotz said. The resulting hydrogen can then be immediately directed to a fuel cell to provide electricity to a building during the day, or compressed and stored in a tank to provide power later.
After two catalytic reactions, the system produced hydrogen much more efficiently than current technology without significant impurities, Hotz said. The resulting hydrogen can be stored and used on demand in fuel cells.
“This set-up allows up to 95 percent of the sunlight to be absorbed with very little being lost as heat to the surroundings,” he said. “This is crucial because it permits us to achieve temperatures of well over 200 degrees Celsius within the tubes. By comparison, a standard solar collector can only heat water between 60 and 70 degrees Celsius.”
Hotz performed a cost analysis, comparing a standard photovoltaic cell, a photocatalytic system, and the hybrid solar-methanol system. He found that the hybrid system is the least expensive solution, with a total installation cost of $7,900 if designed to fulfill the requirements in summer.
Source | Kurzweil AI
The team studied two types of intelligence in more than 3,500 people from Scotland, England and Norway. They found that 40 to 50 percent of people’s differences in knowledge and problem solving skills could be traced to their genes. The study examined more than half a million genetic markers on every person’s DNA.
Previous studies on twins and adopted people suggested that there is a substantial genetic contribution to thinking skills. However, the new study is the first to test people’s DNA for genetic variations linked to intelligence.
Technical details of the study
The researchers conducted a genome-wide association study looking at over 500,000 common single nucleotide polymorphisms (SNPs), which are DNA sequence variations that occur when a single nucleotide (A, T, C, or G) in the genome sequence is altered. They correlated participants’ genetic variation with their performance on two types of general intelligence: knowledge and problem-solving skills.
The researchers found that up to half of individual differences in intelligence are due to genetic variants in linkage disequilibrium with SNPs. (Individuals often inherit rather long haplotypes (chunks) of DNA from one parent or the other, and some haplotypes themselves may also be inherited as a group. This is called linkage disequilibrium.)
The researchers found that a large proportion of the heritability estimate of intelligence in adulthood can be traced to genetic variants linked with common SNPs, confirming that at least 40–50% of individual differences in human intelligence are due to genetic variation.
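The per-SNP association step at the heart of any GWAS can be sketched with simulated data. This is a hypothetical illustration only: the sample size, number of SNPs, and effect sizes are invented, and the study's actual method (estimating variance explained by all SNPs jointly) is considerably more involved than this single-SNP scan.

```python
import numpy as np

# Hypothetical GWAS sketch: test each SNP (coded as 0/1/2 copies of an
# allele) for association with a simulated trait.
rng = np.random.default_rng(42)
n_people, n_snps = 1000, 200

genotypes = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)
effects = np.zeros(n_snps)
effects[:10] = 0.5            # ten causal SNPs with real effects
trait = genotypes @ effects + rng.normal(size=n_people)

# Per-SNP association: correlation between genotype and trait.
g = (genotypes - genotypes.mean(0)) / genotypes.std(0)
tr = (trait - trait.mean()) / trait.std()
corr = g.T @ tr / n_people

top_hits = np.argsort(-np.abs(corr))[:10]
print(sorted(top_hits.tolist()))
```

Even in this toy setting, the causal SNPs stand out from the null ones because their genotype-trait correlations rise well above sampling noise, which is the signal a real study aggregates across hundreds of thousands of markers.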
The findings were made possible using a new type of analysis invented by Professor Peter Visscher and colleagues in the Queensland Institute of Medical Research, Brisbane.
Source | Kurzweil AI
A wearable camera system makes it possible for motion capture to occur almost anywhere — in natural environments, over large areas, and outdoors, scientists at Disney Research, Pittsburgh (DRP), and Carnegie Mellon University (CMU) have shown.
The camera system reconstructs the relative and global motions of an actor, using a process called structure from motion (SfM) to estimate the pose of the cameras on the person.
SfM is also used to estimate rough position and orientation of limbs as the actor moves through an environment and to collect sparse 3-D information about the environment that can provide context for the captured motion. This serves as an initial guess for a refinement step that optimizes the configuration of the body and its location in the environment, resulting in the final motion capture result.
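A basic building block of the SfM pipeline described above is triangulation: recovering a 3-D point from its projections in two cameras with known poses. The sketch below uses the standard linear (DLT) method; the camera matrices and the point are invented for illustration and are not from the researchers' system.

```python
import numpy as np

# Linear (DLT) triangulation of a 3-D point from two camera views.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera 1 at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # camera 2 shifted in x

X_true = np.array([0.3, -0.2, 4.0, 1.0])   # homogeneous 3-D point

def project(P, X):
    x = P @ X
    return x[:2] / x[2]

x1, x2 = project(P1, X_true), project(P2, X_true)

# Build the homogeneous system A X = 0 from both views, solve by SVD.
A = np.vstack([
    x1[0] * P1[2] - P1[0],
    x1[1] * P1[2] - P1[1],
    x2[0] * P2[2] - P2[0],
    x2[1] * P2[2] - P2[1],
])
_, _, Vt = np.linalg.svd(A)
X_est = Vt[-1] / Vt[-1][3]
print(np.round(X_est[:3], 3))   # recovers the original point [0.3, -0.2, 4.0]
```

In the wearable system, the same geometry runs in reverse as well: known scene points constrain the poses of the body-mounted cameras, which in turn give the rough limb positions that seed the refinement step.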
The researchers used Velcro to mount 20 lightweight cameras on the limbs and trunk of each subject. Each camera was calibrated with respect to a reference structure. Each person then performed a range-of-motion exercise that allowed the system to automatically build a digital skeleton and estimate positions of cameras with respect to that skeleton.
The quality of motion capture from body-mounted cameras does not yet match the fidelity of traditional motion capture, but will improve as the resolution of small video cameras continues to improve, the researchers said.
Source | Kurzweil AI