Archive for the ‘Language’ Category
The Biological Canvas presents a group of hand-selected artists who articulate their concepts with the body as the primary vessel. Each artist uses the body uniquely, experimenting with it as the medium: body as canvas, body as brush, and body as subject matter. Whatever the approach, it is clear that new explorations of the body as canvas are becoming commonplace in the 21st century.
There are reasons for this refocusing of the lens toward the body. Living today is an experience quite different from that of a century, generation, decade, or (with new versions emerging daily) even a year ago. The body truly is changing, both biologically and technologically, at an abrupt rate. Traditional understandings of what the body, or even the human, can be defined as are beginning to come into question. Transhuman, Posthuman, Cyborg, Robot, Singularity, Embodiment, Avatar, Brain Machine Interface, Nanotechnology… these are terms we run across in media today. They are the face of the future – the dictators of how we will come to understand our environment, biosphere, and selves. The artists in this exhibition are responding to this paradigm shift with interests in a newfound control over bodies, a moment of self-discovery or realization that the body has extended out from its biological beginnings, or perhaps a sense that the traditional body has become obsolete.
We see in the work of Orlan and Stelarc that the body becomes the malleable canvas. Here we see some of the earliest executions of art by way of designer evolution, where the artist uses new tools to redesign the body and make a statement of controlled evolution. In these works the direct changes to the body open onto sculpting it to be better suited to today’s world and moving beyond an outmoded form. Stelarc, with his Ear on Arm project, specifically attacks shortcomings in the human body by presenting the augmented sense that his third ear brings. Acting as a cybernetic ear, it lets him move beyond subjective hearing and share that aural experience with listeners around the world. Commenting on the practicality of the traditional body living in a networked world, Stelarc takes into his own hands the design of networked senses. Orlan conceptualizes through her surgical art the practice Stelarc is using – saying that the body has become a form that can be reconfigured, restructured, and applied to suit the desires of the mind within it. Carnal Art, as Orlan terms it, allows the body to become a modifiable ready-made instead of a static object born out of the Earth. Through the use of new technologies, human beings are now able to reform portions of their bodies as they deem necessary and appropriate for their own ventures.
Not far from the surgical work of Orlan and Stelarc we come to Natasha Vita-More’s Electro 2011, Human Enhancement of Life Expansion, a project that acts as a guide for advancing the biological self into a more fit machine. Integrating emerging technologies to build a more complete human, transhuman, and eventual posthuman body, Vita-More strives for a human-computer interface that will include neurophysiologic and cognitive enhancements that build on longevity and performance. Included in the enhancement plan are such technologies as atmospheric sensors, solar protective nanoskin, metabrain error correction, and replaceable genes. Vita-More’s Primo Posthuman is the idealized application of what artists like Stelarc and Orlan are beginning to explore with their own reconstructive surgical enhancements.
The use of the body in Nandita Kumar’s Birth of Brain Fly and Suk Kyoung Choi + Mark Nazemi’s Corner Monster reflects on how embodiment and techno-saturation are having psychological effects on the human mind. In each of their works we travel into the imagined world of the mind, where the notions of self, identity, and sense of place struggle to hold on to fixed points of order. Kumar describes her neuroscape as continually morphing as it is placed in new, ever-changing conditions and environments. Beginning with an awareness of one’s own constant programming that leads to a new understanding of self through love, the film goes on a journey through the depths of self, ego, and physical limitation. Kumar’s animations provide an eerie journey through the mind as viewed from the vantage of an artist’s creative eye, all the while postulating an internal neuroscape evolving in accordance with an external electroscape. Corner Monster examines the relationship between self and others in an embodied world. The installation includes an array of visual stimulation in a dark environment. As viewers engage with the world before them they are hooked up simultaneously (two at a time) to biofeedback sensors, which measure an array of biodata used in the interactive production of the environment before their eyes. This project surveys the psychological self as it is engrossed by surrounding media, leading to occasional systems of organized feedback as well as scattered responses that are the convolutions of an overstimulated mind.
Marco Donnarumma also integrates a biofeedback system in his work, allowing participants to shape musical compositions with their limbs. Moving a particular body part triggers sounds, and the volume increases with the pace of that movement. Here we see the body acting as brush, literally painting the soundscape through its own creative motion. As performers experiment with each portion of their body there is a slow realization that the sounds have become analogues of the neurological and biological yearnings of the body, each one seeking a particular upgrade that targets a specific need for that segment of the body. For instance, a movement of the left arm consistently produces a rich vibrato, reminding me of the sound of Vita-More’s solar protective nanoskin.
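To make that mapping concrete, here is a minimal sketch of how a movement signal might drive a triggered sound’s volume. It is only an illustrative toy under my own assumptions (the pace estimate, thresholds, and sensor values are invented), not Donnarumma’s actual system.

```python
# Toy sketch (not Donnarumma's actual system): map the pace of a body
# movement, estimated from a stream of sensor samples, onto the volume of a
# triggered sound. The pace estimate, thresholds, and sample values are all
# hypothetical.

def movement_pace(samples, dt=0.01):
    """Estimate pace as the mean absolute change per second between samples."""
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return sum(diffs) / (len(diffs) * dt) if diffs else 0.0

def sound_for(limb, pace, trigger_threshold=1.0, max_pace=50.0):
    """Trigger a sound for a limb; the volume scales with the movement's pace."""
    if pace < trigger_threshold:
        return None                      # limb is too still: no sound
    volume = min(pace / max_pace, 1.0)   # the faster the limb moves, the louder
    return {"limb": limb, "volume": round(volume, 2)}

# Example: readings from a (hypothetical) left-arm sensor
left_arm_samples = [0.0, 0.1, 0.3, 0.7, 1.2, 1.6]
print(sound_for("left arm", movement_pace(left_arm_samples)))
```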
Our final three artists all use the body in their artwork as a component of the fabricated result, acting like paint in a traditional artistic sense. Marie-Pier Malouin weaves strands of hair together to reference the genetic predispositions that all living things are born with. Here, Malouin uses the medium to reference suicidal tendencies – looking once again toward the fragility of the human mind, body, and spirit as it exists in a traditional biological state. The hair, a dead mass of growth that we groom, straighten, smooth, and arrange, resembles the same obsession with which we analyze, evaluate, dissect, and anatomize the nature of suicide. Stan Strembicki also engages with the fragility of the human body in his Body, Soul and Science. In his photographic imagery Strembicki turns a keen eye on the medical industry and its developments over time. As with all technology, Strembicki concludes, the medical industry is one we can see as temporally corrective, making dramatic strides as nascent developments emerge. Perhaps we can take Tracy Longley-Cook’s skinscapes, which she compares to the changing landforms of geology, ecology, and climatology, as an analogy for our changing understanding of skin, body, and self. Can we begin to mold and sculpt the body much as we have done with the land we inhabit?
There is a tie between the conceptual and material strands of these last few works that we cannot overlook: memento mori. The shortcomings and frailties of our natural bodies are the very components that artists like Vita-More, Stelarc, and Orlan are beginning to interpret as resolvable through the mastery of human enhancement and advancement. In a world churning out new technologies and creative ideas it is hard to look toward the future and dismiss the possibilities. Perhaps the worries of fragility and biological shortcoming will be both posed and answered by the scientific and artistic communities, something that is panning out to be very likely, if not certain. As you browse the work of The Biological Canvas I invite your own imagination to engage. Look at your life, your culture, your world and draw parallels with the artwork – open your imagination to what our future may bring, or, perhaps more properly stated, what we will bring to our future.
Source | VASA Project
Neuroscientists at Georgetown University Medical Center (GUMC) have defined, for the first time, three different processing stages that a human brain needs to identify sounds such as speech — and discovered that they are the same as those identified in non-human primates.
With the help of 13 human volunteers who spent time in a functional MRI machine, the researchers showed that both human and non-human primates process speech along two parallel pathways, each of which runs from lower- to higher-functioning neural regions. These pathways are dubbed the “what” and “where” streams and are roughly analogous to how the brain processes sight, but in different regions. The “where” stream localizes sound and the “what” pathway identifies the sound.
The researchers identified the three distinct areas in the “what” pathway in humans that had been seen in non-human primates; only two had been recognized in previous human studies. The first, and most basic, is the “core,” which analyzes tones at the level of simple frequencies. The second area, the “belt,” wraps around the core and integrates several tones, “like buzz sounds,” that lie close to each other, the researchers said. The third area, the “parabelt,” responds to speech sounds such as vowels, which are essentially complex bursts of multiple frequencies.
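The three stages can be pictured as a simple pipeline. The sketch below is only a schematic illustration of that hierarchy (invented frequency values and grouping rule), not a model of auditory cortex.

```python
# Schematic illustration of the three "what"-stream stages described above
# (a toy pipeline, not a model of auditory cortex; numbers are invented).

def core(sound):
    """Core: analyze tones at the level of simple frequencies."""
    return sorted(sound["frequencies"])

def belt(frequencies, spacing=300):
    """Belt: integrate several tones that lie close to each other."""
    groups, current = [], [frequencies[0]]
    for f in frequencies[1:]:
        if f - current[-1] <= spacing:
            current.append(f)
        else:
            groups.append(current)
            current = [f]
    groups.append(current)
    return groups

def parabelt(groups):
    """Parabelt: flag complex bursts of multiple frequencies, e.g. vowels."""
    return "speech-like" if any(len(g) >= 3 for g in groups) else "simple tone"

vowel_burst = {"frequencies": [500, 700, 900, 2400]}   # made-up formant-like values
print(parabelt(belt(core(vowel_burst))))               # speech-like
```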
The discovery could offer important insights into what can go wrong when someone has difficulty speaking, which involves hearing voice-generated sounds, or understanding the speech of others, the researchers said.
Source | Kurzweil AI
It seems the sci-fi industry has done it again. Predictions made back in the 1980s in stories like Johnny Mnemonic and Neuromancer, of neural implants linking our brains to machines, have become a reality.
Back then it seemed unthinkable that we’d ever have megabytes stashed in our brains as Keanu Reeves’ character Johnny Mnemonic did in the movie based on William Gibson’s story. Or that The Matrix character Neo could have martial arts abilities uploaded to his brain, making famous the line, “I know Kung Fu.” (Why Keanu Reeves became the poster boy of sci-fi movies, I’ll never know.) But today we have macaque monkeys that can control a robotic arm with thoughts alone. We have paraplegics given the ability to control computer cursors and wheelchairs with their brain waves. Of course this is about the brain controlling a device. But what about the other direction, where a device might amplify the brain? While the cochlear implant might be the best-known device of this sort, scientists have been working on brain implants with the goal of enhancing memory. This sort of breakthrough could lead to building a neural prosthesis to help stroke victims or those with Alzheimer’s. Or, at the extreme, think uploading Kung Fu talent into our brains.
Decade-long work led by Theodore Berger at the University of Southern California, in collaboration with teams from Wake Forest University, has provided a big step in the direction of artificial working memory. Their study is finally published today in the Journal of Neural Engineering. A microchip implanted into a rat’s brain can take on the role of the hippocampus—the area responsible for long-term memories—encoding memory brain-wave patterns and then sending that same electrical pattern of signals through the brain. Back in 2008, Berger told Scientific American that if the brain patterns for the sentence “See Spot Run,” or even an entire book, could be deciphered, then we might make uploading instructions to the brain a reality. “The kinds of examples [the U.S. Department of Defense] likes to typically use are coded information for flying an F-15,” Berger is quoted in the article as saying.
In the current study the scientists had rats learn a task: pressing one of two levers to receive a sip of water. The scientists inserted a microchip into each rat’s brain, with wires threaded into its hippocampus. There the chip recorded electrical patterns from two specific areas, labeled CA1 and CA3, that work together to learn and store the new information about which lever to press to get water. The scientists then shut down CA1 with a drug, built an artificial hippocampal component that could duplicate the electrical patterns passing between CA3 and CA1, and inserted it into the rat’s brain. With this artificial part, rats whose CA1 had been pharmacologically blocked could still encode long-term memories. And in rats with normally functioning CA1, the new implant extended the length of time a memory could be held.
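The substitution idea can be sketched very roughly in code. The toy below is my own simplification (invented pattern labels, a simple lookup in place of Berger’s far more sophisticated model): record which CA1 output follows each CA3 input while CA1 is intact, then replay the most common pairing once CA1 is blocked.

```python
# Minimal sketch of the substitution idea described above (not Berger's
# actual model): while CA1 is intact, record which CA1 pattern follows each
# CA3 input pattern; once CA1 is blocked, stimulate with the pattern most
# often observed for that input. All the "patterns" below are invented labels.

from collections import defaultdict, Counter

class ToyHippocampalProsthesis:
    def __init__(self):
        self.pairings = defaultdict(Counter)   # CA3 pattern -> CA1 patterns seen

    def observe(self, ca3_pattern, ca1_pattern):
        """Record a CA3 -> CA1 pairing while the biological CA1 still works."""
        self.pairings[ca3_pattern][ca1_pattern] += 1

    def substitute(self, ca3_pattern):
        """With CA1 blocked, supply the CA1 pattern most often seen for this input."""
        seen = self.pairings.get(ca3_pattern)
        return seen.most_common(1)[0][0] if seen else None

device = ToyHippocampalProsthesis()
# Training trials: the rat learns that the left lever yields water
device.observe("CA3:left-lever", "CA1:encode-left")
device.observe("CA3:left-lever", "CA1:encode-left")

# CA1 pharmacologically blocked: the device fills in the missing output
print(device.substitute("CA3:left-lever"))   # CA1:encode-left
```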
The next step is to test the device in monkeys, and then in humans. Of course, at this early stage a breakthrough like this brings up more questions than solutions. Memory is hugely complex, based on our individual experiences and perceptions. If we have the electrical pattern for the phrase “See Spot Run” mentioned above, would it mean the same thing for you as it does for me? How would such a device work within context? As writer Gary Stix asked in the Scientific American article, “Would ‘See Spot Run’ be misinterpreted as a laundry mishap instead of a trotting dog?” Or, as the science journalist John Horgan once put it, you might hear your wedding song, but I hear a stale pop tune.
We are all provided with the same structural blueprint for our brains, but each brain’s circuitry is built from experience and genetics, a tapestry unique to each of us. It is something many scientists feel we will never be able to fully crack and decode, let alone insert an experiential memory into.
Source | Smart Planet
Researchers at the University of Pittsburgh have developed a novel technology to precisely modulate individual neurons in rats, allowing the molecular, neuronal, and circuit functions to be analyzed with unprecedented precision.
The researchers demonstrated a novel way of loading specific drugs onto an array of electrodes and triggering their release into cultured rat neurons, allowing for more precise insight into the cellular mechanisms of neuronal networks. They also showed how the release of drugs could be informed, in real time, by recording activity in the neurons, a step essential for creating a closed-loop system that diagnoses and treats symptoms simultaneously.
“We envision an implanted device in the future that will monitor brain activity, detect or predict an onset of an epileptic seizure, and send the command to the electrode at the most appropriate location, releasing an anti-convulsive drug to prevent the seizure,” says Professor Tracy Cui.
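The closed-loop idea is simple to sketch. The snippet below is a hypothetical illustration of a monitor-and-release cycle, not the Pittsburgh group’s implementation; the electrode names, threshold, and activity values are invented.

```python
# Hypothetical sketch of a closed-loop monitor-and-release device, assuming a
# simple threshold detector; the real system described above is far more
# involved. Electrode names, threshold, and readings are made up.

SEIZURE_THRESHOLD = 0.8  # normalized activity level taken to predict a seizure

def read_activity():
    """Stand-in for reading per-electrode neural activity (hypothetical values)."""
    return {"electrode_A": 0.35, "electrode_B": 0.91, "electrode_C": 0.40}

def release_drug(electrode):
    """Stand-in for commanding an electrode to release its anti-convulsive coating."""
    print(f"Releasing anti-convulsive drug at {electrode}")

def closed_loop_step():
    activity = read_activity()
    for electrode, level in activity.items():
        if level >= SEIZURE_THRESHOLD:
            # Likely seizure onset detected near this electrode: treat locally.
            release_drug(electrode)

closed_loop_step()   # -> Releasing anti-convulsive drug at electrode_B
```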
Source | Kurzweilai.net
In his landmark 1950 paper “Computing Machinery and Intelligence,” the mathematician, philosopher and code breaker Alan Turing proposed a method for answering the question “Can machines think?”: an “imitation game” in which an “interrogator,” C, interviews two players, A and B, via teleprinter, then decides on the basis of the exchange which is human and which is a computer.
Turing’s radical premise was that the question “Can a machine win the imitation game?” could replace the question “Can machines think?” — an upsetting idea at the time, as the neurosurgeon Sir Geoffrey Jefferson asserted in 1949: “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain — that is, not only write it but know that it had written it.” Turing demurred: if the only way to be certain that a machine is thinking “is to be the machine and to feel oneself thinking,” wouldn’t it follow that “the only way to know that a man thinks is to be that particular man”? Nor was the imitation game, for Turing, a mere thought experiment. On the contrary, he predicted that in 50 years, “it will be possible to program computers . . . to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”
Well, he was almost right, as Brian Christian explains in “The Most Human Human,” his illuminating book about the Turing test. In 2008, a computer program called Elbot came just one vote shy of breaking Turing’s 30 percent silicon ceiling. The occasion was the annual Loebner Prize Competition, at which programs called “chatterbots” or “chatbots” face off against human “confederates” in scrupulous enactments of the imitation game. The winning chatbot is awarded the title “Most Human Computer,” while the confederate who elicits “the greatest number of votes and greatest confidence from the judges” is awarded the title “Most Human Human.”
It was this title that Christian — a poet with degrees in computer science and philosophy — set out, in 2009, to win. And he was not about to go “head-to-head (head-to-motherboard?) against the top A.I. programs,” he writes, without first getting, as it were, in peak condition. After all, for Elbot to have fooled the judges almost 30 percent of the time into believing that it was human, its rivals had to have failed almost 30 percent of the time to persuade the judges that they were human. To earn the “Most Human Human” title, Christian realized, he would have to figure out not just why Elbot won, but why humanity lost.
His quest is, more or less, the subject of “The Most Human Human,” an irreverent picaresque that follows its hero from the recondite arena of the “Nicomachean Ethics” to the even more recondite arena of legal deposition to perhaps the most recondite arena of all, that of speed dating — and on beyond zebra. What Christian learns along the way is that if machines win the imitation game as often as they do, it’s not because they’re getting better at acting human; it’s because we’re getting worse.
Take, for example, the loathsome infinite regress of telephone customer service. You pummel your way through a blockade of menu options only to find that the live operator, once you reach her, talks exactly like the automated voice you’re trying to escape. And why is this? Because, Christian discovers, that’s how operators are trained to talk. Nor is this emulation of the electronic limited to the commercial realm. In chess, he notes, the “victory” of the computer program Deep Blue over Garry Kasparov had the paradoxical effect of convincing a whole generation of young chess players that the route to a grandmaster title was through rote memorization of famous matches. Whereas in the past these chess players might have dreamed of growing up to be Kasparov, master of strategy, now they dream of growing up to be Deep Blue, master of memory.
So how do you win the imitation game? “Just be yourself,” a past confederate advises Christian. But what does it mean to “be yourself”? In pursuing the question, Christian finds his way to Nietzsche, who “held the startling opinion that the most important part of ‘being oneself’ was — in the Brown University philosopher Bernard Reginster’s words — ‘being one self, any self.’ ” Which, as it turns out, is immensely challenging for computers. To circumvent the difficulty, for instance, the program known as Cleverbot “crowdsources” selfhood, borrowing intelligence from the humans who visit its Web site; it’s from this “conversational purée” that it draws its remarks and retorts, thereby generating the illusion of what Christian calls “coherence of identity.” But while Cleverbot can speak persuasively about “the things to which there is a right answer independent of the speaker,” if you ask it where it lives, “you get a pastiche of thousands of people talking about thousands of places.” What you realize, in other words, isn’t that you aren’t talking with a human, but that you aren’t talking with any one human.
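A “crowdsourced” chatbot of the sort described here is easy to caricature: log what human visitors said in reply to similar prompts, then answer by echoing the most common logged reply. The sketch below is only that caricature (the logged exchanges are invented), not Cleverbot’s actual mechanism.

```python
# Toy sketch of a "crowdsourced" chatbot: it replies with whatever human
# visitors most often said after a similar prompt. Not Cleverbot's actual
# mechanism; the logged exchanges are invented for illustration.

from collections import defaultdict, Counter

class PasticheBot:
    def __init__(self):
        self.responses = defaultdict(Counter)  # prompt -> human replies seen

    def log_exchange(self, prompt, human_reply):
        self.responses[prompt.lower()][human_reply] += 1

    def reply(self, prompt):
        seen = self.responses.get(prompt.lower())
        if not seen:
            return "Tell me more."        # fallback when nothing matches
        return seen.most_common(1)[0][0]  # echo the most common human answer

bot = PasticheBot()
bot.log_exchange("Where do you live?", "London")
bot.log_exchange("Where do you live?", "In Ohio")
bot.log_exchange("Where do you live?", "London")

print(bot.reply("Where do you live?"))  # prints "London", borrowed from someone else's life
```

Each reply is borrowed from a different visitor, so no single self stands behind them.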
And that’s precisely the difficulty. In a wiki-age that privileges the collective over the personal, Christian suggests, we have become tone deaf to the difference between the human voice and the chatbot voice. Nor is the effect limited to the Loebner Prize. From smartphones whose predictive-text algorithms auto-correct the originality out of our language (“the more helpful our phones get, the harder it is to be ourselves”) to “super-automatic” espresso machines that sidestep the nuanced maneuvers of the human barista, technology militates against Ford Madox Ford’s “personal note,” Nietzsche’s “single taste”: against selfhood itself.
Christian is at his best when he is at his most hortatory. “Cobbled-together bits of human interaction do not a human relationship make,” he inveighs early on. “Not 50 one-night stands, not 50 speed dates, not 50 transfers through the bureaucratic pachinko. No more than sapling tied to sapling, oak though they may be, makes an oak. Fragmentary humanity isn’t humanity.” And later: “For everyone out there fighting to write idiosyncratic, high-entropy, unpredictable, unruly text, swimming upstream of spell-check and predictive auto-completion: Don’t let them banalize you.”
As “The Most Human Human” demonstrates, Christian has taken his own words to heart. An authentic son of Frost, he learns by going where he has to go, and in doing so proves that both he and his book deserve their title.
David Leavitt’s books include “The Man Who Knew Too Much: Alan Turing and the Invention of the Computer” and “The Indian Clerk.”
Source | New York Times
Gilbert Ryle once wrote that:
engineers stretch, twist, compress and batter bits of metal until they collapse, but it is just by such tests that they determine the strains which the metal will withstand. In somewhat the same way, philosophical arguments bring out the logical powers of the ideas under investigation, by fixing the precise forms of logical mishandling under which they refuse to work.
If that’s the work of philosophy, then Artificial Intelligence (AI) is one of philosophy’s branches. Rod Brooks, for many years director of MIT’s AI Lab, and one of AI’s great plain talkers, not to mention visionaries, defines artificial intelligence something like this: it’s when a machine does something that, if it were done by a person, we’d say it was intelligent, thoughtful, or human.
Wait a second! What does “what we would say” have to do with whether a machine is thinking?
But that’s just the point. AI is applied philosophy. AI curates opportunities for us to think about what we would say about the hard cases. At its best, AI gives us new hard cases. That’s what IBM’s Jeopardy-winning Watson is.
But first, a real-world case: ants remove their dead from the nest and so avoid contamination. This looks like smart behavior. Now dead ants, it turns out, give off oleic acid, and experimenters have been able to demonstrate that ants will eject even live healthy ants from the nest if (thanks to meddling scientists) they have been daubed with oleic acid. What had at first appeared to be a sensitive response of the ants to the threat of harmful bacteria turns out to be a brute response triggered by the presence of a chemical.
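The “brute response” is simple enough to spell out as a rule. The toy below is my own illustration, not a model from the ant studies: the trigger keys on the chemical, not on whether the nestmate is actually dead.

```python
# Toy illustration of the point above (not a model from any study): the ants'
# "undertaker" behavior reduces to a trigger on a single chemical cue.

def ant_reaction(nestmate):
    # The rule keys on the chemical alone, not on whether the nestmate is dead.
    if nestmate.get("smells_of_oleic_acid"):
        return "eject from nest"
    return "ignore"

print(ant_reaction({"alive": False, "smells_of_oleic_acid": True}))   # eject from nest
print(ant_reaction({"alive": True,  "smells_of_oleic_acid": True}))   # eject from nest (daubed by scientists)
print(ant_reaction({"alive": True,  "smells_of_oleic_acid": False}))  # ignore
```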
Is the ant smart? Or stupid? Maybe neither. Or, most intriguingly of all, maybe it is both? Is there an experimentum crucis that we might perform to settle a question like this once and for all?
No. Intelligence isn’t like that. It isn’t something that happens inside the bug, or inside us. If intelligence is anything, it is an appropriate and autonomous responsiveness to the world around us. Flexible, real-time sensitivity to actual situations is what we have in mind when we talk about intelligence. And this means that intelligence is always going to be not just a matter of degree, but one of interpretation.
So back to Watson: it won! Watson produced answers to real questions, and it did so quickly and in ways that could only dimly be anticipated or understood by its designers. It beat its human opponents! This is a stunning achievement. A dazzling display of real-world, real-time responsiveness in action. Watson can think!
But hold on. Not so fast. Even if Watson is bristling and buzzing with intelligence, we can legitimately wonder whether it’s the natural intelligence of its programmers that is in evidence, rather than that of Watson.
And then there’s the issue of that little pronoun. People wonder whether it’s legitimate to talk of Watson as a He, but really the more pressing question is whether we can even speak of an It. In an important sense, there is no Watson. If Watson is a machine, then it is a machine in the way that a nuclear power plant is a machine. Watson is a system, a distributed local network. The avatar, the voice, the name — these are sleights of hand. The Watson System is staged to manipulate strings of symbols which have no meaning for it. At no point, anywhere in its processes, does the meaning, or context, or point of what it is doing ever get into the act. The Watson System no more understands what’s going on around it, or what it is itself doing, than the ant understands the public health risks of decomposition. It may be a useful tool for us to deploy (for winning games on Jeopardy, or diagnosing illnesses, or whatever — mazal tov!), but it isn’t smart.
But again, we need to slow down. Think of the ants once more. The ants do have good reasons to eject the oleic acid ants from the nest, even if they aren’t clever enough to understand that they do. Natural selection built the ants to act in accord with reasons they cannot themselves understand. And so with Watson. The IBM design team led by David Ferrucci built Watson to act as if it understood meanings that are, in fact, not available to it. And maybe that’s the upshot of what Dan Dennett has called Darwin’s dangerous idea: that’s the way, the only way, that meaning and thinking get into the world, through natural (or artificial) design. Watson is surely nothing like us, as we fantasize ourselves to be. But if Darwin and Dennett are right, we may turn out to be a lot more like Watson than we ever imagined.
Whatever we say about Dennett’s elegant and beautiful theory, you’d have to be drunk on moonshine to take seriously the idea that Watson exhibits a human-like mindfulness. And the reason is, the Watson System fails to exhibit even an animal-like mindfulness.
Remember: animals are basically plants that move. Plants are deaf, blind and dumb. Vegetables have little option but to take what comes. Animals, in contrast, can give chase, take cover, seek out both prey and mates, and hide from predators. Animals need to be perceptually sensitive to the places they happen to find themselves, and they need to make choices about what they want or need. In short, animals need to be smart.
Now here’s the rub. Watson, biologically speaking, if you get my drift, is a plant. Watson is big and it is rooted. Like all plants, it is deaf, blind, and immobile; it is basically incapable of directing action of any kind on the world around it. But now we come up against Ryle’s question as to just how much logical mishandling the concept of intelligence can tolerate. For it is right there — in the space that opens up between the animal and the world, in the situations that require of the animal that it shape and guide and organize its own actions and interactions with its surroundings — that intelligence ever enters the scene.
It’s important to appreciate that language is no work-around here. Language is just one of the techniques animals use to manage their dealings with the world around them. Giving a plant a camera won’t make it see, and giving it language won’t let it think. Which is just a way of reminding us that Watson understands no language. Unlike the ant, who acts as though it has reasons for its actions, Watson acts like a plant that talks.
Source | NPR
Hisashi Ishihara, Yuichiro Yoshikawa, and Prof. Minoru Asada of Osaka University in Japan have developed a new child robot platform called Affetto. Affetto can make realistic facial expressions so that humans can interact with it in a more natural way.
Prof. Asada is the leader of the JST ERATO Asada Project and his team has been working on “cognitive developmental robotics,” which aims to understand the development of human intelligence through the use of robots. (Learn more about the research that led to Affetto in this interview with Prof. Asada.)
Affetto is modeled after a one- to two-year-old child and will be used to study the early stages of human social development. There have been earlier attempts to study the interaction between child robots and people and how that relates to social development, but the lack of realistic child appearance and facial expressions has hindered human-robot interaction, with caregivers not attending to the robot in a natural way.
Affetto can make a range of facial expressions to share its emotions with its caregiver.
The researchers presented a paper describing the development of Affetto’s head at the 28th Annual Conference of the Robotics Society of Japan last year.
Source | IEEE Spectrum
Between the global economic downturn and stubborn unemployment, the last few years have not been kind to the workforce. Now a new menace looms. At just five feet tall and 86 pounds, the HRP-4 may be the office grunt of tomorrow. The humanoid robot, developed by Tokyo-based Kawada Industries and Japan’s National Institute of Advanced Industrial Sciences and Technology, is programmed to deliver mail, pour coffee, and recognize its co-workers’ faces. On Jan. 28, Kawada will begin selling it to research institutions and universities around the world for about $350,000. While that price may seem steep, consider that the HRP-4 doesn’t goof around on Facebook, spend hours tweaking its fantasy football roster, or require a lunch break. Noriyuki Kanehira, the robotic systems manager at Kawada, believes the HRP-4 could easily take on a “secretarial role…in the near future.” Sooner or later, he says, “humanoid robots can move [into] the office field.”
Robotic workers aren’t completely new. General Motors (GM) employed one on an assembly line in 1961, and—according to World Robotics, an annual report produced by the Frankfurt-based International Federation of Robotics—there are currently 8.6 million robots in use around the world. Many of them have been doing jobs that humans can’t do in places humans can’t go, such as plugging oil leaks in the Gulf of Mexico. As a result of breakthroughs in technology, however, a new breed of machines may soon be filing papers and pushing the mail cart. In a 2007 issue of Scientific American, Bill Gates predicted that the future would bring a “robot in every home.” In the foreseeable future, though, it may be a robot in every cubicle—or at least every third cubicle.
Industrial and technological companies across the globe are already hard at work trying to make this a reality. The QB, a “remote presence robot” created by Anybots, based in Mountain View, Calif., is basically a videoconferencing system on wheels. The QB, which looks a little like Wall-E, is controlled remotely through a Web browser and keyboard, allowing managers to virtually visit satellite branches from the comfort of their offices. The $15,000 QB was unveiled in May, and according to Anybots’ founder, Trevor Blackwell, sales are in the hundreds. “Everyone already has videoconferencing,” says Blackwell. “Yet planes are still full of people traveling for business. We’re trying to find a way to solve that problem.”
For around the same price, Smart Robots, based in Dalton, Mass., offers a more ambitious office robot called the SR4. Models range from a $7,495 SR4 Professional to the $18,950 SR4 Office, which resembles R2-D2 with a clear glass top. Joe Bosworth, Smart Robots’ chief executive officer, says the SR4 is just as smart as C-3PO’s little buddy. “I would describe it as a gofer,” he says. “A point-to-point robot should be able to go from any desk to any desk within a multistory office. It should be able to take mail down to the mailroom and then travel across the street to pick up a latte.” Bosworth, who has been involved with Smart Robots since 2002, anticipates criticism from those claiming the SR4 is just a fancy way of replacing human employees. “Are there humanoids—sorry, humans—who do these kinds of things in larger offices? Absolutely,” Bosworth concedes. “Is this intended to displace them entirely? Not really. But does it in fact save some labor in certain circumstances? Yes.”
For businesses with deeper pockets, there’s the PR2, a “personal robot” developed by Willow Garage, a robotics research group in Menlo Park, Calif., founded by Scott Hassan, one of the original architects of the Google (GOOG) search engine. PR2 officially hit the market last September for $400,000, and Samsung became one of its first customers. Unlike more affordable office robots, the five-foot-tall, two-armed, rolling PR2 can do rudimentary problem solving, open doors without instruction, and plug itself into a wall socket when its battery is running low. And as seen in Willow Garage’s video demonstrations, it can fetch a beer from the fridge and play a mean game of pool. Soon enough, people won’t even need real friends.
When it comes to trepidation about robots entering the workplace, Tim Smith, a spokesperson for Willow Garage, takes a historical approach. “People always seem to fear new technology,” he says. “I suspect Ben Franklin got a lot of grief when he started the post office and suddenly the government knew where everybody lived.” Like them or not, Willow Garage is betting that once robots enter the workforce, companies won’t ignore the technology. “When you add a mobile-manipulation robot like the PR2, things get really exciting,” says Joshua Smith, an associate professor of science and engineering at the University of Washington in Seattle. “A robot makes it possible for software to move objects around and change the physical state of the building.” Notes Willow Garage’s Smith: “If the U.S. wants a hand in this market, now isn’t the time for restrictions on robots based on silly, ungrounded fears.”
Hyoun Park, an analyst at the Boston-based technology research firm Aberdeen Group, agrees that there’s nothing to fear from robots but fear itself. “The current state of robotics is less suited to replacing employees in a downturn,” he claims, “and more suited to the cliché of doing more with less.” Not everyone believes the coming nuts-and-bolts workforce is completely benign. Entrepreneur Marshall Brain—that’s his real name—says robots will become widely available by 2030 and could eventually take nearly half of all jobs in the U.S. “We’ve been very busy creating the second intelligent species,” he says.
Brain, who sold his website HowStuffWorks to the Discovery Channel for $250 million in 2007, suggests that robots are a threat to employees at all levels on the corporate totem pole. Even higher-level thinking—the very quality that many managers say separates them from their staff and from artificial intelligence—can be broken down into easily replicated formulas. “Management is one area where a dispassionate robot that’s able to disperse tasks and evaluate employee performance in a perfectly rational way might do a better job than a human,” he says.
The office robot is closing in on upper management-level skills with surprising speed. At Georgia Tech, research engineer Alan Wagner has been collaborating with professor Ronald Arkin on cracking the code of robot intelligence. According to Wagner, their research aims “to build robots that can not only interact with humans but are also capable of representing, reasoning, and developing relationships with others.” They developed an algorithm that, they claim, allows robots, just like CEOs, “to look at a situation and determine whether [it] requires deception, providing false information, to benefit itself.” Basically, they taught robots how to lie.
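One loose way to sketch such a decision, purely as my own simplification (invented payoffs, not the Georgia Tech algorithm): deceive only when the other agent acts on your signal and honesty would leave you worse off.

```python
# Hypothetical simplification of the idea described above (not the Georgia
# Tech algorithm): a robot judges that a situation "requires deception" when
# (a) the other agent acts on what it is told and (b) giving false
# information yields a better outcome than honesty. Payoffs are invented.

def situation_requires_deception(payoff_if_honest, payoff_if_deceptive,
                                 other_agent_acts_on_my_signal):
    """Deceive only when the other agent reacts to my signal and
    false information leaves me better off."""
    if not other_agent_acts_on_my_signal:
        return False  # no dependence: deception is pointless
    return payoff_if_deceptive > payoff_if_honest

# A hide-and-seek style example: signalling a false hiding spot pays off.
print(situation_requires_deception(payoff_if_honest=0,
                                   payoff_if_deceptive=10,
                                   other_agent_acts_on_my_signal=True))   # True
```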
This potential for duplicity may be even more alarming to human employees who might one day lose their jobs to a gang of Wall-E doppelgängers. Yet Smart Robots’ Bosworth insists that the rise of ever-more sophisticated robots offers a sliver of hope to minions everywhere. “Technology makes jobs, it doesn’t do away with jobs,” he says. Bosworth points to the invention of the television, which created a new industry in television repair. He predicts that there’s a fortune to be made in preventive robot maintenance. This isn’t the best news for those with bigger career ambitions than being grease monkeys for robots. Though in a market where robots may be taking all the good jobs, work is work, right?
Source | Business Week
It will be man vs. machine part 2 on Jeopardy next month, and if Thursday’s friendly matchup was a sign of things to come, the humans are in trouble. Former Jeopardy champions Ken Jennings and Brad Rutter lost by $1000 to IBM’s Watson DeepQA-based supercomputer during a three-category Jeopardy practice round Thursday night. The trio will officially square off for a $1 million grand prize during two Jeopardy matches that will air February 14-16.
Who is Watson?
Watson is IBM’s latest supercomputer, based on the company’s DeepQA software, which combines natural language processing, machine learning, and information retrieval. The device packs 15 terabytes of RAM and about 2,880 processor cores that can perform 80 trillion operations a second, and it is the size of 10 refrigerators, according to Wired. Watson will have to rely on its self-contained databases for answers, and won’t be hooked up to the Internet during the Jeopardy challenge.
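A cartoon of that retrieve-and-score combination looks something like the sketch below. It is nothing like DeepQA’s actual scale or methods; the two-entry “knowledge base,” the clue, and the scoring rule are invented for illustration.

```python
# Cartoon of a retrieve-then-score question-answering pipeline (information
# retrieval plus a toy evidence score). Not IBM's system; the mini knowledge
# base and scoring rule are invented.

KNOWLEDGE = {
    "Toronto": "largest city in Canada home of Pearson airport",
    "Chicago": "US city whose airports are named O'Hare and Midway",
}

def words(text):
    return set(text.lower().split())

def retrieve_candidates(clue):
    """Information retrieval: keep entries sharing any word with the clue."""
    return {name: text for name, text in KNOWLEDGE.items()
            if words(clue) & words(text)}

def score(clue, text):
    """Toy evidence score: fraction of clue words found in the passage."""
    return len(words(clue) & words(text)) / len(words(clue))

def answer(clue):
    candidates = retrieve_candidates(clue)
    return max(candidates, key=lambda n: score(clue, candidates[n])) if candidates else None

print(answer("US city with airports named O'Hare and Midway"))   # Chicago
```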
On stage, a computer monitor will be the only part of Watson people will see, and just like its competitors, Watson will have to trigger a buzzer before it can answer a question. When Alex Trebek calls on Watson, it will answer in a computer-generated voice that is eerily reminiscent of HAL from 2001: A Space Odyssey.
Watson’s Tough Road Ahead
Many experts are saying the challenges Watson had to overcome to play Jeopardy are far more complex than those IBM’s chess-playing computer, Deep Blue, faced when it defeated chess grandmaster Garry Kasparov in 1997.
In chess, there are only so many possible moves you can make to respond to your opponent, and all your moves are defined within a clear set of rules.
Jeopardy, on the other hand, will require Watson to handle a much bigger challenge: decoding human language with all its nuances, implied meanings and colloquial expressions. As David Ferrucci, IBM’s lead manager for Watson, recently told IDG News, “Natural language processing is so difficult because of the many different ways the same information can be expressed.”
That’s why the mere fact that Watson is able to compete in Jeopardy, let alone win, is considered a significant milestone for artificial intelligence. If you want to learn more, check out IDG News’ great write-up on the challenges Watson faces next month.
Source | PC World
Studies into dolphin behaviour have highlighted how similar their communications are to those of humans and that they are brighter than chimpanzees. These have been backed up by anatomical research showing that dolphin brains have many key features associated with high intelligence.
The researchers argue that their work shows it is morally unacceptable to keep such intelligent animals in amusement parks or to kill them for food or by accident when fishing. Some 300,000 whales, dolphins and porpoises die in this way each year.
Dolphins have long been recognised as among the most intelligent of animals but many researchers had placed them below chimps, which some studies have found can reach the intelligence levels of three-year-old children. Recently, however, a series of behavioural studies has suggested that dolphins, especially species such as the bottlenose, could be the brighter of the two. The studies show how dolphins have distinct personalities, a strong sense of self and can think about the future.
It has also become clear that they are “cultural” animals, meaning that new types of behaviour can quickly be picked up by one dolphin from another.
In one study, Diana Reiss, professor of psychology at Hunter College, City University of New York, showed that bottlenose dolphins could recognise themselves in a mirror and use it to inspect various parts of their bodies, an ability that had been thought limited to humans and great apes.
In another, she found that captive animals also had the ability to learn a rudimentary symbol-based language.
Other research has shown dolphins can solve difficult problems, while those living in the wild co-operate in ways that imply complex social structures and a high level of emotional sophistication.
In one recent case, a dolphin rescued from the wild was taught to tail-walk while recuperating for three weeks in a dolphinarium in Australia.
After she was released, scientists were astonished to see the trick spreading among wild dolphins who had learnt it from the former captive.
There are many similar examples, such as the way dolphins living off Western Australia learnt to hold sponges over their snouts to protect themselves when searching for spiny fish on the ocean floor.
Such observations, along with others showing, for example, how dolphins could co-operate with military precision to round up shoals of fish to eat, have prompted questions about the brain structures that must underlie them.
When it comes to intelligence, however, brain size is less important than its size relative to the body.
What the neuroscientist Lori Marino and her colleagues found was that the cerebral cortex and neocortex of bottlenose dolphins were so large that “the anatomical ratios that assess cognitive capacity place it second only to the human brain”. They also found that the brain cortex of dolphins such as the bottlenose had the same convoluted folds that are strongly linked with human intelligence.
Such folds increase the volume of the cortex and the ability of brain cells to interconnect with each other. “Despite evolving along a different neuroanatomical trajectory to humans, cetacean brains have several features that are correlated with complex intelligence,” Marino said.
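The “anatomical ratios” referred to above are typically encephalization quotients: actual brain mass relative to the brain mass expected for a mammal of that body size. The short example below works through the calculation with rough, textbook-level mass figures; it is my own illustration, not Marino’s data.

```python
# Rough illustration of an encephalization quotient (EQ): actual brain mass
# divided by the mass expected for a typical mammal of that body size
# (Jerison's expectation). Masses below are approximate textbook figures,
# used only to show the calculation, not data from the study.

def expected_brain_mass_g(body_mass_g):
    return 0.12 * body_mass_g ** (2 / 3)   # expected brain mass for a mammal

def eq(brain_mass_g, body_mass_g):
    return brain_mass_g / expected_brain_mass_g(body_mass_g)

animals = {
    "human":              (1350, 65_000),
    "bottlenose dolphin": (1600, 200_000),
    "chimpanzee":         (400, 45_000),
}

for name, (brain, body) in sorted(animals.items(), key=lambda kv: -eq(*kv[1])):
    print(f"{name:>18}: EQ of roughly {eq(brain, body):.1f}")
```

With these rough numbers the ordering comes out human first, bottlenose dolphin second, chimpanzee third, which is the ranking the passage describes.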
Marino and Reiss will present their findings at a conference in San Diego, California, next month, concluding that the new evidence about dolphin intelligence makes it morally repugnant to mistreat them.
Thomas White, professor of ethics at Loyola Marymount University, Los Angeles, who has written a series of academic studies suggesting dolphins should have rights, will speak at the same conference.
“The scientific research . . . suggests that dolphins are ‘non-human persons’ who qualify for moral standing as individuals,” he said.
Source | Current
Carnegie Mellon University researchers have found that within the brain’s neocortex lies a subnetwork of highly active neurons that behave much like people in social networks.
Like Facebook, these neuronal networks have a small population of highly active members who give and receive more information than the majority of other members, says Alison Barth, associate professor of biological sciences at Carnegie Mellon and a member of the Center for the Neural Basis of Cognition (CNBC). By identifying these neurons, scientists will now be able to study them further and increase their understanding of the neocortex, which is thought to be the brain’s center of higher learning.
Billions of neurons make up the neocortex, the part of the cerebral cortex that is responsible for a number of important functions, including sensory perception, motor function, spatial reasoning, conscious thought, and language. Although neuroscientists have been studying the neocortex for 40 years, technologies had only allowed them to look broadly at general areas of the brain, not at the high resolution of individual neurons. While they believed only a small proportion of neurons were doing most of the work in the neocortex, they couldn’t see whether this was indeed the case.
In the current study, published in the journal Neuron, the researchers used a specialized transgenic mouse model developed by Barth to overcome these challenges and clearly see which neocortical neurons were the most active. The model links green fluorescent protein (GFP) to the activity-dependent gene fos, causing a neuron to light up when it is activated. The researchers, including former Carnegie Mellon and CNBC postdoctoral student Lina Yassin, now at Ludwig-Maximilians-Universität in Munich, took recordings from both fos-labeled and unlabeled neurons and found that the most active neurons were the ones expressing the fos gene. They were then able to isolate the active neurons using imaging techniques and take electrophysiological recordings from them, allowing the team to begin to understand the mechanisms underlying the increased activity.
Barth and colleagues were able to see that the fos-expressing neurons weren’t more active because they were intrinsically more excitable; in fact, the neurons seemed to be calmer or more suppressed than their neighboring, inactive neurons. What made them more active was their input.
According to Barth, it seems that this active network of neurons in the neocortex acts like a social network. There is a small, but significant, population of neurons that are more connected than other neurons. These neurons do most of the heavy lifting, giving and receiving more information than the rest of the neurons in their network.
“It’s like Facebook. Most of your friends don’t post much — if at all. But, there is a small percentage of your friends on Facebook who update their status and page often. Those people are more likely to be connected to more friends, so while they’re sharing more information, they’re also receiving more information from their expanded network, which includes other more active participants,” Barth said.
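The hub picture is easy to make concrete with a toy graph: count each node’s connections, and a small minority turns out to carry most of the traffic. The graph below is invented for illustration, not data from the study.

```python
# Toy graph illustrating the hub idea described above (invented edges, not
# the study's measurements): a few highly connected nodes send and receive
# most of the network's information.

from collections import Counter

edges = [
    ("n1", "n2"), ("n1", "n3"), ("n1", "n4"), ("n1", "n5"), ("n1", "n6"),
    ("n2", "n3"), ("n4", "n7"), ("n5", "n8"),
    ("n9", "n10"),
]

degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

hubs = [node for node, d in degree.items() if d >= 4]
print("hub neurons:", hubs)                                   # ['n1'], the active minority
print("average degree:", sum(degree.values()) / len(degree))  # 1.8 for everyone else on average
```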
The findings stand to have a dramatic impact on neuroscience. Now that researchers are able to identify and visualize these active cells they can begin to determine why they are more active and how stable the activity is. The Carnegie Mellon researchers plan to study these neurons to see what, if any, role these neurons play in learning.
The results also will help to further computational neuroscience, specifically in the area of sparse coding. In sparse coding, scientists hope to study how the brain recruits a small population of neurons to encode information. This research will for the first time allow for the study of the electrophysiological properties of strongly responsive but sparsely populated cells.
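Sparse coding itself is easy to caricature: a stimulus is represented by leaving only a handful of the most responsive units active out of a larger population. The sketch below is that caricature, with hypothetical response values, not a model from the study.

```python
# Toy caricature of sparse coding (hypothetical values, not a model from the
# study): encode a stimulus by keeping only the k most responsive units
# active and silencing the rest of the population.

def sparse_code(responses, k=2):
    """Keep the k strongest responses; silence everything else."""
    keep = sorted(responses, key=responses.get, reverse=True)[:k]
    return {unit: (responses[unit] if unit in keep else 0.0)
            for unit in responses}

# Hypothetical responses of five neurons to a whisker stimulus
responses = {"n1": 0.9, "n2": 0.1, "n3": 0.05, "n4": 0.7, "n5": 0.0}
print(sparse_code(responses))   # only n1 and n4 stay active
```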
Source | Carnegie Mellon