Archive for February, 2010

Sports Enhancement and Life Enhancement: Different Rules Apply

Sunday, February 28th, 2010

If you want to see the future debate over human enhancement, look no further than today’s sports. The modern athlete is a highly enhanced creature. Whatever physiological edge you can get may provide the razor-thin margin for victory in contemporary sports. And with more ways of modifying the body come more restrictions, and innovations to get around the restrictions.

Athletes may very well be leading the rest of society into the debate about who, how, and why people will be allowed — or even required — to enhance their bodies.

Elite players get it all: performance-enhancing drugs, surgeries, gadgetry, specialized equipment, even mathematical analysis to help them perform their desired tasks. They are monitored and modeled, tested and retested, sorted and classified. The modern elite player is an isolated cyborgian construct with barely room for a life and identity away from their sport.

Current attitudes towards enhancements vary wildly. Some enhancements are considered the price you pay to get in the game; others, the worst type of cheating. Certain dangerous acts are considered wrong while others are considered honorable. Some seem arcane while others could be useful to anyone and everyone. These attitudes tend to polarize — a new injectable hormone will quickly become anathema, but seeking multiple LASIK eye surgeries to get better than 20/20 vision is a professional responsibility.

Form matters at least as much as outcome. Take the case of Erythropoietin, or EPO. You make EPO to regulate the number of red blood cells you have, and therefore how readily you can get oxygen to your muscles. Injections of synthetic Erythropoietin to boost performance are a major no-no in sports. It’s considered blood doping. But athletes can produce EPO another way: by sleeping in a hypobaric chamber. This reduces oxygen and air pressure to what it would be somewhere 10,000-15,000 feet above sea level. The body responds by producing its own EPO — and lots of it — to get as much oxygen to the sleeping muscles as it can in the deprived environment. After a few weeks in one of these chambers, training in the thick O2 bath at sea level is a breeze. And sleeping in a hypobaric chamber would not be considered cheating any more than pitching a tent halfway up Everest.

Another instructive example is Tommy John surgery, an operation that replaces the ligament in the elbow that tends to suffer most in baseball pitchers. This surgery lets them pitch harder for longer, and despite being a major surgical modification, it isn’t viewed negatively. On the other hand, strengthening the arms by supplementing with a combination of testosterone and weight training is prohibited.

This may seem hypocritical, but it isn’t. After all, the rules of sports are arbitrary. Why shouldn’t you use your hands in soccer? Because then it’s not soccer. What makes a hypobaric chamber OK, but an injection a firing offense? Because we said so. After we invented agriculture, the bow, or perhaps mountaintop mining equipment, human athletics became a cultural pastime rather than a vital function. No matter how much you love your local sports team, the stakes aren’t what they once were. You will not be starved for protein through the long winter if Barry Bonds isn’t hitting like he used to. Thus, we can pick the rules we like. They don’t have to be consistent with anything in the real world.

This is why applying the debate about sports enhancements to the rest of the world can be dangerous. When we’re deciding if we should give Modafinil to pilots or Ritalin to grad students, we’re making life and death choices about what our future will look like. The questions that arise around sports enhancement — questions about the player’s quality of life, autonomy and freedom, or questions around gauging acceptable risk — can help to inform a wider debate on enhancement, as long as we keep those aspects related to arbitrary rules back where they belong — in pastimes.

Source | H+ Magazine

Optical system promises to revolutionize undersea communications

Sunday, February 28th, 2010

Along with the “transfer [of] real-time video from un-tethered [submerged] vehicles” up to support vessels on the surface, “this combination of capabilities will make it possible to operate self-powered ROVs [remotely operated vehicles] from surface vessels without requiring a physical connection to the ROV,” says WHOI Senior Engineer Norman E. Farr, who led the research team. This not only represents a significant technological step forward, but also promises to reduce costs and simplify operations, they say.

An artist’s conception of how the optical modem could function at a deep ocean cabled observatory. Autonomous underwater vehicles (AUVs) collect sonar images (downward bands of light) and other data at a hydrothermal vent site and transmit the data through an optical modem to receivers stationed on moorings in the ocean. The moorings are connected to a cabled observatory, and the data are sent back to scientists on shore. Scientists, in turn, can send new instructions to the AUVs via the optical modem as well. (E. Paul Oberlander, Woods Hole Oceanographic Institution)

Their report will be presented Feb. 23 at the 2010 Ocean Sciences Meeting in Portland, Ore.

Compared to communication in the air, communicating underwater is severely limited because water is essentially opaque to electromagnetic radiation except in the visible band. Even then, light penetrates only a few hundred meters in the clearest waters; less in sediment-laden or highly populated waters.

Consequently, acoustic techniques were developed, and are now the predominant mode of underwater communications between ships and smaller, autonomous and robotic vehicles. However, acoustic systems—though capable of long-range communication—transmit data at limited speeds and with long delays, owing to the relatively slow speed of sound in water.

Now, Farr and his WHOI team have developed an optical communication system that complements and integrates with existing acoustic systems to enable data rates of up to 10-to-20 megabits per second over a range of 100 meters, using relatively low battery power with small, inexpensive transmitters and receivers.
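As a rough illustration of what that data-rate gap means in practice, here is a minimal Python sketch. Only the 10-megabit-per-second rate and the 100-meter range come from the article; the acoustic throughput, payload size, and speed of sound are assumed, typical values.

```python
# Back-of-the-envelope comparison of moving a dataset over an acoustic link
# versus the optical link described here. Only the optical rate and range are
# taken from the article; everything else is an illustrative assumption.

ACOUSTIC_RATE_BPS = 5_000         # assumed typical acoustic-modem throughput
OPTICAL_RATE_BPS = 10_000_000     # lower bound reported for the WHOI system
RANGE_M = 100                     # reported optical operating range
SOUND_SPEED_MPS = 1_500           # approximate speed of sound in seawater

def transfer_seconds(payload_bytes: int, rate_bps: float,
                     propagation_s: float = 0.0) -> float:
    """Serialization time plus one-way propagation, ignoring protocol overhead."""
    return payload_bytes * 8 / rate_bps + propagation_s

payload = 100 * 1024 * 1024  # an illustrative 100 MB sonar-image dataset
acoustic = transfer_seconds(payload, ACOUSTIC_RATE_BPS, RANGE_M / SOUND_SPEED_MPS)
optical = transfer_seconds(payload, OPTICAL_RATE_BPS)  # light delay negligible
print(f"acoustic: {acoustic / 3600:.1f} hours, optical: {optical:.0f} seconds")
```

Under these assumptions the same dataset that ties up an acoustic channel for nearly two days moves over the optical link in about a minute and a half, which is the gap the article's "near-instant" claim turns on.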

The advance will allow near-instant data transfer and real-time video from un-tethered ROVs and autonomous underwater vehicles (AUVs) outfitted with sensors, cameras and other data-collecting devices to surface ships or laboratories, which would need only a standard UNOLS cable dangling below the surface to relay the data.

This would represent a significant advance, Farr says, in undersea investigations of anything from the acidity of water to identifying marine life to observing erupting vents and seafloor slides to measuring numerous ocean properties. In addition, the optical system would enable direct maneuvering of the vehicle by a human.

He likens the optical/acoustic system’s possibilities to the world opened up by “your household wi-fi.”

Co-investigator Maurice Tivey of WHOI adds that “underwater optical communications is akin to the cell phone revolution…wireless communications. The ability to transfer information and data underwater without wires or plugging cables in is a tremendous capability allowing vehicles or ships to communicate with sensors on the seafloor.

“While acoustic communications has been the method of choice in the past it is limited by bandwidth and the bulkiness of transducers,” Tivey says. “Today, sensors sample at higher rates and can store lots of data and so we need to be able to download that data more efficiently. Optical communications allows us to transfer large data sets, like seismic data or tides or hydrothermal vent variations, in a time-efficient manner.”

When the vehicle goes out of optical range, it will still be within acoustic range, the researchers said.

Because it enables communications without the heavy tether-handling equipment required for an ROV, the optical/acoustic system promises to require smaller, less-expensive ships and fewer personnel to perform undersea missions, Farr said.

This July, WHOI plans the first large-scale deployment of the system at the Juan de Fuca Ridge offshore of the northwestern United States. The WHOI team will employ the human occupied vehicle (HOV) Alvin to deploy the optical system on a subsea data concentrator to collect and transmit geophysical data from wellheads situated at the undersea ridge.

Ultimately, Farr says, the system will “allow us to have vehicles [at specific undersea locations] waiting to respond to an event. It’s a game-changer.”

WHOI scientists collaborating on the research with Farr—who is in the Applied Ocean Physics and Engineering (AOPE) department—and Tivey, chair of the Geology and Geophysics department, are Jonathan Ware, AOPE senior engineer, Clifford Pontbriand, AOPE engineer, and Jim Preisig, AOPE associate scientist.

The work was funded by the National Science Foundation’s Division of Ocean Sciences.

Source | Physorg 

Triumph of the Cyborg Composer

Friday, February 26th, 2010

David Cope’s software creates beautiful, original music. Why are people so angry about that?

The office looks like the aftermath of a surrealistic earthquake, as if David Cope’s brain has spewed out decades of memories all over the carpet, the door, the walls, even the ceiling. Books and papers, music scores and magazines are all strewn about in ragged piles. A semi-functional Apple Power Mac 7500 (discontinued April 1, 1996) sits in the corner, its lemon-lime monitor buzzing. Drawings filled with concepts for a never-constructed musical-radio-space telescope dominate half of one wall. Russian dolls and an exercise bike, not to mention random pieces from homemade board games, peek out from the intellectual rubble. Above, something like 200 sets of wind chimes from around the world hang, ringing oddly congruent melodies.

And in the center, the old University of California, Santa Cruz, emeritus professor reclines in his desk chair, black socks pulled up over his pants cuffs, a thin mustache and thick beard lending him the look of an Amish grandfather.

It was here, half a dozen years ago, that Cope put Emmy to sleep. She was just a software program, a jumble of code he’d originally dubbed Experiments in Musical Intelligence (EMI, hence “Emmy”). Still — though Cope struggles not to anthropomorphize her — he speaks of Emmy wistfully, as if she were a deceased child.

Emmy was once the world’s most advanced artificially intelligent composer, and because he’d managed to breathe a sort of life into her, he became a modern-day musical Dr. Frankenstein. She produced thousands of scores in the style of classical heavyweights, scores so impressive that classical music scholars failed to identify them as computer-created. Cope attracted praise from musicians and computer scientists, but his creation raised troubling questions: If a machine could write a Mozart sonata every bit as good as the originals, then what was so special about Mozart? And was there really any soul behind the great works, or were Beethoven and his ilk just clever mathematical manipulators of notes?

Cope’s answers — not much, and yes — made some people very angry. He was so often criticized for these views that colleagues nicknamed him “The Tin Man,” after the Wizard of Oz character without a heart. For a time, such condemnation fueled his creativity, but eventually, after years of hemming and hawing, Cope dragged Emmy into the trash folder.

This month, he is scheduled to unveil the results of a successor effort that’s already generating the controversy and high expectations that Emmy once drew. Dubbed “Emily Howell,” the daughter program aims to do what many said Emmy couldn’t: create original, modern music. Its compositions are innovative, unique and — according to some in the small community of listeners who’ve heard them performed live — superb.

With Emily Howell, Cope is, once again, challenging the assumptions of artists and philosophers, exposing revered composers as unknowing plagiarists and opening the door to a world of creative machines good enough to compete with human artists. But even Cope still wonders whether his decades of innovative, thought-provoking research have brought him any closer to his ultimate goal: composing an immortal, life-changing piece of music.

Cope’s earliest memory is looking up at the underside of a grand piano as his mother played. He began lessons at the age of 2, eventually picking up the cello and a range of other instruments, even building a few himself. The Cope family often played “the game” — his mother would put on a classical record, and the children would try to divine the period, the style, the composer and the name of works they’d read about but hadn’t heard. The music of masters like Rachmaninov and Stravinsky instilled in him a sense of awe and wonder.

Nothing, though, affected Cope like Tchaikovsky’s Romeo and Juliet, which he first heard around age 12. Its unconventional chord changes and awesome Sturm und Drang sound gave him goose bumps. From then on, he had only one goal: writing a piece that some day, somewhere, would move some child the same way Tchaikovsky moved him. “That, just simply, was the orgasm of my life,” Cope says.

He begged his parents to pay for the score, brought it home and translated it to piano; he studied intensely and bought theory books, divining, scientifically, what made it work. It was then he knew he had to become a composer.

Cope sailed through music schooling at Arizona State University and the University of Southern California, and by the mid-1970s, he had settled into a tenured position at Miami University of Ohio’s prestigious music department. His compositions were performed in Carnegie Hall and The Kennedy Center for the Performing Arts, and internationally from Lima, Peru, to Bialystok, Poland. He built a notable electronic music studio and toured the country, wowing academics with demonstrations of the then-new synthesizer. He was among the foremost academic authorities on the experimental compositions of the 1960s, a period during which a fired-up jet engine and sounds derived from placing electrodes on plants were considered music.

When Cope moved to UC Santa Cruz in 1977 to take a position in its music department, he could’ve put his career on autopilot and been remembered as a composer and author. Instead, a brutal case of composer’s block sent him on a different path.

In 1980, Cope was commissioned to write an opera. At the time, he and his wife, Mary (also a Santa Cruz music faculty member), were supporting four children, and they’d quickly spent the commission money on household essentials like food and clothes. But no matter what he tried, the right notes just wouldn’t come. He felt he’d lost all ability to make aesthetic judgments. Terrified and desperate, Cope turned to computers.

Along with his work on synthesis, or using machines to create sounds, Cope had dabbled in the use of software to compose music. Inspired by the field of artificial intelligence, he thought there might be a way to create a virtual David Cope software to create new pieces in his style.

The effort fit into a long tradition of what would come to be called algorithmic composition. Algorithmic composers use a list of instructions — as opposed to sheer inspiration — to create their works. During the 18th century, Joseph Haydn and others created scores for a musical dice game called Musikalisches Würfelspiel, in which players rolled dice to determine which of 272 measures of music would be played in a certain order. More recently, 1950s-era University of Illinois researchers Lejaren Hiller and Leonard Isaacson programmed stylistic parameters into the Illiac computer to create the Illiac Suite, and Greek composer Iannis Xenakis used probability equations. Much of modern popular music is a sort of algorithm, with improvisation (think guitar solos) over the constraints of simple, prescribed chord structures.
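To make the dice-game idea concrete, here is a minimal Python sketch of a Musikalisches Würfelspiel-style generator. The measure labels are placeholders standing in for pre-written bars of music, not fragments of the actual historical game.

```python
import random

# A toy Musikalisches-Wuerfelspiel-style generator: for each slot in the
# piece, a roll of two dice selects one of eleven pre-written measures
# (dice totals 2 through 12). The "composition" is pure recombination.

measure_bank = {slot: [f"measure_{slot}_{roll}" for roll in range(2, 13)]
                for slot in range(16)}  # 16 slots, one option per dice total

def roll_two_dice() -> int:
    return random.randint(1, 6) + random.randint(1, 6)

piece = [measure_bank[slot][roll_two_dice() - 2] for slot in range(16)]
print(" | ".join(piece))
```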

Few of Cope’s major works, save a dalliance with Navajo-style compositions, had strayed far from classical music, so he wasn’t a likely candidate to rely on software to write. But he did have an engineer’s mind, composing using note-card outlines and a level of planning that’s rare among free-spirited musicians. He even claims to have created his first algorithmic composition in 1955, instigated by the singing of wind over the guy wires of a radio tower.

Cope emptied Santa Cruz’s libraries of books on artificial intelligence, sat in on classes and slowly learned to program. He built simple rules-based software to replicate his own taste, but it didn’t take long before he realized the task was too difficult. He turned to a more realistic challenge: writing chorales (four-part vocal hymns) in the style of Johann Sebastian Bach, a childhood favorite. After a year’s work, his program could compose chorales at the level of a C-student college sophomore. It was correctly following the rules, smoothly connecting chords, but it lacked vibrancy. As AI software, it was a minor triumph. As a method of producing creative music, it was awful.

Cope wrestled with the problem for months, almost giving up several times. And then one day, on the way to the drug store, Cope remembered that Bach wasn’t a machine — once in a while, he broke his rules for the sake of aesthetics. The program didn’t break any rules; Cope hadn’t asked it to.

The best way to replicate Bach’s process was for the software to derive his rules — both the standard techniques and the behavior of breaking them. Cope spent months converting 300 Bach chorales into a database, note by note. Then he wrote a program that segmented the bits into digital objects and reassembled them the way Bach tended to put them together.
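A heavily simplified sketch of that segment-and-reassemble idea, assuming a toy chord-level representation (Emmy's actual representation was far richer): learn which element tends to follow which in a small corpus, then chain the learned transitions into a new progression.

```python
import random
from collections import defaultdict

# Minimal recombination in the spirit Cope describes: record which chord
# follows which in a corpus, then walk those transitions to build a "new"
# progression made entirely of the corpus's own moves.

corpus = [["C", "F", "G", "C"], ["C", "G", "Am", "F", "C"],
          ["Am", "F", "C", "G", "C"]]  # invented toy progressions

transitions = defaultdict(list)
for chorale in corpus:
    for current, nxt in zip(chorale, chorale[1:]):
        transitions[current].append(nxt)

def recombine(start: str, length: int) -> list[str]:
    progression = [start]
    while len(progression) < length and transitions[progression[-1]]:
        progression.append(random.choice(transitions[progression[-1]]))
    return progression

print(recombine("C", 8))
```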

The results were a great improvement. Yet as Cope tested the recombining software on Bach, he noticed that the music would often wander and lack an overall logic. More important, the output seemed to be missing some ineffable essence.

Again, Cope hit the books, hoping to discover research into what that something was. For hundreds of years, musicologists had analyzed the rules of composition at a superficial level. Yet few had explored the details of musical style; their descriptions of terms like “dynamic,” for example, were so vague as to be unprogrammable. So Cope developed his own types of musical phenomena to capture each composer’s tendencies — for instance, how often a series of notes shows up, or how a series may signal a change in key. He also classified chords, phrases and entire sections of a piece based on his own grammar of musical storytelling and tension and release: statement, preparation, extension, antecedent, consequent. The system is analogous to examining the way a piece of writing functions. For example, a word may be a noun in preparation for a verb, within a sentence meant to be a declarative statement, within a paragraph that’s a consequent near the conclusion of a piece.
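Cope's own writings refer to this five-label grammar by the acronym SPEAC. The sketch below shows one way such labels might be represented in code, with a toy rule that an antecedent's tension should eventually be answered by a consequent; the example analysis is hypothetical, not drawn from a real score.

```python
from enum import Enum

# Cope's five functional labels as a data type, plus a toy structural check:
# an antecedent raises tension that some later consequent should resolve.

class Role(Enum):
    STATEMENT = "S"
    PREPARATION = "P"
    EXTENSION = "E"
    ANTECEDENT = "A"
    CONSEQUENT = "C"

def unresolved_tension(phrase_roles: list[Role]) -> bool:
    """True if an antecedent is never answered by a consequent."""
    pending = False
    for role in phrase_roles:
        if role is Role.ANTECEDENT:
            pending = True
        elif role is Role.CONSEQUENT:
            pending = False
    return pending

analysis = [Role.STATEMENT, Role.PREPARATION, Role.ANTECEDENT, Role.CONSEQUENT]
print(unresolved_tension(analysis))  # False: the tension resolves
```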

Finally, Cope’s program could divine what made Bach sound like Bach and create music in that style. It broke rules just as Bach had broken them, and made the result sound musical. It was as if the software had somehow captured Bach’s spirit — and it performed just as well in producing new Mozart compositions and Shakespeare sonnets. One afternoon, a few years after he’d begun work on Emmy, Cope clicked a button and went out for a sandwich, and she spit out 5,000 beautiful, artificial Bach chorales, work that would’ve taken him several lifetimes to produce by hand.

When Emmy’s Bach pieces were first performed, at the University of Illinois at Urbana-Champaign in 1987, they were met with stunned silence. Two years later, a series of performances at the Santa Cruz Baroque Festival was panned by a music critic — two weeks before the performance. When Cope played “the game” in front of an audience, asking which pieces were real Bach and which were Emmy-written Bach, most people couldn’t tell the difference. Many were angry; few understood the point of the exercise.

Cope tried to get Emmy a recording contract, but classical record companies said, “We don’t do contemporary music,” and contemporary record companies said the opposite. When he finally did land a deal, no musician would play the music. He had to record it with a Disklavier (a modern player piano), a process so taxing he nearly suffered a nervous breakdown.

Though musicians and composers were often skeptical, Cope soon attracted worldwide notice, especially from scientists interested in artificial intelligence and the small, promising field called artificial creativity. Other “AC” researchers have written programs that paint pictures; that tell Mexican folk tales or write detective novels; and that come up with funny jokes. They have varying goals, though most seek to better understand human creativity by modeling it in a machine.

To many in the AC community, including the University of Sussex’s Margaret Boden, doyenne of the field, Emmy was an incredible accomplishment. There’s a test, named for World War II-era British computer scientist Alan Turing, that’s a simple check for so-called artificial intelligence: whether or not a person interacting with a machine and a human can tell the difference. Given its success in “the game,” it could be argued that Emmy passed the Turing Test.

Cope had taken an unconventional approach. Many artificial creativity programs use a more sophisticated version of the method Cope first tried with Bach. It’s called intelligent misuse — they program sets of rules, and then let the computer introduce randomness. Cope, however, had stumbled upon a different way of understanding creativity.

In his view, all music — and, really, any creative pursuit — is largely based on previously created works. Call it standing on the shoulders of giants; call it plagiarism. Everything we create is just a product of recombination.

In Cope’s fascinating hovel of a home office on a Wednesday afternoon, I ask him how exactly he knows that’s true. Just because he built a program that can write music using his model, how can he be so certain that that’s the way man creates?

Cope offers a simple thought experiment: Put aside the idea that humans are spiritually and creatively endowed, because we’ll probably never fully be able to understand that. Just look at the zillions of pieces of music out there.

“Where are they going to come up with sounds that they themselves create without hearing them first?” he asks. “If they’re hearing them for the first time, what’s the author of them? Is it birds, is it airplane sounds?”

Of course, some composers probably have taken dictation from birds. Yet the most likely explanation, Cope believes, is that music comes from other works composers have heard, which they slice and dice subconsciously and piece together in novel ways. How else could a style like classical music last over three or four centuries?

To prove his point, Cope has even reverse-engineered works by famous composers, tracing the tropes, phrases and ideas back to compositions by their forebears.

“Nobody’s original,” Cope says. “We are what we eat, and in music, we are what we hear. What we do is look through history and listen to music. Everybody copies from everybody. The skill is in how large a fragment you choose to copy and how elegantly you can put them together.”

Cope’s claims, taken to their logical conclusions, disturb a lot of people. One of them is Douglas Hofstadter, a Pulitzer Prize-winning cognitive scientist at Indiana University and a reluctant champion of Cope’s work. As Hofstadter has recounted in dozens of lectures around the globe during the past two decades, Emmy really scares him.

Like many arts aficionados, Hofstadter views music as a fundamental way for humans to communicate profound emotional information. Machines, no matter how sophisticated their mathematical abilities, should not be able to possess that spiritual power. As he wrote in Virtual Music, an anthology of debates about Cope’s research, Hofstadter worries Emmy proves that “things that touch me at my deepest core — pieces of music most of all, which I have always taken as direct soul-to-soul messages — might be effectively produced by mechanisms thousands if not millions of times simpler than the intricate biological machinery that gives rise to a human soul.”

I ask Cope whether Emmy bothers him. This is a man who averages about four daily hours of hardcore music listening, who’s touched so deeply by a handful of notes on the piano as to shut his eyes in reverie.

“I can understand why it’s an issue if you’ve got an extremely romanticized view of what art is,” he says. “But Bach peed, and he shat, and he had a lot of kids. We’re all just people.”

As Cope sees it, Bach merely had an extraordinary ability to manipulate notes in a way that made people who heard his music have intense emotional reactions. He describes his sometimes flabbergasting conversations with Hofstadter: “I’d pull down a score and say, ‘Look at this. What’s on this page?’ And he’d say, ‘That’s Beethoven, that’s music of great spirit and great soul.’ And I’d say, ‘Wow, isn’t that incredible! To me, it’s a bunch of black dots and black lines on white paper! Where’s the soul in there?’”

Cope thinks the old cliché of beauty in the eye of the beholder explains the situation well: “The dots and lines on paper are merely triggers that set things off in our mind, do all the wonderful things that give us excitement and love of the music, and we falsely believe that somewhere in that music is the thing we’re feeling,” he says. “I don’t know what the hell ’soul’ is. I don’t know that we have any of it. I’m looking to get off on life. And music gets me off a lot of the time. I really, really, really am moved by it. I don’t care who wrote it.”

He does, of course, see Emmy as a success. He just thinks of her as a tool. Everything Emmy created, she created because of software he devised. If Cope had infinite time, he could have written 5,000 Bach-style chorales. The program just did it much faster.

“All the computer is is just an extension of me,” Cope says. “They’re nothing but wonderfully organized shovels. I wouldn’t give credit to the shovel for digging the hole. Would you?”

Sample of Emily Howell — Track 1

Sample of Emily Howell — Track 2

Cope has a complex relationship with his critics, and with people like Hofstadter who are simultaneously awed and disturbed by his work. He denounces some as focused on the wrong issues. He describes others as racists, prejudiced against all music created by a computer. Yet he thrives on the controversy. If not for the harsh reaction to the early Bach chorales, Cope says, he probably would have abandoned the project. Instead, he decided to “ram Emmy down their throats,” recording five more albums of the software’s compositions, including an ambitious Rachmaninov concerto that nearly led to another nervous breakdown from lack of sleep and overwork.

For the next decade, he fed off the anger and confusion and kudos from colleagues and admirers. Years after the 1981 opera was to be completed, Cope fed a database of his own works into Emmy. The resulting score was performed to the best reviews of his life. Emmy’s principles of recombination and pattern recognition were adapted by architects and stock traders, and Cope experienced a brief burst of fame in the late 1990s, when The New York Times and a handful of other publications highlighted his work. Insights from Emmy percolated through the literature of musical style and creativity — particularly Emmy’s proof-by-example that a common grammar and language underlie almost all music, from Asian to Western classical styles. Eleanor Selfridge-Field, senior researcher at Stanford University’s Center for Computer Assisted Research in the Humanities, likens Cope’s discoveries to the findings from molecular biology that altered the field of biology.

“He has revealed a lot of essential elements of musical style, and the definition of musical works, and of individual contributions to the evolution of music, that simply haven’t been made evident by any other process,” she says. “That really is an important contribution to our understanding of music, revealing some things that are really worth knowing.”

Nevertheless, by 2004, Cope had received too many calls from well-known musicians who wanted to perform Emmy’s compositions but felt her works weren’t “special” enough. He’d produced more than 1,000 in the style of several composers, an endless spigot of material that rendered each one almost commonplace. He feared his Emmy work made him another Vivaldi, the famous composer often criticized for writing the same pieces over and over again. Cope, too, felt Emmy had cheated him out of years of productivity as a composer.

“I knew that, eventually, Emmy was going to have to die,” he says. Over the course of weeks, Cope found every copy of the many databases that made up Emmy and trashed them. He saved a slice of the data and the Emmy program itself, so he could demonstrate it for academic purposes, and he saved the scores she wrote, so others could play them. But he’d never use Emmy to write again. She was gone.

For years, Cope had been experimenting with a different kind of virtual composer. Instead of software based on re-creation, he hoped to build something with its own personality.

This program would write music in an odd sort of way. Instead of spitting out a full score, it converses with Cope through the keyboard and mouse. He asks it a musical question, feeding in some compositions or a musical phrase. The program responds with its own musical statement. He says “yes” or “no,” and he’ll send it more information and then look at the output. The program builds what’s called an association network — certain musical statements and relationships between notes are weighted as “good,” others as “bad.” Eventually, the exchange produces a score, either in sections or as one long piece.
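The description suggests a structure like the following minimal Python sketch, in which yes/no feedback reweights candidate musical statements. The phrase names and the update rule are invented for illustration; Emily Howell's real internals are not public in this form.

```python
import random
from collections import defaultdict

# A stripped-down "association network": musical statements carry weights
# that the user's yes/no feedback nudges up or down, and generation samples
# in proportion to the learned weights.

weights = defaultdict(lambda: 1.0)  # statement -> preference weight

def feedback(statement: str, liked: bool) -> None:
    weights[statement] *= 1.5 if liked else 0.5

def respond(candidates: list[str]) -> str:
    # Sample a statement proportionally to its learned weight.
    return random.choices(candidates, [weights[c] for c in candidates])[0]

phrases = ["rising fourth motif", "chromatic descent", "pedal-point drone"]
feedback("chromatic descent", liked=True)
feedback("pedal-point drone", liked=False)
print(respond(phrases))
```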

Most of the scores Cope fed in came from Emmy, the once-removed music from history’s great composers. The results, however, sound nothing like Emmy or her forebears. “If you stick Mozart with Joplin, they’re both tonal, but the output,” Cope says, “is going to sound like something rather different.”

Because the software was Emmy’s “daughter” — and because he wanted to mess with his detractors — Cope gave it the human-sounding name Emily Howell. With Cope’s help, Emily Howell has written three original opuses of varying length and style, with another trio in development. Although the first recordings won’t be released until February, reactions to live performances and rough cuts have been mixed. One listener compared an Emily Howell work to Stravinsky; others (most of whom have heard only short excerpts online) continue to attack the very idea of computer composition, with fierce debates breaking out in Internet forums around the world.

At one Santa Cruz concert, the program notes neglected to mention that Emily Howell wasn’t a human being, and a chemistry professor and music aficionado in the audience described the performance of a Howell composition as one of the most moving experiences of his musical life. Six months later, when the same professor attended a lecture of Cope’s on Emily Howell and heard the same concert played from a recording, Cope remembers him saying, “You know, that’s pretty music, but I could tell absolutely, immediately that it was computer-composed. There’s no heart or soul or depth to the piece.”

That sentiment — present in many recent articles, blog posts and comments about Emily Howell — frustrates Cope. “Most of what I’ve heard [and read] is the same old crap,” he complains. “It’s all about machines versus humans, and ‘aren’t you taking away the last little thing we have left that we can call unique to human beings — creativity?’ I just find this so laborious and uncreative.”

Emily Howell isn’t stealing creativity from people, he says. It’s just expressing itself. Cope claims it produced musical ideas he never would have thought about. He’s now convinced that, in many ways, machines can be more creative than people. They’re able to introduce random notions and reassemble old elements in new ways, without any of the hang-ups or preconceptions of humanity.

“We are so damned biased, even those of us who spend all our lives attempting not to be biased. Just the mere fact that when we like the taste of something, we tend to eat it more than we should. We have our physical body telling us things, and we can’t intellectually govern it the way we’d like to,” he says.

In other words, humans are more robotic than machines. “The question,” Cope says, “isn’t whether computers have a soul, but whether humans have a soul.”

Cope hopes such queries will attract more composers to give his research another chance. “One of the criticisms composers had of Emmy was: Why the hell was I doing it? What’s the point of creating more music, supposedly in the style of composers who are dead? They couldn’t understand why I was wasting my time doing this,” Cope says.

That’s already changed.

“They’re seeing this now as competition for themselves. They see it as, ‘These works are now in a style we can identify as current, as something that is serious and unique and possibly competitive to our own work,’” Cope says. “If you can compose works fast that are good and that the audience likes, then this is something.”

I ask Cope whether he’s actually heard well-known composers say they feel threatened by Emily Howell.

“Not yet,” he tells me. “The record hasn’t come out.”

The following afternoon, we walk into Cope’s campus office, which seems like another college dorm room/psychic dump, with stacks of compact discs and scores growing from the floor like stalagmites, and empty plastic juice bottles scattered about. The one thing that looks brand-new is the black upright piano against the near wall.

Cope pulls up a chair, removes his Indiana Jones hat and eagerly explains the latest phase of his explorations into musical intelligence. Though he’s still poking around with Emily Howell, he’s now spending the bulk of his composition time employing on-the-fly programs.

Here’s how this cyborg-esque composing technique works: Cope comes up with an idea. For instance, he’ll want to have five voices, each of which alternates singing groups of four notes. Or perhaps he’ll want to write a piece that moves quickly from the bottom of the piano keyboard to the top, and then back down. He’ll rapidly code a program to create a chunk of music that follows those directions.
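A generator of that quick-and-dirty kind can be only a few lines long. The sketch below produces the second sort of piece Cope mentions, a line that sweeps from the bottom of the piano keyboard to the top and back down; the step size and rhythmless output are arbitrary choices, not Cope's.

```python
# Pitches are MIDI note numbers: 21 is the lowest piano key (A0) and 108 the
# highest (C8). The line climbs in fixed steps and mirrors back down.

LOW, HIGH, STEP = 21, 108, 3

ascent = list(range(LOW, HIGH + 1, STEP))
sweep = ascent + ascent[-2::-1]          # up, then mirror back down
print(len(sweep), "notes:", sweep[:10], "...")
```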

After working with Emmy and Emily Howell for nearly 30 years and composing for about twice that many, Cope is fast enough to hear something in his head in the bathtub, dry off and get dressed, move to the computer and 10 minutes later have a whole movement of 100 measures ready. It may not be any good, but it’s the fastest way to translate his thoughts into a solid rough draft.

“I listen with creative ears, and I hear the music that I want to hear and say, ‘You know? That’s going to be fabulous,’ or ‘You know … ‘” — he makes a spitting noise — “‘in the toilet.’ And I haven’t lost much, even though I’ve got a whole piece that’s in notation immediately.”

He compares the process to a sculptor who chops raw shapes out of a block of marble before he teases out the details. Using quick-and-dirty programs as an extension of his brain has made him extraordinarily prolific. It’s a process close to what he was hoping for back when he first started working on software to save him from composer’s block.

As complex as Cope’s current method is, he believes it heralds the future of a new kind of musical creation: armies of computers composing (or helping people compose) original scores.

“I think it’s going to happen,” Cope says. “I don’t believe that composers are stupid people. Ultimately, they’re going to use any tool at their disposal to get what they’re after, which is, after all, good music they themselves like to listen to. There will be initial withdrawal, but eventually it’s going to happen — whether we want it to or not.”

Already, at least one prominent pop group — he’s signed a confidentiality agreement, so he can’t say which one — asked him to use software to help them write new songs. He also points to services like Pandora, which uses algorithms to suggest new music to listeners.

If Cope’s vision does come true, it won’t be due to any publicity efforts on his part. He’ll answer questions from anyone, but he refuses to proactively promote his ideas. He still hasn’t told most of his colleagues or close friends about Tinman, a memoir he clandestinely published last year. The attitude, which he settled on at a young age, is to “treat myself as if I’m dead,” so he won’t affect how his work is received. “If you have to promote it to get people to like it,” he asks, “then what have you really achieved?”

Cope has sold tens of thousands of books, had his works performed in prestigious venues and taught many students who evangelize his ideas around the world. Yet he doesn’t think it adds up to much. All he ever wanted was to write something truly wonderful, and he doesn’t think that’s happened yet. As a composer, Cope laments, he remains a “frustrated loser,” confused by the fact that he burned so much time on a project that stole him away from composing. He still just wants to create that one piece that changes someone’s life — it doesn’t matter whether it’s composed by one of his programs, or in collaboration with a machine, or with pencil on a sheet of paper.

“I want that little boy or girl to have access to my music so they can play it and get the same thrill I got when I was a kid,” he says. “And if that isn’t gonna happen, then I’ve completely failed.”

Source | Miller-McCune

Just Like Mombot Used to Make

Friday, February 26th, 2010

In an empty fluorescent-lighted hallway on the second floor of Smith Hall here at Carnegie Mellon University, Prof. Paul Rybski and a pair of graduate students showed off their most advanced creation.

The culmination of two years of research and the collective expertise of 17 faculty members, undergraduates and doctoral students in the Human Robot Interaction Group, it is a robot outfitted with a $20,000 laser navigation system, sonar sensors and a Point Grey Bumblebee 2 stereo camera that functions as its eyes, which stare out from its clay-colored plastic, gender-neutral face.

With Dr. Rybski looking on like a proud parent, a bearded graduate student clacked away at a laptop on a roving service cart, and the robot rolled forward to fulfill its primary function: the delivery of one foil-wrapped Nature Valley trail-mix flavor granola bar.

“Hello, I’m the Snackbot,” it said in a voice not unlike that of HAL 9000, from “2001: A Space Odyssey,” as its rectangular LED “mouth” pulsated to form the words. “I’ve come to deliver snacks to Ian. Is Ian here?”

I responded affirmatively. “Oh, hello, Ian,” it said. “Here is your order. I believe it was a granola bar, right?”

Yes, it was. “All right, go ahead and take your snack. I’m sure it would be good, but I wouldn’t know. I prefer a snack of electricity.”

Designed to gather information on how robots interact with people (and how to improve homo-robo relations), the Snackbot has been carefully considered for maximum approachability in every detail, from its height to its color. The snack, not surprisingly, is the central component of that approachability.

“We figured, what better way to get people to interact with a robot than have something that offers them food?” Dr. Rybski said.

The Snackbot is but one soldier in a veritable army of new robots designed to serve and cook food and, in the process, act as good-will ambassadors, and salesmen, for a more automated future.

In 2006, after four years of research and more than a quarter-million-dollar investment, Fanxing Science and Technology, a company in Shenzhen, China, unveiled what was called the “world’s first cooking robot” — AIC-AI Cooking Robot — able, at the touch of a button, to fry, bake, boil and steam its way through thousands of Chinese delicacies from at least three culinary regions.

AIC-AI needs a special stove for cooking, but many of the mechanized culinary wizards developed since then can work on almost any kind of stove, as long as the robot is either shown ahead of time how a particular stove works or the stove’s characteristics are programmed into the robot’s software.

In 2008, scientists at the Learning Algorithms and Systems Laboratory in Lausanne, Switzerland, came out with one such teachable chef, the Chief Cook Robot, which can make omelets (ham and Gruyère were in its first) and bears a resemblance to the Pillsbury Doughboy. That same year, at the Osaka Museum of Creative Industries in Japan, a programmable robot began preparing takoyaki (octopus balls) from scratch, a chef’s bandana wrapped jauntily around its upper module.

Last June, at the International Food Machinery and Technology Expo in Tokyo, a broad-shouldered Motoman SDA-10 robot with spatulas for arms made okonomiyaki (savory pancakes) for attendees; another robot grabbed sushi with an eerily realistic hand; and still another, the Dynamizer, sliced cucumbers at inhumanly fast speeds and occasionally complained about being tired and wanting to go home.

Then, a month later in Nagoya, Japan, the Famen restaurant opened, with two giant yellow robot arms preparing up to 800 bowls of ramen a day. When it’s slow, the robots act out a scripted comedy routine and spar with knives.

“The concept of this restaurant is that Robot No. 1 is the manager, which boils the noodles, and Robot No. 2 is the deputy manager, which prepares for soup and puts toppings,” said Famen’s owner, Kenji Nagaya. “Human staffs are working for the two robots.”

In the throes of an economic downturn, with unemployment rates mounting, the very idea of a robot chef might seem indulgent at best — at worst, downright offensive. But these robots aren’t likely to be running the grill stations or bringing you chowder anytime soon — and the bad economy might be part of the reason. At $100,000 a pair, Mr. Nagaya said, the cost of his robots is “too high to make bowls of ramen.”

But they may be worth the cost at Mr. Nagaya’s other workplace, the robotics company Aisei in Nagoya, where he is the president. “I have made and programmed industrial robots at our company so long, and I was thinking to set up a place to promote our business,” he said. “I love ramen a lot, and ramen restaurants are always featured in magazines and on television in Japan, so I thought opening a ramen shop with robots would have a huge impact on promoting our business.”

Mikio Shimizu, the president of Squse, a company in Kyoto, Japan, that is responsible for the sushi-grabbing hand, said that his ultimate goal is to become the world’s largest maker of functional prosthetic hands.

Narito Hosomi, the president of Toyo Riki, a company in Osaka, Japan, that programs the robots responsible for the octopus balls and savory pancakes, said that the final destination for the robots, which cost $200,000 each, was more likely a factory than a kitchen.

But “it’s not interesting to watch robots welding,” Dr. Hosomi said. “If you see robots do the same work as you do in everyday life with the tools you use, it would be easier to understand the functional capability of robots. The okonomiyaki robot is a medium for that purpose. We say a robot can make okonomiyaki, takoyaki — well, what would you like a robot to do for you?”

While cooking is certainly a more universal way to showcase a robot’s abilities than, say, laser-welding, it is also unique in its ability to tackle something deeper: namely, the public’s collective “Terminator”-fueled angst over a future populated by vengeful humanoid machines.

Dr. Heather Knight, a roboticist at the NASA Jet Propulsion Laboratory, said that the industry is trying to change “the perception of robots.”

“The Japanese have always been more comfortable with it, but particularly in the West, there’s this whole Frankenstein thing that if we try to make something in the image of man, to make a new creature, we’re stealing the role of God, and it’s going to turn out wrong because that’s not our role,” she said. “So how do you change this perception that robots are going to be way too intelligent and destroy us? One of the fastest ways to people’s hearts is food, right? Any girlfriend or wife would say that.”

In fact, Dr. Aude Billard, whose team designed the egg-handling Chief Cook Robot, said that she decided on omelets because “it was the first dish my partner cooked for me.” The omelet making was meant to show how a robot could be “taught” to accomplish complex tasks. It was also “something that all the guys in the lab knew sufficiently well to be able to train the robot,” she said.

But perhaps the biggest accomplishment of this new wave of sustenance-bearing machines is their departure from what defined their predecessors. The Fritz Lang level of efficiency normally associated with robots is notably absent — and that’s no accident.

“A simple rule of robotic personality seems to be: don’t make things the most efficient way,” said Magnus Wurzer, who has been running the Vienna-based Roboexotica, a festival where scientists have gone to build, showcase and discuss “cocktail robots” since 1999.

One entry, Beerbot, detects approaching people and asks for beer money. When it acquires enough, it “buys” itself a beer. Bystanders can watch it flow into a transparent bladder. As for other humanizing behaviors, “like a robot that doesn’t stop short at lighting a cigarette but actually goes ahead and smokes it?” Mr. Wurzer says, “We had that.”

Roboexotica has inspired a stateside version as well, which just had its third annual celebration in San Francisco.

And in at least one case in Europe, a robot actually got behind a bar. From 1999 to 2002, a scarlet-eyed metal robot named Cynthia poured drinks at Cynthia’s Bridge Bar and Lounge in London. But according to Mr. Wurzer, “she was too costly to maintain once the bar was sold by the robot’s maker.”

One reviewer at virtual-london.com, a travel-information Web site, said that Cynthia’s problems went deeper: “She whirls into action, pouring drinks to perfection, mixing them, recounting awful jokes and chuckling to herself while frightened customers feel grateful she’s not allowed out from behind the bar.”

However hard it may be to master, humanizing behavior was what the Snackbot’s creators were seeking, too.

“How do you get a service robot to interact with humans?” Dr. Rybski asked. “That’s a real hard problem. It’s different when you’re working with a human versus a pipe on an assembly line.”

To prepare, one of Dr. Rybski’s graduate students, a slender and quiet Korean woman named Min Kyung Lee, spent two days staked out behind a campus hot dog vendor, taking notes on how he interacted with his customers. She used what she learned to program the robot’s dialogue.

Beyond the obvious challenge of instilling a machine with personality lies another long-held axiom in the world of robotics: what might seem second nature to humans can be all but impossible to teach a machine.

Mr. Wurzer said that one scientist at Roboexotica built a robot solely dedicated to the preparation of mojitos — “with the grinding and stomping and all.” And yet the most challenging task for all the robots, he said, was probably the one thing that no human bartender ever botches: handling the ice.

The Chief Cook Robot still relies on human beings to crack the eggs — the shells are far too delicate for its metal hands. The okonomiyaki-making robot still needs the vegetables prepped, a task arguably better suited to a robot.

And while robots could certainly be developed and trained for these tasks, some culinary arts are so delicate and ancient — so venerated and sanctified — that even these machines’ creators wouldn’t trust them to inhuman hands.

“Would you like to have a robot hand that makes sushi?” said Mr. Shimizu, of Squse, which programmed the sushi-grabbing hand. “Do you really want it? For making good sushi, a robot never can beat a human professional sushi chef. A robot never can go beyond a human’s skill or human intelligence.”

But the real obstacle to a world full of mechanized sous-chefs and simulated rage-filled robo-Gordon Ramsays may be something much harder to fake: none of these robots can taste.

Keizo Shimamoto, who writes a blog on ramen noodles and has eaten at Famen, the two-robot Japanese restaurant, said that the establishment was “kind of dead” when he ate there last year. Though the owner said that people do taste the food, according to Mr. Shimamoto, “It was a little disappointing.”

It’s one thing to get people to stop by to see the robots. “But to keep the customers coming back,” he said, “you need better soup.”

Source | New York Times

The real Avatar: ocean bacteria act as ‘superorganism’

Friday, February 26th, 2010

In the movie Avatar, the Na’vi people of Pandora plug themselves into a network that links all elements of the biosphere, from phosphorescent plants to pterodactyl-like birds. It turns out that Pandora’s interconnected ecosystem may have a parallel back on Earth: sulphur-eating bacteria that live in muddy sediments beneath the sea floor.

Some researchers believe that bacteria in ocean sediments are connected by a network of microbial nanowires. These fine protein filaments could shuttle electrons back and forth, allowing communities of bacteria to act as one super-organism. Now Lars Peter Nielsen of Aarhus University in Denmark and his team have found tantalising evidence to support this controversial theory.

“The discovery has been almost magic,” says Nielsen. “It goes against everything we have learned so far. Micro-organisms can live in electric symbiosis across great distances. Our understanding of what their life is like, what they can and can’t do – these are all things we have to think of in a different way now.”

Many marine bacteria generate energy by oxidising the gas hydrogen sulphide, which is common in ocean sediments. To do this, the bugs need access to the oxygen in seawater to carry away the electrons produced as the sulphide is broken down.

Nielsen and his team took samples of bacteria-laced sediment from the sea floor close to Aarhus. In the lab, they first removed and then replaced the oxygen in the seawater above the samples. To their surprise, measurements of hydrogen sulphide revealed that bacteria several centimetres from the surface started breaking down the gas long before the reintroduced oxygen had diffused down to them (Nature, DOI: 10.1038/nature08790).

Nielsen believes a network of conductive protein wires between the bacteria makes this possible, allowing the oxidation reaction to happen remotely from the oxygen that sustains it. The wires transport electrons from bacteria in deeper, oxygen-poor sediments to bacteria in oxygen-rich mud near the surface. There, they are offloaded onto the oxygen, completing the reaction. Nielsen calls the process “electrical symbiosis”.

Other evidence backs up this idea. For years, geochemists have known that microbes generate a weak current in the seabed – a process several groups are using to build microbial fuel cells. “But people have been focused on making power. They’ve left behind the question of what’s going on in nature and why bacteria might have this ability to exchange electrons,” says Nielsen.

“These are very encouraging and exciting results,” says biogeochemist Yuri Gorby of the J. Craig Venter Institute in San Diego, California. He adds that while Nielsen’s results are “highly suggestive” of electrical symbiosis, “we must be careful not to extend conclusions beyond what are scientifically proven”. Nanowires have been spotted in the lab by Gorby and other teams, but not in natural sediment. Nielsen, who plans to search for the wires in natural settings, says it is needle-in-a-haystack work.

As for the Avatar similarities, Nielsen says “we have no indication that more advanced information is exchanged in the network”, but admits the parallels are striking.

Source | New Scientist

Can avatars change the way we think and act? (w/ Video)

Friday, February 26th, 2010

If you saw a digital image of yourself running on a virtual treadmill, would you feel like going to the gym? Probably so, according to a Stanford study showing that personalized avatars can motivate people to exercise and eat right.

Moreover, you are more likely to imitate the behavior of an avatar in real life if it looks like you, said Jesse Fox, a doctoral candidate in the Communication Department and a researcher at the Stanford Virtual Lab. In her study, she used digital photographs of participants to create personalized avatar bodies, a service some game companies offer today.

To escape to the virtual realm, you simply slip on a helmet with screens attached in front of the eyes. You are instantly immersed in a digital room and fully surrounded by a new world, as if you are inside a video game. Cameras in the lab track an infrared light on your helmet so that images on the screen move with your head.

Participants respond to avatars that look like them

In Fox’s first test, some participants put on the helmet and saw their avatar running on a treadmill. Others saw themselves loitering in the virtual room or saw a running avatar they didn’t recognize.

Fox contacted participants a day after the study and found that the people who saw their own avatar running were more likely to exercise (after they left the lab) than the people who saw someone else running or saw themselves just hanging out in the virtual room. In fact, those who watched themselves running were motivated to exercise, on average, a full hour more than the others. They ran, played soccer or worked out at the gym.

“They had imitated their avatar’s behavior,” Fox said.

In another test, some participants ran in place while watching their avatars become thinner, other participants stood still and watched their avatars become heavier, and others saw an unfamiliar avatar either slim or fatten. Participants who had witnessed their own avatar change – whether becoming thinner or heavier – exercised significantly more than those who had seen an unfamiliar avatar.

Seeing their face on an avatar was the driving factor. “If they saw a person they didn’t know, they weren’t motivated to exercise. But if they saw themselves, they exercised significantly more,” she said.

Participants also responded to personalized avatars whose bodies slimmed as they ate carrots or grew heavier as they ate candy. Male participants mimicked the avatar and ate more candy, but because of the gender differences associated with eating, female participants ate less candy.

Fox thinks personalized avatars could be used to motivate healthy behavior. For example, someone on a long-term weight loss schedule could pull out his or her cell phone and track progress by watching the avatar body slim down onscreen.

Female avatars change participants’ view of women

In a separate study, Fox tested the influence of avatars on attitudes and views toward women. She showed participants two types of female avatars: a suggestively dressed woman in revealing clothing and a conservatively dressed woman in blue jeans and a jacket. Both types of avatars demonstrated either dominant behavior such as staring at the participant or submissive behavior such as staring at the floor and cowering.

Both male and female participants exposed to the suggestive avatar showed higher rape myth acceptance when answering a questionnaire afterward. This is the view that women deserve to be raped if, for example, they wear suggestive clothing or are out alone at night. These participants were also more likely to agree with statements such as “women seek to gain power by getting control over men” and “women are too easily offended.” Even when Fox ran a similar test with women whose own faces appeared on the sexualized avatars, participants still showed higher rape myth acceptance.

Video games almost always portray women in a stereotypical manner, Fox said. “If all it takes is five minutes of exposure in an immersive virtual world to one character, we really have to ask ourselves about exposures and interactions in video games like Grand Theft Auto,” Fox said. The female characters in Grand Theft Auto are often scantily clad victims of violence.

On the other hand, the influences of body image in the virtual world may also help women. For example, an anorexic woman with a poor self-image might embody a healthy-looking avatar. She might become comfortable in her new body as she interacts with others in the virtual world and experiences acceptance and approval. Learning the benefits of being healthy may motivate her to adopt a healthy diet or seek help in real life.

After studying the influence of avatars, Fox is sure about one thing: the need for media literacy. “The bottom line is that we have to have more education in society, particularly showing students stereotypes that exist in media and why they exist.”

Source | Physorg

Ray Kurzweil about Singularity and Technology P1

Friday, February 26th, 2010

All About Space Travel, Time Travel, Quantum Tunneling & Zero-G Sex

Monday, February 22nd, 2010

Marc Millis, former head of NASA’s Breakthrough Propulsion Physics Project, has designed ion thrusters, electronics for rocket monitoring, cryogenic propellant equipment, and even a cockpit display to guide free-fall aircraft flights. His recent retirement after nearly 30 years with NASA has freed him to devote full time to his Tau Zero Foundation, “using the dream of reaching other worlds as both a long-range goal and a catalyst for near-term progress.”

Millis and h+ Editor-in-Chief RU Sirius met at TEDxBrussels 2009, where each gave a presentation — Millis’ talk dealt with finding and reaching habitable worlds while RU, naturally, talked about transhumanism. Here’s a video of Millis’ TEDx presentation:





Millis’ book, Frontiers of Propulsion Science, is a compendium on where things are with the science of space drives, warp drives, gravity control, and faster-than-light travel. It’s really a graduate-level text for someone with a substantial math and physics background. He says that one of his hopes now that he’s retired from NASA is to make the time to write a public companion version of the book with more artwork and simpler language to help convey the concepts.

Known for his level-headed, peer-reviewed thinking on such complex topics as hyperspace, quantum tunneling, and space colonization, Millis took time out to speak with h+ on a variety of topics including Alpha Centauri, space and time travel paradoxes, space sails, the future of NASA’s space programs, Mars, space elevators, scramjets, and orbiting hotels.

h+: During your TEDx presentation, you mentioned that “we’re 2-4 centuries away from having the technology to launch a mission to Alpha Centauri” using conventional physics.

alpha-centauri.jpgMM: That estimate is based not so much on technological prowess as on available energy. When you look at the pace of human energy production and consumption and how much of that energy is devoted to spaceflight — and you compare it to what it would take to undertake an interstellar mission — the calculations suggest that it will take 2-4 centuries until we can pull off a mission like that.

Those estimates could go one way or the other if any assumptions are varied. For example, the way I did the comparison of the energy devoted to spaceflight is that I took the space shuttle program and compared how much energy was used by the rocketry to the amount of energy being used by the entire Earth. However, if people decide that it’s important to put 10 times that amount of energy into the space program, then the estimate would show a decrease in the time until we’re able to launch a mission to Alpha Centauri. My calculations were a way of putting bounds on the problem and defining where those bounds came from.
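[Editor’s note: Millis’ bounding exercise is easy to reproduce in spirit. Below is a minimal Python sketch of that style of estimate; every constant in it (world energy output, the sliver of it devoted to spaceflight, the growth rate, the mission’s energy bill) is an illustrative assumption of ours, not a figure from his calculations:

    # A rough bounding estimate in the spirit Millis describes.
    # All constants are illustrative assumptions, not his published numbers.
    WORLD_ENERGY = 5e20    # J/yr, rough global energy production today (assumption)
    SPACE_SHARE = 1e-6     # fraction of that energy devoted to spaceflight (assumption)
    GROWTH = 0.03          # assumed annual growth in energy production
    MISSION_ENERGY = 1e22  # J, assumed energy bill for an Alpha Centauri probe

    year, banked = 0, 0.0
    while banked < MISSION_ENERGY:
        banked += WORLD_ENERGY * ((1 + GROWTH) ** year) * SPACE_SHARE
        year += 1
    print(f"~{year} years until that much energy has gone into spaceflight")

With these particular guesses the loop terminates a little past two centuries out; change any assumption and, as he says, the answer slides accordingly.]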

Relative to the technology, as a culture we’re so used to thinking about how we can get “there” the quickest, or what’s the best single approach. When it comes to interstellar flight and learning to live beyond Earth, this thinking sidetracks us: we’re so far from fruition in our understanding of interstellar space options that there’s no way for us to pick “the” one way. Instead, there are many different options and unknowns. We stand to gain a lot more from the attempt to understand them — chipping away at them rather than not doing anything at all. By researching the spectrum of possibilities, we’re likely to be better off in the near term.

I really want to change the paradigm of how we look at interstellar flight. It’s not just a matter of trying to get there quickly or to find “the best approach”; rather, it’s finding the smartest things we can do today that set the stage for a more productive future. At the Tau Zero Foundation, we cover everything from simple solar sails to the seemingly impossible faster-than-light drives. Rather than trying to identify the best approach, we’re trying to identify the next steps that students can work on to chip away at wherever their own personal interests lie.

interstellar-distances.jpgMM: Whenever you broach the faster-than-light topic, it inherently includes the time travel issues. I would love to figure out how to do faster-than-light without the time-travel paradoxes. For me, the hardest chapter in my book is the one that deals with quantum entanglement and faster-than-light implications such as time travel. It finally occurred to me what the real problem is — in our normal language, the idea of things that happen simultaneously is difficult to convey. If something happens over at point A, it normally takes a while before the surrounding points even know about it. To have a language of time that also includes both distance and clear distinctions between before and after… I mean the language doesn’t exist to get into those details. So when you’re looking at an experiment where light is split and goes through more than one path, it seems paradoxical that things somehow appear to be connected instantaneously. How can we turn that into sending signals across time? It’s really confusing, but the confusion is largely due to our language in addressing these situations. The discipline is still so young and unfamiliar that it’s hard to even explain what’s going on.

With that said, I will try to explain it. And I’ll try to keep it in terms of the actual physical effects, because I know I get frustrated when I hear theorists talk about the implications of these things and I think, “Where did he get those ideas from? Where’s it anchored in the physical measurements?” Well, the physical measurements derive from taking light and splitting it so that it’s going across more than one path. The original beam is going across more than one path, and you’re going to recombine the paths and compare them later on. And in each of those separate paths, you do funky things. You change their length, you send one through a mirror so that one path gets a lot longer, or you modify something else. And the odd part is that somehow, apparently, the change you made in one path affects the other path before they recombine. And you can tell the difference when they recombine. So, what’s going on there? Is each of those fragments of light somehow still connected even after they’re separated? Or is there some way that other relations are going on that have to make a fully closed loop before reality precipitates itself in one single way?
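[Editor’s note: for readers who want the standard textbook picture behind the two-path experiment Millis is describing (our gloss, not his notation): after the splitter the light is a superposition of the two paths, and lengthening one path by ΔL adds a relative phase that only becomes visible when the paths recombine:

\[
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|A\rangle + e^{i\varphi}|B\rangle\bigr),
\qquad
\varphi = \frac{2\pi\,\Delta L}{\lambda},
\qquad
P_{\text{detector}} = \Bigl|\tfrac{1}{2}\bigl(1 + e^{i\varphi}\bigr)\Bigr|^{2} = \frac{1 + \cos\varphi}{2}.
\]

Nothing in the formula is paradoxical by itself; the puzzles arise when you ask how a change made to one arm can matter to light that has “already” taken the other path.]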

When you do the mathematics for a particle or light moving from one place to another, and you do it in the quantum way (waves rather than particles), you create a situation where you notice the math says there’s a part moving forward in time, and there’s a part moving backward in time. When the mathematics of both of these coalesce — in other words, they’re both saying the same thing — well, that’s what happens in nature. Are phenomena connected in some way where the only things that are allowed to happen are the ones where there is a balance in the flow of time, and other things are just not allowed? So the light might not necessarily be connected from one beam to the other in a direct sense, but rather in how each of them is related through a flow of time and events: there are a multitude of things that could happen, but the only ones that balance out are the ones that give the appearance that they are connected in time. What this confusing stuff boils down to is that we do not know yet whether quantum entanglement is definitely showing instantaneous connections or if there’s something else going on. The faster-than-light implications of that for quantum tunneling are still uncertain. [Editor’s note: quantum tunneling is the weird quantum process by which a quantum particle passes through a potential barrier that a classical particle can’t traverse.] We’re barely beginning to figure out how to ask the right questions to decide what the most revealing and least confusing experiments are.

colliers-space-program.jpgh+: Now that the Constellation program has been cancelled, will we ever get to interstellar missions?

MM: It comes as no surprise to me that the Constellation program was cancelled, because there just was never any budget for it. It seemed like NASA was being asked to do more than they were paid to do, and like the Black Knight in Monty Python’s Holy Grail, not having the right number of arms and legs left to do the job, they were still diligently trying. What’s going on here? Perhaps it’s just the normal progression of a maturing organization. Do you remember all the Collier’s magazine images of man conquering space that von Braun put together? [Editor’s note: this was a series of six articles on rocketry and space travel illustrated by Chesley Bonestell and published in Collier’s magazine in 1952-1954.] It’s almost as if the articles cast this image of what space flight is supposed to be into our collective social psyche. Since then, we’ve never really had a newer vision to supplant it.

Eventually we’re going to reach a point where we’ve got to rethink this image of space travel. Ever since the cuts to Apollo back during the Nixon administration, the budgets for NASA have remained at about the same level. There have been fluctuations, but it appears that our nation’s legislators and administrators have decided that this is what the space program is worth — and it’s remained relatively the same. It seems like the budget has been stuck in the mode of “we’ve got to finish those von Braun images and maybe next year we’ll get the budget we need.” Even when President Bush said we’re going to go back to space, and yes, I’m going to give you a budget… well, he never did.

h+: Do you see NASA becoming more like DARPA?

MM: The talk I’ve heard is that NASA is trying to get back to more of a research mode. How will they handle it? I don’t know. DARPA has a ton of money. And even the small amount devoted to space is substantial. I don’t have the numbers in front of me, but it’s by no means a trivial amount. To avoid getting stuck in the same mindset, the people who manage the budget serve (I think) six-year terms. But many of the advances that will eventually make it possible to explore and take advantage of things in the solar system and beyond are not short-term prospects.

earth-mars-venus-sun.jpgh+: President Bush did set a goal to go to Mars at one point. What’s the most realistic way to get to Mars?

MM: There’s any number of ways of doing it depending upon what you want to do when you get there — perhaps set up some sort of permanent habitat. I see this as the long-term goal: living somewhere else other than Earth to help assure humanity’s survival.

The idea of practicing by first living on the Moon sounds like a smart way to go. The Moon’s a lot closer — 3 days away, roughly — and there’s an existing infrastructure of launch vehicles to move things from here to there. You can experiment with humans living away from Earth without it being quite as dangerous as Mars. We could spend a few years doing that before taking the big step to Mars. So in that case, the Moon makes sense.

But if you’re doing an Apollo-like push — the cliché NASA uses is “flags and footprints” — then you don’t need the Moon. The propulsion options are numerous depending upon how much you want to take with you. This is a completely different direction than using the Moon as a stepping stone. You can go to Mars with standard chemical propulsion, but the catch is that you need enormous amounts of propellant and it will take you a fairly long time to get there. You could do it with nuclear-electric propulsion, where you need a nuclear power source that provides electricity to drive ion thrusters. That requires a lot less propellant. In that case, when you get there, you still have a substantial power source that you can use for things that you need to do there. And then there’s either nuclear-thermal propulsion or variations where the nuclear reactor is part of the actual rocket engine. Based on very crude estimates — and the details in the studies vary — this might be the fastest, although you still need a power supply to be able to do anything when you get there.
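[Editor’s note: the propellant trade described here falls out of the Tsiolkovsky rocket equation. A quick Python sketch follows; the delta-v budget and the specific-impulse figures are round illustrative numbers, not values from the mission studies Millis mentions:

    import math

    def propellant_fraction(delta_v, isp, g0=9.81):
        # Tsiolkovsky rocket equation: propellant mass as a fraction of initial mass
        return 1 - math.exp(-delta_v / (isp * g0))

    DELTA_V = 6000.0  # m/s, assumed round-number budget for a Mars transfer
    for label, isp in [("chemical", 450), ("nuclear-thermal", 900), ("nuclear-electric ion", 3000)]:
        frac = propellant_fraction(DELTA_V, isp)
        print(f"{label:20s} Isp {isp:4d} s -> {frac:.0%} of departure mass is propellant")

Under those guesses a chemical stage is roughly three-quarters propellant by mass while an ion stage is under a fifth, which is the “lot less propellant” he describes, paid for with lower thrust and a heavy power plant.]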

h+: What about a space elevator on either Mars or the Earth as a means of getting to low orbit?

MM: When it comes to space elevators, I have no idea whether one would be feasible on Mars — I’ve never seen the numbers worked out. A space elevator from Earth could work in principle, but the make-or-break issue is having the tether (the actual line) strong enough without being too heavy. I think that requires materials beyond anything that can be foreseen today, though still physically possible. Space elevators to me are on the edge of the unknown, and chipping away at them will provide some good learning experiences.
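[Editor’s note: a crude way to see why the tether is the make-or-break item is to compare “breaking lengths”: how long an untapered cable of a given material could hang under its own weight in a uniform one-g field before snapping. The material figures below are rough handbook-style assumptions, for illustration only:

    def breaking_length_km(strength_pa, density_kg_m3, g=9.81):
        # Length at which an untapered hanging cable snaps under its own weight
        return strength_pa / (density_kg_m3 * g) / 1000.0

    for name, strength, density in [
        ("high-strength steel", 2.0e9, 7800),
        ("Kevlar", 3.6e9, 1440),
        ("carbon nanotube (theoretical)", 6.0e10, 1300),
    ]:
        print(f"{name:30s} ~{breaking_length_km(strength, density):,.0f} km")

Steel comes out in the tens of kilometres and Kevlar in the hundreds, while an Earth elevator effectively needs an equivalent self-support length in the thousands; tapering the cable and the weakening of gravity with altitude help, but not nearly enough for everyday materials.]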

h+: “Scramjets” are another technology mentioned as a possible transport.

MM: Sometimes they’re called RBCC, or Rocket-Based Combined Cycle, where you use jets or scramjets or combinations of vehicles to get to higher altitudes. These are good options to consider in the suite of possibilities for getting up there. I’m not sure which of them would be the best approach, but they certainly should be considered as a way to get into orbit. It’s a matter of engineering optimization rather than feasibility. In principle, they are fairly straightforward, and you can carry quite a bit if you only have to go as far as low Earth orbit. And now you have space entrepreneurs — the ones that are going to be giving joy rides — who are not quite getting into Earth orbit just yet. Reaching orbit is much more difficult than just getting into space and coming back down. It’s interesting that the technologies they’re using are different than the ones that have been used for NASA programs. For the market they’re going after, what they’re doing makes sense. The more players trying to make progress, the more likely that one of them is going to succeed.

h+: What sort of breakthroughs will it take for the average person to be able to take a multi-day holiday in space for about the same price as going to Europe?

MM: Well, the way the dollars-to-euros conversion is going (laughs)… you currently have SpaceShipTwo, and there is experimentation with orbiting platforms to provide the technologies for creating orbiting hotels. $200K is the current price for a SpaceShipTwo ride, but that will come down over time. But safety is a major concern. With Apollo, we were lucky for so long and had so many engineering successes that it’s become a hard act to follow. I almost wonder, sociologically and historically, whether it might have been better to have more accidents along the way to show that danger is a part of it. Part of the allure in trying to live beyond Earth — which I think we’ll have to do to assure humanity’s survival — is that it’s worth trying to take some of those risks.

spaceshiptwo.jpgConceivably within our lifetimes, orbiting hotels are a possibility. Ah, but here’s the rub. You’ve heard of space sickness, right? If you’re planning an overnighter or a weekend in space for that zero-G sex experience with your significant other, you might want to stay a little bit longer. During your first two days up there, you’re going to have bad headaches, back pains, swelling, and nausea. You’ll want to be able to at least adapt and then enjoy yourself. Book more than two days! It’s definitely harder than jet lag, but adaptable.

Source | H+ Magazine

Videos of UKH+ talks

Monday, February 22nd, 2010

UK Humanity+ has frequent meetings, often with very interesting and informative talks. Luckily for everybody not able to attend them, there’s a YouTube channel.

Source | Humanity Plus

Ontociders, Killer Robots and Physical/Digital Reality

Monday, February 22nd, 2010

Esquire Magazine last month published a serious, reasonable article by Stephen Poole on the Singularity and advancing AGI… just don’t mind the Terminator-ish title: “The Rise of the Machines.”

Poole presents a balanced range of views on how soon human-level AGI will come, and expresses some understandable worry about the possibility of superintelligent military battle-bots.

Also on the theme of potential scary world-outcomes, H+ Assistant Director Marcelo Rinesi has released a fascinating new novella entitled “The Ontociders”, which the author describes as “A short novella of multiple apocalypses, casual violence, and genocidal insanity — and those are just the good guys.”

I read Ontociders during the recent DC “snowpocalypse” that shut down the US capital for a week, which felt somehow appropriate. Thematically it’s a bit in the vein of Kafka, Lem’s Futurological Congress or Dick’s Three Stigmata … but the stylistic vibe is more cyberpunk or manga … one feels the characters are inside a giant multiversal video game and it isn’t clear in what sense anything is actually happening … but still the fast-paced action and drama and experience continue … wait, that’s a bit like this universe we find ourselves stuck in, isn’t it?

After reading Ontociders, it was amusing to randomly happen on Tish Shute’s blog post about augmented reality: The Physical World Becomes a Software Construct … which, like the Esquire article mentioned above, quotes Vernor Vinge …

It all really makes one wonder.  “Life”, “death”, “physical” and “software” are all just part of our limited pre-transhuman-era concept-sphere, right?  Yet still they have their elemental importance, just as does the conscious experience of every sentient being.  What will we think of all this in a few decades or centuries when our minds have been vastly improved and expanded?  Let’s hope we live long enough to find out … and avoid the battle-bots and Ontociders on the path to a positive transhuman future… (and not only hope, but work to make it so…)

And in that vein, I’ll give you one more link … a reminder to read Stephan Pernar’s fascinating AGI/Singularity novel Jame5, which presents one detailed strategy for increasing the odds that superhuman AGIs, when created, will be positive for all sentient beings….  Without giving away the plot, let me just say that the strategy has to do with the interesting relationship between physical and digital reality…

Source | Humanity Plus

A midday nap markedly boosts the brain’s learning capacity

Monday, February 22nd, 2010

If you see a student dozing in the library or a co-worker catching 40 winks in her cubicle, don’t roll your eyes. New research from the University of California, Berkeley, shows that an hour’s nap can dramatically boost and restore your brain power. Indeed, the findings suggest that a biphasic sleep schedule not only refreshes the mind, but can make you smarter.

Conversely, the more hours we spend awake, the more sluggish our minds become, according to the findings. The results support previous data from the same research team showing that pulling an all-nighter — a common practice at college during midterms and finals — decreases the ability to cram in new facts by nearly 40 percent, due to a shutdown of brain regions during sleep deprivation.

“Sleep not only rights the wrong of prolonged wakefulness but, at a neurocognitive level, it moves you beyond where you were before you took a nap,” said Matthew Walker, an assistant professor of psychology at UC Berkeley and the lead investigator of these studies.

In the recent UC Berkeley sleep study, 39 healthy young adults were divided into two groups — nap and no-nap. At noon, all the participants were subjected to a rigorous learning task intended to tax the hippocampus, a region of the brain that helps store fact-based memories. Both groups performed at comparable levels.

At 2 p.m., the nap group took a 90-minute siesta while the no-nap group stayed awake. Later that day, at 6 p.m., participants performed a new round of learning exercises. Those who remained awake throughout the day became worse at learning. In contrast, those who napped did markedly better and actually improved in their capacity to learn.

These findings reinforce the researchers’ hypothesis that sleep is needed to clear the brain’s short-term storage and make room for new information, said Walker, who is presenting his preliminary findings on Sunday, Feb. 21, at the annual meeting of the American Association for the Advancement of Science (AAAS) in San Diego, Calif.

Since 2007, Walker and other sleep researchers have established that fact-based memories are temporarily stored in the hippocampus before being sent to the brain’s prefrontal cortex, which may have more storage space.

“It’s as though the e-mail inbox in your hippocampus is full and, until you sleep and clear out those fact e-mails, you’re not going to receive any more mail. It’s just going to bounce until you sleep and move it into another folder,” Walker said.

In the latest study, Walker and his team have broken new ground in discovering that this memory-refreshing process occurs when nappers are engaged in a specific stage of sleep. Electroencephalogram tests, which measure electrical activity in the brain, indicated that this refreshing of memory capacity is related to Stage 2 non-REM sleep, which takes place between deep sleep (non-REM) and the dream state known as Rapid Eye Movement (REM). Previously, the purpose of this stage was unclear, but the new results offer evidence as to why humans spend at least half their sleeping hours in Stage 2 non-REM sleep, Walker said.

“I can’t imagine Mother Nature would have us spend 50 percent of the night going from one sleep stage to another for no reason,” Walker said. “Sleep is sophisticated. It acts locally to give us what we need.”

Walker and his team will go on to investigate whether the reduction of sleep experienced by people as they get older is related to the documented decrease in our ability to learn as we age. Finding that link may be helpful in understanding such neurodegenerative conditions as Alzheimer’s disease, Walker said.

Source | Physorg

Does Google Make Us Stupid?

Monday, February 22nd, 2010

1499-1.jpgRespondents to the fourth “Future of the Internet” survey, conducted by the Pew Internet & American Life Project and Elon University’s Imagining the Internet Center, were asked to consider the future of the internet-connected world between now and 2020 and the likely innovation that will occur. The survey required them to assess 10 different “tension pairs” — each pair offering two different 2020 scenarios with the same overall theme and opposite outcomes — and to select the statement they judged more likely. Although a wide range of opinion from experts, organizations, and interested institutions was sought, this survey, fielded from Dec. 2, 2009 to Jan. 11, 2010, should not be taken as a representative canvassing of internet experts. By design, the survey was an “opt in,” self-selecting effort.

Among the issues addressed in the survey was the provocative question raised by eminent tech scholar Nicholas Carr in a cover story for the Atlantic Monthly magazine in the summer of 2008: “Is Google Making Us Stupid?” Carr argued that the ease of online searching and the distractions of browsing the web were possibly limiting his capacity to concentrate. “I’m not thinking the way I used to,” he wrote, in part because he is becoming a skimming, browsing reader rather than a deep and engaged reader. “The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author’s words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, foster our own ideas…. If we lose those quiet spaces, or fill them up with ‘content,’ we will sacrifice something important not only in our selves but in our culture.”

Jamais Cascio, an affiliate at the Institute for the Future and senior fellow at the Institute for Ethics and Emerging Technologies, challenged Carr in a subsequent article in the Atlantic Monthly. Cascio made the case that the array of problems facing humanity — the end of the fossil-fuel era, the fragility of the global food web, growing population density, and the spread of pandemics, among others — will force us to get smarter if we are to survive. “Most people don’t realize that this process is already under way,” he wrote. “In fact, it’s happening all around us, across the full spectrum of how we understand intelligence. It’s visible in the hive mind of the Internet, in the powerful tools for simulation and visualization that are jump-starting new scientific disciplines, and in the development of drugs that some people (myself included) have discovered let them study harder, focus better, and stay awake longer with full clarity.” He argued that while the proliferation of technology and media can challenge humans’ capacity to concentrate, there were signs that we are developing “fluid intelligence — the ability to find meaning in confusion and solve new problems, independent of acquired knowledge.” He also expressed hope that techies will develop tools to help people find and assess information smartly.

With that as backdrop, respondents were asked to indicate which of two statements best reflected their view on Google’s effect on intelligence. The chart shows the distribution of responses to the paired statements. The first column covers the answers of 371 longtime experts who have regularly participated in these surveys. The second column covers the answers of all the respondents, including the 524 who were recruited by other experts or by their association with the Pew Internet Project. As shown, 76% of the experts agreed with the statement, “By 2020, people’s use of the internet has enhanced human intelligence; as people are allowed unprecedented access to more information they become smarter and make better choices. Nicholas Carr was wrong: Google does not make us stupid.”

1499-9.jpgRespondents were also asked to “share your view of the internet’s influence on the future of human intelligence in 2020 — what is likely to stay the same and what will be different in the way human intellect evolves?” What follows is a selection of the hundreds of written elaborations and some of the recurring themes in those answers:

Nicholas Carr and Google staffers have their say:

• “I feel compelled to agree with myself. But I would add that the Net’s effect on our intellectual lives will not be measured simply by average IQ scores. What the Net does is shift the emphasis of our intelligence, away from what might be called a meditative or contemplative intelligence and more toward what might be called a utilitarian intelligence. The price of zipping among lots of bits of information is a loss of depth in our thinking.” — Nicholas Carr

•  “My conclusion is that when the only information on a topic is a handful of essays or books, the best strategy is to read these works with total concentration. But when you have access to thousands of articles, blogs, videos, and people with expertise on the topic, a good strategy is to skim first to get an overview. Skimming and concentrating can and should coexist. I would also like to say that Carr has it mostly backwards when he says that Google is built on the principles of Taylorism [the institution of time-management and worker-activity standards in industrial settings]. Taylorism shifts responsibility from worker to management, institutes a standard method for each job, and selects workers with skills unique for a specific job. Google does the opposite, shifting responsibility from management to the worker, encouraging creativity in each job, and encouraging workers to shift among many different roles in their career…. Carr is of course right that Google thrives on understanding data. But making sense of data (both for Google internally and for its users) is not like building the same artifact over and over on an assembly line; rather it requires creativity, a mix of broad and deep knowledge, and a host of connections to other people. That is what Google is trying to facilitate.” — Peter Norvig, Google Research Director

•  “Google will make us more informed. The smartest person in the world could well be behind a plow in China or India. Providing universal access to information will allow such people to realize their full potential, providing benefits to the entire world.” — Hal Varian, chief economist, Google

The resources of the internet and search engines will shift cognitive capacities. We won’t have to remember as much, but we’ll have to think harder and have better critical thinking and analytical skills. Less time devoted to memorization gives people more time to master those new skills.

•  “Google allows us to be more creative in approaching problems and more integrative in our thinking. We spend less time trying to recall and more time generating solutions.” — Paul Jones, ibiblio, University of North Carolina – Chapel Hill

• “Google will make us stupid and intelligent at the same time. In the future, we will live in a transparent 3D mobile media cloud that surrounds us everywhere. In this cloud, we will use intelligent machines, to whom we delegate both simple and complex tasks. Therefore, we will lose the skills we needed in the old days (e.g., reading paper maps while driving a car). But we will gain the skill to make better choices (e.g., knowing to choose the mortgage that is best for you instead of best for the bank). All in all, I think the gains outweigh the losses.” — Marcel Bullinga, Dutch Futurist at futurecheck.com

•  “I think that certain tasks will be ‘offloaded’ to Google or other Internet services rather than performed in the mind, especially remembering minor details. But really, that is a role that paper has taken over many centuries: did Gutenberg make us stupid? On the other hand, the Internet is likely to be front-and-centre in any developments related to improvements in neuroscience and human cognition research.” — Dean Bubley, wireless industry consultant

• “What the internet (here subsumed tongue-in-cheek under “Google”) does is to support SOME parts of human intelligence, such as analysis, by REPLACING other parts such as memory. Thus, people will be more intelligent about, say, the logistics of moving around a geography because “Google” will remember the facts and relationships of various locations on their behalf. People will be better able to compare the revolutions of 1848 and 1789 because “Google” will remind them of all the details as needed. This is the continuation ad infinitum of the process launched by abacuses and calculators: we have become more “stupid” by losing our arithmetic skills but more intelligent at evaluating numbers.” — Andreas Kluth, writer, Economist magazine

• “It’s a mistake to treat intelligence as an undifferentiated whole. No doubt we will become worse at doing some things (‘more stupid’) requiring rote memory of information that is now available though Google. But with this capacity freed, we may (and probably will) be capable of more advanced integration and evaluation of information (‘more intelligent’).” — Stephen Downes, National Research Council, Canada

• “The new learning system, more informal perhaps than formal, will eventually win since we must use technology to cause everyone to learn more, more economically and faster if everyone is to be economically productive and prosperous. Maintaining the status quo will only continue the existing win/lose society that we have with those who can learn in present school structure doing ok, while more and more students drop out, learn less, and fail to find a productive niche in the future.” —  Ed Lyell, former member of the Colorado State Board of Education and Telecommunication Advisory Commission

• “The question is flawed: Google will make intelligence different. As Carr himself suggests, Plato argued that reading and writing would make us stupid, and from the perspective of a preliterate, he was correct. Holding in your head information that is easily discoverable on Google will no longer be a mark of intelligence, but a side-show act. Being able to quickly and effectively discover information and solve problems, rather than do it “in your head,” will be the metric we use.” — Alex Halavais, vice president, Association of Internet Researchers

•  “What Google does do is simply to enable us to shift certain tasks to the network — we no longer need to rote-learn certain seldomly-used facts (the periodic table, the post code of Ballarat) if they’re only a search away, for example. That’s problematic, of course — we put an awful amount of trust in places such as Wikipedia where such information is stored, and in search engines like Google through which we retrieve it — but it doesn’t make us stupid, any more than having access to a library (or in fact, access to writing) makes us stupid. That said, I don’t know that the reverse is true, either: Google and the Net also don’t automatically make us smarter. By 2020, we will have even more access to even more information, using even more sophisticated search and retrieval tools — but how smartly we can make use of this potential depends on whether our media literacies and capacities have caught up, too.” — Axel Bruns, Associate Professor, Queensland University of Technology

• “My ability to do mental arithmetic is worse than my grandfather’s because I grew up in an era with pervasive personal calculators…. I am not stupid compared to my grandfather, but I believe the development of my brain has been changed by the availability of technology. The same will happen (or is happening) as a result of the Googleization of knowledge. People are becoming used to bite sized chunks of information that are compiled and sorted by an algorithm. This must be having an impact on our brains, but it is too simplistic to say that we are becoming stupid as a result of Google.” — Robert Acklund, Australian National University

• “We become adept at using useful tools, and hence perfect new skills. Other skills may diminish. I agree with Carr that we may on the average become less patient, less willing to read through a long, linear text, but we may also become more adept at dealing with multiple factors…. Note that I said ‘less patient,’ which is not the same as ‘lower IQ.’ I suspect that emotional and personality changes will probably be more marked than ‘intelligence’ changes.” — Larry Press, California State University, Dominguez Hills

Technology isn’t the problem here. It is people’s inherent character traits. The internet and search engines just enable people to be more of what they already are. If they are motivated to learn and shrewd, they will use new tools to explore in exciting new ways. If they are lazy or incapable of concentrating, they will find new ways to be distracted and goof off.

• “The question is all about people’s choices. If we value introspection as a road to insight, if we believe that long experience with issues contributes to good judgment on those issues, if we (in short) want knowledge that search engines don’t give us, we’ll maintain our depth of thinking and Google will only enhance it. There is a trend, of course, toward instant analysis and knee-jerk responses to events that degrades a lot of writing and discussion. We can’t blame search engines for that…. What search engines do is provide more information, which we can use either to become dilettantes (Carr’s worry) or to bolster our knowledge around the edges and do fact-checking while we rely mostly on information we’ve gained in more robust ways for our core analyses. Google frees the time we used to spend pulling together the last 10% of facts we need to complete our research. I read Carr’s article when The Atlantic first published it, but I used a web search to pull it back up and review it before writing this response. Google is my friend.” — Andy Oram, editor and blogger, O’Reilly Media

•  “Google isn’t making us stupid — but it is making many of us intellectually lazy. This has already become a big problem in university classrooms. For my undergrad majors in Communication Studies, Google may take over the hard work involved in finding good source material for written assignments. Unless pushed in the right direction, students will opt for the top 10 or 15 hits as their research strategy. And it’s the students most in need of research training who are the least likely to avail themselves of more sophisticated tools like Google Scholar. Like other major technologies, Google’s search functionality won’t push the human intellect in one predetermined direction. It will reinforce certain dispositions in the end-user: stronger intellects will use Google as a creative tool, while others will let Google do the thinking for them.” — David Ellis, York University, Toronto

•  “For people who are readers and who are willing to explore new sources and new arguments, we can only be made better by the kinds of searches we will be able to do. Of course, the kind of Googled future that I am concerned about is the one in which my every desire is anticipated, and my every fear avoided by my guardian Google. Even then, I might not be stupid, just not terribly interesting.” — Oscar Gandy, emeritus professor, University of Pennsylvania

• “I don’t think having access to information can ever make anyone stupider. I don’t think an adult’s IQ can be influenced much either way by reading anything and I would guess that smart people will use the Internet for smart things and stupid people will use it for stupid things in the same way that smart people read literature and stupid people read crap fiction. On the whole, having easy access to more information will make society as a group smarter though.” — Sandra Kelly, market researcher, 3M Corporation

•  “The story of humankind is that of work substitution and human enhancement. The Neolithic revolution brought the substitution of some human physical work by animal work. The Industrial revolution brought more substitution of human physical work by machine work. The Digital revolution is implying a significant substitution of human brain work by computers and ICTs in general. Whenever a substitution has taken place, men have been able to focus on more qualitative tasks, entering a virtuous cycle: the more qualitative the tasks, the more his intelligence develops; and the more intelligent he gets, more qualitative tasks he can perform…. As obesity might be the side-effect of physical work substitution by machines, mental laziness can become the watermark of mental work substitution by computers, thus having a negative effect instead of a positive one.” — Ismael Peña-Lopez, lecturer at the Open University of Catalonia, School of Law and Political Science

• “Well, of course, it depends on what one means by ‘stupid’ — I imagine that Google, and its as yet unimaginable new features and capabilities will both improve and decrease some of our human capabilities. Certainly it’s much easier to find out stuff, including historical, accurate, and true stuff, as well as entertaining, ironic, and creative stuff. It’s also making some folks lazier, less concerned about investing in the time and energy to arrive at conclusions, etc.” — Ron Rice, University of California, Santa Barbara

•  “Nick [Carr] says, ‘Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.’ Besides finding that a little hard to believe (I know Nick to be a deep diver, still), there is nothing about Google, or the Net, to keep anyone from diving — and to depths that were not reachable before the Net came along.” — Doc Searls, co-author of “The Cluetrain Manifesto”

It’s not Google’s fault if users create stupid queries.

•  “To be more precise, unthinking use of the Internet, and in particular untutored use of Google, has the ability to make us stupid, but that is not a foregone conclusion. More and more of us experience attention deficit, like Bruce Friedman in the Nicholas Carr article, but that alone does not stop us making good choices, provided that the ‘factoids’ of information we use to make our decisions are sound. The potential for stupidity comes where we rely on Google (or Yahoo, or Bing, or any engine) to provide relevant information in response to poorly constructed queries, frequently one-word queries, and then base decisions or conclusions on those returned items.” — Peter Griffiths, former Head of Information at the Home Office within the Office of the Chief Information Officer, United Kingdom

• “The problem isn’t Google; it’s what Google helps us find. For some, Google will let them find useless content that does not challenge their minds. But for others, Google will lead them to expect answers to questions, to explore the world, to see and think for themselves.” — Esther Dyson, longtime internet expert and investor

•  “People are already using Google as an adjunct to their own memory. For example, I have a hunch about something, need facts to support it, and Google comes through for me. Sometimes, I see I’m wrong, and I appreciate finding that out before I open my mouth.” — Craig Newmark, founder of Craig’s List

• “Google is a data access tool. Not all of that data is useful or correct. I suspect the amount of misleading data is increasing faster than the amount of correct data. There should also be a distinction made between data and information. Data is meaningless in the absence of an organizing context. That means that different people looking at the same data are likely to come to different conclusions. There is a big difference with what a world class artist can do with a paint brush as opposed to a monkey. In other words, the value of Google will depend on what the user brings to the game. The value of data is highly dependent on the quality of the question being asked.” — Robert Lunn, consultant, FocalPoint Analytics

The big struggle is over what kind of information Google and other search engines kick back to users. In the age of social media where users can be their own content creators it might get harder and harder to separate high-quality material from junk.

• “Access to more information isn’t enough — the information needs to be correct, timely, and presented in a manner that enables the reader to learn from it. The current network is full of inaccurate, misleading, and biased information that often crowds out the valid information. People have not learned that ‘popular’ or ‘available’ information is not necessarily valid.” — Gene Spafford, Purdue University CERIAS, Association for Computing Machinery U.S. Public Policy Council

•  “If we take ‘Google’ to mean the complex social, economic and cultural phenomenon that is a massively interactive search and retrieval information system used by people and yet also using them to generate its data, I think Google will, at the very least, not make us smarter and probably will make us more stupid in the sense of being reliant on crude, generalised approximations of truth and information finding. Where the questions are easy, Google will therefore help; where the questions are complex, we will flounder.” — Matt Allen, former president of the Association of Internet Researchers and associate professor of internet studies at Curtin University in Australia

•  “The challenge is in separating that wheat from the chaff, as it always has been with any other source of mass information, which has been the case all the way back to ancient institutions like libraries. Those users (of Google, cable TV, or libraries) who can do so efficiently will beat the odds, becoming ‘smarter’ and making better choices. However, the unfortunate majority will continue to remain, as Carr says, stupid.” — Christopher Saunders, managing editor, internetnews.com

• “The problem with Google that is lurking just under the clean design home page is the “tragedy of the commons”: the link quality seems to go down every year. The link quality may actually not be going down but the signal to noise is getting worse as commercial schemes lead to more and more junk links.” — Glen Edens, former senior vice president and director at Sun Microsystems Laboratories, chief scientist Hewlett Packard

Literary intelligence is very much under threat.

•  “If one defines — or partially defines — IQ as literary intelligence, the ability to sit with a piece of textual material and analyze it for complex meaning and retain derived knowledge, then we are indeed in trouble. Literary culture is in trouble…. We are spending less time reading books, but the amount of pure information that we produce as a civilization continues to expand exponentially. That these trends are linked, that the rise of the latter is causing the decline of the former, is not impossible…. One could draw reassurance from today’s vibrant Web culture if the general surfing public, which is becoming more at home in this new medium, displayed a growing propensity for literate, critical thought. But take a careful look at the many blogs, post comments, Facebook pages, and online conversations that characterize today’s Web 2.0 environment…. This type of content generation, this method of ‘writing,’ is not only sub-literate, it may actually undermine the literary impulse…. Hours spent texting and e-mailing, according to this view, do not translate into improved writing or reading skills.” — Patrick Tucker, senior editor, The Futurist magazine

New literacies will be required to function in this world. In fact, the internet might change the very notion of what it means to be smart. Retrieval of good information will be prized. Maybe a race of “extreme Googlers” will come into being.

•  “The critical uncertainty here is whether people will learn and be taught the essential literacies necessary for thriving in the current infosphere: attention, participation, collaboration, crap detection, and network awareness are the ones I’m concentrating on. I have no reason to believe that people will be any less credulous, gullible, lazy, or prejudiced in ten years, and am not optimistic about the rate of change in our education systems, but it is clear to me that people are not going to be smarter without learning the ropes.” — Howard Rheingold, author of several prominent books on technology, teacher at Stanford University and University of California-Berkeley

•  “Google makes us simultaneously smarter and stupider. Got a question? With instant access to practically every piece of information ever known to humankind, we take for granted we’re only a quick web search away from the answer. Of course, that doesn’t mean we understand it. In the coming years we will have to continue to teach people to think critically so they can better understand the wealth of information available to them.” — Jeska Dzwigalski, Linden Lab

•  “We might imagine that in ten years, our definition of intelligence will look very different. By then, we might agree on ‘smart’ as something like a ‘networked’ or ‘distributed’ intelligence where knowledge is our ability to piece together various and disparate bits of information into coherent and novel forms.” — Christine Greenhow, educational researcher, University of Minnesota and Yale Information and Society Project

•  “Human intellect will shift from the ability to retain knowledge towards the skills to discover the information, i.e. a race of extreme Googlers (or whatever discovery tools come next). The world of information technology will be dominated by the algorithm designers and their librarian cohorts. Of course, the information they’re searching has to be right in the first place. And who decides that?” — Sam Michel, founder of Chinwag, community for digital media practitioners in the United Kingdom

One new “literacy” that might help is the capacity to build and use social networks to help people solve problems.

• “There’s no doubt that the internet is an extension of human intelligence, both individual and collective. But the extent to which it’s able to augment intelligence depends on how much people are able to make it conform to their needs. Being able to look up who starred in the 2nd season of the Tracey Ullman show on Wikipedia is the lowest form of intelligence augmentation; being able to build social networks and interactive software that helps you answer specific questions or enrich your intellectual life is much more powerful. This will matter even more as the internet becomes more pervasive. Already my iPhone functions as the external, silicon lobe of my brain. For it to help me become even smarter, it will need to be even more effective and flexible than it already is. What worries me is that device manufacturers and internet developers are more concerned with lock-in than they are with making people smarter. That means it will be a constant struggle for individuals to reclaim their intelligence from the networks they increasingly depend upon.” — Dylan Tweney, senior editor, Wired magazine

Nothing can be bad that delivers more information to people, more efficiently. It might be that some people lose their way in this world, but overall, societies will be substantially smarter.

•  “The Internet has facilitated orders of magnitude improvements in access to information. People now answer questions in a few moments that a couple of decades back they would not have bothered to ask, since getting the answer would have been impossibly difficult.” — John Pike, Director, globalsecurity.org

• “Google is simply one step, albeit a major one, in the continuing continuum of how technology changes our generation and use of data, information, and knowledge that has been evolving for decades. As the data and information goes digital and new information is created, which is at an ever increasing rate, the resultant ability to evaluate, distill, coordinate, collaborate, problem solve only increases along a similar line. Where it may appear a ‘dumbing down’ has occurred on one hand, it is offset (I believe in multiples) by how we learn in new ways to learn, generate new knowledge, problem solve, and innovate.” — Mario Morino, Chairman, Venture Philanthropy Partners

Google itself and other search technologies will get better over time and that will help solve problems created by too-much-information and too-much-distraction.

• “I’m optimistic that Google will get smarter by 2020 or will be replaced by a utility that is far better than Google. That tool will allow queries to trigger chains of high-quality information — much closer to knowledge than flood. Humans who are able to access these chains in high-speed, immersive ways will have more patterns available to them that will aid decision-making. All of this optimism will only work out if the battle for the soul of the Internet is won by the right people — the people who believe that open, fast networks are good for all of us.” — Susan Crawford, former member of President Obama’s National Economic Council, now on the law faculty at the University of Michigan

•  “If I am using Google to find an answer, it is very likely the answer I find will be on a message board in which other humans are collaboratively debating answers to questions. I will have to choose the answer I like the best. Or it will force me to do more research to find more information. Google never breeds passivity or stupidity in me: it catalyzes me to explore further. And along the way I bump into more humans, more ideas and more answers.” — Joshua Fouts, Senior Fellow for Digital Media & Public Policy at the Center for the Study of the Presidency

The more we use the internet and search, the more dependent on it we will become.

•  “As the Internet gets more sophisticated it will enable a greater sense of empowerment among users. We will not be more stupid, but we will probably be more dependent upon it.” — Bernie Hogan, Oxford Internet Institute

Even in little ways, including in dinner table chitchat, Google can make people smarter.

• “[Family dinner conversations] have changed markedly because we can now look things up at will. That’s just one small piece of evidence I see that having Google at hand is great for civilization.” — Jerry Michalski, president, Sociate

‘We know more than ever, and this makes us crazy.’

• “The answer is really: both. Google has already made us smarter, able to make faster choices from more information. Children, to say nothing of adults, scientists and professionals in virtually every field, can seek and discover knowledge in ways and with scope and scale that was unfathomable before Google. Google has undoubtedly expanded our access to knowledge that can be experienced on a screen, or even processed through algorithms, or mapped. Yet Google has also made us careless too, or stupid when, for instance, Google driving directions don’t get us to the right place. It has confused and overwhelmed us with choices, and with sources that are not easily differentiated or verified. Perhaps it’s even alienated us from the physical world itself — from knowledge and intelligence that comes from seeing, touching, hearing, breathing and tasting life. From looking into someone’s eyes and having them look back into ours. Perhaps it’s made us impatient, or shortened our attention spans, or diminished our ability to understand long thoughts. It’s enlightened anxiety. We know more than ever, and this makes us crazy.” — Andrew Nachison, co-founder, We Media

A final thought: Maybe Google won’t make us more stupid, but it should make us more modest.

•  “There is and will be lots more to think about, and a lot more are thinking. No, not more stupid. Maybe more humble.” — Sheizaf Rafaeli, Center for the Study of the Information Society, University of Haifa

Source | Pew Research

Using Machine Intelligence to Extend our Lives

Monday, February 22nd, 2010

Inventor, futurist and engineer Peter Voss describes a new kind of A.I. (artificial intelligence) which he calls “artificial general intelligence.” He further explains how he’s employed it to create voice-activated call centers at his new company, Adaptive AI, Inc.

Voss visualizes a future where researchers will have an army of virtual research assistants to advance the cause of longevity science.

Researcher creates ‘Facebook for Scientists’

Friday, February 19th, 2010

madisch.jpgImagine how much sooner Dr. Jonas Salk could have discovered the polio vaccine if he had been on Facebook in 1955. Often, researchers work in a vacuum. They can be stuck on a problem blocking progress on their research that someone on the other side of the world has already solved. Yes, there’s a wealth of information online and in scientific journals, but what if there were one central place online where a researcher could ask a question and someone else could answer it?

Enter ResearchGATE, which its founder Dr. Ijad Madisch (pictured) fairly describes as “Facebook for scientists.” In close to two years of operation, ResearchGATE has built a social network of more than 250,000 researchers from 196 countries. Over 1,000 subgroups have been formed for specific disciplines, and 60,000 research documents have been uploaded for sharing with others on the site. These guys aren’t pretending they’re farmers.

“People ask a question, presenting an issue they have in the lab, and anyone can answer the question. This is happening on a daily basis,” said Madisch, who was in Silicon Valley this week drumming up support for ResearchGATE from researchers at universities and private research labs, while also networking with potential investors, although he added the company is currently “well funded.”

His “aha moment” occurred when he was pursuing his PhD in virology at Harvard Medical School. He was communicating via Facebook with a medical school classmate of his in Germany, just to stay in touch socially. “So, we got the idea of ‘Hey, a Facebook for scientists,’ where you can present yourself as a researcher with all the information related to your research and you can find collaborators.”

Here’s a sample post:

I have expressed a 53KDa His-tagged protein in baculovirus system. The size of the protein is not matching with the expected size i.e 52-56KDa. I am getting the band at 65KDa in SDS-PAGE. I am clueless why it is showing more than expected size. Can someone give me some leads to solve this mystery?

Within an hour of posting his query, this poster had five replies.

The value of ResearchGATE is that it can help move a stalled research project forward in ways that haven’t been available before, said Madisch. “Researchers don’t publish negative results, they only publish positive results. But the negative results can lead to the positive results.”

Dr. Rajiv Gupta, a lecturer at Harvard Medical School and MIT, uses ResearchGATE to post PDFs of his lectures for students to retrieve from the site. “And then they kind of hang around on ResearchGATE because every time they have a question they have a forum to discuss things.” Gupta also manages a neuroradiology lab at Massachusetts General Hospital in Boston, where fellows use ResearchGATE to get answers to questions that stymie them.

One thing ResearchGATE has in common with Facebook is that both allow users to adjust privacy settings on their accounts. In many cases, researchers have to be guarded about what they share on ResearchGATE, either for competitive reasons or because they are subject to nondisclosure agreements.

“Everyone can decide how much information to put into a discussion and how much to help other people,” said Madisch. Users can post a specific query about just one aspect of their project without disclosing what the underlying research is about.
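
To make the idea concrete, here is a minimal sketch in Python of the kind of visibility control Madisch describes. It is invented for illustration (not ResearchGATE’s actual data model): a narrowly scoped question can be public while the project behind it stays undisclosed.

    from dataclasses import dataclass

    # Hypothetical sketch of the visibility control described above, invented
    # for illustration. This is not ResearchGATE's actual data model.
    @dataclass
    class Project:
        title: str
        is_private: bool = True

    @dataclass
    class Question:
        text: str
        project: Project

        def render(self) -> str:
            # Withhold the project title unless the researcher opted to share it.
            header = "(undisclosed project)" if self.project.is_private else self.project.title
            return f"{header}: {self.text}"

    q = Question("Why does my 53 kDa His-tagged protein run at 65 kDa on SDS-PAGE?",
                 Project("Baculovirus expression study"))
    print(q.render())  # -> (undisclosed project): Why does my 53 kDa ...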

One big way in which ResearchGATE differs from Facebook, he added, is that it prohibits information shared on the site from being passed along to third parties. It accepts no advertising, so you won’t see any “Which Scientists are Searching for You?” ads cluttering the page.

ResearchGATE monetizes the site with a jobs board where, as on Craigslist, employers pay to place help-wanted ads. It also sells the ResearchGATE platform to universities and other research institutions that want to set it up for use within their own organizations.

Gupta finds that when he uploads his lectures to the site and people start asking questions, there is so much activity online that he doesn’t have to answer the questions himself because someone else already has.

The beauty of ResearchGATE, he said, is that the members are all there for a common purpose of advancing scientific research. “There are the Facebook-type social networking sites, but they have been sort of populated by teenagers. There is nothing there that is specific to doing research,” Gupta said.

Source | Digital Beat


Solar Cells Use Nanoparticles to Capture More Sunlight

Friday, February 19th, 2010

Inexpensive thin-film solar cells aren’t as efficient as conventional solar cells, but a new coating that incorporates nanoscale metallic particles could help close the gap. Broadband Solar, a startup spun out of Stanford University late last year, is developing coatings that increase the amount of light these solar cells absorb.

Solar antenna: The square at the center is an array of test solar cells being used to evaluate a coating that contains metallic nanoantennas tuned to the solar spectrum.
Credit: Brongersma lab, Stanford

Based on computer models and initial experiments, an amorphous silicon cell could jump from converting about 8 percent of the energy in light into electricity to converting around 12 percent. That would make such cells competitive with the leading thin-film solar cells produced today, such as those made by First Solar, headquartered in Tempe, AZ, says Cyrus Wadia, codirector of the Cleantech to Market Program in the Haas School of Business at the University of California, Berkeley. Amorphous silicon has the advantage of being much more abundant than the materials used by First Solar. The coatings could also be applied to other types of thin-film solar cells, including First Solar’s, to increase their efficiency.

Broadband believes its coatings won’t increase the cost of these solar cells because they perform the same function as the transparent conductors used on all thin-film cells and could be deposited using the same equipment.

Broadband’s nanoscale metallic particles take incoming light and redirect it along the plane of the solar cell, says Mark Brongersma, professor of materials science and engineering at Stanford and scientific advisor to the company. As a result, each photon takes a longer path through the material, increasing its chances of dislodging an electron before it can reflect back out of the cell. The nanoparticles also increase light absorption by creating strong local electric fields.
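
The payoff of a longer path follows from the Beer-Lambert law: the fraction of light absorbed along a path of length L is 1 − exp(−αL), so stretching the path raises absorption sharply. Here is a back-of-the-envelope sketch in Python; the absorption coefficient, film thickness, and tenfold path enhancement are illustrative assumptions, not Broadband’s figures.

    import math

    # Beer-Lambert: fraction of light absorbed over a path of length L is
    # 1 - exp(-alpha * L). The coefficient and thickness below are illustrative
    # placeholders for a thin amorphous-silicon film, not Broadband's numbers.
    alpha = 1e4            # absorption coefficient, 1/cm (assumed)
    thickness_cm = 300e-7  # a 300 nm film, expressed in cm (assumed)

    def absorbed_fraction(path_cm):
        return 1.0 - math.exp(-alpha * path_cm)

    straight = absorbed_fraction(thickness_cm)         # one pass straight through
    redirected = absorbed_fraction(10 * thickness_cm)  # assume a ~10x longer in-plane path

    print(f"single pass:     {straight:.0%} absorbed")   # ~26%
    print(f"10x longer path: {redirected:.0%} absorbed") # ~95%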

The particles, which are essentially nanoscale antennas, are very similar to radio antennas, says Brongersma. They’re much smaller because the wavelengths they interact with are much shorter than those of radio waves. Just as conventional antennas can convert incoming radio waves into an electrical signal and transmit electrical signals as radio waves, these nanoantennas rely on electrical interactions to receive and transmit light in the optical spectrum.

Their interaction with light is so strong because incoming photons actually couple to the surface of metal nanoparticles in the form of surface waves called plasmons. These so-called plasmonic effects occur in nanostructures made from highly conductive metals such as copper, silver, and gold. Researchers are taking advantage of plasmonic effects to miniaturize optical computers, and to create higher-resolution light microscopes and lithography. Broadband is one of the first companies working to commercialize plasmonic solar cells.
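
The resonance behind these plasmonic effects can be estimated with the simplest textbook model: for a metal sphere much smaller than the wavelength, the dipole plasmon resonance falls roughly where the metal’s permittivity equals minus twice that of the surrounding medium (the Fröhlich condition). The sketch below uses a bare Drude model with parameters merely in the right range for silver, so the wavelengths are indicative only.

    import math

    # Drude model, damping neglected: eps(omega) = eps_inf - (omega_p / omega)**2.
    # Froehlich condition for a small sphere's dipole resonance: eps = -2 * eps_medium,
    # which gives omega = omega_p / sqrt(eps_inf + 2 * eps_medium).
    # All parameters are assumptions, roughly in the right range for silver.
    eps_inf = 5.0       # background permittivity from bound electrons (assumed)
    omega_p = 9.0       # plasma frequency, eV (roughly right for silver)
    HC_EV_NM = 1239.84  # photon energy (eV) times wavelength (nm)

    for eps_medium in (1.0, 1.77, 2.25):  # vacuum, water, a glassy coating
        omega_res = omega_p / math.sqrt(eps_inf + 2.0 * eps_medium)
        print(f"medium eps = {eps_medium}: resonance near {HC_EV_NM / omega_res:.0f} nm")

Note how the resonance shifts to longer wavelengths as the surrounding medium gets denser, which is one reason the coating material itself matters.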

In his lab at Stanford, Brongersma has experimented with different sizes and shapes of metallic nanostructures, using electron-beam lithography to carve them out one at a time. Different sizes and shapes of metal particles interact strongly with different colors of light, and will direct them at varying angles. The ideal solar-cell coating would contain nanoantennas varying in size and shape over just the right range to take advantage of all the wavelengths in the solar spectrum and send them through the cell at wide angles. However, this carving process is too laborious to be commercialized.

Through his work with Broadband, Brongersma is developing a much simpler method for making the tiny antennas over large areas. This involves a technique called “sputter deposition” that’s commonly used in industry to make thin metal films (including those that line some potato-chip bags). Sputtering works by bombarding a substrate with ionized metal. Under the right conditions, he says, “due to surface tension, the metal balls up into particles like water droplets on a waxed car.” The resulting nanoparticles vary in shape and size, which means they’ll interact with different wavelengths of light. “We rely on this randomness” to make the films responsive to the broad spectrum found in sunlight, he says.
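
A toy calculation shows why that randomness pays off: each particle responds strongly only in a narrow band around its own resonance, and a spread of sizes tiles those bands across the solar spectrum. In the Python sketch below, the size distribution, the size-to-resonance mapping, and the linewidth are all invented for illustration.

    import math
    import random

    random.seed(1)

    # Toy model: each particle scatters as a narrow Lorentzian line whose center
    # wavelength grows with particle size. The distribution, the mapping, and the
    # 40 nm linewidth are invented for illustration, not fitted to real films.
    def lorentzian(lam, center, width=40.0):
        return 1.0 / (1.0 + ((lam - center) / width) ** 2)

    diameters = [random.lognormvariate(math.log(60), 0.4) for _ in range(200)]  # nm
    centers = [350 + 3.0 * d for d in diameters]  # assumed size-to-resonance mapping

    for lam in range(400, 1101, 100):  # survey 400-1100 nm
        response = sum(lorentzian(lam, c) for c in centers) / len(centers)
        print(f"{lam:5d} nm  {'#' * int(40 * response)}")

Run it and the printed bars form a single broad hump spanning hundreds of nanometers, far wider than any one particle’s line: the ensemble is broadband even though each antenna is not.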

Broadband is currently developing sputtering techniques for incorporating metal nanoantennas into transparent conductive oxide films over large areas. Being able to match the large scale of thin-film solar manufacturing will be key to commercializing these coatings.

The company has been using money from angel investors to test its plasmonic coatings on small prototype cells. So far, says Brongersma, enhanced current from the cells matches simulations. Broadband is currently seeking venture funding to scale up its processes, says CEO Anthony Defries.

Source | New Scientist