Archive for the ‘Software’ Category

Formatting Gaia + Technological Symbiosis

Friday, December 2nd, 2011

Patrick Millard | Formatting Gaia + Technological Symbiosis from vasa on Vimeo.

How to communicate better in virtual worlds

Saturday, October 15th, 2011

The experimental setup. Left: The participants wore a total of six tracked objects; right: the corresponding virtual environment, showing the avatars in the self-animated third-person perspective. (Credit: Trevor J. Dodds et al./PLoS One)

Mapping real-world motions to “self-animated” virtual avatars, using body tracking to communicate a wide range of gestures, helps people communicate better in virtual worlds like Second Life, say researchers from the Max Planck Institute for Biological Cybernetics and Korea University.

They conducted two experiments to investigate whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication, and whether body gestures help convey the meaning of a word. Participants worked in pairs and played a communication game in which one person had to describe the meanings of words to the other.

Ref.: Trevor J. Dodds et al., Talk to the Virtual Hands: Self-Animated Avatars Improve Communication in Head-Mounted Display Virtual Environments, PLoS One, DOI: 10.1371/journal.pone.0025759 (free access)

Source | KurzweilAI

Tuesday, August 9th, 2011
 

Millenniata's M-Disc is made of a stone-like substance that the company claims does not degrade over time.

Computerworld - Start-up Millenniata and Hitachi-LG Data Storage plan to soon release a new optical disc and read/write player that will store movies, photos or any other data forever. The data can be accessed using any current DVD or Blu-ray player.

Millenniata calls the product the M-Disc, and the company claims you can dip it in liquid nitrogen and then boiling water without harming it. It also has a U.S. Department of Defense (DoD) study backing up the resiliency of its product compared to other leading optical disc competitors.

Millenniata CEO Scott Shumway would not disclose what material is used to produce the optical discs, referring to it only as a “natural” substance that is “stone-like.”

Like DVDs and Blu-ray discs, the M-Disc platters are made up of multiple layers of material. But unlike the former, there is no reflective or dye layer. Instead, during the recording process a laser “etches” pits onto the substrate material.

“Once the mark is made, it’s permanent,” Shumway said. “It can be read on any machine that can read a DVD. And it’s backward compatible, so it doesn’t require a special machine to read it – just a special machine to write it.”

While Millenniata has partnered with Hitachi-LG Data Storage for the initial launch of an M-Disc read-write player in early October, Shumway said any DVD player maker will be able to produce M-Disc machines by simply upgrading their product’s firmware.

Millenniata said it has also proven it can produce Blu-ray format discs with its technology – a product it plans to release in future iterations. For now, the platters store the same amount of data as a DVD: 4.7GB. However, the discs write at only 4x or 5.28MB/sec, half the speed of today’s DVD players.
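
The quoted speeds are easy to put in perspective with a little arithmetic: the 5.28MB/sec figure corresponds to 4x of a nominal 1.32MB/sec 1x DVD rate. A quick back-of-the-envelope sketch, using only the capacity and speed figures cited in this article:

```python
# Back-of-the-envelope check of the figures quoted above:
# the cited 5.28 MB/s at 4x implies a 1x DVD rate of 1.32 MB/s.
capacity_mb = 4.7 * 1000          # 4.7 GB expressed in MB (decimal units)
speed_1x_mb_s = 1.32              # nominal 1x DVD transfer rate

for multiplier in (4, 8):
    rate = speed_1x_mb_s * multiplier
    minutes = capacity_mb / rate / 60
    print(f"{multiplier}x ({rate:.2f} MB/s): ~{minutes:.0f} min to fill a 4.7 GB disc")
```

At 4x, filling a disc takes roughly 15 minutes; the move to 8x that Shumway mentions would roughly halve that.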

“We feel if we can move to the 8X, that’d be great, but we can live with the four for now,” Shumway said, adding that his engineers are working on upping the speed of recording.

Millenniata is also targeting the long-term data archive market, saying archivists will no longer have to worry about controlling the temperature or humidity of a storage room. “Data rot happens with any type of disc you have. Right now, the most permanent technology out there for storing information is a paper and pencil — until now,” Shumway said.

In 2009, the Defense Department’s Naval Air Warfare Weapons Division facility at China Lake, Calif., was interested in digitizing and permanently storing information. So it tested Millenniata’s M-Disc against five other optical disc vendors: Delkin Devices, Mitsubishi, JVC, Verbatim and MAM-A.

“None of the Millenniata media suffered any data degradation at all. Every other brand tested showed large increases in data errors after the stress period. Many of the discs were so damaged that they could not be recognized as DVDs by the disc analyzer,” the department’s report states.

Recordable optical media such as CDs, DVDs and Blu-ray discs are made of layers of polycarbonate glued together. One layer of the disc contains a reflective material and a layer just above it incorporates an organic transparent dye. During recording, a laser hits the dye layer and burns it, changing the dye from transparent to opaque and creating bits of data. A low-power laser can then read those bits by either passing through the transparent dye layer to the reflective layer or being absorbed by the pits.

Over long periods of time, DVDs are subject to de-lamination problems where the layers of polycarbonate separate, leading to oxidation and read problems. The dye layer, because it’s organic, can also break down over time, a process hastened by high temperatures and humidity.

While the DVD industry claims DVDs should last from 50 to 100 years, according to the National Institute of Standards and Technology (NIST), DVDs can break down in “several years” in normal environments. Additionally, NIST suggests DVDs should be stored in spaces where relative humidity is between 20% and 50%, and where temperatures do not drop below 68 degrees Fahrenheit.

Gene Ruth, a research director at Gartner, said generally he’s not heard of a problem with DVD longevity. And, while he admits that a DVD on a car dashboard could be in trouble, the medium has generally had a good track record.

But Ruth said he can see a market in long-term archiving for a product such as the M-Disc because some industries, such as aircraft engineering, healthcare and financial services, store data for a lifetime and beyond.

Millenniata partnered with Hitachi-LG Data Storage to provide M-Ready technology in most of its DVD and Blu-ray drives. Shumway said the products will begin shipping next month and should be in stores in the beginning of October.

“We felt it was important that we first produce this with a major drive manufacturer, someone that already had models and firmware out there,” Shumway said.

Hitachi-LG Data Storage's M-Disc read-write player.

Unlike DVDs, which come in 10-, 25-, 50- or 100-disc packs, M-Discs will be available one at a time, or in groups of two or three for just under $3 per disc. Millenniata is also courting system manufacturers in the corporate archive world.

“We’re working with some very large channels as we train their distribution networks to launch this,” he said. “At the same time, we’re launching this at Fry’s [Electronics] so consumers can see it and be introduced to this technology.”

Source | Computerworld

Researchers Modify Kinect Gaming Device to Scan in 3-D

Sunday, August 7th, 2011

Researchers at the University of California, San Diego, preparing for a future archaeological dig to Jordan, will likely pack a Microsoft Kinect Xbox 360 in the field to take high-quality, low-cost 3-D scans of dig sites.

Making a freehand 3-D scan of Tiffany Fox in Calit2's StarCAVE using a modified Kinect. Though Fox's avatar appears to be projected onto the wall behind her, the scan is actually projected in the middle of the StarCAVE.

The researchers have figured out a way to extract data streaming from the Kinect’s onboard color camera and infrared sensor to make hand-held 3-D scans of small objects and people. The quickly-made avatars could conceivably be plugged right into virtual worlds such as Second Life.

The ultimate goal, however, is to extend the technology to scan entire buildings and even neighborhoods, the researchers said. For the initial field application of their modified Kinect — dubbed ArKinect (a mashup of archaeology and Kinect) —  the researchers plan to train engineering and archaeology students to use the device to collect data on a future expedition to Jordan.

The scans collected at sites in Jordan or elsewhere can later be made into 3-D models and projected in Calit2’s StarCAVE, a 360-degree, 16-panel immersive virtual reality environment that enables researchers to interact with virtual renderings of objects and environments.

Three-dimensional models of artifacts provide more information than 2-D photographs about the symmetry (and hence the quality of craftsmanship, for example) of found artifacts, and 3-D models of the dig sites can help archaeologists keep track of the exact locations where artifacts were found.

The steps for making a 3-D reconstruction of a real-life stuffed bear (far left) include: 1) projecting a pattern of infrared dots onto the bear to construct a depth map (second from left); 2) connecting nearby dots with a triangular mesh grid (third from left); 3) filling in each triangle in the grid with color and texture information from the Kinect's color camera (far right).

The ability to operate the Kinect freehand is a huge advantage over other scanning systems like LIDAR (light detection and ranging), which creates a more accurate scan but has to be kept stationary in order to be precisely aimed.

The Kinect projects a pattern of infrared dots (invisible to the human eye) onto an object, which then reflect off the object and get captured by the device’s infrared sensor. The reflected dots create a 3D depth map. Nearby dots are linked together to create a triangular mesh grid of the object. The surface of each triangle in the grid is then filled in with texture and color information from the Kinect’s color camera. A scan is taken 10 times per second and data from thousands of scans are combined in real-time, yielding a 3D model of the original object or person.
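
The three steps above (depth map, triangular mesh, per-triangle color) map naturally onto a few lines of array code. The sketch below is a minimal illustration of that pipeline, assuming a depth image and a registered color image are already available as NumPy arrays; the camera intrinsics are placeholder values, not the Kinect's actual calibration, and a real system would also merge thousands of such meshes in real time.

```python
import numpy as np

# Minimal sketch of the depth-map -> mesh -> texture pipeline described above.
# Assumes `depth` (H x W, metres) and `color` (H x W x 3) arrays have already
# been captured from the sensor; the intrinsics below are illustrative only.
FX = FY = 575.0            # assumed focal lengths in pixels
CX, CY = 319.5, 239.5      # assumed principal point

def depth_to_mesh(depth, color):
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # 1) back-project each depth pixel to a 3-D point (the "depth map" step)
    z = depth
    x = (xs - CX) * z / FX
    y = (ys - CY) * z / FY
    vertices = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # 2) connect neighbouring pixels into two triangles per grid cell
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    triangles = np.concatenate([np.stack([a, b, c], 1),
                                np.stack([b, d, c], 1)])

    # 3) attach per-vertex colour from the RGB camera (the "texture" step)
    colours = color.reshape(-1, 3)
    return vertices, triangles, colours
```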

The Kinect streams data at 40 megabytes per second. Keeping the amount of stored data to a minimum will allow a scan of a person to occupy only a few hundred kilobytes of storage, about the same as a picture taken with a digital camera.

Another advantage of the Kinect is cost: It retails for $150. This low price tag, coupled with the researchers’ efforts to make it a portable, self-contained, battery-powered instrument with an onboard screen to monitor scan progress, makes it feasible to send an ArKinect to Jordan.

Source | Kurzweil AI

Transmitting high-speed data via LED room lights

Sunday, August 7th, 2011

Scientists from the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI) in Berlin have developed a new high-speed data transmission technology for video data.

In the future, high-speed video data may be sent to laptops via LEDs.

Using an optical WLAN, the scientists were able to transfer data at a rate of 100 megabits per second over a ten-square-meter area without any loss, by modulating white LEDs in the ceiling.

The scientists were able to transfer four videos at HD quality to four different laptops at the same time. A simple photodiode on the laptop or other devices acts as a receiver. One disadvantage is that when something gets between the light and the photodiode, the transfer is impaired.
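
The underlying idea is to flicker the LEDs faster than the eye can follow and recover the bit stream from the photodiode. The toy sketch below shows plain on-off keying of a byte stream; it is purely illustrative and not the modulation scheme the HHI researchers actually use, which the article does not describe.

```python
# Toy illustration of visible-light data transfer via on-off keying (OOK).
# This is NOT the HHI modulation scheme (not described above); it only shows
# the idea of encoding bits as brightness changes read back by a photodiode.
def ook_encode(data: bytes) -> list[int]:
    """Map each bit to an LED state: 1 = on, 0 = off (MSB first)."""
    return [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]

def ook_decode(samples: list[int]) -> bytes:
    """Rebuild bytes from photodiode threshold decisions (1 sample per bit)."""
    out = bytearray()
    for i in range(0, len(samples) - 7, 8):
        byte = 0
        for bit in samples[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

assert ook_decode(ook_encode(b"HHI optical WLAN")) == b"HHI optical WLAN"
```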

The new transmission technology is suitable for hospitals, where high data rates are required but radio transmissions are not allowed — it could allow for controlling wireless surgical robots or sending x-ray images. In airplanes, passengers could view their own entertainment program on a display, saving aircraft manufacturers the cost and weight of miles of cables.

The HHI scientists will showcase the technology at the International Telecommunications Fair IFA (Internationale Funkausstellung IFA) in Berlin from September 2–7, 2011.

Source | Kurzweil AI

New IEEE standard allows for broadband wireless access 100 km from a transmitter

Tuesday, August 2nd, 2011

The IEEE has published the IEEE 802.22 standard to provide broadband access to wide regional areas around the world and bring reliable and secure high-speed communications to under-served and un-served communities.

This new standard for Wireless Regional Area Networks (WRANs) takes advantage of the favorable transmission characteristics of the VHF and UHF TV bands to provide broadband wireless access over a large area, typically up to 100 kilometers from the transmitter.

Each WRAN will deliver up to 22 Mbps per channel without interfering with reception of existing TV broadcast stations, using the “white spaces” between the occupied TV channels. This technology is especially useful for serving less densely populated areas, such as rural areas, and developing countries, where most vacant TV channels can be found.

IEEE 802.22 incorporates advanced cognitive radio capabilities, including dynamic spectrum access, incumbent database access, accurate geolocation techniques, spectrum sensing, regulatory domain dependent policies, spectrum etiquette, and coexistence for optimal use of the available spectrum.
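
In practice, those cognitive-radio features boil down to a base station deciding which TV channels it may transmit on. The sketch below is a simplified illustration of that decision, combining a hypothetical incumbent database with local spectrum-sensing results; the data structures and threshold are assumptions, not the actual IEEE 802.22 protocol logic.

```python
# Simplified sketch of white-space channel selection in the spirit of the
# cognitive-radio features listed above (incumbent database + sensing).
# The data structures and threshold are hypothetical, not the 802.22 spec.
SENSING_THRESHOLD_DBM = -96.0   # assumed detection threshold

def usable_channels(tv_channels, incumbent_db, sensed_power_dbm, location):
    """Return TV channels that are vacant at `location` and show no
    detectable incumbent signal in local spectrum sensing."""
    free = []
    for ch in tv_channels:
        if ch in incumbent_db.get(location, set()):
            continue                                # protected broadcaster
        if sensed_power_dbm.get(ch, -999.0) > SENSING_THRESHOLD_DBM:
            continue                                # energy detected locally
        free.append(ch)
    return free

# Example: channels 21-40, with 23 and 30 registered to nearby broadcasters
# and something transmitting locally on channel 25.
db = {"rural-site-A": {23, 30}}
power = {25: -90.5}
print(usable_channels(range(21, 41), db, power, "rural-site-A"))
```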

Source | Kurzweil AI

New tools accelerate mapping the brain’s connectome

Friday, July 29th, 2011

New software tools to reconstruct neural wiring diagrams quickly and accurately have been developed by researchers at the Max Planck Institute for Medical Research to allow neuroscientists to understand the structure of the brain’s circuits — the connectome.

A reconstruction of 114 rod bipolar nerve cells from a piece of mouse retina. The dense bundles (top) are dendrites, and the sparser processes below are axons.

The researchers created two new computer programs, KNOSSOS (named for Crete’s legendary palace, renowned for its elaborate labyrinth) and RESCOP, and mapped a network of 114 neurons from a mouse retina faster and more accurately than with previous methods.

The researchers started by staining the neurons of a section of tissue with heavy metals to make them visible. Using three-dimensional electron microscope images, they started at the cell body and followed the dendrites and axons, marking the branch point nodes on the screen. Then they used a computer to generate a three-dimensional image of the section.

The KNOSSOS software is about 50 times faster than other programs in tracing connections between neurons. The RESCOP program allows dozens of people to work on the reconstruction at the same time and allows for error detection and reduction.
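
Conceptually, a traced neuron in this workflow is a set of annotated nodes in the image volume plus the edges connecting them, and combining several annotators' traces comes down to voting on those edges. The sketch below illustrates that data model with hypothetical structures; it does not reproduce the actual KNOSSOS or RESCOP formats.

```python
# Minimal sketch of a skeleton annotation: nodes placed along a neuron's
# processes, edges linking them, plus a crude majority vote over several
# annotators' edge lists (illustrative only; not the KNOSSOS/RESCOP format).
from collections import Counter

class Skeleton:
    def __init__(self):
        self.nodes = {}          # node_id -> (x, y, z) position in the volume
        self.edges = set()       # frozenset({id_a, id_b}) per traced segment

    def add_node(self, node_id, xyz):
        self.nodes[node_id] = xyz

    def connect(self, a, b):
        self.edges.add(frozenset((a, b)))

def consensus_edges(skeletons, min_votes=2):
    """Keep an edge only if at least `min_votes` annotators traced it."""
    votes = Counter(e for s in skeletons for e in s.edges)
    return {e for e, n in votes.items() if n >= min_votes}
```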

With some 70 billion neurons, each neuron linked to about a thousand others via dendrites and axons, and hundreds of thousands of kilometers of circuits, the human brain is so complex that for many years, it seemed impossible to reconstruct the network in detail, the researchers said.

Dendrites form dense bundles where bipolar cells receive signals from rod photoreceptors (gray spheres).

One person working alone with the currently available programs would take at least 30 years to reconstruct a path of just 30 centimeters in length, they estimate. Moreover, these procedures are prone to error, since the branch points are not always easily recognized and the annotator’s attentiveness decreases with time.

Source | Kurzweil AI

Touchscreen keyboard morphs to fit your typing style

Tuesday, July 26th, 2011

Typing on a touchscreen is not one of life’s pleasures: the one-size-fits-all nature of most virtual keyboards is a hassle that puts many of us off using them. I’ve lost count of the number of times I’ve seen journalists put down an iPad, for instance, and pick up a laptop or netbook to do some serious note-taking or writing.

IBM, however, says it doesn’t have to be that way. In a recently filed US patent application, three IBM engineers posit the notion of a virtual keyboard in which the position of the keys and the overall layout are entirely set by the user’s finger anatomy. That way, they argue, people will be better able to type at speed, with all keys within comfortable range, and so end up with fewer errors.

After an initial calibration stage, in which the keyboard asks users to undertake a series of exercises to set response time, anatomical algorithms get to work, sensing through the touchscreen the finger skin touch area, finger size and finger position for the logged-in user.

As this information is gathered – IBM does not say over what period this learning takes place – the virtual key buttons are automatically resized, reshaped and repositioned in response.

The patent shows a keyboard with some keys subtly higher than others, and with some fatter than others. This “adapts the keyboard to the user’s unique typing motion paths” governed by their different physical finger anatomies, says IBM, which suggests the idea being used in both touchscreen and projected “surface computing” displays.
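
The filing does not publish an algorithm, but the behaviour it describes (keys drifting toward where a given user actually strikes them, and growing or shrinking with the spread of those touches) can be sketched as a simple adaptation loop. Everything below, from the parameter values to the update rule, is a hypothetical illustration rather than IBM's method.

```python
# Toy sketch of per-user key adaptation: each key drifts toward the centroid
# of that user's recent touches and grows with their observed touch spread.
# Parameters and update rule are hypothetical; the IBM filing gives no algorithm.
from dataclasses import dataclass, field
from statistics import mean, pstdev

@dataclass
class Key:
    label: str
    x: float
    y: float
    radius: float = 22.0
    touches: list = field(default_factory=list)   # (x, y) samples

    def record_touch(self, tx, ty, learning_rate=0.1):
        self.touches.append((tx, ty))
        xs, ys = zip(*self.touches)
        # re-centre the key toward this user's mean touch point
        self.x += learning_rate * (mean(xs) - self.x)
        self.y += learning_rate * (mean(ys) - self.y)
        # widen or shrink it with the spread of their touches
        if len(self.touches) >= 3:
            self.radius = 18.0 + 2.0 * (pstdev(xs) + pstdev(ys))

k = Key("J", x=300.0, y=200.0)
for sample in [(305, 204), (308, 203), (306, 207)]:
    k.record_touch(*sample)
print(round(k.x, 1), round(k.y, 1), round(k.radius, 1))
```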

There does seem to be scope for such ideas. In a review of the Apple iPad, review website MacInTouch said: “A touch typist found it frustratingly glitchy versus a real keyboard, producing all sorts of ghost characters when the screen repeatedly misinterpreted his fingers’ intentions.”

Perhaps anatomical profiling is just what’s needed.

Source | NewScientist

NSF funds $18.5 million effort to create mind-machine interfaces

Wednesday, July 20th, 2011

An $18.5 million grant to establish an Engineering Research Center for Sensorimotor Neural Engineering based at the University of Washington (UW) has been announced by the National Science Foundation.

Researchers will develop new technologies for amputees, people with spinal cord injuries, and people with cerebral palsy, stroke, Parkinson’s disease, or age-related neurological disorders.

Scientists at the UW and partner institutions will work to perform mathematical analysis of the body’s neural signals, design and test implanted and wearable prosthetic devices, and build new robotic systems.

“The center will work on robotic devices that interact with, assist and understand the nervous system,” said director Yoky Matsuoka, a UW associate professor of computer science and engineering. “It will combine advances in robotics, neuroscience, electromechanical devices, and computer science to restore or augment the body’s ability for sensation and movement.”

Source | Kurzweil AI

The Biological Canvas

Tuesday, July 19th, 2011

Curatorial Statement

The Biological Canvas parades a group of hand-selected artists who articulate their concepts with body as the primary vessel.  Each artist uses body uniquely, experimenting with body as the medium: body as canvas, body as brush, and body as subject matter.  Whatever the approach, it is clear that new explorations with the body as canvas are beginning to emerge as commonplace in the 21st century.

There are reasons for this refocusing of the lens or eye toward body.  Living today is an experience quite different from that of a century, generation, decade, or (with new versions emerging daily) even a year ago.  The body truly is changing, both biologically and technologically, at an abrupt rate.  Traditional understandings of what body, or even what human, can be defined as are beginning to come under scrutiny.  Transhuman, Posthuman, Cyborg, Robot, Singularity, Embodiment, Avatar, Brain Machine Interface, Nanotechnology… these are terms we run across in media today.  They are the face of the future – the dictators of how we will come to understand our environment, biosphere, and selves.  The artists in this exhibition are responding to this paradigm shift with interests in a newfound control over bodies, a moment of self-discovery or realization that the body has extended out from its biological beginnings, or perhaps that the traditional body has become obsolete.

We see in the work of Orlan and Stelarc that the body becomes the malleable canvas.  Here we see some of the earliest executions of art by way of designer evolution, where the artist can use new tools to redesign the body to make a statement of controlled evolution.  In these works the direct changes to the body open up the possibility of sculpting the body to be better suited for today’s world and of moving beyond an outmoded body.  Stelarc, with his Ear on Arm project, specifically attacks shortcomings in the human body by presenting the augmented sense that his third ear brings.  Acting as a cybernetic ear, he can move beyond subjective hearing and share that aural experience with listeners around the world.  Commenting on the practicality of the traditional body living in a networked world, Stelarc begins to take into his own hands the design of networked senses.  Orlan uses her surgical art to conceptualize the practice Stelarc is using – saying that body has become a form that can be reconfigured, structured, and applied to suit the desires of the mind within that body.  Carnal Art, as Orlan terms it, allows for the body to become a modifiable ready-made instead of a static object born out of the Earth.  Through the use of new technologies human beings are now able to reform selections of their body as they deem necessary and appropriate for their own ventures.

Not far from the surgical work of Orlan and Stelarc we come to Natasha Vita-More’s Electro 2011, Human Enhancement of Life Expansion, a project that acts as a guide for advancing the biological self into a more fit machine.  Integrating emerging technologies to build a more complete human, transhuman, and eventual posthuman body, Vita-More strives for a human-computer interface that will include neurophysiologic and cognitive enhancements that build on longevity and performance.  Included in the enhancement plan we see such technologies as atmospheric sensors, solar protective nanoskin, metabrain error correction, and replaceable genes.  Vita-More’s Primo Posthuman is the idealized application of what artists like Stelarc and Orlan are beginning to explore with their own reconstructive surgical enhancements.

The use of body in the artwork of Nandita Kumar’s Birth of Brain Fly and Suk Kyoung Choi + Mark Nazemi’s Corner Monster reflects on how embodiment and techno-saturation are having psychological effects on the human mind.  In each of their works we travel into the imagined world of the mind, where the notion of self, identity, and sense of place begin to struggle to hold on to fixed points of order.  Kumar talks about her neuroscape continually morphing as it is placed in new conditions and environments that are ever changing.  Beginning with an awareness of one’s own constant programming that leads to a new understanding of self through love, the film goes on a journey through the depths of self, ego, and physical limitations.  Kumar’s animations provide an eerie journey through the mind as viewed from the vantage of an artist’s creative eye, all the while postulating an internal neuroscape evolving in accordance with an external electroscape.  Corner Monster examines the relationship between self and others in an embodied world.  The installation includes an array of visual stimulation in a dark environment.  As viewers engage with the world before them they are hooked up simultaneously (two at a time) to biofeedback sensors, which measure an array of biodata to be used in the interactive production of the environment before their eyes.  This project surveys the psychological self as it is engrossed by surrounding media, leading to both occasional systems of organized feedback as well as scattered responses that are convolutions of an overstimulated mind.

Marco Donnarumma also integrates a biofeedback system in his work to allow participants to shape musical compositions with their limbs.  When a participant moves a particular body part, sounds are triggered and the volume increases depending on the pace of that movement.  Here we see the body acting as brush, literally painting the soundscape through its own creative motion.  As the performer experiments with each portion of their body there is a slow realization that the sounds have become analogous to the neurological and biological yearning of the body, each one seeking a particular upgrade that targets a specific need for that segment of the body.  For instance, a move of the left arm constantly provides a rich vibrato, reminding me of the sound of Vita-More’s solar protective nanoskin.

Our final three artists all use body in their artwork as components of the fabricated results, acting like paint in a traditional artistic sense.  Marie-Pier Malouin weaves strands of hair together to reference the genetic predisposition that all living things come into this world with.  Here, Malouin uses the medium to reference suicidal tendencies – looking once again toward the fragility of the human mind, body and spirit as it exists in a traditional biological state.  The hair, a dead mass of growth, which we groom, straighten, smooth, and arrange, resembles the same obsession with which we analyze, evaluate, dissect and anatomize the nature of suicide.  Stan Strembicki also engages with the fragility of the human body in his Body, Soul and Science.  In his photographic imagery Strembicki turns a keen eye on the medical industry and its developments over time.  As with all technology, Strembicki concludes, the medical industry is one we can see as temporally corrective, making dramatic strides as nascent developments emerge.  Perhaps we can take Tracy Longley-Cook’s skinscapes, which she compares to the earth’s changing landforms of geology, ecology and climatology, as an analogy for our changing understanding of skin, body and self.  Can we begin to mold and sculpt the body much like we have done with the land we inhabit?

There is a tie between the conceptual and material strands of these last few works that we cannot overlook: memento mori.  The shortcomings and frailties of our natural bodies – those components that artists like Vita-More, Stelarc, and Orlan are beginning to interpret as being resolved through the mastery of human enhancement and advancement.  In a world churning out new technologies and creative ideas it is hard to look toward the future and dismiss the possibilities.  Perhaps the worries of fragility and biological shortcomings will be both posed and answered by the scientific and artistic community, something that is panning out to be very likely, if not certain.  As you browse the work of The Biological Canvas I would like to invite your own imagination to engage.  Look at your life, your culture, your world and draw parallels with the artwork – open your own imagination to what our future may bring, or, perhaps more properly stated, what we will bring to our future.

Patrick Millard

Source | VASA Project

Control your home with thought alone

Monday, July 11th, 2011

TWO friends meet in a bar in the online environment Second Life to chat about their latest tweets and favourite TV shows. Nothing unusual in that – except that both of them have Lou Gehrig’s disease, otherwise known as amyotrophic lateral sclerosis (ALS), and it has left them so severely paralysed that they can only move their eyes.

These Second Lifers are just two of more than 50 severely disabled people who have been trying out a sophisticated new brain-computer interface (BCI). Second Life has been controlled using BCIs before, but only to a very rudimentary level. The new interface, developed by medical engineering company G.Tec of Schiedlberg, Austria, lets users freely explore Second Life’s virtual world and control their avatar within it.

It can be used to give people control over their real-world environment too: opening and closing doors, controlling the TV, lights, thermostat and intercom, answering the phone, or even publishing Twitter posts.

The system was developed as part of a pan-European project called Smart Homes for All, and is the first time the latest BCI technology has been combined with smart-home technology and online gaming. It uses electroencephalograph (EEG) caps to pick up brain signals, which it translates into commands that are relayed to controllers in the building or used to navigate and communicate within Second Life and Twitter.


In the past, one of the problems with BCIs has been their reliability, and they have tended to be limited in the number of functions that can be controlled at once, says John Gan of the BCI group at the University of Essex, UK. Like most BCI systems, G.Tec’s interface exploits an involuntary increase in a brain signal called P300 that occurs in response to an unexpected event.

To activate a command, the user focuses their attention on the corresponding icon on a screen, such as “Lights On”, while the EEG cap records their P300. The icons are flashed randomly, one at a time, and it is possible to tell which icon they are looking at by correlating a spike in the P300 with the timing of when that icon flashes, says Guenter Edlinger, G.Tec’s CEO. He will be presenting the system at the Human and Computer Interaction International conference in Orlando, Florida, this month.

G.Tec’s system works better the more functions are added. That is because when there are more icons on the screen, it comes as a bigger surprise when the target icon flashes, creating a stronger P300 response. More than 40 icons can be displayed at once and submenus make it possible to add even more options.
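
The selection logic described above is the classic P300 approach: collect the EEG epochs that follow each icon's flashes, average them, and choose the icon whose average shows the strongest response in the P300 time window. The sketch below illustrates that idea on synthetic data; it is an idealized toy, not G.Tec's actual classifier.

```python
import numpy as np

# Idealised sketch of P300 target selection: for each icon, average the EEG
# epochs recorded after its flashes and score the P300 window (~300 ms after
# the flash). Synthetic data; not G.Tec's actual classifier.
def pick_target(epochs_by_icon, fs=250):
    """epochs_by_icon: {icon_name: array of shape (n_flashes, n_samples)}."""
    window = slice(int(0.25 * fs), int(0.45 * fs))   # ~250-450 ms post-flash
    scores = {icon: epochs.mean(axis=0)[window].mean()
              for icon, epochs in epochs_by_icon.items()}
    return max(scores, key=scores.get)

# Synthetic demo: only the "Lights On" epochs carry a small P300-like bump.
rng = np.random.default_rng(0)
n_flashes, n_samples = 15, 200
epochs = {icon: rng.normal(0, 1, (n_flashes, n_samples))
          for icon in ["Lights On", "TV", "Door", "Phone"]}
epochs["Lights On"][:, 62:112] += 0.8                # bump in the P300 window
print(pick_target(epochs))                           # -> "Lights On"
```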

G.Tec’s system has been tested at the Santa Lucia Foundation Hospital in Rome, Italy. “BCIs are definitely beginning to make the transition out of the lab,” says Ricardo Chavarriaga, a BCI researcher at the Swiss Federal Institute of Technology in Lausanne.

G.Tec says it is working on adding wheelchair control as a function, to help give users more mobility. “The point is that they can start making their own decisions,” says Edlinger.

Source | NewScientist

Massive botnet ‘indestructible,’ say researchers

Sunday, July 10th, 2011

Computerworld - A new and improved botnet that has infected more than four million PCs is “practically indestructible,” security researchers say.

“TDL-4,” the name for both the bot Trojan that infects machines and the ensuing collection of compromised computers, is “the most sophisticated threat today,” said Kaspersky Labs researcher Sergey Golovanov in a detailed analysis Monday.

“[TDL-4] is practically indestructible,” Golovanov said.

Others agree.

“I wouldn’t say it’s perfectly indestructible, but it is pretty much indestructible,” said Joe Stewart, director of malware research at Dell SecureWorks and an internationally-known botnet expert, in an interview today. “It does a very good job of maintaining itself.”

Golovanov and Stewart based their judgments on a variety of TDL-4’s traits, all of which make it an extremely tough character to detect, delete, suppress or eradicate.

For one thing, said Golovanov, TDL-4 infects the MBR, or master boot record, of the PC with a rootkit — malware that hides by subverting the operating system. The master boot record is the first sector — sector 0 — of the hard drive, where code is stored to bootstrap the operating system after the computer’s BIOS does its start-up checks.

Because TDL-4 installs its rootkit on the MBR, it is invisible to both the operating system and, more importantly, security software designed to sniff out malicious code.

But that’s not TDL-4’s secret weapon.

What makes the botnet indestructible is the combination of its advanced encryption and the use of a public peer-to-peer (P2P) network for the instructions issued to the malware by command-and-control (C&C) servers.

“The way peer-to-peer is used for TDL-4 will make it extremely hard to take down this botnet,” said Roel Schouwenberg, senior malware researcher at Kaspersky, in an email reply Tuesday to follow-up questions. “The TDL guys are doing their utmost not to become the next gang to lose their botnet.”

Schouwenberg cited several high-profile botnet take-downs — which have ranged from a coordinated effort that crippled Conficker last year to 2011’s FBI-led take-down of Coreflood — as the motivation for hackers to develop new ways to keep their armies of hijacked PCs in the field.

“Each time a botnet gets taken down it raises the bar for the next time,” noted Schouwenberg. “The truly professional cyber criminals are watching and working on their botnets to make them more resilient against takedowns or takeovers.”

TDL-4’s makers created their own encryption algorithm, Kaspersky’s Golovanov said in his analysis, and the botnet uses the domain names of the C&C servers as the encryption keys.

The botnet also uses the public Kad P2P network for one of its two channels for communicating between infected PCs and the C&C servers, said Kaspersky. Previously, botnets that communicated via P2P used a closed network they had created.

By using a public network, the criminals ensure their botnet will survive any take-down effort.

“Any attempt to take down the regular C&Cs can effectively be circumvented by the TDL group by updating the list of C&Cs through the P2P network,” said Schouwenberg. “The fact that TDL has two separate channels for communications will make any take-down very, very tough.”

Kaspersky estimated that the TDL-4 botnet consists of more than 4.5 million infected Windows PCs.

TDL-4’s rootkit, encryption and communication practices, as well as its ability to disable other malware, including the well-known Zeus, make the botnet extremely durable. “TDL is a business, and its goal is to stay on PCs as long as possible,” said Stewart, citing the technologies that make the botnet nearly impossible to knock offline.

Stewart wasn’t shocked that the TDL-4 botnet numbers millions of machines, saying that its durability contributed to its large size.

“The 4.5 million is not surprising at all,” Stewart said. “It might not have as high an infection rate as other botnets, but its longevity means that as long as they can keep infecting computers and the discovery rate is small, they’ll keep growing it.”

Stewart pointed out that TDL-4’s counter-attacks against other malware were another reason it’s so successful.

“That’s so smart,” he said, adding that disabling competing malware — which likely is much easier to detect — means it has an even better chance of remaining on the PC. If other threats cause suspicious behavior, the machine’s owner may investigate, perhaps run additional security scans or install antivirus software.

TDL-4′s makers use the botnet to plant additional malware on PCs, rent it out to others for that purpose and for distributed denial-of-service (DDoS) attacks, and to conduct spam and phishing campaigns. Kaspersky said TDL-4 has installed nearly 30 different malicious programs on the PCs it controls.

But it’s able to remove any at will. “TDL-4 doesn’t delete itself following installation of other malware,” said Golovanov. “At any time [it] can … delete malware it has downloaded.”

This is one dangerous customer, Stewart concluded.

“For all intents and purposes, [TDL-4] is very tough to remove,” Stewart said. “It’s definitely one of the most sophisticated botnets out there.”

Source | ComputerWorld

Volkswagen Shows Off Self-Driving Auto Pilot Technology For Cars

Saturday, July 9th, 2011

While most automakers try to fix the problems with today’s tech, Volkswagen is working on tomorrow’s. The future of driving, in major cities at least, looks more and more likely to be handled by high-tech computers rather than actual people, if the latest breakthroughs in self-driving vehicle technology mean anything. Internet search engine giant Google has logged some 140,000 miles with its self-driving Toyota Prius fleet and Audi has had similar success with its run of autonomous cars.

Now Volkswagen has presented its ‘Temporary Auto Pilot’ technology. Monitored by a driver, the technology can allow a car to drive semi-automatically at speeds of up to 80 mph on highways.

It works using a combination of existing technology such as adaptive cruise control and lane-keeping assist, rolling them all into one comprehensive function. Nonetheless, the driver always retains driving responsibility, remains in control, and must continually monitor the system. In this way, Volkswagen sees it only as a stepping stone toward what seems like an eventual future where nobody will be doing any driving.

In the semi-automatic driving mode, the system maintains a safe distance to the vehicle ahead, drives at a speed selected by the driver, reduces this speed as necessary before a bend, and maintains the vehicle’s central position with respect to lane markers. The system also observes overtaking rules and speed limits. Additionally, stop and start driving maneuvers in traffic jams are also automated.
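
The longitudinal part of that behaviour (hold the driver-selected speed unless the gap to the vehicle ahead becomes too short) is essentially adaptive cruise control. The sketch below is a generic, simplified illustration of such a controller; the parameters and logic are assumptions for illustration, not Volkswagen's system.

```python
# Toy longitudinal control in the spirit of the behaviour described above:
# hold the driver-selected speed unless the time gap to the car ahead shrinks
# below a safe threshold. Generic ACC sketch; not Volkswagen's algorithm.
def target_speed(set_speed, own_speed, gap_m, lead_speed,
                 time_gap_s=1.8, max_decel=3.0):
    """Return the speed (m/s) the controller should aim for this cycle."""
    safe_gap = own_speed * time_gap_s
    if gap_m >= safe_gap:
        return set_speed                      # free road: hold the set speed
    # too close: aim toward the lead vehicle's speed, scaled by how short the
    # gap is, but never command an unrealistic jump downwards in one cycle
    desired = lead_speed * (gap_m / safe_gap)
    return max(desired, own_speed - max_decel)

# 80 mph is roughly 35.8 m/s; the second call simulates a closing gap.
print(target_speed(set_speed=35.8, own_speed=33.0, gap_m=70.0, lead_speed=28.0))
print(target_speed(set_speed=35.8, own_speed=33.0, gap_m=30.0, lead_speed=28.0))
```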

The good news–or bad, depending on how you look at it–is that compared to the more advanced autonomous driving technologies, Volkswagen’s latest Temporary Auto Pilot is based on a relatively production-like sensor platform, consisting of production-level radar-, camera-, and ultrasonic-based sensors supplemented by a laser scanner and an electronic horizon.

This means that we could see a production version within the next couple of years.

Temporary Auto Pilot in action

Source | Motor Authority

Intel plans exaFLOP/s supercomputer by 2018

Saturday, July 9th, 2011

Intel plans to achieve exaFLOP performance (one quintillion computer operations per second) by the end of this decade, according to Kirk Skaugen, Intel Corporation vice president and general manager of the Data Center Group.

The performance of the TOP500 #1 system is estimated to reach 100 PetaFLOP/s in 2015 and break the barrier of 1 ExaFLOP/s in 2018. By the end of the decade, the fastest system on Earth is forecast to provide performance of more than 4 ExaFLOP/s, according to Intel.

Managing the explosive growth in the amount of data shared across the Internet, finding solutions to climate change, managing the growing costs of accessing resources such as oil and gas, and a multitude of other challenges require increased amounts of computing resources that only increasingly high-performing supercomputers can address, Skaugen said.

Intel’s relentless pursuit of Moore’s Law — doubling the transistor density on microprocessors roughly every 2 years to increase functionality and performance while decreasing costs — combined with an innovative, highly efficient software programming model and extreme system scalability are key ingredients for crossing the threshold of petascale computing into a new era of exascale computing, according to Intel.

With this increase in performance, though, comes a significant increase in power consumption. As an example, for today’s fastest supercomputer in China, the Tianhe-1A, to achieve exascale performance, it would require more than 1.6 GW of power – an amount large enough to supply electricity to 2 million homes – thus presenting an energy efficiency challenge.
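
The 1.6 GW figure follows from straightforward scaling: Tianhe-1A delivered roughly 2.57 petaFLOP/s of Linpack performance on about 4 MW, so holding that efficiency constant while scaling to one exaFLOP/s lands at around 1.6 GW. A quick check, using rounded public Top500 figures:

```python
# Quick check of the 1.6 GW figure: scale Tianhe-1A's published numbers
# (~2.57 PFLOP/s Linpack at ~4.0 MW, rounded) linearly up to 1 exaFLOP/s.
tianhe_pflops = 2.57
tianhe_mw = 4.0

scale = 1000.0 / tianhe_pflops            # 1 exaFLOP/s is 1000 PFLOP/s
power_gw = tianhe_mw * scale / 1000.0     # MW -> GW
print(f"~{power_gw:.1f} GW at Tianhe-1A's efficiency")   # ~1.6 GW
```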

To address this challenge, Intel and European researchers have established three European labs. One of their technical goals is to create simulation applications that begin to address the energy efficiency challenges of moving to exascale performance.

The company outlined its vision at the International Supercomputing Conference, which showcased Intel’s latest work in its Many Integrated Core (MIC) architecture.

Source | Kurzweil AI

Daniel Kraft: medicine’s future? There’s an app for that

Tuesday, July 5th, 2011