29 November 2012

Archeovirtual 2012 - Roma Nova

Recently, I demonstrated a prototype brain-controlled serious game for cultural heritage, called Roma Nova, at Archeovirtual 2012, which took place at Paestum, Italy, 15-18 November 2012. The system was demonstrated for two consecutive days to a number of visitors, and initial evaluation results were recorded. The Roma Nova project is a prototype system for cultural heritage based on brain-computer interfaces for navigating and interacting with serious games. By comparing traditional human-computer interaction methods and paradigms with brain-controlled games, it is possible to investigate novel methods for interacting with and perceiving virtual heritage worlds. An interactive serious cultural heritage game was developed in which commercial BCI headsets control virtual agents in the ancient city of Rome.

The interactive game is built upon Rome Reborn, one of the most realistic 3D representations of ancient Rome currently in existence. This representation provides a high-fidelity 3D digital model which can be explored in real time. The aim of the game is to navigate an avatar inside virtual Rome and interact with intelligent agents while learning at the same time. Both navigation and interaction are performed using brain-wave technology. The Roma Nova project builds on previous work at Coventry University and aims at teaching history to young children, but it can also be applied to a wider audience. It allows for exploratory learning by immersing the learner/player inside a virtual heritage environment where they learn different aspects of history through their interactions with a crowd of authentic virtual Roman avatars.
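To make the navigation idea concrete, here is a minimal sketch of how a commercial BCI headset's output might drive an avatar. The dictionary of command confidences, the command names, and the threshold are all hypothetical illustrations, not the actual Roma Nova implementation: headsets such as the Emotiv line expose trained "mental command" classifier outputs, and a game loop can simply act on the strongest command above a confidence threshold.

```python
# Hypothetical sketch: picking a navigation action from BCI classifier
# outputs. Each frame, the headset reports a confidence per trained
# mental command; the avatar acts only on a sufficiently strong one.

THRESHOLD = 0.6  # minimum confidence before the avatar reacts

def decode_command(readings):
    """Return the navigation action with the highest confidence, or None."""
    action, confidence = max(readings.items(), key=lambda kv: kv[1])
    return action if confidence >= THRESHOLD else None

# One frame of (invented) classifier outputs from the headset:
frame = {"push": 0.82, "pull": 0.10, "left": 0.35, "right": 0.05}
print(decode_command(frame))  # prints "push" -> avatar walks forward
```

A real system would also smooth confidences over several frames to avoid jitter, but the core mapping from brain-signal classification to game commands is this simple.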

More information:

24 November 2012

Reality Deck Visualization Display

The Reality Deck, a 416-screen, super-high-resolution, four-walled surround-view virtual reality theater, is the highest-resolution immersive display ever built, driven by a graphics supercomputer. Its purpose and primary design principle is to enable scientists, engineers and physicians to tackle modern-age problems that require the visualization of vast amounts of data. The Reality Deck, constructed with a $1.4 million National Science Foundation (NSF) grant and a $600,000 match from Stony Brook University, is the first display to break the one-billion-pixel mark, with a resolution five times greater than that of the second largest in the world. This technology will be used for visualizing and analyzing big data.
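A quick back-of-the-envelope check shows how 416 screens clear the one-billion-pixel mark. The per-panel resolution of 2560x1440 is an assumption here (it is not stated in the article, but is a common high-end monitor resolution of the period):

```python
# Sanity-checking the "one billion pixel" claim for a 416-screen wall.
# The 2560x1440 per-panel resolution is assumed, not from the article.
screens = 416
width, height = 2560, 1440

total_pixels = screens * width * height
print(f"{total_pixels:,} pixels (~{total_pixels / 1e9:.2f} gigapixels)")
# prints: 1,533,542,400 pixels (~1.53 gigapixels)
```

Under that assumption the wall delivers roughly 1.5 gigapixels, comfortably past the billion-pixel threshold the article describes.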

In the Reality Deck, data is displayed at an unprecedented resolution that saturates the human eye, supports 20/20 visual acuity, and renders traditional panning and zooming obsolete: users simply walk up to a display to resolve the minutiae and step back to appreciate the context that completely surrounds them. Another feature of the Reality Deck is the infinite canvas, a 360-degree smart screen that changes images according to the location of the viewer walking around the Reality Deck, so the same image is never viewed twice and arbitrarily large data sets can be explored. Future applications that stream video in real time are also in the works.

More information:

22 November 2012

Brain, Internet, and Cosmology

The structure of the universe and the laws that govern its growth may be more similar than previously thought to the structure and growth of the human brain and other complex networks, such as the Internet or a social network of trust relationships between people. The ability to predict, let alone control, the dynamics of complex networks remains a central challenge throughout network science. Structural and dynamical similarities among different real networks suggest that some universal laws might be in action, although the nature and common origin of such laws remain elusive.

By performing complex supercomputer simulations of the universe and using a variety of other calculations, researchers have now shown that the causal network representing the large-scale structure of space and time in our accelerating universe is a graph with remarkable similarity to many complex networks, such as the Internet, social networks, and even biological networks. These findings have key implications for both network science and cosmology. The researchers discovered that the large-scale growth dynamics of complex networks and causal networks are asymptotically (at large times) the same, which explains the structural similarity between these networks.
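A toy example illustrates the kind of structural signature such comparisons rest on. Networks grown by preferential attachment ("rich get richer" linking) develop heavy-tailed degree distributions, as observed in the Internet and social networks; this generic sketch is not the paper's causal-set model, just a demonstration of the shared growth-produces-hubs phenomenon:

```python
# Toy preferential-attachment growth: each new node links to one existing
# node chosen with probability proportional to its current degree, so
# well-connected nodes accumulate links and hubs emerge.
import random

def grow_network(n_nodes, seed=0):
    """Return the degree list of a preferentially grown graph."""
    random.seed(seed)
    degrees = [1, 1]   # start with two connected nodes
    stubs = [0, 1]     # node i appears degrees[i] times in this list
    for new in range(2, n_nodes):
        target = random.choice(stubs)   # degree-proportional choice
        degrees.append(1)
        degrees[target] += 1
        stubs.extend([new, target])
    return degrees

degrees = grow_network(5000)
print("max degree:", max(degrees), "mean degree:", sum(degrees) / len(degrees))
```

The mean degree stays near 2 while the maximum degree grows into the dozens or hundreds, the hallmark of the heavy-tailed distributions that cosmological causal networks were found to share.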

More information:

21 November 2012

Simulations for Cancer Prevention

Carcinogens seem to be everywhere, from automobile exhaust to secondhand smoke. With cancer the second leading cause of death in the US, research into various carcinogens may be the first step in preventing cancer from developing. Now, with the help of researchers at NYU, high-performance computing (HPC) may give us the tools needed to curb cancer development. HPC resources help researchers model airborne cancerous chemicals, known as polycyclic aromatic hydrocarbons (PAHs), and their effect on DNA strands in human cells.

Carcinogens, chemicals that alter the DNA of cells in such a way that the cells replicate uncontrollably and lead to tumors, can be broken into two categories. Some of these chemicals destabilize the DNA strands themselves, making them easier to defend against. Others, however, actually create stronger bonds between the strands than exist in normal DNA, making the damage particularly effective at propagating itself. Modeling these complex forces and interactions is no small feat, as it requires tracking the coordinates of the structures over a period of time.

More information:

20 November 2012

Brain-Controlled Computer Cursors

When a paralyzed person imagines moving a limb, cells in the part of the brain that controls movement still activate as if trying to make the immobile limb work again. Despite neurological injury or disease that has severed the pathway between brain and muscle, the region where the signals originate remains intact and functional. In recent years, neuroscientists and neuroengineers working in prosthetics have begun to develop brain-implantable sensors that can measure signals from individual neurons and, after passing those signals through a mathematical decoding algorithm, use them to control computer cursors with thoughts. The work is part of a field known as neural prosthetics. A team of Stanford researchers has now developed an algorithm, known as ReFIT, that vastly improves the speed and accuracy of neural prosthetics that control computer cursors. In side-by-side demonstrations with rhesus monkeys, cursors controlled by the ReFIT algorithm doubled the performance of existing systems and approached the performance of the real arm. Better yet, more than four years after implantation, the new system is still going strong, while previous systems have seen a steady decline in performance over time. The system relies on a silicon chip implanted into the brain, which records 'action potentials' in neural activity from an array of electrode sensors and sends the data to a computer.

The frequency with which action potentials are generated provides the computer with key information about the direction and speed of the user's intended movement. The ReFIT algorithm that decodes these signals represents a departure from earlier models. In most neural prosthetics research, scientists have recorded brain activity while the subject moves or imagines moving an arm, analyzing the data after the fact. The Stanford team wanted to understand how the system works 'online', under closed-loop control conditions in which the computer analyzes and implements visual feedback gathered in real time as the monkey neurally steers the cursor toward an onscreen target. The system is able to make adjustments on the fly while guiding the cursor to a target, just as hand and eye work in tandem to move a mouse cursor onto an icon on a computer desktop. If the cursor strays too far to the left, for instance, the user likely adjusts their imagined movements to redirect it to the right. The team designed the system to learn from the user's corrective movements, allowing the cursor to move more precisely than it could in earlier prosthetics. To test the new system, the team gave monkeys the task of mentally directing a cursor to a target (an onscreen dot) and holding the cursor there for half a second. ReFIT performed vastly better than previous technology in terms of both speed and accuracy. The path of the cursor from the starting point to the target was straighter, and it reached the target twice as quickly as with earlier systems, achieving 75 to 85 percent of the speed of real arms.
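The decoding pipeline described above can be sketched in miniature. All weights and firing rates below are invented for illustration, and the actual ReFIT decoder is a Kalman-filter-based model rather than this bare linear map; the sketch only captures two ideas from the article: firing rates map to a cursor velocity, and during training the "intended" velocity can be re-estimated by re-aiming the decoded vector at the target, since the user presumably always intends to move toward it.

```python
# Minimal sketch (hypothetical parameters) of firing-rate-to-velocity
# decoding plus ReFIT-style intention re-estimation.
import math

def decode_velocity(rates, weights):
    """Linear decoder: each electrode's rate contributes to (vx, vy)."""
    vx = sum(w * r for w, r in zip(weights[0], rates))
    vy = sum(w * r for w, r in zip(weights[1], rates))
    return vx, vy

def refit_intention(velocity, cursor, target):
    """Keep the decoded speed, but re-aim the vector at the target."""
    speed = math.hypot(*velocity)
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = math.hypot(dx, dy) or 1.0
    return speed * dx / dist, speed * dy / dist

# Invented firing rates (spikes/s) from three electrodes:
weights = [[0.02, -0.01, 0.03],   # contributions to vx
           [0.01, 0.04, -0.02]]   # contributions to vy
v = decode_velocity([40.0, 10.0, 25.0], weights)
print(refit_intention(v, cursor=(0.0, 0.0), target=(1.0, 0.0)))
```

Retraining the decoder on these re-aimed velocities, rather than on the raw decoded ones, is what lets the system learn from the user's corrective movements under closed-loop control.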

More information: