26 June 2013

SCUCA 2013 Paper

A few weeks ago, I presented a paper I co-authored with colleagues from the Interactive Worlds Applied Research Group (iWARG) and the KTH Royal Institute of Technology. It was presented at the International Workshop on Smart City and Ubiquitous Computing Applications (SCUCA 2013), which took place in Madrid, Spain, on 4 June 2013 in conjunction with the 14th IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks.

The paper, titled ‘A Perceptual Study into the Behaviour of Autonomous Agents within a Virtual Urban Environment’, presents the development of a crowd simulation based on a real-life urban environment, which is then subjected to perceptual experiments to identify behavioural features that can be linked to perceived realism. Perceived realism is an important quality for entertainment and educational applications.

A draft version of the paper can be downloaded from here.

23 June 2013

3D Atlas of Human Brain

A new resource will allow scientists to explore the anatomy of a single brain in three dimensions in far greater detail than before, a possibility its creators hope will guide the quest to map brain activity in humans. The resource, dubbed the BigBrain, was created as part of the European Human Brain Project and is freely available online for scientists to use. The researchers behind the BigBrain, at the Research Centre Jülich and the Heinrich Heine University Düsseldorf in Germany, imaged the brain of a healthy deceased 65-year-old woman using MRI, then embedded the brain in paraffin wax and cut it into 7,400 slices, each just 20 micrometers thick. Each slice was mounted on a slide and digitally imaged using a flatbed scanner.

Many slices had small rips, tears, and distortions, so the team manually edited the images to fix major signs of damage and then used an automated program for minor fixes. Guided by previously taken MRI images and relationships between neighboring sections, they then aligned the sections to create a continuous 3D object representing about a terabyte of data. The researchers note that existing three-dimensional atlases of human brain anatomy are usually limited by the resolution of MRI images, about a millimeter. The BigBrain atlas, in contrast, makes it possible to zoom in to about 20 micrometers in each dimension. That is not enough to analyze individual brain cells, but it makes it possible to distinguish how layers of cells are organized in the brain.
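As a rough sanity check on those numbers, the figures reported for the BigBrain (7,400 slices, 20-micrometer resolution, about a terabyte) hang together in a back-of-envelope calculation. The in-plane extent and bytes-per-voxel below are my own illustrative assumptions, not values from the article:

```python
# Back-of-envelope estimate of the BigBrain dataset size.
# Only the slice count (7,400) and 20-micrometer resolution come from
# the article; the in-plane extent and 8-bit voxels are assumptions.

SLICE_THICKNESS_UM = 20
NUM_SLICES = 7_400
IN_PLANE_EXTENT_UM = (130_000, 150_000)  # assumed ~13 cm x 15 cm slice

stack_depth_cm = NUM_SLICES * SLICE_THICKNESS_UM / 10_000
pixels_per_slice = ((IN_PLANE_EXTENT_UM[0] // SLICE_THICKNESS_UM)
                    * (IN_PLANE_EXTENT_UM[1] // SLICE_THICKNESS_UM))
total_voxels = pixels_per_slice * NUM_SLICES
bytes_total = total_voxels  # 1 byte per voxel (8-bit greyscale), assumed

print(f"stack depth: {stack_depth_cm:.1f} cm")   # 14.8 cm of tissue
print(f"voxels: {total_voxels:.2e}")             # ~3.6e+11 voxels
print(f"size: {bytes_total / 1e12:.2f} TB")      # same order as 'about a terabyte'
```

With those assumptions the stack is 14.8 cm deep and holds a few hundred billion voxels, which lands on the same order of magnitude as the reported terabyte.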

More information:

21 June 2013

Breaking Down Brain Function

With the help of TACC supercomputers, researchers created the OpenfMRI project to enable data-intensive analyses of the mind. Neuroscience researchers at The University of Texas at Austin bridge psychology, neuroscience and computer science to understand how the brain creates cognitive functions. fMRI machines map neuronal activity based on blood oxygen levels in the brain. When a neuron is active, the brain sends extra oxygenated blood, which has a distinct magnetic signature. By recording these signatures at different locations in the brain, neuroscientists can pinpoint various functions, and potentially dysfunctions, with remarkable specificity. Brain researchers can now identify what you are looking at using only fMRI scans of your brain, although actions, motivations and feelings are still hard to identify. The team created OpenfMRI, a web-based, supercomputer-powered workflow that makes it easier for researchers to process, share, compare and rapidly analyze brain scans from many different studies.

Currently, the project has 18 datasets, consisting of data from almost 350 human subjects. The data comes primarily from four main partners: Stanford, Harvard, the University of Colorado and Washington University. The pipeline that the researchers developed allows them to automatically process, visualize and analyze raw fMRI data, using the powerful Lonestar supercomputer at the Texas Advanced Computing Center (TACC). When fMRI scans are taken, they contain a lot of noisy information that must be cleaned up. In the automated workflow, the supercomputer first determines what parts of the fMRI images represent brain tissue. Next, it computationally reconstructs the 3D surface of the brain based on structural images and projects the data from the fMRI scans onto that surface. Finally, it takes each subject's brain and warps it to correspond to the average brain, so a researcher or doctor can ask, across a group of individuals, which areas are turning on during a specific activity. Each of these steps requires large-scale computational power, but they can be done quickly using Lonestar.
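The group-analysis idea behind that last step can be sketched in a few lines. This is a toy illustration, not TACC's actual pipeline: mask out non-brain voxels, put every subject's volume onto a common grid (standing in for the warp to an average brain), then average across subjects to see which areas activate:

```python
import numpy as np

# Toy sketch of the masking / common-space / group-average idea.
# Shapes, thresholds, and the nearest-neighbour "warp" are all
# invented for illustration; real pipelines use far richer methods.
rng = np.random.default_rng(0)

def mask_brain(volume, threshold=0.2):
    """Step 1 (simplified): keep only voxels bright enough to be brain tissue."""
    return np.where(volume > threshold, volume, 0.0)

def to_template(volume, shape=(8, 8, 8)):
    """Warp stand-in: resample each subject onto one common grid."""
    idx = [np.linspace(0, s - 1, t).astype(int) for s, t in zip(volume.shape, shape)]
    return volume[np.ix_(*idx)]

subjects = [rng.random((10, 12, 9)) for _ in range(5)]      # fake scans, varied sizes
common = [to_template(mask_brain(v)) for v in subjects]
group_mean = np.mean(common, axis=0)                         # activation per template voxel
print(group_mean.shape)                                      # (8, 8, 8)
```

Once every subject lives on the same grid, asking "which areas turn on during a specific activity" reduces to simple per-voxel statistics over the group.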

More information:

17 June 2013

BCIs Make Tasks Easy

Small electrodes placed on or inside the brain allow patients to interact with computers or control robotic limbs simply by thinking about how to execute those actions. This technology could improve communication and daily life for a person who is paralyzed or has lost the ability to speak from a stroke or neurodegenerative disease. Now, University of Washington researchers have demonstrated that when humans use this technology, called a brain-computer interface, the brain behaves much like it does when completing simple motor skills such as kicking a ball, typing or waving a hand. Learning to control a robotic arm or a prosthetic limb could become second nature for people who are paralyzed. According to the researchers, there is a lot of engagement of the brain's cognitive resources at the very beginning, but as you get better at the task, those resources are no longer needed and the brain is freed up. In this study, seven people with severe epilepsy were hospitalized for a monitoring procedure that tries to identify where in the brain seizures originate. Physicians cut through the scalp, drilled into the skull and placed a thin sheet of electrodes directly on top of the brain. While they were watching for seizure signals, the researchers also conducted this study. The patients were asked to move a mouse cursor on a computer screen by using only their thoughts to control the cursor's movement.

Electrodes on their brains picked up the signals directing the cursor to move, sending them to an amplifier and then a laptop to be analyzed. Within 40 milliseconds, the computer calculated the intentions transmitted through the signal and updated the movement of the cursor on the screen. Researchers found that when patients started the task, a lot of brain activity was centered in the prefrontal cortex. But often after as little as 10 minutes, frontal brain activity lessened, and the brain signals transitioned to patterns similar to those seen during more automatic actions. While researchers have demonstrated success in using brain-computer interfaces in monkeys and humans, this is the first study that clearly maps the neurological signals throughout the brain. The researchers were surprised at how many parts of the brain were involved. Several types of brain-computer interfaces are being developed and tested. The least invasive is a device placed on a person's head that can detect weak electrical signatures of brain activity. Basic commercial gaming products are on the market, but this technology isn't very reliable yet because signals from eye blinking and other muscle movements interfere too much. A more invasive alternative is to surgically place electrodes inside the brain tissue itself to record the activity of individual neurons.
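The decode-and-update loop described above, amplified signals arriving in short windows and being turned into cursor movement within 40 milliseconds, can be sketched as follows. The electrode count, the log-power feature, and the linear decoder weights are all invented for illustration; real systems train the decoder per patient on recorded cortical activity:

```python
import numpy as np

# Minimal sketch of a 40 ms decode-and-update cursor loop.
# All numbers except the 40 ms window are illustrative assumptions.
rng = np.random.default_rng(1)

FS = 1_000                  # samples/second per electrode (assumed)
WINDOW = int(0.040 * FS)    # 40 ms decoding window, as in the study
N_ELECTRODES = 16           # assumed electrode-grid size

weights = rng.normal(size=(2, N_ELECTRODES))   # untrained stand-in decoder

def decode(window):
    """Turn one 40 ms window of signals into a cursor velocity (dx, dy)."""
    power = np.log(np.mean(window ** 2, axis=1) + 1e-12)  # per-electrode log power
    return weights @ power

cursor = np.zeros(2)
signals = rng.normal(size=(N_ELECTRODES, FS))  # 1 second of synthetic recordings
for start in range(0, FS - WINDOW + 1, WINDOW):
    cursor += 0.01 * decode(signals[:, start:start + WINDOW])
print(cursor)  # final (x, y) position after 25 decoding steps
```

The point of the sketch is the cadence: each window is reduced to a small feature vector, mapped to a velocity, and applied to the cursor before the next window arrives.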

More information:

14 June 2013

Treating Trauma with VR

The University of Southern California's Institute for Creative Technologies is leading the way in creating virtual humans. The result may produce real help for those in need. The virtual therapist sits in a big armchair, shuffling slightly and blinking naturally, apparently waiting for me to get comfortable in front of the screen. The software allows a doctor to follow a patient's progress over time. It objectively and scientifically compares sessions.

The centre does a lot of work with the US military, which after long wars in Iraq and Afghanistan has to deal with hundreds of thousands of troops and veterans suffering from various levels of post-traumatic stress disorder. The whole lab is running experiments with virtual humans by blending a range of technologies such as movement sensing and facial recognition. In the lab's demonstration space a virtual soldier sits behind a desk and responds to a disciplinary scenario as part of officer training.

More information:

12 June 2013

Contact Lens Computer

For those who find Google Glass indiscreet, electronic contact lenses that outfit the user's cornea with a display may one day provide an alternative. Built by researchers at several institutions, including two research arms of Samsung, the lenses use new nanomaterials to solve some of the problems that have made contact-lens displays less than practical. A group led by researchers at the Ulsan National Institute of Science and Technology mounted a light-emitting diode on an off-the-shelf soft contact lens, using a material the researchers developed: a transparent, highly conductive, and stretchy mix of graphene and silver nanowires. The researchers tested these lenses in rabbits, whose eyes are similar in size to human eyes, and found no ill effects after five hours. The animals didn't rub their eyes or grow bloodshot, and the electronics kept working.

They found that sandwiching silver nanowires between sheets of graphene yielded a composite with much lower electrical resistance than either material alone. The industry standard for a transparent conductor is a resistance of 50 ohms per square or less. Their material has a resistance of about 33 ohms per square. The material also transmits 94 percent of visible light, and it stretches. The researchers make these conductive sheets by depositing liquid solutions of the nanomaterials on a spinning surface, such as a contact lens, at low temperatures. Working with researchers at Samsung, they coated a contact lens with the stretchy conductor, and then placed a light-emitting diode on it. Although it would be an exaggeration to call this a display, since there is just one pixel, it's possible this kind of material will be a necessary component in future contact-lens displays.
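A first-order intuition for why the sandwich beats either layer alone: stacked conducting sheets behave roughly like resistors in parallel. The per-layer values below are my own illustrative assumptions; only the 50 and 33 ohms-per-square figures come from the article, and the real composite also benefits from effects (such as nanowires bridging graphene grain boundaries) that a parallel model ignores:

```python
# Parallel-sheet model of a stacked transparent conductor.
# Input sheet resistances are illustrative assumptions, not measurements.

def parallel_sheet_resistance(*sheets_ohm_per_sq):
    """Combined sheet resistance of conducting layers stacked in parallel."""
    return 1.0 / sum(1.0 / r for r in sheets_ohm_per_sq)

graphene = 300.0   # ohms/sq, assumed for a lone graphene sheet
nanowires = 40.0   # ohms/sq, assumed for a sparse silver-nanowire mesh

hybrid = parallel_sheet_resistance(graphene, nanowires)
print(f"{hybrid:.1f} ohms/sq")  # ~35.3: below either layer alone
```

Even this crude model puts the stack below the 50 ohms-per-square industry threshold and in the neighbourhood of the reported 33, while a parallel combination is always lower than its best single layer.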

More information:

11 June 2013

Structure of Videogames

Research at the Universidad Carlos III of Madrid (UC3M) analyzes in depth the content of videogames and their interaction with the player. The study underlines the importance of an industry that has experienced spectacular growth since Nintendo took the Christmas gift market by storm in the mid-eighties. This enormous industry is more than a moneymaking machine. Its volume is so large that it has reached the academic world, where its contents have become the subject of in-depth study.

Videogames can be analyzed by examining various aspects, such as the construction of the characters, the options that modify the virtual world and the scenarios. An initial conclusion of this research is that videogames are not stories in themselves, but rather that they can generate distinct experiences depending on how each player understands those fictitious worlds and relates to them. These relations are what incite players to immerse themselves in the games during their free time, and understanding them may serve to help a constantly evolving sector.

More information:

10 June 2013

Thought-Guided Helicopter

Researchers have harnessed the power of thought to guide a remote-control helicopter through an obstacle course. The demonstration joins a growing number of attempts to translate the electrical patterns of thoughts into motions in the virtual and real world. Applications range from assisting those with neurodegenerative disorders to novel modes of video game play. The approach requires that an electronic system be ‘trained’ to recognise patterns in an electroencephalogram (EEG).

Those thoughts, such as that of making a fist with the left hand, are then correlated with motions of the helicopter - in this case to the left. The electroencephalogram remains a chaotic and largely indecipherable mess of electrical signals, but those related to motion - or the mere thought of it - have proven to be comparatively strong and repeatable. Even technology firms see potential in the idea; Samsung is reportedly working on a "mind-control" tablet device.
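The ‘training’ step can be pictured as template matching: record the EEG feature pattern for each imagined movement, then classify new signal windows by the nearest stored template and map the label to a flight command. Everything below, the feature vectors, command names, and classifier, is a synthetic illustration of that idea, not the researchers' system:

```python
import numpy as np

# Template-matching sketch of EEG 'training': one stored feature
# template per imagined movement, classified by nearest distance.
# All signals and labels here are synthetic assumptions.
rng = np.random.default_rng(2)

COMMANDS = {"left_fist": "roll_left", "right_fist": "roll_right", "both_fists": "climb"}

# Training stand-in: one average feature vector per imagined movement.
templates = {label: rng.normal(loc=i, size=8) for i, label in enumerate(COMMANDS)}

def classify(features):
    """Return the imagined movement whose stored template is closest."""
    return min(templates, key=lambda lb: np.linalg.norm(features - templates[lb]))

# A noisy repetition of the trained 'left fist' thought...
window = templates["left_fist"] + 0.1 * rng.normal(size=8)
print(COMMANDS[classify(window)])   # ...steers the helicopter left
```

Real EEG features are far noisier and overlap between thoughts, which is why per-user training sessions and stronger classifiers are needed in practice.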

More information: