29 March 2010

VS-Games 2010 Article

Last Thursday, I presented a paper titled ‘Randomly Generated 3D Environments for Serious Games’, co-authored with Jeremy Noghani and Dr. Eike Falk Anderson, at the 2nd IEEE International Conference on Games and Virtual Worlds for Serious Applications. The paper describes a variety of methods that can be used to create realistic, random 3D environments for serious games requiring real-time performance, including the generation of terrain, vegetation and building structures. An interactive flight simulator was created as a proof of concept.
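
This summary does not describe the paper's exact algorithms, but a common real-time approach to random terrain is fractal subdivision. Below is a minimal 1D midpoint-displacement sketch (the function name and parameters are illustrative); its 2D analogue, diamond-square, produces full height maps in the same way.

```python
import random

def midpoint_displacement(left, right, roughness, depth):
    """Generate a 1D fractal terrain height profile.

    Starts from two endpoint heights and repeatedly inserts a midpoint
    between each pair of neighbours, offset by a random amount that
    shrinks with every subdivision. Returns 2**depth + 1 samples.
    """
    heights = [left, right]
    spread = roughness
    for _ in range(depth):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + random.uniform(-spread, spread)
            refined.extend([a, mid])
        refined.append(heights[-1])
        heights = refined
        spread *= 0.5  # halve the random offset at each level
    return heights

random.seed(42)
profile = midpoint_displacement(0.0, 0.0, 10.0, 6)
print(len(profile))  # prints 65
```

Rendering such a profile (or its 2D grid equivalent) as a mesh, plus rules for scattering vegetation and placing buildings, gives the kind of ingredients such a system combines.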

Initial results with two different types of user groups (remote and hallway) showed that overall the flight simulator is enjoyable, looks realistic for a gaming scenario and thus also has the potential to be used for the development of serious games. In the future, a classification regarding buildings and vegetation will be developed allowing for automatic random generation of larger urban environments. To improve the cognitive perception of the players, additional urban geometry will be generated automatically.

A draft version of the article can be downloaded from here.

23 March 2010

City Tour Navigation by Mouse Click

The residents of Stassfurt in Saxony-Anhalt are lucky. They are able to voice opinions about construction plans even before redevelopment commences. This is made possible by software from the Fraunhofer Institute for Factory Operation and Automation IFF in Magdeburg, which represents open lots, individual buildings, neighborhoods and even complete industrial parks true-to-scale and photorealistically in virtual 3D projections. The researchers from the Fraunhofer IFF have created a 3D scenario specifically for Stassfurt. Mining caused the downtown to sink. A lake with recreational zones was laid out in and around the pit. The bank zone is going to be expanded now and the residents of Stassfurt have already been able to stroll through the new layout virtually.

Researchers created the terrain model based on two-dimensional data and additional elevation data from a laser scan. Afterwards, they combined the lots with digital photographs taken on site and integrated the resulting 3D building models into the virtual environment. Municipalities, cities and districts intend to use the new software for more than urban planning. The program will also be employed for regional marketing, for instance to attract potential investors. On a virtual excursion, they may switch locations and their view of the 3D model (e.g. to an industrial park) at any time and interactively retrieve supplementary information on open lots, lot sizes, prices, maximum construction heights, soil quality and distances by mouse click.
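
As an illustration of that terrain step (the names and data here are invented, not taken from the Fraunhofer software), a regular grid of elevation samples, such as a resampled laser scan, can be triangulated directly for rendering:

```python
def heightmap_to_triangles(grid, cell_size=1.0):
    """Turn a 2D grid of elevation samples into renderable triangles.

    Each grid cell becomes two triangles; a vertex is an (x, y, z) tuple
    with z taken from the scanned elevation value.
    """
    triangles = []
    for i in range(len(grid) - 1):
        for j in range(len(grid[0]) - 1):
            v00 = (j * cell_size, i * cell_size, grid[i][j])
            v10 = ((j + 1) * cell_size, i * cell_size, grid[i][j + 1])
            v01 = (j * cell_size, (i + 1) * cell_size, grid[i + 1][j])
            v11 = ((j + 1) * cell_size, (i + 1) * cell_size, grid[i + 1][j + 1])
            triangles.append((v00, v10, v11))  # upper triangle of the cell
            triangles.append((v00, v11, v01))  # lower triangle of the cell
    return triangles

scan = [[0.0, 1.0], [0.5, 2.0]]  # a tiny 2x2 elevation grid
print(len(heightmap_to_triangles(scan)))  # prints 2
```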

More information:


21 March 2010

Neuromarketing

Using advanced tools to see the human brain at work, a new generation of marketing experts may be able to test a product's appeal while it is still being designed, according to a new analysis by two researchers at Duke University and Emory University. So-called neuromarketing takes the tools of modern brain science, such as functional MRI (fMRI), and applies them to the somewhat abstract likes and dislikes of consumer decision-making. Though this raises the specter of marketers being able to read people's minds, the Duke researchers note that neuromarketing may prove to be an affordable way for marketers to gather information that was previously unobtainable, or that consumers themselves may not even be fully aware of.

The researchers offer tips on what to look for when hiring a neuromarketing firm and outline the ethical considerations the new field raises. They also offer some words of caution about interpreting such data to inform marketing decisions. Neuromarketing may never be cheap enough to replace focus groups and other methods used to assess existing products and advertising, but it could have real promise in gauging the conscious and unconscious reactions of consumers during the design phase of products as varied as food, entertainment, buildings and political candidates.

More information:


16 March 2010

Reading Human Memories

Computer programs have been able to predict which of three short films a person is thinking about, just by looking at their brain activity. The research, conducted by scientists at the Wellcome Trust Centre for Neuroimaging at UCL, provides further insight into how our memories are recorded. This study is an extension of work published last year which showed how spatial memories -- in that case, where a volunteer was standing in a virtual reality room -- are recorded in regular patterns of activity in the hippocampus, the area of the brain responsible for learning and memory. To explore how such memories are recorded, the researchers showed ten volunteers three short films and asked them to memorise what they saw. The films were very simple, sharing a number of similar features -- all included a woman carrying out an everyday task in a typical urban street, and each film was the same length, seven seconds long. For example, one film showed a woman drinking coffee from a paper cup in the street before discarding the cup in a litter bin; another film showed a (different) woman posting a letter.

The volunteers were then asked to recall each of the films in turn while inside an fMRI scanner, which records brain activity by measuring changes in blood flow within the brain. A computer algorithm then studied the patterns and had to identify which film a volunteer was recalling purely from the pattern of their brain activity. Although a whole network of brain areas supports memory, the researchers focused their study on the medial temporal lobe, an area deep within the brain believed to be most heavily involved in episodic memory; it includes the hippocampus. The researchers found that the key areas involved in recording the memories were the hippocampus and its immediate neighbours. However, the computer algorithm performed best when analysing activity in the hippocampus itself, suggesting that this is the most important region for recording episodic memories. In particular, three areas of the hippocampus -- the rear right and the front left and front right areas -- seemed to be involved consistently across all participants. The rear right area had been implicated in the earlier study, further reinforcing the idea that this is where spatial information is recorded. However, it is still not clear what role the front two regions play.
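
The article does not specify the algorithm used; one standard approach in fMRI pattern analysis is a correlation-based nearest-template classifier, sketched here with entirely made-up voxel activity values (only two of the three films are named above, so the labels are illustrative too):

```python
import math

def correlation(a, b):
    """Pearson correlation between two equal-length activity patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def classify(pattern, templates):
    """Return the film label whose template best matches the pattern."""
    return max(templates, key=lambda label: correlation(pattern, templates[label]))

# hypothetical hippocampal voxel patterns, one template per memorised film
templates = {
    "coffee": [0.9, 0.1, 0.4, 0.2],
    "letter": [0.1, 0.8, 0.3, 0.7],
    "film3": [0.5, 0.5, 0.9, 0.1],
}
recall = [0.85, 0.15, 0.35, 0.25]  # noisy pattern recorded during recall
print(classify(recall, templates))  # prints coffee
```

In practice the templates would be averaged patterns from training scans, and classification accuracy above chance is what demonstrates that the memory is decodable.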

More information:


13 March 2010

BCI Reconstructs 3D Hand Movement

Researchers have successfully reconstructed 3D hand motions from brain signals recorded in a non-invasive way, a technique that may open the door to portable brain-computer interface systems. Such a non-invasive system could potentially operate a robotic arm or motorized wheelchair -- a huge advance for people with disabilities or paralysis. Until now, to reconstruct hand motions, researchers have used non-portable, invasive methods that place sensors inside the brain. In this research, a team of neuroscientists at the University of Maryland, College Park, placed an array of sensors on the scalps of five participants to record their brains' electrical activity, using a process called electroencephalography, or EEG. Volunteers were asked to reach from a center button and touch eight other buttons in random order 10 times, while the authors recorded their brain signals and hand motions. Afterward, the researchers attempted to decode the signals and reconstruct the 3D hand movements. Results showed that electrical brain activity acquired from the scalp surface carries enough information to reconstruct continuous, unconstrained hand movements.
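
The study's actual decoder used many sensors and time-lagged features; as a toy illustration of linear decoding (all data invented), here is ordinary least squares mapping a single EEG feature to one hand-position axis. A full 3D decoder fits one such mapping per axis (x, y, z) over features from all sensors.

```python
def fit_linear(xs, ys):
    """Ordinary least squares fit for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# invented training data: amplitude of one sensorimotor-cortex EEG sensor,
# recorded while the hand position along one axis was tracked
eeg = [0.1, 0.4, 0.7, 1.0]
hand_x = [1.0, 4.1, 6.9, 10.0]
a, b = fit_linear(eeg, hand_x)
print(round(a * 0.55 + b, 1))  # decoded position for a new sample: 5.5
```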

The researchers found that one sensor in particular (of the 34 used) provided the most accurate information. The sensor was located over a part of the brain called the primary sensorimotor cortex, a region associated with voluntary movement. Useful signals were also recorded from another region called the inferior parietal lobule, which is known to help guide limb movement. The authors used these findings to confirm the validity of their methods. This study has implications for future brain-computer interface technologies and for those already in existence. It may eventually be possible for people with severe neuromuscular disorders, such as amyotrophic lateral sclerosis (ALS), stroke, or spinal cord injury, to regain control of complex tasks without needing to have electrodes implanted in their brains. The findings could also help improve existing EEG-based systems designed to allow movement-impaired people to control a computer cursor with just their thoughts. These systems now require that users undergo extensive training sessions.

More information:


11 March 2010

Touchless Sensor Finger Detection

Researchers from the Fraunhofer Institute are working as part of the EU 3Plast research consortium to develop sensors that can be printed onto plastic film and attached to objects so that, for example, electronic devices can be controlled just by pointing a finger. Rather than responding to a directly applied force or acceleration, the sensors react to tiny fluctuations in temperature and differences in pressure, thereby recognising a finger as it approaches. 3Plast, which stands for Printable pyroelectrical and piezoelectrical large area sensor technology, is a consortium that comprises companies and institutes from industry and research with the goal of mass-producing pressure and temperature sensors that can be cheaply printed onto plastic film and flexibly affixed to a wide range of everyday objects.

The sensor consists of pyroelectrical and piezoelectrical polymers which can now be processed in high volumes, by screen printing for example. The sensor is combined with an organic transistor that amplifies the sensor signal, which is strongest closest to the finger. Notably, the transistor itself can also be printed. The production of polymer sensors still poses a number of challenges: to produce printable transistors, the insulation materials have to be very thin. The experts at the Fraunhofer ISC have, however, succeeded in producing an insulator that is only 100 nm thick, and the first sensors have already been printed onto film. Currently the researchers are working on optimised transistors that can amplify rapid changes in temperature and pressure.

More information:


09 March 2010

Flying Pixel-Copters 3D Display

This may be the year that 3D television sets flood the market, but some engineers have turned to aircraft in search of a viewing experience that is still more immersive. Two teams at the Massachusetts Institute of Technology are working on a unique 3D display dubbed Flyfire, in which a flock of tiny aircraft carrying multicoloured LEDs hover in front of the viewer to form an image. As pixels that can move through space, the free-flying LEDs could form a shape-shifting 3D display.

As well as displaying moving images like a normal screen, the pixels could change their position to add real depth. It's a 3D display with a dual aspect: it can show an image like a traditional display, but those pixels can then move and transform into another shape. The ultimate goal is a Flyfire display containing 1000 or more much smaller flying pixels. Within five years, the researchers plan to showcase their ongoing progress in an exhibition space.
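
Conceptually, driving such a display means assigning every image pixel to a drone as a hover target plus a colour. A toy sketch of that mapping (all names invented, not from the MIT project):

```python
def image_to_drone_targets(image, spacing=0.5):
    """Map a 2D grid of pixel colours to hover targets for flying pixels.

    Returns one (x, y, z, colour) tuple per drone. Here z is fixed at 0,
    forming a flat 'screen'; varying z per pixel would add real depth.
    """
    targets = []
    for row, colours in enumerate(image):
        for col, colour in enumerate(colours):
            targets.append((col * spacing, row * spacing, 0.0, colour))
    return targets

frame = [["red", "green"], ["blue", "white"]]
print(len(image_to_drone_targets(frame)))  # prints 4
```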

More information:


08 March 2010

Game Trains Soldiers in Virtual Iraq

A training tool being developed by a research team from the Arts and Technology (ATEC) program may soon make it easier for military service men and women to perform their missions in Iraq and Afghanistan. The project offers virtual villages in which soldiers bound for Iraq or Afghanistan can practice their training skills. In the past, and still in certain areas, some of that training has been done by building actual villages and hiring actors to replicate a particular culture. That approach has limitations: it is expensive, not everyone can attend, a physical structure is not easily changed, and actual actors must be hired. The ATEC team set out to re-create a realistic virtual environment instead.

The result is First Person Cultural Trainer (FPCT), a 3D interactive game that teaches soldiers the values and norms of Iraqi and Afghan cultures. FPCT is a serious game, which means that it is designed for purposes other than pure entertainment, in this case, cultural training. Presented annually by the National Training & Simulation Association (NTSA), the Modeling & Simulation (M&S) Awards recognize achievement in the M&S functional areas of training, analysis and acquisition, and in support of the overall M&S effort. The project is supported and sponsored by the U.S. Army Training and Doctrine Command (TRADOC) G-2 Intelligence Support Activity (TRISA).

More information: