25 March 2009

VS-GAMES2009 Article

Today, a paper titled ‘A Pervasive Augmented Reality Serious Game’ was presented at the 1st IEEE International Conference in Games and Virtual Worlds for Serious Applications at Coventry University. Pervasive games can also have an educational aspect. Playability in pervasive games rests on the player’s interaction with physical reality. Accessibility space is the key to the oscillation between embedded and tangible information, and augmented reality interfaces have the potential to enhance ubiquitous environments by allowing the necessary information to be visualized in a number of different ways depending on user needs. However, only a few applications have combined these technologies while trying to represent sensor information graphically in real time.

This paper presents a pervasive augmented reality serious game that uses a multimodal tracking interface to enhance entertainment. The main objective of the research is to design and implement generic pervasive interfaces that are user-friendly and can be used by a wide range of users, including people with disabilities. A pervasive AR racing game has been designed and implemented. The goal of the game is to start the car and drive around the track without colliding with either the walls or the objects in the gaming arena. Users can interact through a pinch glove, a Wiimote, tangible interfaces, or the I/O controls of the UMPC. Initial evaluation results showed that multimodal interaction can be beneficial in serious games.

A draft version of the paper can be downloaded from here and a video from here.

22 March 2009

Virtual Breath

The MRI machines and CAT scans, blood analyses and gene sequencing tools that are used to help diagnose our illnesses rely on advanced computing to extract knowledge out of molecular markers and reflected laser beams. A multidisciplinary team is working to develop a new tool to image, understand, and diagnose how air flows through the thousands of branching passageways of the lung, and how abnormalities can lead to illness. The approach to understanding airflow and particle transport in the human lungs is quite novel: computed tomography (CT) images are used to construct realistic human lung models, and computational fluid dynamics (CFD) models then simulate the airflow through the lung. The computer simulations combine 20 years of experience in modeling turbulence and CFD with cutting-edge medical imaging technologies to create a framework that will help doctors understand what causes asthma, how exposure to environmental pollutants alters the development of children’s lungs, and how the addition of helium to aerosol drugs can make pharmaceuticals more effective. This image shows a computerized representation of a subject-specific breathing human lung. The airflow in the CT-based airway tree is simulated on TACC supercomputers by the multiscale 3D-1D coupled CFD technique.
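
The ‘multiscale 3D-1D coupled’ idea pairs detailed 3D CFD in the large airways with reduced one-dimensional models of the deeper branching tree. As a rough illustration of the 1D side only, the sketch below lumps each airway generation into a Poiseuille flow resistance in a symmetric bifurcating tree; the dimensions, branching ratio, and perfect symmetry are illustrative assumptions, not the team's actual model.

```python
import math

def poiseuille_resistance(length_m, radius_m, mu=1.8e-5):
    """Airflow resistance of one cylindrical airway segment (Poiseuille law).
    mu is the dynamic viscosity of air in Pa*s."""
    return 8.0 * mu * length_m / (math.pi * radius_m ** 4)

def tree_resistance(generations, l0=0.12, r0=0.009, ratio=0.79):
    """Total resistance of a symmetric bifurcating airway tree.

    Each generation scales segment length and radius by `ratio` and doubles
    the number of parallel branches; identical parallel resistances combine
    as R / (number of branches).
    """
    total = 0.0
    length, radius, branches = l0, r0, 1
    for _ in range(generations):
        seg = poiseuille_resistance(length, radius)
        total += seg / branches          # this generation's parallel branches
        length *= ratio
        radius *= ratio
        branches *= 2
    return total

# A deeper tree accumulates more series resistance than a shallow one.
print(tree_resistance(1) < tree_resistance(16))  # True
```

Real airway trees are asymmetric and the flow is partly turbulent, which is precisely why the researchers couple such reduced models to full 3D CFD rather than relying on them alone.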

The system is not just a theoretical project. With a patent pending, and tools created by the group recently approved by the U.S. Food and Drug Administration for clinical use, their research on TACC’s supercomputers will influence how doctors explore pulmonary problems in the near future. Multi-scale modeling is invaluable not only for simulating pulmonary airflow, but also for other physiological systems, including blood flow throughout the body. The data derived from the researchers’ framework, because it is based on actual high-fidelity CT scans, is subject-specific, describing the lungs of a particular individual. This is critical for treatment purposes, and helps doctors explore diseases like asthma by comparing the airways of an asthmatic patient to those of a healthy subject. In addition to developing their novel computational framework, the researchers are applying their system to a number of pressing biomedical questions where a realistic airway model is needed to derive genuine insights. For instance, drug-makers have long believed that mixing helium with drug aerosols can increase the effectiveness of certain pharmaceuticals, but they were not sure why.

More information:


20 March 2009

Augmented Reality Under Water

The Fraunhofer Institute for Applied Information Technology FIT just presented an Augmented Reality system for use under water. A diver's mask with a special display lets the diver see his or her real submarine surroundings overlaid with computer-generated virtual scenes. Augmented Reality research has made enormous progress in the last few years, creating many exciting, albeit land-based, applications. Now, FIT researchers are the first to demonstrate an AR application designed for underwater use. Submerged use is a major challenge for technical systems. They must be waterproof and robust enough to withstand the additional pressure that comes with increasing diving depth. FIT researchers built a prototype AR system that meets these requirements. Its main component is a waterproof display in front of a diver's mask. The display lets the diver see his or her real underwater environment plus additional virtual objects. Thus, a run-of-the-mill indoor pool may be visually upgraded to a (virtual) coral reef with shoals, mussels and weeds.

An ultra-mobile PC (UMPC), which the diver carries in a backpack, detects underwater markers in the video stream from a camera on the top of the diver's mask. Based on the pictures from the camera and on the data from inertial and magnetic field tracking of the diver's orientation, the system generates visually correct representations of the virtual 3D scenes. As a demonstrator, Fraunhofer FIT created the world's first mobile underwater AR game. It puts the diver in the role of an underwater archaeologist searching for a treasure chest. The playground consists of six virtual 'islands' on the sea bed, each with its own rich marine wildlife. In one of the underwater locations the treasure chest can be found, but it takes a code number to open the lock. The elements of this number can be found in 'magical' mussels that hide in the other five locations. The user interface of this novel underwater game is highly intuitive and optimized for the swimming and diving player: it works without any manual interaction devices.
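
Combining absolute fixes from camera markers with smooth but drifting inertial data is a classic sensor-fusion problem. A very loose sketch of the idea is a complementary filter over a single yaw angle; the blending weight, update rates, and yaw-only state below are illustrative assumptions, not FIT's actual tracking algorithm.

```python
def complementary_filter(yaw_prev, gyro_rate, dt, marker_yaw=None, alpha=0.5):
    """Fuse a gyroscope yaw rate with an absolute yaw fix from a detected
    marker. Integrating the gyro alone is smooth but accumulates drift;
    whenever a marker is visible, its absolute fix pulls the estimate back."""
    yaw = yaw_prev + gyro_rate * dt            # dead-reckon from inertial data
    if marker_yaw is not None:                 # marker detected in this frame
        yaw = alpha * yaw + (1.0 - alpha) * marker_yaw
    return yaw

# A biased gyro (0.05 rad/s) corrected by a marker fix every 10th frame;
# the true heading is 0.0 rad throughout the run.
yaw = 0.0
for step in range(200):
    fix = 0.0 if step % 10 == 0 else None
    yaw = complementary_filter(yaw, 0.05, 0.01, fix)
print(abs(yaw) < 0.1)  # True: bounded, while pure integration would reach 0.1
```

The same principle extends to full 3D orientation, where production systems typically use quaternion-based filters rather than a single angle.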

More information:


18 March 2009

Brain on a Chip

How does the human brain run itself without any software? Find that out, say European researchers, and a whole new field of neural computing will open up. A prototype ‘brain on a chip’ is already working. The EU-supported FACETS project brings together scientists from 15 institutions in seven countries to do just that. Inspired by research in neuroscience, they are building a ‘neural’ computer that will work just like the brain but on a much smaller scale. The human brain is often likened to a computer, but it differs from everyday computers in three important ways: it consumes very little power, it works well even if components fail, and it seems to work without any software. A team within FACETS is completing an exhaustive study of brain cells – neurons – to find out exactly how they work, how they connect to each other and how the network can ‘learn’ to do new things. Meanwhile, another FACETS group is developing simplified mathematical models that will accurately describe the complex behaviour being uncovered. Although the neurons could be modelled in detail, such models would be far too complicated to implement in either software or hardware.

The goal is to use these models to build a ‘neural computer’ which emulates the brain. The first effort is a network of 300 neurons and half a million synapses on a single chip. The team used analogue electronics to represent the neurons and digital electronics to represent communications between them. It’s a unique combination. Since the neurons are so small, the system runs 100,000 times faster than the biological equivalent and 10 million times faster than a software simulation. The network is already being used by FACETS researchers to run experiments over the internet without needing to travel to Heidelberg. But this ‘stage 1’ network was designed before the results came in from the mapping and modelling work. Now the team is working on stage 2, a network of 200,000 neurons and 50 million synapses that will incorporate all the neuroscience discoveries made so far. To build it, the team is creating its network on a single 20 cm silicon disk, a ‘wafer’, of the type normally used to mass-produce chips before they are cut out of the wafer and packaged. This approach will make for a more compact device. So-called ‘wafer-scale integration’ has rarely been used for this purpose before, because such a large circuit will inevitably contain manufacturing flaws.
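
A minimal software analogue of the simplified neuron models mentioned above is the textbook leaky integrate-and-fire unit sketched below. It is a generic illustration with arbitrary parameters, vastly simpler than the FACETS hardware, but it shows the basic dynamic the analogue circuits emulate: a membrane voltage that leaks toward rest, integrates input, and spikes on crossing a threshold.

```python
def simulate_lif(input_current, steps=200, dt=1e-3,
                 tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage decays toward
    its resting value, integrates the input current, and on crossing the
    threshold emits a spike and resets. Returns the spike times (in steps)."""
    v, spikes = v_rest, []
    for t in range(steps):
        dv = (-(v - v_rest) + input_current) * (dt / tau)
        v += dv
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

strong = simulate_lif(2.0)   # drives the voltage past threshold repeatedly
weak = simulate_lif(0.5)     # settles at 0.5, below threshold: no spikes
print(len(strong) > 0, len(weak) == 0)  # True True
```

In the FACETS chip each such unit is an analogue circuit rather than a loop iteration, which is why the hardware runs orders of magnitude faster than any software simulation of the same network.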

More information:


14 March 2009

Robot Responds to Human Gestures

Imagine a day when you turn to your own personal robot, give it a task and then sit down and relax, confident that your robot is doing exactly what you wanted it to do. So far, that autonomous, do-it-all robot is the stuff of science fiction or cartoons like ‘The Jetsons’. But a Brown University-led robotics team has made an important advance: the group has demonstrated how a robot can follow nonverbal commands from a person in a variety of environments, indoors as well as outside, all without adjusting for lighting. They have created a novel system in which the robot follows the user at a precise distance. A video that shows the robot following gestures and verbal commands can be found in the Brown University release. The team also successfully instructed the robot to turn around (a 180-degree pivot) and to freeze when the student disappeared from view, essentially idling until the instructor reappeared and gave a nonverbal or verbal command. The Brown team started with a PackBot, a mechanized platform developed by iRobot that has been used widely by the U.S. military for bomb disposal, among other tasks. The researchers outfitted their robot with a commercial depth-imaging camera. They also equipped the robot with a laptop running novel computer programs that enable the machine to recognize human gestures, decipher them and respond to them.

The researchers made two key advances with their robot. The first involved what scientists call visual recognition. Applied to robots, it means helping them to orient themselves with respect to the objects in a room. Robots can see things, but recognition remains a challenge. The team overcame this obstacle by creating a computer program whereby the robot recognizes a human by extracting a silhouette, as if the person were a virtual cut-out. This allows the robot to home in on the human and receive commands without being distracted by other objects in the space. The second advance involved the depth-imaging camera. The team used a CSEM Swiss Ranger, which uses infrared light to detect objects and to establish distances between the camera and the target object, and, just as important, to measure the distance between the camera and any other objects in the area. The distinction is key, because it enabled the Brown robot to stay locked on the human commander, which was essential to maintaining a set distance while following the person. The result is a robot that doesn't require remote control or constant vigilance, a key step toward developing autonomous devices. The team hopes to add more nonverbal and verbal commands for the robot and to increase the three-foot working distance between the commander and the robot.
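
The silhouette-extraction idea can be sketched with a simple depth threshold around the tracked person's distance. This toy segmentation is an assumption for illustration only, not the Brown team's actual program, but it shows why a depth camera makes the cut-out easy where an ordinary camera would struggle.

```python
def extract_silhouette(depth, lock_depth, tolerance=0.3):
    """Return a binary mask of pixels whose depth lies within `tolerance`
    metres of the tracked person's distance.

    `depth` is a 2D list of per-pixel distances in metres. Background
    objects at other depths are rejected, which is what lets the robot
    stay locked on its commander regardless of lighting."""
    return [[1 if abs(d - lock_depth) <= tolerance else 0 for d in row]
            for row in depth]

# 4x4 toy depth frame: a person at about 1.0 m, a wall at 3.0 m.
frame = [
    [3.0, 1.0, 1.1, 3.0],
    [3.0, 0.9, 1.0, 3.0],
    [3.0, 1.0, 1.0, 3.0],
    [3.0, 3.0, 3.0, 3.0],
]
mask = extract_silhouette(frame, lock_depth=1.0)
print(mask[0])  # [0, 1, 1, 0]
```

The same mask also yields the commander's distance directly (the mean depth inside the silhouette), which is what a follow-at-fixed-range behaviour needs.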

More information:


11 March 2009

Serious Games for Health Problems

Gamers caught a very early glimpse of the future of serious games aimed at the health sector during the PlayMancer project’s demos at the latest Vienna Science Fair. The European PlayMancer project is working to improve the technology for serious games engines and tools for 3D networked gaming. It is possible to build actual games, serious games, around serious health-related problems like bulimia and chronic pain. Using gaming in this way is breaking new ground. These are very early days for this EU-funded project, but it is already demonstrating a flair for the sort of press relations it will need to develop this fledgling market for games geared towards more ‘serious’ goals than entertainment. Early technical prototypes developed alongside initial work by PlayMancer partners at the Technical University of Vienna were put through their paces by hundreds of visitors at the latest edition of the annual Vienna Science Fair. The project has released a YouTube video of the demos in action. The short film shows a cross-section of the community trying to manipulate virtual objects in a 3D variation of the old-school Pong game. However, it’s not just about developing the most fun and interactive games, or targeting particular groups.

One aim is to seriously improve the accessibility of games, making them playable by all kinds of people, including the disabled. This is where PlayMancer will need to be very innovative, because the concept of developing games that are more universally accessible is still in its infancy. But you could never accuse the partners in this project of lacking vision. You see this even in the project’s name, which is a nod to William Gibson’s 1984 futuristic classic Neuromancer, widely considered the founding work of cyberpunk literature. The team behind the project come from a range of backgrounds in academia and industry in Austria, Greece, Italy, Spain and Switzerland. Their goal is to develop the games from the bottom up, with health and therapy embedded into their make-up. For example, people suffering from chronic pain could be playing games designed to ease their symptoms while their therapist monitors progress online. The therapist could interrupt the game at any time to adjust the settings, or if there is an imminent health risk to the player. The market PlayMancer is aiming to enter when it ends late next year is underdeveloped. It falls under the umbrella of serious games, which, though maturing, especially in business and training applications, are still by no means an easy market to break into. PlayMancer is funded under the ICT strand of the Seventh Framework Programme for Research.

More information:



10 March 2009

Multi-Sensory Virtual Reality Headset

The first virtual reality headset that can stimulate all five senses was unveiled at a major science event in London on March 4th. What was it really like to live in Ancient Egypt? What did the streets there actually look, sound and smell like? For decades, Virtual Reality has held out the hope that, one day, we might be able to visit all kinds of places and periods as ‘virtual’ tourists. To date, though, Virtual Reality devices have not been able to stimulate all five senses simultaneously with a high degree of realism. But with funding from the Engineering and Physical Sciences Research Council (EPSRC), scientists from the Universities of York and Warwick believe they have been able to pinpoint the expertise necessary to make this possible, in a project called ‘Towards Real Virtuality’. ‘Real Virtuality’ is a term coined by the project team to highlight their aim of providing a ‘real’ experience in which all senses are stimulated in such a way that the user has a fully immersive perceptual experience, during which s/he cannot tell whether or not it is real.

Teams at York and Warwick now aim to link up with experts at the Universities of Bangor, Bradford and Brighton to develop the ‘Virtual Cocoon’ – a new Real Virtuality device that can stimulate all five senses much more realistically than any other current or prospective device. For the user the ‘Virtual Cocoon’ will consist of a headset incorporating specially developed electronics and computing capabilities. It could help unlock the full potential benefits of Real Virtuality in fields such as education, business and environmental protection. A key objective will be to optimise the way all five senses interact, as in real life. The team also aim to make the Virtual Cocoon much lighter, more comfortable and less expensive than existing devices, as a result of the improved computing and electronics they develop. There has been considerable public debate on health & safety as well as on ethical issues surrounding Real Virtuality, since this kind of technology fundamentally involves immersing users in virtual environments that separate them from the real world.

More information:


05 March 2009

Reconstruction of Ancient Epigonion

The ASTRA project, which stands for Ancient instruments Sound/Timbre Reconstruction Application, has revived an instrument that hasn’t been played or heard in centuries. Using the Enabling Grids for E-sciencE infrastructure for computing power, a team based in Salerno and Catania, Italy, has reconstructed the “epigonion,” a harp-like, stringed instrument used in ancient Greece. With data from numerous sources, including pictures on urns, fragments from excavations and written descriptions, the team has been able to model what the instrument would have looked and sounded like.

Their model has become sophisticated enough to be used by musicians of the Conservatories of Music of Salerno and Parma in concerts. The idea and the mathematical concepts behind this work are several decades old, the first attempts dating back to 1971. Now, with grid technology, these researchers have the computing power required to recreate an ancient instrument that would previously have been too expensive and too difficult to manufacture by hand. Using grid computing also means that the data used and discovered during the research is easily available to other researchers, such as archaeologists and historians.
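
Reconstructing a string instrument's timbre by computation is an instance of physical modeling synthesis. A classic, much simpler textbook illustration of the idea is the Karplus-Strong plucked-string algorithm sketched below; it is not ASTRA's actual, far more detailed model of the epigonion.

```python
import random

def karplus_strong(frequency_hz, duration_s, sample_rate=44100, decay=0.996):
    """Karplus-Strong plucked-string synthesis: a burst of noise circulates
    through a delay line whose length sets the pitch; averaging adjacent
    samples acts as a low-pass filter, damping the 'string' over time."""
    n = int(sample_rate / frequency_hz)        # delay-line length sets pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for _ in range(int(duration_s * sample_rate)):
        out.append(buf[0])
        # Averaged, slightly decayed sample is fed back into the delay line.
        buf = buf[1:] + [decay * 0.5 * (buf[0] + buf[1])]
    return out

samples = karplus_strong(220.0, 0.5)   # half a second of an A3 'string'
print(len(samples))  # 22050
```

Modeling an entire instrument body as ASTRA does multiplies this cost by many coupled strings and resonances, which is why the team needed grid computing rather than a single machine.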

More information: