29 October 2009

VR Reduces Tobacco Addiction

Smokers who crushed computer-simulated cigarettes as part of a psychosocial treatment program in a virtual reality environment had significantly reduced nicotine dependence and higher rates of tobacco abstinence than smokers participating in the same program who grasped a computer-simulated ball, according to a study described in the current issue of CyberPsychology & Behavior. Researchers from the GRAP Occupational Psychology Clinic and the University of Quebec in Gatineau randomly assigned 91 smokers enrolled in a 12-week anti-smoking support program to one of two treatment groups. In a computer-generated virtual reality environment, one group simulated crushing virtual cigarettes while the other grasped virtual balls during four weekly sessions.

The findings demonstrate a statistically significant reduction in nicotine addiction among the smokers in the cigarette-crushing group versus those in the ball-grasping group. At week 12 of the program, the smoking abstinence rate was also significantly higher for the cigarette-crushing group (15%) than for the ball-grasping group (2%). Other notable findings include the following: smokers who crushed virtual cigarettes tended to stay in the treatment program longer (more than 8 weeks on average before drop-out) than those in the ball-grasping group (fewer than 6 weeks). At the 6-month follow-up, 39% of the cigarette crushers reported not smoking during the previous week, compared to 20% of the ball graspers.

More information:


28 October 2009

Immersive Bird’s-Eye View Exhibit

A new virtual environment developed by Texas A&M University researchers, which allows humans to see and hear some of the extreme ranges of vision and hearing that animals possess, could help reinvent the way museums teach about the natural world. Such immersive exhibits would give visitors, for example, the chance to experience birds’ ultraviolet vision or whales’ ultrasonic hearing. Participants at the international Siggraph conference had the opportunity to experience the program, titled “I’m Not There,” by donning 3D glasses and using a Wii controller to navigate through the exhibit.

According to the researchers, the Viz lab is about the synthesis of art and science, so they inserted artistic elements into these scenes to make them more realistic and interesting. The researchers take ultrasonic or infrasonic sound and ultraviolet or infrared light and scale them into ranges humans can sense. It is still not the same way animals experience the world, but it gives a sense of what they see and hear. A similar show is planned at Agnes Scott College in Atlanta this winter. Meanwhile, the researchers who developed the system are also working on an LCD version.
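One way to "scale down" animal senses, as described above, is to remap a frequency band the animal uses onto the human-perceivable band while preserving relative pitch. The band edges and the log-frequency mapping below are illustrative assumptions, not details from the exhibit.

```python
# Sketch: map an ultrasonic frequency band into the human audible
# range by shifting it in log-frequency (pitch) space, so intervals
# between tones are preserved. Band edges are assumed for illustration.

import math

ULTRASONIC_BAND = (20_000.0, 160_000.0)   # Hz, e.g. an assumed whale-click band
AUDIBLE_BAND = (200.0, 8_000.0)           # Hz, a comfortable human range

def scale_to_audible(freq_hz, src=ULTRASONIC_BAND, dst=AUDIBLE_BAND):
    """Map freq_hz from the src band to the dst band in log-frequency."""
    lo_s, hi_s = (math.log(f) for f in src)
    lo_d, hi_d = (math.log(f) for f in dst)
    t = (math.log(freq_hz) - lo_s) / (hi_s - lo_s)   # relative position in src band
    return math.exp(lo_d + t * (hi_d - lo_d))

# A 40 kHz tone, inaudible to humans, lands inside the audible band:
print(round(scale_to_audible(40_000.0), 1))
```

A linear shift would work too, but a logarithmic mapping better matches how humans perceive pitch.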

More information:


26 October 2009

Virtual World And Reality Meet

Virtual reality is entering a new era of touch-sensitive tiles and immersive animations. Researchers in Barcelona have created a unique space at the cutting edge of digital immersion. They built the eXperience Induction Machine as part of the PRESENCCIA project to understand how humans can exist in physical and virtual environments. It may look like a fun diversion, but the eXperience Induction Machine is the result of fundamental scientific research. One of the key challenges was to create a credible virtual environment; to do so, the researchers had to understand how our brains construct our vision of the world. They want to move beyond the simple interface of keyboard, screen and mouse. Moving to Austria, researchers are controlling a virtual reality system using brain-computer interfaces.

These types of systems could one day help people with disabilities, and researchers in Graz are developing similar tools. For the brain-computer interface, electrodes are attached to the head to measure brain currents. Once the sensors are in place, the user concentrates on the icon they want as it lights up. As the researchers explain, the person watches the icons flashing in a random sequence; the brain reacts to the icon the user wants, the computer recognises that response, and in this way external devices can be controlled. Each time the icon flashes the brain reacts, and the computer monitors that reaction and then carries out the command. When we interact with a virtual world on a human scale, on some level we believe it to be real. The researchers believe that these systems are the future of human-computer interaction, and an important step away from current technology.

More information:



20 October 2009

Radio Waves See Through Walls

University of Utah engineers showed that a wireless network of radio transmitters can track people moving behind solid walls. The system could help police, firefighters and others nab intruders, and rescue hostages, fire victims and elderly people who fall in their homes. It also might help retail marketing and border control. By showing the locations of people within a building during hostage situations, fires or other emergencies, radio tomography can help law enforcement and emergency responders to know where they should focus their attention. Their method uses radio tomographic imaging (RTI), which can see, locate and track moving people or objects in an area surrounded by inexpensive radio transceivers that send and receive signals. People don't need to wear radio-transmitting ID tags. The study involved placing a wireless network of 28 inexpensive radio transceivers - called nodes - around a square-shaped portion of the atrium and a similar part of the lawn. In the atrium, each side of the square was almost 14 feet long and had eight nodes spaced 2 feet apart. On the lawn, the square was about 21 feet on each side and nodes were 3 feet apart.

The transceivers were placed on 4-foot-tall stands made of plastic pipe so they would make measurements at human torso level. Radio signal strengths between all nodes were measured as a person walked in each area. Processed radio signal strength data were displayed on a computer screen, producing a bird's-eye-view, blob-like image of the person. A second study detailed a test of an improved method that allows ‘tracking through walls’. That study has been placed on arXiv.org, an online archive for preprints of scientific papers. The study details how variations in radio signal strength within a wireless network of 34 nodes allowed tracking of moving people behind a brick wall. The wireless system used in the experiments was not a Wi-Fi network like those that link home computers, printers and other devices. Researchers used a ZigBee network - the kind often used by wireless home thermostats and other home or factory automation systems.
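The core idea of RTI can be illustrated with a toy back-projection: a person standing on the line between two nodes attenuates that link's signal, and summing the attenuation of every link passing near each grid cell produces the blob-like image described above. The node layout loosely follows the atrium setup (eight nodes around a 14-foot square); the attenuation values and 1-foot link width are illustrative assumptions, not the study's actual reconstruction algorithm.

```python
# Minimal sketch of radio tomographic imaging (RTI): links shadowed by
# a person show extra attenuation; back-projecting link attenuation
# onto grid cells yields a bird's-eye blob at the person's location.

import itertools, math

# Eight nodes around a 14 x 14 ft square (corners and edge midpoints).
NODES = [(x, y) for x in (0, 7, 14) for y in (0, 7, 14) if (x, y) != (7, 7)]
PERSON = (7.0, 7.0)   # assumed true position, centre of the square

def seg_dist(p, a, b):
    """Distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

# Simulated measurement: a link loses 3 dB (assumed) if the person
# stands within 1 ft of its line of sight, otherwise nothing.
atten = {(a, b): (3.0 if seg_dist(PERSON, a, b) < 1.0 else 0.0)
         for a, b in itertools.combinations(NODES, 2)}

def image_value(cell):
    """Back-projection: sum the attenuation of all links near this cell."""
    return sum(v for (a, b), v in atten.items() if seg_dist(cell, a, b) < 1.0)

# The cell at the person's position accumulates attenuation from every
# shadowed link; an empty off-axis cell accumulates none.
print(image_value((7.0, 7.0)), image_value((4.0, 12.0)))
```

Real RTI systems solve a regularised linear inverse problem rather than a plain back-projection, but the shadowing intuition is the same.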

More information:



18 October 2009

Merging Video with Maps

A novel navigation system under development at Microsoft aims to prime users' visual memory with carefully chosen video clips of a route. Developed with researchers from the University of Konstanz in Germany, the software creates video using 360-degree panoramic images of the street that are strung together. Such images have already been gathered by several different mapping companies for many roads around the world. The navigation system, called Videomap, adjusts the speed of the video and the picture to highlight key areas along the route. Videomap also provides written directions and a map with a highlighted route. But unlike existing software, such as Google Maps or MapQuest, the system also allows users to watch a video of their drive. The video slows down to highlight turns or speeds up to minimize the total length of the clip. Memorable landmarks are also highlighted, though at present the researchers have to select them from the video manually.

Algorithms also automatically adjust the video to incorporate something the researchers call ‘turn anticipation’. Before a right-hand turn, for example, the video will slow down and focus on images on the right-hand side of the street. This smooths out the video and draws the driver's attention to the turn. Still images of the street at each turn are also embedded in the map and the written directions. The system was tested on 20 users, using images of streets in Austria. The participants were given driving directions using the standard map and text, as well as thumbnails for each intersection. Each participant was allotted five minutes to study the information. The drivers were then shown a video simulation of the drive and asked which way the car should turn at various points along the way. They were then asked to do the same thing for a different route, this time using Videomap directions.
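The slow-near-turns, fast-on-straights behaviour described above amounts to making playback speed a function of distance to the next turn. The speed curve below (a linear ramp with assumed limits of 0.5x and 4x over the last 100 metres) is a sketch of the idea, not Videomap's actual parameters.

```python
# Sketch of 'turn anticipation' pacing: the playback-rate multiplier
# ramps down as the next turn approaches, so the clip fast-forwards
# through straight stretches and dwells on turns. All limits assumed.

def playback_speed(dist_to_turn_m, slow=0.5, fast=4.0, ramp_m=100.0):
    """Return a playback-rate multiplier for the current video frame."""
    if dist_to_turn_m >= ramp_m:
        return fast                       # far from any turn: fast-forward
    # Linear ramp from `fast` down to `slow` over the last `ramp_m` metres.
    t = dist_to_turn_m / ramp_m
    return slow + t * (fast - slow)

print(playback_speed(500.0))  # long straight stretch
print(playback_speed(0.0))    # at the turn itself
```

In a full system the same distance signal could also drive the camera's pan toward the turn side, as the article describes for right-hand turns.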

More information:

11 October 2009

IEEE VS-GAMES '10 Conference

The 2nd IEEE International Conference in Games and Virtual Worlds for Serious Applications 2010 (VS-GAMES 2010) will be held on 25-26 March 2010 in Braga, Portugal. The use of virtual worlds and games for serious applications has emerged as a dominant force in training, education and simulation, due to the focus on creating compelling interactive environments at reduced costs by adopting commodity technologies commonly associated with the entertainment industries. This field is informed by theories, methods, applications and the state-of-the-art in a number of areas based on technological principles and innovation, advances in games design, pedagogic methodologies and the convergence of these fields.

While the serious games community has made it possible to bring together such diverse fields, further academic and industrial collaboration is needed in defining, formalising and applying the standards and methodologies for the future. VS-GAMES 2010 is the primary conference dedicated to serious games and virtual worlds, presenting state-of-the-art methods and technologies in the multidisciplinary fields outlined above. The aim of this international conference is to encourage an exchange of knowledge and experience in this cross-disciplinary area and its application to all aspects of the use of games and virtual worlds in serious applications. The best technical full papers will be published in a special issue of Elsevier's Computers & Graphics.

More information:


07 October 2009

Mobile AR Issues

The momentum building behind AR has been fuelled by the growing sophistication of cellphones. With the right software, devices like the iPhone can now overlay reviews of local services or navigation information onto scenes from the phone's camera. A good example of what is possible today is the set of AR features in an app released by Yelp, a website that collects shop and restaurant reviews for cities in North America, Ireland and the UK. The company's app for the iPhone 3GS, released last month, uses the phone's camera to display images of nearby businesses, and then lists links to relevant reviews. Yelp's app relies on the GPS receiver and compass that are built into the handset. Together, these sensors can identify the phone's location and orientation, allowing such apps to call up corresponding information. Other AR apps include virtual Post-it notes that you can leave in specific places, and a navigation system that marks out a route to your destination. Meanwhile, companies are working on games in which characters will appear to move around real environments. However, when the iPhone was tested in downtown San Francisco, the error reported by the GPS sensor was as great as 70 metres, and the compass leapt through 180 degrees as the phone moved past a metal sculpture. Yelp says the app's AR features are a ‘very early iteration’ that the company will improve as it gets feedback.
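The GPS-plus-compass overlay technique described above boils down to: compute the bearing from the phone to a point of interest, subtract the compass heading, and check whether the difference falls inside the camera's horizontal field of view. The coordinates and the 60-degree field of view below are assumptions for illustration, not values from Yelp's app.

```python
# Sketch of sensor-based AR placement: GPS gives the phone's position,
# the compass gives its heading, and a POI is drawn on screen if the
# bearing to it lies within the camera's (assumed) field of view.

import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def screen_offset(heading_deg, poi_bearing_deg, fov_deg=60.0):
    """Horizontal screen position of the POI in [-1, 1] (0 = centre),
    or None if it lies outside the camera's field of view."""
    diff = (poi_bearing_deg - heading_deg + 180.0) % 360.0 - 180.0
    if abs(diff) > fov_deg / 2.0:
        return None
    return diff / (fov_deg / 2.0)

# Phone in downtown San Francisco (assumed), facing due north;
# a POI directly north appears dead centre on screen.
b = bearing_deg(37.7749, -122.4194, 37.7849, -122.4194)
print(screen_offset(0.0, b))
```

This also makes the article's failure modes concrete: a 70-metre GPS error shifts the computed bearing to nearby POIs substantially, and a compass jump of 180 degrees flips every overlay off screen.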

Some researchers doubt whether high-accuracy GPS systems will ever be small or efficient enough to incorporate into mobile phones. Others suggest that to achieve the sub-metre positioning accuracy that really good AR demands, mobile devices will have to analyse scenes, not just record images. One way to achieve this is to combine laser scans of a city with conventional images to create a three-dimensional computer map. Each building in the map would be represented by a block of the right size and shape, with the camera image of the real building mapped onto it. The phone could use GPS and compass data to get a rough location and orientation reading, then compare the camera image with the model to get a more precise location fix. Various interested companies are building 3D city maps, including Google and Microsoft, but it is doubtful that such maps will achieve truly global coverage. The models will also inevitably lag behind reality, as buildings are knocked down or new ones appear. Such shortcomings have inspired other researchers to consider a ‘crowd-sourced’ solution to speed up data collection. In this approach, software would pull photographs of a location from the internet and stitch the pictures together to create a 3D image of that place. Such images could also have GPS information attached, and even though the coordinates might be slightly inaccurate, combining many photographs of the same place would fine-tune the location information embedded in the resulting composite image.
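The last point, that combining many slightly inaccurate GPS tags fine-tunes the location, is a standard averaging argument: independent errors shrink roughly with the square root of the number of photographs. The landmark coordinates and noise level below are synthetic, chosen only to illustrate the effect.

```python
# Sketch of the crowd-sourcing intuition: each photo's GPS tag is noisy,
# but averaging many tags of the same landmark converges on a much
# better position estimate. All values here are synthetic.

import random
random.seed(1)

TRUE = (37.8024, -122.4058)   # assumed true landmark position (lat, lon)

# 200 photo tags, each with ~50 m of GPS noise (~0.0005 deg, assumed).
tags = [(TRUE[0] + random.gauss(0.0, 0.0005),
         TRUE[1] + random.gauss(0.0, 0.0005)) for _ in range(200)]

# Fused estimate: the mean of all tags.
est = (sum(t[0] for t in tags) / len(tags),
       sum(t[1] for t in tags) / len(tags))

# Error of the averaged estimate, in degrees; it shrinks roughly as
# 1/sqrt(N) compared with a single tag's error.
err_avg = math_err = ((est[0] - TRUE[0]) ** 2 + (est[1] - TRUE[1]) ** 2) ** 0.5
print(round(err_avg, 6))
```

Real systems like the crowd-sourced 3D reconstruction described above would additionally weight tags by their reported accuracy and reject outliers, but the averaging principle is the same.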

More information:


06 October 2009

Layar Mobile AR Browser

Layar is a free application for mobile phones which shows what is around the user by displaying real-time digital information on top of reality through the camera of the mobile phone. Layar is a global application, available for the T-Mobile G1, HTC Magic and other Android phones in all Android Markets. It also comes pre-installed on the Samsung Galaxy in the Netherlands. Layar is used by holding the phone in front of you like a camera, and information is displayed on top of the camera view. For each point of interest displayed on the screen, information is shown at the bottom of the screen.

On top of the camera image (displaying reality) Layar adds content layers. Layers are the equivalent of web pages in normal browsers; just as there are thousands of websites, there will be thousands of layers. One can easily switch between layers by selecting another via the menu button, pressing the logobar or swiping a finger across the screen. Layar combines GPS, camera and compass to identify the surroundings and overlay information on screen in real time.

More information: