31 October 2008

Undressing the Human Body

Imagine you are a police detective trying to identify a suspect wearing a trench coat, baggy pants and a baseball cap pulled low. Or imagine you are a fashion industry executive who wants to market virtual clothing that customers of all shapes and sizes can try online before they purchase. Perhaps you want to create the next generation of “Guitar Hero” in which the user, not some character, is pumping out the licks. The main obstacle to these and other pursuits is creating a realistic, 3D body shape — especially when the figure is clothed or obscured. Researchers have created a computer program that can accurately map the human body’s shape from digital images or video. This is an advance over current body scanning technology, which requires people to stand still without clothing in order to produce a 3D model of the body. With the new 3D body-shape model, the scientists can determine a person’s gender and calculate an individual’s waist size, chest size, height, weight and other features.

The potential applications are broad. Besides forensics and fashion, Black and Balan’s research could benefit the film industry. Currently, actors must wear tight-fitting suits covered with reflective markers to have their motion captured. The new approach could capture both the actors’ shape and motion, while doing away with the markers and suits. In sports medicine, doctors would be able to use accurate, computerized models of athletes’ bodies to better identify susceptibility to injury. In the gaming world, it could mean the next generation of interactive technology. Instead of acting through a character, a camera could track the user, create a 3D representation of that person’s body and insert the user into the video game. The researchers stress the technique is not invasive; it does not use X-rays, nor does it actually see through clothing. The software makes an intelligent guess about the person’s exact body shape.

More information:



29 October 2008

Maps You Can Feel

“Eyes on the future” is the mantra of ‘World Sight Day’, held this month to raise awareness of blindness and vision impairment. New technologies developed by European researchers live up to this vision by offering the visually impaired greater independence. Many of the most innovative systems have been created by a consortium of companies and research institutes working in the EU-funded ENABLED project. The project has led to 17 prototype devices and software platforms to help the visually impaired, two of which have been patented. Guide dogs, canes, Braille and screen readers that turn digital text into spoken audio all improve the lives of the blind or severely visually impaired, but none of these tools can make up for having a friend or relative accompany a blind person and assist them in daily life. A human helper, however, is not always available. Activities that the sighted take for granted, such as going for a walk in the park or trying out a new restaurant, become an odyssey for the visually impaired, particularly when they do not already know the route by heart. A guide dog can help them avoid dangers in the street, be it a curb or a lamppost, but it cannot show them a new route. People can be asked for directions, but following those directions is another matter entirely when you cannot read street signs or see landmarks. Such barriers have typically prevented the visually impaired from exploring the world around them on their own; with the new technologies, they can surmount some of them.

To achieve that, the project partners worked in two broad areas. On the one hand, they developed software applications with tactile, haptic and audio feedback devices to help visually impaired people feel and hear digital maps of where they want to go. On the other hand, they created new haptic and tactile devices to guide them when they are out in the street. One of the patented prototypes, called VITAL, allows users to access a tactile map of an area. Using a device akin to a computer mouse, they can move a cursor around the map while small pins create shapes under the palm of the hand. The device can produce the sensation of a square block to denote a building, or form different icons to depict different shops and services – an ‘H’ for a hospital, for example. Having obtained a ‘mental image’ of the map from the computer, users can then take the route information with them when they venture outside. For that purpose, the project partners used a commercially available navigation aid called the Trekker, which uses GPS to guide users as they walk around, much like a navigation system in a car. However, the Trekker gives only spoken directions, which can be disconcerting for blind people, who may not want to draw attention to themselves, and which can be hard to hear in noisy city environments. The ENABLED team therefore developed prototypes that provide directions through tactile and haptic feedback rather than via audio alone. One patented device developed by the project team, the VIFLEX, looks similar to a TV remote control with a movable plate at the front. The user rests a thumb on the plate, which tilts in one of eight directions to guide the user according to the directions given by the Trekker.
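The VIFLEX's eight tilt directions imply quantizing a continuous compass bearing into discrete cues. The article does not describe the device's firmware, but an illustrative sketch of that mapping might look like this:

```python
# Illustrative only: quantize a compass bearing (degrees, 0 = north,
# clockwise) into one of eight tilt directions, as a VIFLEX-like
# device might do with headings supplied by a GPS navigation aid.
DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def tilt_direction(bearing_deg: float) -> str:
    """Return the nearest of eight 45-degree sectors for a bearing."""
    sector = int((bearing_deg % 360) / 45.0 + 0.5) % 8
    return DIRECTIONS[sector]

print(tilt_direction(10))   # near north -> "N"
print(tilt_direction(95))   # roughly east -> "E"
```

The `+ 0.5` centres each sector on its compass point, so bearings up to 22.5 degrees either side of due east, for example, all produce the same eastward tilt cue.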

More information:


28 October 2008

AR Makes Commercial Headway

Media Power, a New York City–based firm that develops mobile communications applications, is part of a vanguard of organizations working to commercialize augmented-reality (AR) technology, which can be characterized as the timely overlay of useful virtual information onto the real world. AR incorporates three key features: virtual information that is tightly registered, or aligned, with the real world; the ability to deliver information and interactivity in real time; and seamless mixing of real-world and virtual information. Explanations of AR often invoke the virtual first-down marker seen as a yellow stripe in televised football games. The technical challenge of AR is to do something similar but more complex with the live video feed from a cell phone camera, and without the 10-second delay required to generate the virtual marker.
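Tight registration ultimately comes down to projecting a known 3D point into the camera image so the overlay lands on the right pixels. A minimal pinhole-camera sketch illustrates the core calculation (illustrative only; real AR systems must also estimate camera pose and correct lens distortion):

```python
def project(point, f, cx, cy):
    """Project a 3D point, given in camera coordinates, onto the image.

    f is the focal length in pixels; (cx, cy) is the principal point
    (the image centre). Assumes the point is in front of the camera
    (z > 0). Returns pixel coordinates (u, v).
    """
    x, y, z = point
    return (f * x / z + cx, f * y / z + cy)

# A virtual marker 2 m in front of the camera, 0.5 m to the right:
u, v = project((0.5, 0.0, 2.0), f=800.0, cx=320.0, cy=240.0)
print(u, v)  # 520.0 240.0
```

Doing this at video rate for every frame, while the phone's estimated pose changes, is what keeps the virtual overlay "pinned" to the real scene.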

Although AR has mostly lived in the lab, the recent emergence of highly capable mobile devices is fueling a surge in interest. In one early commercial example, a camera-based card game lets players look at cards through a camera and watch animated versions of the game characters on the cards fight one another. This capability rests on identifying real-world objects and estimating their locations in space. AR-like technology is also finding its way into industrial manufacturing. InterSense offers process-verification systems that use sensors and cameras to track the positions and motions of tools as workers do their jobs. Computers then compare the actual tool movements with ideal procedures to detect errors or confirm correct completion, information that is then presented graphically to the workers in real time. Media Power will also be introducing cell phone–enabled museum exhibit tours based on the same technology, as well as a means by which consumers can trigger delivery of targeted advertising by pointing camera phones at brand logos.
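The article does not say how InterSense's systems compare actual tool motion with ideal procedures; one simple illustrative scheme is to check each sampled tool position against a reference trajectory within a tolerance:

```python
import math

def path_ok(actual, reference, tol=0.01):
    """Return True if every sampled 3D tool position lies within `tol`
    metres of the corresponding reference position.

    Illustrative check only, not InterSense's algorithm; real systems
    would also handle timing differences and partial procedures.
    """
    if len(actual) != len(reference):
        return False
    return all(math.dist(a, r) <= tol for a, r in zip(actual, reference))

reference = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.0, 0.0)]
good_run = [(0.0, 0.001, 0.0), (0.1, 0.0, 0.002), (0.2, 0.0, 0.0)]
bad_run = [(0.0, 0.0, 0.0), (0.1, 0.05, 0.0), (0.2, 0.0, 0.0)]
print(path_ok(good_run, reference))  # True
print(path_ok(bad_run, reference))   # False
```

A pass/fail result per step is exactly the kind of information that can be fed back graphically to the worker in real time.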

More information:


25 October 2008

Environmental VEs Workshop

On Wednesday, 12 November, another workshop is taking place at the Serious Games Institute (SGI). This month’s focus is on environmental issues; the workshop is titled ‘Using virtual environments to support environmental issues’.

The use of virtual and smart spaces is significantly changing how we design and interact with our real environments. This workshop will explore some of the ways in which virtualising spaces through virtual environments and serious games can help to reshape the debate about the environment.

More information:


21 October 2008

Real Pilots And 'Virtual Flyers'

Stunt pilots have raced against computer-generated opponents for the first time — in a contest that combines the real and the ‘virtual’ at 250 miles per hour. Using technology developed, in part, by a University of Nottingham spin-out company, an air-race in the skies above Spain saw two stunt pilots battle it out with a ‘virtual’ plane which they watched on screens in their cockpits. The ‘virtual’ aircraft was piloted by a computer-gamer who never left the ground, but could likewise see the relative location of the real planes on his own computer screens as the trio swooped around each other during the ‘Sky Challenge’ race. The event could pave the way for massive online competitions, and also demonstrates the power and scope of the very latest in GPS and related systems.

The 'Sky Challenge' was organised by Air Sports Ltd, a New Zealand company which specialises in advanced sports TV technology. The technology that made 'Sky Challenge' possible was supplied by the Geospatial Research Centre (GRC), a joint venture between The University of Nottingham, the University of Canterbury in New Zealand and Canterbury Development Corporation. They were able to merge an electronically-generated world with the real world using a combination of satellite navigation technology (GPS, or global positioning system) and inertial navigation system technology (INS). The result of the Sky Challenge was a narrow victory for one of the real pilots — but he was only 1.5 seconds ahead of his virtual rival.
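The article does not detail how GRC combined GPS and INS, but a common textbook approach is a complementary blend: high-rate inertial dead reckoning, which drifts over time, is periodically corrected by absolute GPS fixes. A deliberately simplified 1-D sketch, with made-up drift numbers and noise-free GPS for clarity:

```python
def fuse(ins_deltas, gps_fixes, gain=0.5):
    """Dead-reckon from INS displacement increments, pulling the
    estimate toward each GPS fix (None = no fix that step) by a
    constant gain. Illustrative sketch, not GRC's actual algorithm."""
    est, track = 0.0, []
    for delta, fix in zip(ins_deltas, gps_fixes):
        est += delta                   # INS prediction (accumulates drift)
        if fix is not None:
            est += gain * (fix - est)  # correction toward absolute fix
        track.append(est)
    return track

# Simulated 1 m/s motion sampled at 10 Hz, a drifting INS, 1 Hz GPS.
dt, steps, drift = 0.1, 100, 0.05
true = [(i + 1) * dt for i in range(steps)]
ins_deltas = [(1.0 + drift * i * dt) * dt for i in range(steps)]
gps = [true[i] if (i + 1) % 10 == 0 else None for i in range(steps)]

fused = fuse(ins_deltas, gps)
ins_only = fuse(ins_deltas, [None] * steps)
print(abs(fused[-1] - true[-1]) < abs(ins_only[-1] - true[-1]))  # True
```

The INS fills in smooth position updates between fixes, which is what makes a 250 mph air race trackable: GPS alone updates far too slowly to place a fast-moving aircraft precisely at every video frame.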

More information:


13 October 2008

IEEE VS-GAMES '09 Conference

The first IEEE International Conference in Games and Virtual Worlds for Serious Applications (VS-GAMES 2009) will be held on 23-24 March 2009 at Coventry University, UK. It aims to meet the significant challenges facing the cross-disciplinary community that works around these serious application areas by bringing the community together to share case studies of practice, to present new frameworks, methodologies and theories, and to begin the process of developing shared cross-disciplinary outputs. To achieve this, the conference will pioneer new methods for bringing together and supporting communities of practice in themed areas beyond the duration of the conference itself, using the event as an ignition point for a wider aspiration to form and sustain a community of practice around the field. To support this, the team at the Serious Games Institute (SGI) will use innovative software called Intronetworks, which lets conference participants create their own profiles and identify like-minded colleagues with complementary skills.

The term 'Serious Games' covers a broad range of applications, from flash-based animations to fully immersive, code-driven 3D environments where users interface with large volumes of data through sophisticated, interactive digital interfaces. This shift towards immersive-world applications being used to support education, health and training activities marks the beginning of new challenges that offer real scope for collaborative, multi-disciplinary research solutions, and real opportunities for innovative development. We invite researchers, developers, practitioners and decision-makers working with or applying serious games in their communities to present papers in the two main streams of the conference: games and virtual worlds for serious applications. The conference will explore games and virtual worlds in relation to applications, methodologies, theories, frameworks, evaluation approaches and user studies.

More information:


07 October 2008

Pervasive Open Infrastructure

Pervasive computing provides a means of broadening and deepening the reach of information technology (IT) in society. It can be used to simplify interactions with Web sites, provide advanced location-specific services for people on the move, and support all aspects of citizens' life in the community. The Construct system collects best-of-breed techniques that have been successfully implemented for pervasive systems into a middleware platform, an intermediary between sensors and services. Construct provides a uniform framework for situation identification and context fusion, while providing transparent data dissemination and node management. Construct's basic architecture relies on services and sensors that access a distributed collection of nodes, which are responsible for aggregating data from the sensors. Construct regards all data sources as sensors: physical ones for temperature, pressure and location sit alongside virtual ones that access digital and Web resources.
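Construct's "everything is a sensor" abstraction, where physical and virtual data sources share one interface and nodes aggregate their readings, might be sketched as follows (illustrative Python, not Construct's actual API, which the article does not show):

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Any data source, physical or virtual, that can emit readings."""
    @abstractmethod
    def read(self) -> dict: ...

class TemperatureSensor(Sensor):
    """A physical sensor reporting an environmental quantity."""
    def read(self):
        return {"kind": "temperature", "value": 21.5}

class WebCalendarSensor(Sensor):
    """A virtual sensor: infers location from a Web diary resource."""
    def read(self):
        return {"kind": "location", "value": "meeting room (from diary)"}

class Node:
    """Aggregates data from the sensors attached to it."""
    def __init__(self, sensors):
        self.sensors = sensors
    def collect(self):
        return [s.read() for s in self.sensors]

node = Node([TemperatureSensor(), WebCalendarSensor()])
print(node.collect())
```

Because both sensor types satisfy the same interface, a node (and the services above it) need not care whether a reading came from hardware or from a Web resource.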

A sensor injects information into Construct's resource description framework (RDF) triple-store database. The triple store provides a set of common descriptions for concepts across domains, which means that different sensors can be used to detect the same information. Location may be sensed directly from RFID (radio frequency identification) or Ubisense, or inferred from diary or proximity information, yet all of it can be accessed by services through a common data model. To request information from the database, applications query the triple store using the standard SPARQL language. Construct does not, however, provide remote access to sensors: instead, sensor data is transmitted around the network using the Zeroconf protocol for node discovery and gossiping to exchange data. Gossiping means that nodes randomly synchronise their triple stores. This generates substantial background communications traffic, but increases the robustness of the system, since a node failure will not cause sensed data to be lost.
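Gossiping, as described, has nodes randomly pair up and merge their triple stores until every node holds all the sensed data. A minimal sketch of that convergence behaviour, with Python sets of (subject, predicate, object) tuples standing in for real RDF stores (not Construct's actual implementation):

```python
import random

def gossip_round(stores, rng):
    """Pick two distinct nodes at random and merge their triple sets,
    so both end up with the union (anti-entropy style gossip)."""
    a, b = rng.sample(range(len(stores)), 2)
    merged = stores[a] | stores[b]
    stores[a] = stores[b] = merged

# Three nodes, each starting with one locally sensed triple.
stores = [
    {("room1", "temperature", "21.5")},
    {("alice", "locatedIn", "room1")},
    {("door3", "state", "open")},
]
rng = random.Random(42)  # fixed seed so the run is reproducible
while len(set(map(frozenset, stores))) > 1:
    gossip_round(stores, rng)
print(all(len(s) == 3 for s in stores))  # True: every node has all triples
```

The replication is what buys the robustness the article mentions: once the stores have converged, losing any one node loses no sensed data, at the cost of the background traffic generated by the repeated exchanges.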

More information:


01 October 2008

Major Incident Training Workshop

Major incident training can be difficult and expensive using traditional methods. In this workshop we explore different approaches to solving these challenges through the use of game technology and virtual world applications. The workshop will be held on 8 October 2008 at the Serious Games Institute.

The aim is to discuss the issues of virtual training in the area of healthcare, training and disaster management. Colleagues from all over the UK are invited to join the hub area in major incident training at the Serious Games Institute, and help to develop a roadmap for future work in the field.

More information: