31 March 2008

IEEE VR 2008 Workshop

A few weeks ago, a colleague of mine presented our latest results in urban modelling and navigation at the IEEE VR 2008 Workshop, held in Silver Baron Ballroom D, Reno, Nevada, 8th to 9th March 2008. The title of the paper is “Towards Rapid Generation and Visualisation of Large 3D Urban Landscapes for Mobile Device Navigation”. The paper presents a procedural 3D modelling solution for mobile devices based on scripting algorithms that allow for both the automatic and the semi-automatic creation of photorealistic-quality virtual urban content.

The combination of aerial images, GIS data, 2D ground maps and terrestrial photographs as input data, coupled with a user-friendly customised interface, permits the automatic and interactive generation of large-scale, accurate, geo-referenced and fully-textured 3D virtual city content that can be specially optimised for use with mobile devices and with navigational tasks in mind. A user-centred mobile virtual reality (VR) visualisation and interaction tool operating on PDAs for pedestrian navigation is also discussed. Via this engine, the import and display of various navigational file formats (2D and 3D) are supported, alongside a comprehensive, user-friendly front-end graphical user interface providing immersive virtual 3D navigation.
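
The extrusion step at the heart of such scripted city-modelling pipelines can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm, and all names here are hypothetical:

```python
# Minimal sketch of procedural building generation: extrude a 2D ground-map
# footprint (e.g. taken from GIS data) into a prism mesh. Real pipelines add
# roofs, facade textures from terrestrial photographs, and geo-referencing.

def extrude_footprint(footprint, height):
    """Turn a 2D polygon footprint into a simple 3D building mesh.

    footprint: list of (x, y) vertices, counter-clockwise.
    height: building height, e.g. estimated from aerial imagery.
    Returns (vertices, wall_quads) where quads index into vertices.
    """
    n = len(footprint)
    base = [(x, y, 0.0) for x, y in footprint]
    roof = [(x, y, height) for x, y in footprint]
    vertices = base + roof
    # One quad wall per footprint edge (roof polygon omitted for brevity).
    walls = [(i, (i + 1) % n, (i + 1) % n + n, i + n) for i in range(n)]
    return vertices, walls

verts, faces = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 15.0)
print(len(verts), len(faces))  # 8 vertices, 4 wall quads
```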

A draft version of the paper can be downloaded from here.

30 March 2008

Magnetic Levitation Haptic Interface

Unlike most other haptic interfaces, which rely on motors and mechanical linkages to provide a sense of touch or force feedback, the device, developed by a research professor in Carnegie Mellon's Robotics Institute, uses magnetic levitation and a single moving part to give users a highly realistic experience. Users can perceive textures, feel hard contacts and notice even slight changes in position while using an interface that responds rapidly to movements. Putting the instrument in the hands of other researchers is critical in a young, developing field such as haptic technology. Though haptic interfaces have uses in engineering design, entertainment, assembly, remote operation of robots, and in medical and dental training, their full potential has yet to be explored. That's particularly the case for magnetic levitation haptic interfaces because so few have been available for use by researchers.

The system eliminates the bulky links, cables and general mechanical complexity of other haptic devices on the market today in favour of a single lightweight moving part that floats on magnetic fields. At the heart of the maglev haptic interface is a bowl-shaped device called a flotor that is embedded with six coils of wire. Electric current flowing through the coils interacts with powerful permanent magnets underneath, causing the flotor to levitate. A control handle is attached to the flotor. A user moves the handle much like a computer mouse, but in three dimensions with six degrees of freedom: up/down, side to side, back/forth, yaw, pitch and roll. Optical sensors measure the position and orientation of the flotor, and this information is used to control the position and orientation of a virtual object on the computer display. As this virtual object encounters other virtual surfaces and objects, corresponding signals are transmitted to the flotor's electrical coils, resulting in haptic feedback to the user.
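
The sense-simulate-actuate loop described above can be sketched in a few lines. This is a generic haptic-rendering illustration, not CMU's actual control code, and the sensor/coil interfaces are hypothetical stand-ins for the real hardware:

```python
# Illustrative haptic rendering loop for a maglev device: a stiff virtual
# wall at z = 0 pushes back on the flotor with a spring force. Such loops
# typically run at around 1 kHz so that contacts feel crisp.

WALL_STIFFNESS = 5000.0  # N/m; an assumed virtual-wall stiffness

def wall_force(flotor_z):
    """Spring force for penetration below the virtual wall at z = 0 (metres)."""
    penetration = -flotor_z if flotor_z < 0 else 0.0
    return WALL_STIFFNESS * penetration  # pushes the handle back up

def haptic_step(read_pose, send_force):
    """One control-loop iteration: sense flotor, simulate, actuate coils."""
    x, y, z = read_pose()      # optical sensors: flotor position
    fz = wall_force(z)         # contact response from the simulation
    send_force(0.0, 0.0, fz)   # coil currents realise this force

print(wall_force(-0.002))  # 2 mm of penetration -> 10 N restoring force
```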

More information:


26 March 2008

Lighting Using Indirect Light

In the ever more complex world of computer games, developers are constantly looking for new ways to make the playing experience more life-like. One problem that had remained unsolved was how to quickly simulate the gradation of shadows caused by indirect light bouncing off objects – until a recent breakthrough by researchers at UCL Computer Science. They have developed a fast method that models the path of light as it bounces off surfaces. The result is that, on top of the broad distinction between light and dark regions that results from a simple model including only direct light, indirect light can be factored into simulated scenes.
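
The idea of factoring bounced light into a scene can be illustrated with a single-bounce calculation over surface patches. This is a textbook radiosity-style sketch under assumed patch data, and deliberately far simpler than the UCL/Geomerics method:

```python
# One-bounce indirect lighting: each surface patch receives its direct light
# plus a fraction of the light reflected from every other patch, weighted by
# the patch's reflectivity (albedo) and a form factor describing how much of
# one patch's light reaches another.

def add_one_bounce(direct, albedo, form_factors):
    """direct[i]: direct light on patch i; albedo[i]: reflectivity of i;
    form_factors[i][j]: fraction of patch j's reflected light reaching i."""
    n = len(direct)
    indirect = [
        sum(form_factors[i][j] * albedo[j] * direct[j]
            for j in range(n) if j != i)
        for i in range(n)
    ]
    return [direct[i] + indirect[i] for i in range(n)]

# Two patches: one brightly lit, one in shadow but facing the first.
result = add_one_bounce(direct=[1.0, 0.0],
                        albedo=[0.5, 0.5],
                        form_factors=[[0, 0.2], [0.2, 0]])
print(result)  # the shadowed patch picks up bounced light
```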

Graphics are far more realistic, with more variation in shade on an object, and hues of reflected light adding extra detail. Now, with funding from the government’s Technology Strategy Board (TSB), researchers will work with software company Geomerics to develop the system to work for moving, as well as static, scenes. The TSB’s Technology Programme has granted £525,000 over three years for the work. £195,000 of this goes to UCL Computer Science, where a new postdoctoral position will be established for two years. The third year of work will focus on commercialisation of the software.

More information:


21 March 2008

3D Camera With 12,616 Lenses

The camera you own has one main lens and produces a flat, two-dimensional photograph, whether you hold it in your hand or view it on your computer screen. On the other hand, a camera with two lenses (or two cameras placed apart from each other) can take more interesting 3-D photos. But what if your digital camera saw the world through thousands of tiny lenses, each a miniature camera unto itself? You'd get a 2-D photo, but you'd also get something potentially more valuable: an electronic "depth map" containing the distance from the camera to every object in the picture, a kind of super 3-D. Stanford electronics researchers are developing such a camera, built around their ‘multi-aperture image sensor’. They've shrunk the pixels on the sensor to 0.7 microns, several times smaller than pixels in standard digital cameras. They've grouped the pixels in arrays of 256 pixels each and they're preparing to place a tiny lens atop each array. In fact, if their prototype 3-megapixel chip had all its micro lenses in place, they would add up to 12,616 cameras.

Point such a camera at someone's face, and it would, in addition to taking a photo, precisely record the distances to the subject's eyes, nose, ears, chin, etc. One obvious potential use of the technology: facial recognition for security purposes. But there are a number of other possibilities for a depth-information camera: biological imaging, 3-D printing, creation of 3-D objects or people to inhabit virtual worlds, or 3-D modeling of buildings. The technology is expected to produce a photo in which almost everything, near or far, is in focus. But it would be possible to selectively defocus parts of the photo after the fact, using editing software on a computer. Knowing the exact distance to an object might give robots better spatial vision than humans and allow them to perform delicate tasks now beyond their abilities. The first benefit of the Stanford technology is straightforward: Smaller pixels mean more pixels can be crowded onto the chip. The second benefit involves chip architecture. The researchers are now working out the manufacturing details of fabricating the micro-optics onto a camera chip.
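
The depth-map principle behind such a sensor is the same triangulation that makes a two-lens stereo camera work, repeated across thousands of tiny apertures. A minimal sketch, with illustrative numbers rather than the Stanford sensor's actual parameters:

```python
# Depth from the disparity between two neighbouring apertures' views of the
# same point: the classic pinhole-stereo relation Z = f * B / d, where f is
# the focal length, B the baseline between apertures, and d the disparity.

def depth_from_disparity(focal_length_mm, baseline_mm, disparity_mm):
    """Distance to a point given its shift between two sub-images."""
    if disparity_mm <= 0:
        raise ValueError("zero disparity: point at infinity")
    return focal_length_mm * baseline_mm / disparity_mm

# A point imaged 0.01 mm apart by two apertures 0.5 mm apart, with f = 2 mm:
z = depth_from_disparity(2.0, 0.5, 0.01)
print(z)  # 100.0 mm away
```

The closer an object is, the larger its disparity, which is why nearby depth can be recovered precisely while distant objects converge toward zero disparity.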

More information:


12 March 2008

Grand Theft Auto 3

Grand Theft Auto 3 (GTA3) has not only come to represent an era in gaming, but its impact, style and of course the astronomical sales continue to resonate in an industry that is still maturing and finding its feet. Perhaps for the first time, GTA3 gave gamers a glimpse of a world that wasn’t just about a linear progression. Choice in the past had often seemed to be limited to the size of the gun or, if you were lucky, the order in which you tackled missions or worlds. But GTA3 was different; more open, less constrictive. Jumping in a car and just driving around the city wasn’t penalised, in fact, the world encouraged exploration, offering hidden bonuses and a greater chance of evading the ever-present police. Aiding and quite possibly abetting the notion that this was a game that wanted you to look at the nooks and crannies from its third person perspective was the attention to detail shown in the game's now legendary radio stations. A mixture of brilliantly scripted, satirical conversation, original songs and licensed material meant that cruising around in your favourite jalopy was sheer entertainment.

Of course exploration would have been no fun at all had the world in which GTA3 was set not been so well realised – a city of a size never before seen in console games was available to explore and unlock. This was a world that hinted at a life going on both around the player and despite him. The first two Grand Theft Auto games had attracted attention for some slightly questionable missions, ultra violence and an appealing sense of fun, but they were indisputably in a different class to the third in the series. Top-down and limited, the controversial moments in these games never truly garnered the kind of attention that they potentially could have because they largely flew under the radar. Of course, when GTA3 hit the shops this all changed in a heartbeat. The press seized on moments that were always designed to be provocative: the option to take a prostitute into your car, have (out of sight but heavily implied) sex with her and then mug her for the money she charged, for instance, or the encouragement to participate in senseless OTT violence.

More information:


05 March 2008

Butterfly Haptics

Butterfly Haptics are a new generation of magnetic levitation haptic interfaces that eliminate all the mechanical complexity in favour of a single moving part levitated by magnetic fields. The user's handle is rigidly attached to a lightweight ‘flotor’ that floats between stators with permanent magnets providing strong magnetic fields. The position and orientation of the flotor are tracked by optical sensors. As the user moves the handle through its motion range in 6 DOFs, position information is sent to the user's application. Conversely, forces and torques are sent to the handle from the user's application.

Maglev haptics provides the highest resolution and highest position and force bandwidths of any known method. There is an essentially direct electrodynamic connection between the computer and the hand, conveying gross force and torque effects to the proprioceptive sensors as well as subtle vibratory effects to the skin sensors. The high performance comes at the expense of a small workspace. For many haptic applications, scaling, indexing, and rate control methods can be used to effectively overcome this limitation.
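
The scaling and indexing tricks mentioned above can be sketched in one dimension. This is a generic illustration of the workspace-extension idea, with hypothetical names, not Butterfly Haptics' actual API:

```python
# Overcoming a small physical workspace: a gain maps flotor travel onto a
# larger virtual workspace, and "indexing" (clutching) re-centres the
# mapping, much like lifting and repositioning a mouse.

class ScaledWorkspace:
    def __init__(self, gain=8.0):
        self.gain = gain     # virtual mm per physical mm of flotor travel
        self.offset = 0.0    # accumulated offset from indexing

    def virtual_pos(self, flotor_pos):
        return self.offset + self.gain * flotor_pos

    def index(self, flotor_pos):
        """Clutch: freeze the current virtual position so the flotor can
        be returned to the centre of its small physical range."""
        self.offset = self.virtual_pos(flotor_pos)

ws = ScaledWorkspace(gain=8.0)
print(ws.virtual_pos(10.0))  # 10 mm of flotor travel covers 80 mm virtually
ws.index(10.0)               # clutch at the edge of the physical workspace
print(ws.virtual_pos(0.0))   # flotor re-centred, virtual position held at 80
```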

More information:


04 March 2008

Virtualization

Virtualization is the provision of an abstraction between a user and a physical resource in a way that preserves for the user the illusion that he or she could actually be interacting directly with the physical resource. While you could imagine virtualizing any physical resource, the focus of this issue of Queue is the computing machine virtualization that is the current rage. The user gets a high-fidelity copy of what appears to be a complete computer system, while he or she is actually dealing with an abstraction layer known as the VMM (virtual machine monitor) that runs on the real machine and maps resources on behalf of the user.
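
The mapping role of the VMM can be illustrated with a toy model. Real VMMs do this with hardware-assisted page tables and trap-and-emulate, not Python dictionaries, so treat this purely as a sketch of the abstraction:

```python
# Toy VMM resource mapping: each guest addresses its own "virtual" pages,
# and the monitor transparently backs them with physical pages, preserving
# the illusion that every guest has the whole machine to itself.

class VirtualMachineMonitor:
    def __init__(self, physical_pages):
        self.free = list(range(physical_pages))  # unallocated host pages
        self.maps = {}                           # (vm, vpage) -> host page

    def map_page(self, vm, vpage):
        """Back a guest page with a physical page on first touch."""
        key = (vm, vpage)
        if key not in self.maps:
            self.maps[key] = self.free.pop(0)
        return self.maps[key]

vmm = VirtualMachineMonitor(physical_pages=4)
# Two guests both use "their" page 0, unaware they share one machine:
print(vmm.map_page("vm_a", 0))  # 0
print(vmm.map_page("vm_b", 0))  # 1
print(vmm.map_page("vm_a", 0))  # 0 again: the mapping is stable
```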

Abstractions are useful, particularly if they are simple and efficient. The main benefit of any abstraction is the decoupling that it facilitates. With virtualization, the user no longer needs to care about the hardware and how it actually behaves. As long as the performance characteristics are met, the user can also be freed from caring about who operates the hardware, where the hardware is located, and whose logo (if any) is on it. The ultimate extension of this is the utility computing model provided by virtualized compute services. It's worth looking at the benefits of virtualization from two points of view: from the perspective of the user who is above the VMM and from the perspective of the infrastructure provider beneath it.

More information:


02 March 2008

SGI Virtual Heritage Workshop

Serious Games Institute (SGI) is organising a workshop on ‘Culture, Heritage & Tourism Technology’ on March 4th 2008. The focus will be on Virtual World, Wireless, Mobile, Video, Sound, GIS and Augmented Reality technology showcases. Leading experts in a range of digital media and communications technologies will be showcasing the use of advanced and innovative technologies to enrich the culture, heritage and tourism experience and make it globally accessible to the widest audience.

The workshop will also feature demonstrations and innovative interactive displays from industry leaders. It is a unique opportunity to get a comprehensive overview of the potential benefits of a holistic and integrated approach to technology solutions for culture, heritage and tourism. Managers, planners and developers of culture, heritage and tourism sites will meet with digital media specialists to explore how to get best value from integrating these technologies to create innovative and high value experiences.

More information: