29 December 2007

Workshop on Virtual Museums Article

Last month, I presented an article titled ‘A Mobile Framework for Tourist Guides’ at the Workshop on Virtual Museums, which was held in conjunction with the VAST 2007 conference. The article presents how the LOCUS multimodal mobile framework can be used for tourist guides in any open-air heritage exhibition. The main objective of the multimodal heritage system is to provide advanced location-based services (LBS) to mobile users, delivered through a web-browser interface. The mobile system allows tourists to switch between three different presentation guides: map, virtual reality and augmented reality. Localisation of the visitors is established based on position and orientation sensors integrated on lightweight handheld devices. To illustrate some of the capabilities of the mobile guide, two case studies were presented: one for the Swiss National Park and one developed for City University.

Using the City University mobile guide, pedestrians can navigate intuitively within the real environment using both position and orientation information in a mobile virtual environment. Additional functionality, such as dynamically switching the camera viewpoint from the pedestrian view to a bird's-eye view, can be accessed from the menu buttons. Another important aspect of the guide is that the digital compass can also be used as a virtual pointer to provide useful information about the surroundings, answering questions such as ‘what is the name of that building?’ or ‘how far is it from me?’. Routing tools have been developed to provide advanced navigational assistance to mobile users based upon the experience of previous users, and so may suggest different routes depending on the journey to be taken.
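As a rough illustration of how such a compass-based pointer query can work, the sketch below (illustrative Python, not the actual LOCUS code; the coordinates, building name and the 10-degree pointing tolerance are all assumptions) computes the bearing and distance from a pedestrian to a nearby building and checks whether the handheld's compass heading is pointed at it:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from observer to target, degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def distance_m(lat1, lon1, lat2, lon2, r=6371000.0):
    """Great-circle (haversine) distance in metres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_pointed_at(heading_deg, lat1, lon1, lat2, lon2, tolerance_deg=10.0):
    """True if the compass heading points (within tolerance) at the target."""
    diff = abs((bearing_deg(lat1, lon1, lat2, lon2) - heading_deg + 180) % 360 - 180)
    return diff <= tolerance_deg
```

Combining the GPS fix with the digital compass heading in this way is what lets the guide answer ‘what is that building?’ without the user touching the screen.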

A draft version of the paper can be downloaded from here.

28 December 2007

31-inch OLED Display

South Korean display maker Samsung SDI announced on Thursday that it had developed a 31-inch ultra-thin organic screen, raising the stakes in an accelerating worldwide race for organic displays. Samsung SDI's announcement comes after Sony Corp, which has showcased a 27-inch prototype OLED TV in the past, began sales of 11-inch OLED TVs in November. Flat-screen makers are increasingly looking at active-matrix organic light-emitting diode (AM-OLED) technology as a growth driver because such panels produce brighter images and use less power. Samsung SDI declined to say when the company's 31-inch OLED screen would be mass-produced.

Samsung SDI is also planning to mass-produce 14-inch screens in 2008. Samsung SDI said its 31-inch module is only 4.3 mm thick, about one-tenth the thickness of a typical liquid crystal display panel, and consumes less than half the electricity needed for a 32-inch LCD screen. Samsung SDI also said the lifespan of its display was 35,000 hours, the best performance among existing AM-OLEDs. While small AM-OLED displays are already in use on premium mobile phones and media players, large-sized models are particularly difficult and costly to make. Shipments of Sony's 11-inch OLED TVs have been limited to 2,000 units per month.

More information:



27 December 2007


The Russian Space Agency (Roskosmos) announced the successful launch of three new GLONASS-M satellites on Friday 26th October. The satellites were fired into orbit atop a Proton booster from Baikonur. This launch represents the first tangible development from the government’s 9.88 billion rubles (£190 million) investment in the system in 2007 and puts the Russians on track to have the constellation fully operational again by 2009. The Global Navigation Satellite System (GLONASS) is based on a constellation of active satellites which continuously transmit coded signals in two frequency bands; these can be received by users anywhere on the Earth's surface to determine their position and velocity in real time from ranging measurements. The system is a counterpart to the United States Global Positioning System (GPS), and both systems share the same principles of data transmission and positioning. GLONASS is managed for the Russian Federation Government by the Russian Space Forces, and the system is operated by the Coordination Scientific Information Center (KNITs) of the Ministry of Defense of the Russian Federation.

The operational space segment of GLONASS consists of 21 satellites in 3 orbital planes, with 3 on-orbit spares. The three orbital planes are separated by 120 degrees, and the satellites within each plane by 45 degrees. Each satellite operates in a circular 19,100 km orbit at an inclination of 64.8 degrees and completes an orbit in approximately 11 hours 15 minutes. The ground control segment of GLONASS is located entirely within former Soviet Union territory. The Ground Control Center and Time Standards facility is located in Moscow, and the telemetry and tracking stations are in St. Petersburg, Ternopol, Eniseisk and Komsomolsk-na-Amure. The first GLONASS satellites were launched into orbit in 1982. Two Etalon geodetic satellites were also flown in the 19,100 km GLONASS orbit to fully characterise the gravitational field at the planned altitude and inclination. The original plans called for a complete operational system by 1991, but the deployment of the full constellation was not completed until late 1995 / early 1996. GLONASS was officially declared operational on September 24, 1993 by a decree of the President of the Russian Federation.
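The quoted orbital period follows directly from the altitude via Kepler's third law. A quick sanity check in Python (the constants are standard textbook values, not from the article):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3            # mean Earth radius, m

def orbital_period_s(altitude_m):
    """Period of a circular orbit at the given altitude: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_m            # semi-major axis of the circular orbit
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH)

period = orbital_period_s(19100e3)
hours, minutes = divmod(period / 60, 60)
# comes out near 11 h 15 min, consistent with the figure quoted above
```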

More information:




13 December 2007

3D Camera

Imagine playing a flight-simulation video game that lets you guide the aircraft with your hands alone. Or think about sparring with a virtual boxing opponent by doing nothing but standing up and throwing punches in the air. A company called 3DV Systems has produced a 3D camera, the ZCam, that plugs directly into a PC and is designed to let gamers' hands be the only controllers they need. The ZCam works by emitting short infrared pulses and then measuring the reflections off objects. Sophisticated software algorithms interpret those reflections in such a way that the system can judge the distance of various objects, distinguish between them and, say, discern someone's hands. Because it relies strictly on the reflection of light from the camera, it doesn't need ambient light to work, allowing the ZCam to function in a dark room or against any kind of background, bright, dark or otherwise.
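The underlying time-of-flight principle is simple: the pulse travels out to the object and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal illustration (not 3DV's actual algorithm, which works per-pixel in hardware):

```python
C = 299_792_458.0  # speed of light, m/s

def distance_from_round_trip(t_seconds):
    """One-way distance to the reflecting object from the pulse's round-trip time."""
    return C * t_seconds / 2.0

# a reflection arriving 20 nanoseconds after the pulse left is about 3 m away
d = distance_from_round_trip(20e-9)
```

The nanosecond timescales involved are why this needs dedicated sensing hardware rather than an ordinary webcam.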

The software can key in on a gamer's hands, and even distinguish between his or her fingers, and can run various applications based on what that person does with their head, hands, fingers or torso. The technology has applications beyond video games as well, particularly because it can, to some extent, be applied autonomously to existing software. But while 3DV Systems has clients in many different fields, including the military, its focus for now is on video games and how the ZCam technology could make a dent in the traditional interface market. The company doesn't intend to put the ZCam itself on the market as a consumer product. Rather, it intends to license the technology to others, potentially a console maker such as Microsoft, Sony or Nintendo, or PC game developers.

More information:



09 December 2007

The Future of Virtual Surgeries

Video games that simulate the experiences of combat, space travel and car theft have achieved a startling level of fluidity and detail in recent years to create increasingly realistic virtual worlds. When it comes to medicine, however, the graphics that doctors and surgeons have to work with are closer to the grainy, cartoonish images of the Atari generation than they are to the video games Assassin's Creed or Grand Theft Auto. The computing power required to render virtually realistic organs and soft tissue is still unavailable to most physicians (except for a handful with access to supercomputers), but it's coming soon.

Within the next five years, medical professionals will be able to scan patients prior to procedures and create three-dimensional virtual images of their bodies, which they can store in computers and use for practice before performing the real surgeries. Tissue, muscle and skin are elastic and behave like a spring, and their characteristics can be expressed using classical mathematical theory. To develop virtual models of patients, physicians must create geometric representations of their tissue and organs using either magnetic resonance imaging (MRI) or computed tomography (CT). The information that these scanners currently provide, however, is in the form of numbers representing shades of gray, which are insufficient for creating accurate, real-color, three-dimensional renderings.
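The spring analogy can be made concrete with a toy damped-spring model. This is nothing like a full soft-tissue solver; the constants below are arbitrary, chosen only to show how a displaced point of elastic tissue relaxes back toward rest:

```python
def simulate_spring(x0, v0, k, c, m, dt, steps):
    """Damped spring obeying Hooke's law, m*a = -k*x - c*v, integrated
    with semi-implicit Euler. Returns displacement after `steps` time steps."""
    x, v = x0, v0
    for _ in range(steps):
        a = (-k * x - c * v) / m   # acceleration from spring and damping forces
        v += a * dt                # update velocity first (semi-implicit)
        x += v * dt                # then position, using the new velocity
    return x

# a point of tissue displaced 1 cm relaxes back toward rest over 5 seconds
final = simulate_spring(x0=0.01, v0=0.0, k=50.0, c=2.0, m=0.1, dt=0.001, steps=5000)
```

Real surgical simulators solve thousands of such coupled elements per frame, which is exactly why the required computing power is still out of reach for most physicians.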

More information:


04 December 2007

Handheld Mixed Reality

A few months ago, I ported MRGIS to the VAIO UX handheld PC, an Ultra Mobile PC (UMPC). As in the laptop version, three visualisation domains are supported, virtual reality, augmented reality and mixed reality, in order to get the best possible representation for visual exploration. In each domain, four different types of geovisualisation and navigation aids can be superimposed, such as geo-referenced 3D maps, 2D digital maps, spatial 3D sound and 3D/2D textual annotations.

In the above example, a three-dimensional map of the City University campus is superimposed onto a marker card using the augmented reality domain. Interactions can be performed using the mouse or keyboard of the UMPC, the MRGIS user-centred graphical user interface, or naturally, by manipulating the card itself. However, the greatest advantage of the handheld mixed reality system is that it can be used in outdoor environments much more effectively than third-generation mobile phones and personal digital assistants (PDAs).

30 November 2007

VAST 2007 Article

I have just returned from the 8th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST '07), which was held in Brighton, UK, from 26 to 30 November 2007. The article, titled ‘Sense-Enabled Mixed Reality Museum Exhibitions’, presented a novel pervasive mixed reality framework, coupled to a sensor network capturing ambient noise, that can be used to create tangible cultural heritage exhibitions. Localisation of the visitors can be established in a hybrid manner based on machine vision and a wireless sensor network, allowing visitors to interact naturally or with the help of sensors.

In terms of interface design, a multimodal mixed reality visualisation domain allows for an audio-visual presentation of cultural heritage artifacts. The proposed framework demonstrates the importance and usefulness of sensor-based MR technology for intuitive cultural heritage exhibitions. This enables non-specialist developers, who may be engineers, artists or other non-programmers, to rapidly develop and test pervasive heritage applications in a friendly environment, without having an in-depth knowledge of the hardware specifics of the implementation.

A draft version of the paper can be downloaded from here.

25 November 2007

Golden Age of Videogames

BBC News has published an article in its technology section dedicated to the success of videogames. It is not just that the interactive experiences are getting ever more immersive, or that the industry is being taken ever more seriously: hardware and software sales are up significantly on last year, buoyed by a new generation of consoles and the work of developers who are beginning to exploit the tools they have at their disposal. The industry also looks to have found a way to blend the loose attractions of casual and social gaming with the hardcore experiences beloved by the seasoned player. Long-standing gamers are spending more money on games, and a whole new audience has been introduced to gaming for the first time.

This year will be remembered as the year the Wii took centre stage as the console of choice for families, the year PlayStation 3 finally showed its promise in real terms and the Xbox 360 hit its stride with the 5th anniversary of online service Xbox Live. It is also the year of the handheld with Nintendo DS and PSP continuing to sell explosively, the year PC gaming began its renaissance and developers got to grips with tools that allowed them to tell stories in new, dynamic ways. The industry has started showing signs of maturity and who knows what will be the future of gaming.

More information:


20 November 2007

RCUK Business Plan Competition

On 19-20 November, I attended a two-day training workshop held at the University of Warwick, Coventry. The workshop was organized by EPSRC as part of the ‘RCUK Business Plan Competition’. The competition tries to help researchers and academics turn great research into great business. The aim of the workshop was to train participants in how to write an effective business proposal, drawing on the expertise of Qi3. Qi3 provided us with a series of presentations and exercises to promote commercial expertise on the basics of technology-based corporations, high-tech start-ups and SMEs.

The RCUK Business Plan Competition provides researchers who have ideas with commercial potential with the skills, knowledge and support needed to develop a first-rate business plan. This is provided through expert trainers, coaches and mentors. It is worth mentioning that the competition is open to researchers based in UK Higher Education Institutions (HEIs) or Public Sector Research Establishments (PSREs) that are eligible to hold Research Council grants, from across the whole spectrum of academic research within the remit of the seven Research Councils: from the arts and biosciences, through the environmental, physical and social sciences, to technology.

More information:

17 November 2007

SIGDOC 2007 Article

Last month, a paper I co-wrote with a colleague was presented at the 25th ACM International Conference on Design of Communication (SIGDOC '07) in El Paso, USA. The paper, titled ‘Design Experiences of Multimodal Mixed Reality Interfaces’, presents an overview of the most significant issues in designing mixed reality interfaces, including displays, tracking, interface design, interactivity and realism. Multimodal issues regarding visualization and interaction are integrated into a single interface. Three case studies in diverse areas, automotive (see figure below), archaeology and navigation, were presented, as well as the experiences gained and lessons learned.

The paper anticipates that the multimodal features allow participants to select the most appropriate medium for visualizing and interacting with virtual information. The capability of dynamically switching between cyber and MR worlds allows remote users to bring VR and AR into their own environment. In addition, participants can dynamically change interaction modes to interact with the virtual information in a user-centered mode as well as through a computer-centered approach. This might be very useful for elderly and disabled people who have difficulty performing particular operations or going to specific places.

A draft copy of the paper can be downloaded from here.

10 November 2007

3D Landmarks from Vacation Photos

A presentation in October at the International Conference on Computer Vision showed how photos from online sites such as Flickr can be used to create a virtual 3D model of landmarks, including Notre Dame Cathedral in Paris and the Statue of Liberty in New York City. Online photo-sharing Web sites such as Flickr and Google are popular because they offer a free, easy way to share photos. Flickr now holds more than 1 billion photos; a search for "Notre Dame Paris" finds more than 80,000 files. The study authors, experts in computer vision, believe this is the world's most diverse, and largely untapped, source of digital imagery. But the freely available photos do present a challenge: these are holiday snapshots and personal photos, not laboratory-quality research images. While some may be postcard-perfect representations of a setting, others may be dark, blurry or have people covering up most of the scene.

To make the 3D digital model, the researchers first download photos of a landmark. For instance, they might download the roughly 60,000 pictures on Flickr that are tagged with the words "Statue of Liberty." The computer finds photos that it will be able to use in the reconstruction and discards pictures that are of low quality or have obstructions. Photo Tourism, a tool developed at the University of Washington (UW), then calculates where each person was standing when he or she took the photo. By comparing two photos of the same object that were taken from slightly different perspectives, the software applies principles of computer vision to figure out the distance to each point. In tests, a computer took less than two hours to make a 3D reconstruction of St. Peter's Basilica in Rome, using 151 photos taken by 50 different photographers. A reconstruction of Notre Dame Cathedral used 206 images taken by 92 people. All the calculations and image sorting were performed automatically. Creating 3D reconstructions of individual buildings is a first step in a long-term effort to recreate an entire city using online photographs.
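The distance-from-two-viewpoints step rests, in its simplest form, on the standard rectified-stereo relation Z = fB/d. Photo Tourism itself solves a far more general structure-from-motion problem with unknown, uncalibrated cameras; the sketch below, with made-up numbers, only illustrates the basic geometric idea:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a scene point seen by two rectified cameras a known
    baseline apart: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity_px

# two photos taken 0.5 m apart with a 1000-pixel focal length;
# a facade feature that shifts 25 pixels between them lies 20 m away
z = depth_from_disparity(1000.0, 0.5, 25.0)
```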

More information:


07 November 2007

Stratford Unplugged

The historic birthplace of Shakespeare is embracing the latest Wi-Fi technology to help tourists find their way around the famous landmarks. Conventional guidebooks, by their nature, have often been superseded the minute they come off the press, but obsolete recommendations and out-of-date opening hours will now be a thing of the past. For £8 a day, visitors can hire a handheld electronic organiser, providing an interactive map which automatically points out the nearest literary must-sees as they wander round the town. The online virtual tour guide provides tourist information and special offers from local businesses through a personal digital assistant (PDA).

Through the implementation of the UK’s first wireless broadband project for tourists, visitors will have the chance to enjoy a new 21st century experience of the 16th century genius. The Stratford Unplugged initiative is the result of a collaboration between Coventry University, Staffordshire University, BT, Hewlett Packard and the Stratford Town Management Partnership. To enable the scheme, BT has installed a selection of Wi-Fi hotspots in businesses across the town (e.g. hotels, shops and tourist attractions), providing coverage through its BT Openzone service. PDAs are then hired from the tourist information office, giving internet access throughout the day.

More information:

05 November 2007

Digital Global Magnetic Map

The first global map of magnetic peculiarities, or anomalies, on Earth has been assembled by an international team of researchers. Magnetic anomalies are caused by differences in the magnetisation of the rocks in the Earth's crust. Many years of negotiation were required to obtain confidential data from governments and institutes. Scientists hope to use the map to learn more about the geological composition of our planet. The World Digital Magnetic Anomaly Map (WDMAM) is available through the Commission for the Geological Map of the World. The global map shows the variation in strength of the magnetic field after the Earth's dipole field has been removed; the remaining variations (a few hundred nanotesla, nT) are due to changes in the magnetic properties of the crustal rocks.

Hot colours (reds) indicate high values; cold colours (blues) indicate low or negative values. As well as revealing ore deposits, magnetic anomalies can also show areas of ground water and undersea weakness zones. The map is a useful tool for geologists and geophysicists, as well as a teaching resource. The information can be viewed as a flat, two-dimensional map or rendered in 3D on a virtual globe. The magnetic data has been gathered by Champ, a German and Russian-built satellite that has been in orbit since 2001. Now coming to the end of its life, it has charted the entire globe. With the publication of the first edition of the map, a second edition is under way. The upcoming Swarm magnetic field satellites, to be flown by the European Space Agency (Esa), will also help build new detail into the map.
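The ‘dipole removed’ processing can be sketched with an idealised centred-dipole model. Real anomaly maps subtract the full reference field model rather than this simple formula, and the 30,000 nT equatorial value is only a textbook approximation:

```python
import math

def dipole_field_nT(lat_deg, B0=30000.0):
    """Magnitude of an idealised centred-dipole field at the Earth's surface:
    B = B0 * sqrt(1 + 3*sin^2(latitude)), with B0 ~ 30,000 nT at the equator."""
    s = math.sin(math.radians(lat_deg))
    return B0 * math.sqrt(1.0 + 3.0 * s * s)

def anomaly_nT(measured_nT, lat_deg):
    """Crustal anomaly: total measured field minus the main dipole field."""
    return measured_nT - dipole_field_nT(lat_deg)
```

Subtracting a field of tens of thousands of nanotesla to expose signals of only a few hundred is why careful, consistent data from many surveys had to be negotiated and merged.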

More information:


31 October 2007

Wii Nunchuk

The Wii Nunchuk controller is a secondary controller that adds even more innovation to the next generation of gaming, and does it all with less physical movement. Certain games need the Nunchuk controller, used in conjunction with the standard Wii Remote, for additional control options. Contoured perfectly to fit a player's hand, the Nunchuk controller builds on the simplicity of the Wii Remote controller. The Nunchuk contains the same three-axis motion sensor found in the Wii Remote, but also includes an analog stick and two buttons to assist in character movement. Many games allow you to control your character's movement with the Nunchuk in your left hand, while your right hand is free to execute the action movements with the Wii Remote. For example, the Nunchuk is particularly useful for games like Wii Boxing: you can use the Nunchuk to punch with your weaker hand, while you use the Wii Remote to punch and jab with your dominant hand.

In first-person shooters, the Nunchuk controller carries the burden of movement, freeing you to aim and fire using a more natural motion with the Wii Remote. In a football game, you can make your quarterback elusive with the Nunchuk controller while you search for an open receiver to throw to using the Wii Remote. Serious gamers may even want to use two Nunchuk controllers to gain a fierce competitive edge. Because the Wii Remote and Nunchuk controllers are relatively independent of each other, players are free to hold them in whichever hand is most comfortable. Perfectly suitable for either right- or left-handed use, the Wii Nunchuk controller grants accessibility not often seen in previous game controllers. Also, the Nunchuk controller doesn't need its own power: it plugs into the Wii Remote controller when in use, so there's no need to worry about charging or replacing expensive batteries. Adding a Nunchuk to your Wii system will definitely help you open the doors to the next level of gaming and seriously step up performance. Just be careful not to knock out your significant other or bruise the dog while using one, or two, Nunchuk controllers.

More information:


28 October 2007

Rapid 3D Urban Modeling Tool

A few days ago, Sarnoff Corporation unveiled a new software solution that automatically builds accurate 3D site models of large urban environments in less than a day. MapIt!™ software utilizes aerial imagery and Light Detection and Ranging (LIDAR) data to generate a continuous large-area 3D site model. This information can be used to provide military units and intelligence analysts with critical site data for an urban area as large as 800 square kilometres in around four days, as opposed to the 40 to 60 days required by current urban modeling techniques.

The ability of the tool to rapidly build precise 3D models in days helps to increase the military’s situational awareness of an urban environment before boots hit the ground. In addition to its use by the military, MapIt!’s ability to combine the high-range resolution of LIDAR with the spatial resolution of aerial images makes it the perfect solution for users who need quick site models for activities including emergency response planning, wide area assessments and environmental and planning studies.

More information:


22 October 2007

Heritage Guides for Mobile Phones

An Italian-led research project is developing a service that allows visitors to use their camera-equipped 3G mobile telephones to get a personalised multimedia guide to archaeological sites and museums. A tour of a big outdoor cultural site can sometimes be a frustrating experience if objects are not easily located, identified or placed in historical context. In particular, the Agamemnon IST-funded project is working on an interactive multimedia system that provides relevant text, videos, speech and pictures with 3D reconstructions, to visitors' mobile telephones. Agamemnon tailors a visit path based on site visitors’ interests, cultural knowledge and time available. The on-screen itinerary constantly updates as the visitor moves around the site. The system's image-recognition function allows visitors to dial in via a data line, photograph objects they are interested in and receive information about them. Agamemnon also takes voice commands.
The system's software was developed from scratch, based on a Java Enterprise backbone with JavaBean components. They are currently testing the research prototype in pilot sites in Paestum (Italy) and Mycenae (Greece). Agamemnon works on visitors' personal telephones, so customers don't need to rent devices, such as CD or cassette players, or learn how to use them, and institutions don't have to invest in or maintain a stock of electronic devices. The system works over existing UMTS, GPRS and GSM networks, so institutions don't have to invest in wireless networks, such as WiFi. Traffic-sharing agreements between sites, museums and 3G mobile phone operators could bring in new revenue for cultural institutions, reducing strain on public finances, and also boost income for networks. In addition, it is estimated that the Agamemnon service could attract 5% more visitors per year to sites and museums.

More information:


18 October 2007

AR Interfaces Using Digital Content

Yesterday afternoon, I presented an invited poster titled ‘AR Interfaces Using Digital Content’ at an event organised by the London Technology Network (LTN) at the Royal College of Obstetricians and Gynaecologists, Regent's Park, London. The theme of the event was ‘Emerging technologies for interpreting and using digital content’, aimed at extracting meaning from text, image and audio. An overview of the poster I presented is shown below, illustrating how augmented reality interfaces can use digital content to help companies and governmental organisations build efficient applications and provide advanced services.

The objective of the event was to bring together participants from academia, government and industry through seminars (presentations), showcases (posters) and networking sessions (one-to-one meetings). Numerous academic posters presented state-of-the-art research on how digital content search can be automated and how language, images and music can be effectively integrated, and offered a view of which technologies companies are investing in and what their future outlook is. Finally, the event provided an overview of the latest challenges and new applications for Natural Language Processing, and of how to balance new device functionalities with creating a compelling user experience.

More information:


16 October 2007

5DT Data Glove 5 Ultra

The 5DT Data Glove 5 Ultra is the world's best-selling data glove and can be used for virtual and augmented reality applications. The unit provides a wealth of features in a very comfortable package and costs around £650. The 5DT Data Glove 5 Ultra is designed to satisfy the stringent requirements of modern motion capture and animation professionals in a wide range of applications, including serious gaming. It offers comfort, ease of use, a small form factor and multiple application drivers. The high data quality, low cross-correlation and high data rate make it ideal for realistic real-time animation.

In terms of operation, the 5DT Data Glove 5 Ultra measures finger flexure (1 sensor per finger) of the user's hand. The system interfaces with the computer via a USB cable. A Serial Port (RS 232 - platform independent) option is also available through the 5DT Data Glove Ultra Serial Interface Kit. It features 8-bit flexure resolution, extreme comfort, low drift and an open architecture. The 5DT Data Glove Ultra Wireless Kit interfaces with the computer via Bluetooth technology (up to 20m distance) for high speed connectivity for up to 8 hours on a single battery and comes in right- and left-handed models.

More information:


11 October 2007

Internet Map

It took two months and nearly 3 billion electronic probes for researchers to create a map of the Internet. The Internet census comes from the University of Southern California's Information Sciences Institute (ISI) in Marina del Rey, California. Over two months, ISI computers sent queries to about 2.8 billion numeric "Internet Protocol" (IP) addresses, which identify individual computers on the Internet. Replies came from about 187 million of the IP addresses, and the researchers used that data to map out where computers exist on the Internet. Printed at one dot per address on a typical printer, the resulting map was about 9 feet by 9 feet; its top had to be taped onto the 8-foot-high ceiling. A condensed version squeezes about 65,000 addresses into a dot, with brighter colors used to show ranges of numbers where a greater number of computers exist.

The figure above shows the researchers' map of the allocated address space. The layout follows Randall Munroe's hand-drawn map of allocated Internet address blocks from xkcd #195. The one-dimensional, 32-bit addresses were converted into two dimensions using a Hilbert curve. This curve keeps adjacent addresses physically near each other, and it is fractal, so one can zoom in or out to control detail. Understanding how addresses are used influences many aspects of the Internet. Routers are more efficient when they serve subnets with addresses sharing common prefixes. Worms explore the address space at random. Individuals use more addresses as they use the net in new ways, from more computers to mobile telephones and embedded devices.
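The Hilbert-curve conversion mentioned above can be reproduced with the standard index-to-coordinates algorithm; the sketch below maps a one-dimensional index onto a 2^order x 2^order grid so that consecutive indices always land in adjacent cells:

```python
def hilbert_d2xy(order, d):
    """Map a 1-D index d onto (x, y) along a Hilbert curve covering a
    2^order x 2^order grid. Standard iterative algorithm."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                 # rotate the quadrant as needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

The full /32 address space would use order 16 per axis; the same locality-preserving property is what keeps whole allocated blocks visually contiguous on the printed map.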

More information:


10 October 2007

KTN Flagship Projects Open Day

Today I presented one of the major components of the LOCUS project, the AR interface, at an event called ‘Flagship Projects Open Day’ at the National Physical Laboratory. This full-day event presented three major research projects that have reached their conclusion: SPACE, LOCUS and AutoBAHN. A screenshot illustrating the digital compass used in the sensor-based AR solution is provided below.

Moreover, the event provided a valuable insight into the outcomes of the research, as well as a forum for discussions about future work and research directions. Speakers from the other project teams presented their findings, provided demonstrations and outlined future plans to continue the work.

A draft copy of the presentation can be downloaded from here.

02 October 2007

Library Hi Tech Article

A few months ago, Library Hi Tech published an article I co-authored at City University in a special issue on 3D visualisation. The paper presents how two interactive mobile interfaces were designed and implemented following a user-centred approach. The first interface makes use of 2D digital technology, such as different representations of 2D maps and textual information. To enhance the user experience during navigation, location-aware searches may be performed, providing information about the surroundings. The second interface makes use of virtual reality (VR) and computer graphics to present 3D maps and textual information. The VR maps are also interactive and contain hyperlinks positioned in 3D space which link to either web pages or other multimedia content.

Both interfaces allow users to visualise and interact with different levels of representation of urban maps, as shown in the map interface screenshots above. An initial evaluation was performed to test the usability of the 2D interface, and the limitations of the 2D technology were recorded. To overcome these limitations and explore the potential of alternative technologies, a mobile VR interface called Virtual Navigator was prototyped and a pilot evaluation was conducted. The findings suggest that, as more and more people use mobile technologies and advanced interfaces to access location-based services, prototype interfaces for personal digital assistants that provide solutions to urban navigation and wayfinding are extremely beneficial.

A draft version of the article can be downloaded from here.

29 September 2007

Open Source Physics Engines

Computer games make use of physics engines to represent a 2D/3D environment realistically and thus immerse the player more deeply in the environment. Some of the most popular free and open-source physics engines are listed below:

AGEIA is dedicated to delivering dynamic interactive realism to meet the ever-growing complexity of next-generation games. Its flagship solution, AGEIA PhysX, is the world's first dedicated physics engine and physics processor designed to bridge the gap between static virtual worlds and responsive, unscripted physical reality. AGEIA PhysX allows developers to build active, physics-based environments for a truly realistic entertainment experience.

Bullet is a 3D collision detection and rigid-body dynamics library for games and animation. It is open-source, multiplatform C++ under the zlib license and free for commercial use, including on PlayStation 3. It offers discrete and continuous collision detection, is integrated into Blender 3D, and supports the COLLADA 1.4 Physics tools. Supported collision shapes include spheres, boxes, cylinders, cones, convex hulls, and triangle meshes. Bullet implements GJK convex collision detection and swept collision tests, and also supports continuous collision detection and constraints.

Chrono::Engine is a multi-body dynamics engine aimed at providing high-performance simulation features in C++ projects. CHRONO::ENGINE can perform dynamic, kinematic and static analyses for virtual mechanisms built of parts such as actuators, motors, constraints between parts, springs, dampers, etc. Applications can simulate a wide range of mechanisms: cars, robots, trucks, trains, car suspensions, earth-moving machines, motor scrapers, backhoe loaders, human skeletons, aerospace devices, landing gears, robotic manipulators, engines, torque converters, prosthetic devices, artificial arms, miniaturised mechanisms for tape recorders and camcorders, etc.

DynaMechs is a cross-platform, object-oriented C++ class library that supports dynamic simulation of a large class of articulated mechanisms, from simple serial chains to tree-structured articulated mechanisms (including the simpler star topologies) to systems with closed loops. Code to compute approximate hydrodynamic forces is also available, enabling simulation of underwater robotic systems of this class, including submarines (ROVs, AUVs, etc.) with one or more robotic manipulators. Supported joint types include the standard revolute and prismatic classes, as well as efficient implementations (using Euler angles or quaternions) of ball joints.

DynaMo is a software library providing classes that take care of calculating the motions of geometries moving under the influence of forces, torques and impulses. In addition, the library can compute forces for you through the mechanism of constraints, which allow you to easily connect geometries to each other in various ways. A constraint only has to be specified once; the DynaMo library will continually enforce it from that moment on by applying the required reaction forces. The DynaMo library is released under the terms of the GNU Library General Public License.

The FastCar library was designed by people with great experience in multi-body dynamics. The authors previously built a fast, versatile and very elaborate multi-body dynamics package that was used for many applications, including games and vehicle simulation. However, experience showed that a complex general-purpose physics package and a versatile, efficient vehicle simulator for games are two very different things, so the decision was taken to build a separate small package for vehicle simulation with speed and simplicity in mind.

Newton is an integrated solution for real-time simulation of physics environments. The API provides scene management, collision detection and dynamic behaviour, yet it is small, fast, stable and easy to use. Newton implements a deterministic solver that is not based on traditional LCP or iterative methods, but possesses the stability of the former and the speed of the latter. This feature makes Newton a tool not only for games, but for any real-time physics simulation.

ODE is an open source, high performance library for simulating rigid body dynamics. It is fully featured, stable, mature and platform independent with an easy to use C/C++ API. It has advanced joint types and integrated collision detection with friction. ODE is useful for simulating vehicles, objects in virtual reality environments and virtual creatures. It is currently used in many computer games, 3D authoring tools and simulation tools.

OpenTissue provides generic algorithms and data structures for rapid development of interactive modelling and simulation. OpenTissue works as a foundation for research and student projects in physics-based animation at the Department of Computer Science, University of Copenhagen (commonly known as DIKU). OpenTissue is free for commercial use, open source under the ZLib License.

PAL (physics abstraction layer) provides a unified interface to a number of different physics engines. This enables the use of multiple physics engines within one application. It is not just a simple physics wrapper, but provides an extensible plug-in architecture for the physics system, as well as extended functionality for common simulation components. PAL does not restrict you to one particular physics engine. Finally, PAL has an extensive set of common features such as simulating different devices or loading physics configurations from XML or COLLADA files.

Physsim is a C++ rigid-body dynamics simulation library. It has been developed for two purposes: (a) to provide a stable, flexible platform for research into rigid body simulation and (b) to supply roboticists with state-of-the-art tools in robotic simulation. Rigid body simulators can be measured in three ways: speed, accuracy, and stability. Speed is important so that complex environments can be simulated in real-time. Accuracy implies that the simulator reflects the physical phenomena of the real world. Stability (or instability) is an artefact of the numerous approximations made in rigid body simulation. Rigid body simulators currently must balance these three factors.

Pulsk investigates novel methods for rigid-body simulation that integrate well with stacking situations. The work focuses on impulse-based simulation techniques with physical interactions such as collision, contact and friction in relatively complex scenes: large numbers of stacked objects, sliding objects, and highly dynamic scenes with non-convex bodies. Pulsk aims to be applicable to real-time applications such as games, so some small approximations in the algorithms are allowed.

SPE (Simple Physics Engine) is a lightweight but still powerful physics engine for games and virtual reality programs. SPE includes the following features: a uniform tri-mesh collision detection algorithm; collision data analysis; a stable solver; joints; breakable rigid bodies; highly parallel computation; and an easy-to-use interface.

Tokamak Game Physics SDK is a high-performance real-time physics library designed specially for games. It has a high-level, simple-to-use programming interface. Tokamak features a unique iterative method for solving constraints, which allows developers to trade off accuracy against speed and provides more predictable processor and memory usage. Currently, Tokamak provides collision detection for primitives (box, sphere, capsule), combinations of primitives, and arbitrary static triangle meshes, as well as convex-to-convex collision detection.

True Axis Physics SDK is a fast and solid real-world physics simulation system designed for demanding games and virtual interactive environments. The SDK aims to avoid common issues present in most physics and collision implementations and to give developers the control they need over the way objects behave. It features swept collision detection, allowing it to handle rapidly changing environments far more effectively than non-swept physics systems. True Axis seamlessly handles collisions between many high-velocity entities, such as fast vehicles or missiles, without letting them interpenetrate.
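All of the engines above share the same core loop: integrate velocities under applied forces, then resolve collisions and contacts. A toy illustration of that loop for a single bouncing particle, using semi-implicit Euler integration and a restitution-based ground response (a teaching sketch only, nothing like the constraint solvers these libraries actually implement):

```python
def step(state, dt=1/60, g=-9.81, restitution=0.5):
    """Advance one particle by one frame: gravity, then ground-plane contact at y=0."""
    y, vy = state
    vy += g * dt          # integrate velocity (semi-implicit Euler)
    y += vy * dt          # integrate position using the updated velocity
    if y < 0.0:           # collision with the ground plane
        y = 0.0
        vy = -vy * restitution  # bounce, losing energy per the coefficient of restitution
    return y, vy

state = (2.0, 0.0)        # dropped from 2 m, initially at rest
for _ in range(300):      # simulate five seconds at 60 Hz
    state = step(state)
print(round(state[0], 3))  # the particle has all but settled on the plane
```

Real engines replace the single `if` with broad- and narrow-phase collision detection and an iterative or LCP constraint solver, but the integrate-then-resolve structure is the same.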

26 September 2007

Halo III

Halo III (also known as Halo 3) launched in Europe at midnight on Tuesday (25 September 2007). It is the third chapter in the highly successful and critically acclaimed Halo franchise and one of the most anticipated and heavily marketed titles in video game history. The trilogy is an international award-winning action series that has grown into a global entertainment phenomenon, selling more than 14.5 million units worldwide, logging more than 650 million hours of multiplayer action on Xbox LIVE®, and spawning action figures, books, a graphic novel, apparel, an upcoming film adaptation, and more.

In the UK, more than 1,000 shops opened at midnight so gamers could get their hands on the title; many started queuing outside shops in the afternoon to make sure they got hold of a copy. The Xbox 360 game is Microsoft's key weapon in the console wars with Sony and Nintendo. Microsoft hopes day-one sales will top £70m ($140m), more than the opening takings of any movie in history. Microsoft needs Halo III to boost sales of the Xbox 360; despite investing billions of dollars in the Xbox project, it has yet to see any meaningful profit. More than a million people pre-ordered the game, the concluding part of a science-fiction trilogy that tells the story of a super-soldier called Master Chief, who leads the fight to save humanity from an alien collective called the Covenant. The game has become a major entertainment franchise in recent years, with spin-off games, clothing, novels and action figures all available.

More information:




22 September 2007

Aslib Proceedings Journal Article

Last month, Aslib Proceedings published a journal article I co-authored with a colleague at City University, titled ‘Mixed reality (MR) interfaces for mobile information systems’. The paper presents some of the results obtained from the LOCUS research project. Its purpose is to explore how mixed reality interfaces can be used to present information on mobile devices. The motivation for this work is the emergence of mobile information systems in which information is disseminated to mobile individuals via handheld devices. The LOCUS project extends the functionality of the WebPark architecture to allow spatially referenced information to be presented via these mixed reality interfaces on mobile devices.

In particular, the LOCUS system is built on top of the WebPark mobile client-server architecture, which provides the basic functionality associated with LBS, including the retrieval of information based upon spatial and semantic criteria and the presentation of this information as a list or on a map (top images). The LOCUS system extends this LBS interface by adding a VR interface (bottom left image) and an AR interface (bottom right image). We strongly believe that the most suitable interface for mobile information systems is likely to be user- and task-dependent; however, mixed reality interfaces offer promise in allowing mobile users to make associations between spatially referenced information and the physical world.

The abstract of the paper can be found online at:


Also a draft version can be downloaded from here.

19 September 2007

DigitalGlobe Launch

The WorldView-1 satellite was launched on Tuesday, September 18, 2007, from Vandenberg Air Force Base in California. WorldView-1 is the first of two new next-generation satellites DigitalGlobe, a leader in the global commercial Earth imagery and geospatial information market, plans to launch. Shortly after the launch, a DigitalGlobe ground station received a downlink signal confirming that the satellite successfully separated from its launch vehicle and had automatically initialized its onboard processors. WorldView-1 is currently undergoing a calibration and check-out period and will deliver imagery soon after. First imagery from WorldView-1 is expected to be available prior to October 18, the six-year anniversary of the launch of QuickBird, DigitalGlobe’s current satellite.

WorldView-1, built by Ball Aerospace and Technologies Corporation with the imaging sensor provided by ITT Corporation, is a high-capacity panchromatic imaging system featuring half-metre resolution imagery. With an average revisit time of 1.7 days, WorldView-1 is capable of collecting up to 750,000 square kilometres (290,000 square miles) of half-metre imagery per day. Frequent revisits will increase image-collection opportunities, enhance change-detection applications, enable accurate map updates, and provide more accurate data to Google Earth. The satellite is capable of collecting, storing and downlinking more frequently updated global imagery products than any other commercial imaging satellite in orbit, allowing for expedited image capture, processing and delivery to customers where speed is a driving factor. WorldView-1 is equipped with state-of-the-art geolocation accuracy and exhibits unprecedented agility, with rapid targeting and efficient in-track stereo collection.

More information:


17 September 2007


Technology that translates spoken or written words into British Sign Language (BSL) has been developed by researchers at IBM. In particular, a software system called SiSi (Say It Sign It) was created by a group of students in the UK. SiSi brings together a number of computer technologies: a speech recognition module converts the spoken word into text, which SiSi then interprets into gestures used to animate an avatar that signs in BSL. The main aim of SiSi is to enable deaf people to have simultaneous sign-language interpretation of meetings and presentations, using speech recognition to animate a digital character or avatar.
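Conceptually, the text-to-sign step maps words to sign ‘glosses’ that select the avatar's animation clips, with fingerspelling as a fallback for out-of-vocabulary words. A much-simplified sketch (the lexicon and clip names are invented; the real system must also handle BSL's own grammar rather than English word order):

```python
# Invented toy lexicon mapping English words to BSL animation clip identifiers.
LEXICON = {"hello": "BSL_HELLO", "meeting": "BSL_MEETING",
           "starts": "BSL_START", "now": "BSL_NOW"}

def text_to_clips(text):
    """Turn recognised text into a queue of animation clips for the signing avatar."""
    clips = []
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in LEXICON:
            clips.append(LEXICON[word])
        else:
            # Fall back to fingerspelling unknown words letter by letter.
            clips.extend(f"FS_{ch.upper()}" for ch in word if ch.isalpha())
    return clips

print(text_to_clips("Hello, the meeting starts now"))
```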

IBM says its technology will allow for interpretation in situations where a human interpreter is not available. It could also be used to provide automatic signing for television, radio and telephone calls. It is worth mentioning that the concept has already gained the approval of the Royal National Institute for Deaf People (RNID). The students used two signing avatars developed by the University of East Anglia: one signs in BSL and the other uses Sign Supported English, a more direct translation using conventional English syntax and grammar.

13 September 2007

Serious Virtual Worlds ‘07

Today I attended the first day of Serious Virtual Worlds ‘07, the First European Conference on the Professional Applications of Virtual Worlds (13-14 September 2007), held at the Coventry TechnoCentre. The theme of this first Serious Virtual Worlds conference, ‘The Reality of the Virtual World’, takes a close look at how virtual worlds are now being used for serious professional purposes. Many organisations are now actively researching and deploying virtual worlds, and the conference is a good introduction to their serious uses. It was driven by the extraordinary success of virtual worlds such as ‘Second Life’ (see screenshot below) as virtual social spaces for play, which leads to the question: ‘What is the potential for the serious uses of these worlds?’

The theme of the first day was ‘Introducing Virtual Worlds’: a number of presentations and conversations introducing virtual worlds and the 3D web from Cisco, Linden Labs, TruSim, Forterra, Giunti Labs, Pixel Learning, Caspian, Ambient Performance and Daden, closing with the launch of the Serious Games Institute’s ‘Second Life’ island with a cocktail reception followed by the conference dinner. The theme of the second day was ‘Serious Virtual Worlds: Action & Potential’, including live virtual-world presentations and conversations from Digital Earth, Reuters, Stanford Medical School, TruSim, PA Consulting, IBM, Forterra, NPL, Logicom, and AVM.

More information:


08 September 2007

Deep Exploration

Right Hemisphere's Deep Exploration Standard Edition is a software tool for quickly producing multimedia 3D graphics. Unlike typical multimedia 3D tools, Deep Exploration Standard Edition is a unified application that works across a number of multimedia formats and 3D graphics styles, making it easy to take advantage of existing content and avoid disruptive file-format changes. The main operations Deep Exploration performs are: (a) translating 2D and 3D graphics and multimedia files; (b) searching, viewing, and marking up 3D graphics; and (c) authoring, rendering, and publishing 3D images and animations.

The above screenshot shows a 3D model representing City University’s campus, which was used as part of the LOCUS project. This 3D model (or 3D map) was originally generated in 3D Studio Max and then converted into two different formats used for mobile navigation on personal digital assistants (PDAs). More specifically, the 3D map was converted to VRML for the Virtual Navigator interface and to DirectX for the MD3DM VR interface.

More information:


01 September 2007

IJAC Journal Article

Last June, the International Journal of Architectural Computing (IJAC) published, in a special issue on cultural heritage, a journal paper I co-authored with colleagues from the Centre for VLSI and Computer Graphics at the University of Sussex and the Ename Center. The title of the article is ‘Multimodal Mixed Reality Interfaces for Visualizing Digital Heritage’, and its main aim is to present several different and interesting types of virtual heritage exhibition, utilising Web3D, virtual reality and augmented reality technologies to visualise digital heritage interactively through several different input devices. A high-level diagram illustrating the technologies employed in the multimodal mixed reality system is presented below.

The novelty of the technologies employed is that they allow users to switch between three different types of visualisation environment: the web in the traditional way but including 3D, virtual reality, and augmented reality. Additionally, several different interface techniques can be employed to make exploration of the virtual museum that much more interesting. In particular, the architectural diagram illustrates several interaction modes, from the SpaceMouse and gamepad through to a physical replica of an artefact and simple hand manipulation of AR marker cards. Several visualisation scenarios are also provided, ranging from a familiar web page with a 3D object, to a virtual gallery environment, to a person using a physical replica of an artefact to control and explore the virtual artefact, to several augmented reality examples.

A draft version of the article can be downloaded from here.

31 August 2007

Polhemus Patriot

Polhemus PATRIOT is a cost-effective solution for six-degree-of-freedom (6DOF) tracking and 3D digitising. PATRIOT can accommodate a wide array of indoor applications, from head tracking, biomechanical analysis and computer graphics to cursor control and stereotaxic localisation. PATRIOT includes a system electronics unit (SEU), a power supply, one sensor and one source. However, the system’s capabilities can be expanded simply by adding a second sensor or an optional stylus. Measuring only 6.75 by 6.25 by 1.75 inches (LWH), the electronics unit is compact for easy installation in any environment. PATRIOT interfaces with the host computer via RS-232 or USB 1.1 and is fully compatible with Windows® XP, Windows 2000 and Linux®. The source and sensor contain electromagnetic coils enclosed in plastic shells. The source emits magnetic fields, which are detected by the sensor, and the sensor’s position and orientation are precisely measured as it is moved. Because the sensor is completely passive, it is safe for use in any application.

PATRIOT provides dynamic, real-time measurements of position (X, Y and Z Cartesian coordinates) and orientation (azimuth, elevation and roll). PATRIOT can update data continuously, discretely (point by point), or incrementally. With the optional stylus, you can trace the outline of a physical object or collect polygon facets with pinpoint accuracy over unlimited X, Y and Z data points. PATRIOT offers low latency and high stability to ensure precise, uninterrupted tracking at all times. It boasts an update rate of 60 Hz per sensor, a range of five feet, a resolution of 0.0015 inch and 0.1 degree, and a static accuracy of 0.1 inch RMS for X, Y, Z position and 0.75 degrees RMS for orientation. Latency is less than 18 milliseconds for both sensors simultaneously.
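On the host side, reading a tracker like this in continuous mode amounts to parsing one record per update into the six values. A sketch against an assumed whitespace-separated ASCII record layout (a station number followed by six floats; consult the PATRIOT manual for the actual frame format):

```python
from collections import namedtuple

Pose = namedtuple("Pose", "x y z azimuth elevation roll")

def parse_record(line):
    """Parse one assumed ASCII record: a station id followed by six floats."""
    fields = line.split()
    if len(fields) < 7:
        raise ValueError(f"short record: {line!r}")
    return int(fields[0]), Pose(*(float(v) for v in fields[1:7]))

# One frame of the assumed layout, arriving at the nominal 60 Hz update rate:
station, pose = parse_record("1  10.25  -3.40  22.10   45.0  -10.5    0.8")
print(station, pose.azimuth)
```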

More information:


27 August 2007

Sensor-based Mixed Reality

During June and July 2007, I co-supervised, with members of the Cogent research group and the Department of Creative Computing, a research project on a mixed reality audio-visual visualisation and localisation interface. The project was developed over a six-week period by three students and operates within a room equipped with fixed-location wireless sensing devices called gumstix. These nodes are multi-modal, although the system makes use of the microphone only. The main objective of the project is to display a 3D representation of the audio data inside the room, blended with 3D information. The overall architecture of the sensor-based MR interface is presented below.

This project was designed to test at least some aspects of the mixed reality presentation system using easily available sensors and display devices. MR presentation of the sound field occurs within a 3D computer model of the room in which the sensors are located; it can take a variety of forms, from a sound ‘mist’ to ‘objects’ representing the sound that hang in space. Computer-vision registration is achieved using ARTag and ARToolKit, with the best available marker selected using confidence levels. Finally, for localisation, the sensors calculate the location of a sound before the corresponding object is drawn in 3D space.
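As a simple baseline, a sound source can be placed among fixed microphone nodes by taking an energy-weighted centroid of the node positions: the louder a node hears the sound, the more it pulls the estimate toward itself. This is a naive stand-in for the project's actual localisation, with made-up node positions and energies:

```python
def locate(nodes):
    """Estimate a 2D source position from (x, y, energy) readings at fixed sensor nodes."""
    total = sum(e for _, _, e in nodes)
    if total == 0:
        return None  # silence: nothing to localise
    x = sum(px * e for px, _, e in nodes) / total
    y = sum(py * e for _, py, e in nodes) / total
    return x, y

# Four gumstix-style nodes in the corners of a 4 m x 4 m room;
# the source is loudest near the (0, 0) corner.
nodes = [(0, 0, 0.9), (4, 0, 0.05), (0, 4, 0.05), (4, 4, 0.0)]
print(tuple(round(c, 3) for c in locate(nodes)))
```

Time-difference-of-arrival methods give far better accuracy, but this captures the idea of fusing readings from fixed nodes into one position estimate.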

More information and a demo video of this work can be found at:

21 August 2007

3D Mouse Navigation

3Dconnexion has made powerful 3D navigation accessible and affordable for architects, artists, students and anyone else who wants to enjoy the 3D experience. In particular, 3Dconnexion devices are ideal sensors for navigating virtual and augmented reality environments, mainly because they provide intuitive manipulation in six degrees of freedom. A characteristic example is the SpaceNavigator™, shown below.

3Dconnexion's SpaceNavigator™ is a very inexpensive solution for 3D navigation. It works with more than 100 of today's most popular and powerful 3D applications and is ideal for Google Earth version 4 and Google SketchUp. An alternative, somewhat more expensive solution is the SpacePilot, illustrated below.

The 3Dconnexion SpacePilot connects users to the 3D design process in a different way than the standard mouse does. Its optical sensor technology and ergonomic design combine to deliver unprecedented control and fewer distractions. The SpacePilot may be used in the user’s non-dominant hand to position, rotate, pan and zoom a model in one single, fluid motion. It can also be used in conjunction with the standard mouse to simultaneously edit the model or select menu items.
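A 6-DOF device of this kind typically reports small translation and rotation deltas every frame, which the application accumulates into its camera or model transform. A bare-bones sketch of that accumulation for two translation axes plus yaw (the axis mapping is an assumption for illustration; real drivers expose all six axes):

```python
import math

class Camera:
    """Accumulates 6-DOF device deltas; only x/z translation and yaw are shown."""
    def __init__(self):
        self.x = self.z = 0.0
        self.yaw = 0.0  # radians

    def apply(self, dx, dz, dyaw):
        self.yaw += dyaw
        # Translate in the camera's own frame so 'forward' follows the current yaw.
        self.x += dx * math.cos(self.yaw) - dz * math.sin(self.yaw)
        self.z += dx * math.sin(self.yaw) + dz * math.cos(self.yaw)

cam = Camera()
cam.apply(0.0, 1.0, 0.0)          # push forward one unit
cam.apply(0.0, 1.0, math.pi / 2)  # quarter turn, then forward again
print(round(cam.x, 3), round(cam.z, 3))
```

Because the deltas are applied in the camera's own frame, pushing the puck forward always moves the view forward regardless of the current heading.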

More information:


01 August 2007

OQO model 2 Handheld PC

OQO model 2 is another ultra-mobile computer (the main rival of the Sony VAIO UX model) which is ideal for a number of everyday applications, including navigation and wayfinding, location-based services and many more. The model 2 comes standard with WiFi 802.11a/b/g and Bluetooth 2.0, a 60GB hard drive, a 5-inch wide-VGA LCD display and a 1.2GHz or 1.5GHz VIA C7M ULV processor with an integrated graphics chipset.

The OQO model 2 includes a keyboard featuring a track-stick mouse pointer for precise cursor movement with the thumb, and dedicated zoom keys for quickly changing screen magnification; these allow full interactivity while zoomed and support 1000x600 and 1200x720 interpolated modes. The main flaws of the model 2 are the lack of a digital camera, nowadays a standard feature on mobile devices (smartphones, PDAs and other handhelds), and the absence of a built-in memory-card reader.

More information:


23 July 2007

Gumstix Verdex

Gumstix has launched the third generation of its gumstick-shaped SBC (single-board computer) line. The tiny, Linux-friendly, PXA270-powered "Verdex" SBC offers 50 percent more processor speed and twice the memory of earlier models, and features an enhanced expansion bus, according to the company. In addition to its Marvell (formerly Intel) PXA270 (aka "Bulverde") processor, clocked at up to 600MHz, the new Verdex SBC integrates up to 128MB of RAM and 32MB of flash memory soldered on board, the company said. Other enhancements over previous Gumstix SBCs include support for USB host interfaces, inputs for CCD (charge-coupled device) cameras, and better power management. Also available is an option for on-board Bluetooth with a u.fl antenna connector, as shown on the Verdex XM4-bt board below.

The Verdex maintains the 3.2 x 0.8 x 0.3-inch (80 x 20 x 8mm) footprint of Gumstix's earlier SBC generations. In addition to matching the dimensions of its predecessors, the Verdex retains the 60-pin board-to-board connector of the first-generation SBCs (now referred to as "Basix"), which should enable it to support existing 60-pin expansion cards; these include audio I/O, digital I/O, various microcontroller co-processors for robotics applications, and serial and USB expansion. However, the Verdex replaces the 92-pin I/O expansion connector introduced with the second-generation SBCs ("Connex") with a pair of connectors: the combination of a 24-pin flex connector and a separate 120-pin connector will support a range of new expansion boards, including much-sought-after USB host ports. On the software side, the Verdex SBC comes preinstalled with the latest Linux 2.6 operating system as well as the open-source U-Boot bootloader. Potential applications include position tracking; at Coventry University, we are developing a sensor-based mixed reality platform based on Gumstix technology.

More information:


21 July 2007

Sharp Zaurus SL-C3200

The Sharp Zaurus SL-C3200 is the sixth generation of the groundbreaking SL-C line of Linux-based PDAs. The internal 6GB drive offers unparalleled storage and opens up new options, including using the Zaurus as a portable media player. The C3200 has both CompactFlash and SD slots, and since the Zaurus's hard drive is recognised plug-and-play by Windows, moving data over the USB 2.0 connection is fast and easy. Rather than running a somewhat limited PDA operating system, it runs Linux, which means the CPU and RAM are the only real limits on running Linux applications. Linux apps must be recompiled to run on the Zaurus, but that is not a daunting task, and many useful ports and open-source packages for the Zaurus have emerged over the years. In fact, several ROMs are available for it as well.

The Zaurus runs Lineo Linux with kernel 2.4.20 and the Qtopia window manager. It includes a 3.7" VGA (640x480) screen and an optional VGA-out connection. There is also a built-in zoom function that allows zooming into the screen in five increments; each step is larger than the previous, and none suffers any loss of quality. The screen's orientation adjusts automatically when swivelled; it is a very impressive design. Similar to the HTC Universal models, the swivelling screen transforms the Zaurus from PDA-style to laptop-style. Other technical specs include an Intel XScale PXA270 processor (416MHz), 64MB RAM, and 128MB flash ROM. The two slots (1 CF, 1 SD) are I/O slots, so they support wireless connectivity (such as Bluetooth or Wi-Fi). There is a stereo output for MP3 playback. The Zaurus SL-C3200 measures 4.9 x 3.4 x 1.0 inches (124 x 87 x 25mm) and weighs 0.65 pounds (298g).

More information:



15 July 2007

GPS-enabled shopping application

GPS is becoming more and more popular for commercial applications. Recently, the US mobile operator Sprint (NYSE: S) and the New York City-based start-up GPShopper announced the launch of Slifter, a new mobile local-search application that employs GPS to find products at nearby retail locations. Users with compatible handsets simply enter a keyword, product name, model number or UPC code to find a product.

Users can then view important information about the product, such as availability, price and promotions, at the nearest locations. Some 85 million products are available at more than 30,000 retail stores across the United States. Slifter uses TeleAtlas map data, and the location functionality is enabled by ESRI middleware. It costs Sprint subscribers US$1.99 per month (data consumption not included).

More information:


12 July 2007

Ubiquitous MR for Urban Transport

Yesterday (11 July 2007), I presented an invited poster titled ‘Ubiquitous Mixed Reality for Urban Transport’ at an event organised by the London Technology Network (LTN) at One St George Street, Westminster, London. The theme of the event was ‘Developing an Integrated Sustainable Transport System’, which aimed to assess the latest advances in energy, materials and modelling for transport. An overview of the poster I presented is shown below, illustrating how ubiquitous mixed reality technologies can help companies and governmental organisations perform efficient urban transport planning and provide advanced services.

The objective of the event was to bring together participants from academia, government and industry through seminars (presentations), showcases (posters) and networking sessions (one-to-one meetings). Numerous academic posters presented state-of-the-art research solutions for future transport systems spanning both engineering and computer science disciplines. The most characteristic areas covered mechanical solutions, robotics and electronics, mobile computing, computer vision and mixed reality applications.

More information:


08 July 2007

Location Based Gaming (LBG)

The use of global positioning technology in gaming is giving developers a new area to explore: Location Based Gaming (LBG). These games use various technologies, including GPS, motion tracking, large-scale video projection and Bluetooth. The E911 directive and the growing distribution of GPS-enabled handsets introduced the idea that location is the next big thing. Geocaching, a real-world treasure hunt, and Pac-Manhattan, a real-world version of the 1980s video-game sensation Pac-Man, are examples of a ‘little’ variation on traditional gaming. Location Based Gaming is a way of playing a video game using technology such as the Global Positioning System (GPS) that combines the player's real world with a virtual world on the handset. The physical location becomes part of the game board, allowing players to interact with their physical environment. Players move through the city with handheld or wearable interfaces; sensors capture information about the players’ current context, which the game uses to deliver an experience that changes according to their locations and actions. In collaborative games, this information is transmitted to other players, on the streets or online.

LBG might include tracking a phone as it moves through a city during a treasure hunt, changing the weather in the game to match the weather at the players’ location, or monitoring players’ direction, velocity and acceleration during a high-intensity “fight”. The location technology also enables bonus features such as challenging players close to one’s location for the ultimate fight, or seeing comparative scores by vicinity. The net result is a game that interleaves a player’s everyday experience of the city with the extraordinary experience of a game. When Sony added GPS functionality to its flagship gaming console, the PSP, it raised the bar for designers; Nintendo, Xbox, Gizmondo and many others were quick to follow suit. Game developers are now expected to come up with games that blur the edges between the virtual world and the real one. Of course, global positioning technology will also make it possible to integrate standard navigation features and geo-tagging into gaming devices, but their survival will depend hugely on the gaming experience they deliver. Some of the most characteristic LBGs include ‘Wall Street Fighter’, ‘Can You See Me Now?’, ‘Swordfish’, ‘Torpedo Bay’ and ‘Tourality’.

Wall Street Fighter - The latest from YDreams, Wall Street Fighter, powered by KnowledgeWhere's Location Application Platform (LAP), is a location-based game in which the world of business serves as the backdrop for some fun fighting antics. The objective of the game is to make it to the top of the business food chain by fighting everybody at the Bonds Office. Its location-based features include scenarios that change with your real location, a multiplayer mode that lets you challenge players close to your location for the ultimate fight, and location-based rankings that show comparative scores by vicinity. The game was a finalist in the NAVTEQ LBS Challenge in the Entertainment & Leisure Applications category.

Can You See Me Now? - Performed by Blast Theory, a UK-based group of artists working with interactive media, Can You See Me Now? is an artistic performance in the form of a game in which online players are chased across a virtual city by three performers running through the actual city streets. The concept for CYSMN is a chase game, played online and on the streets. Blast Theory's players are dropped at random locations into a virtual map of a city. Tracked by satellites, professional runners appear online next to your player. The runners use handheld computers showing the positions of online players to guide them in the chase. Online players try to flee down the virtual streets, sending messages and exchanging tactics with other online players. If a runner gets within 5 metres of you, a sighting photo is taken and the game is over. Can You See Me Now? won the Golden Nica for Interactive Arts at the 2003 Prix Ars Electronica and was nominated for a BAFTA Award in 2002.

Swordfish and Torpedo Bay - Blister, a wholly owned subsidiary of the Canadian firm Knowledge Where Corp., published the location-based game Swordfish on the Bell Mobility network across Canada in July 2004, and later on Boost Mobile. To play Swordfish, a location-based fishing game, the player uses his/her mobile phone to find virtual fish and go fishing. Using GPS, Swordfish simulates a deep-sea fishing experience on a mobile phone, turning the player's real world into a virtual ocean; the player has to move around to play. Using the GPS in the mobile phone, the player's position is shown on a fish-finder so that the player can see where the nearest school of virtual fish is located relative to his/her current position. The fish-finder also provides navigational assistance by indicating the direction of the closest school of fish, along with an optional localised street map of the current location overlaid with virtual schools of fish. Also by Blister, Torpedo Bay is a location-based naval battle game in which the player uses a mobile phone to shoot at various aircraft carriers, destroyers and submarines. The game uses the Location Application Platform (LAP), which allows users from multiple carriers and multiple networks to interact within the same gaming environment. To tackle the problem of GPS and A-GPS signal fading, Torpedo Bay implements predictive positioning algorithms that improve the accuracy and availability of GPS fixes in problematic areas. The game also uses real map data to assist in locating enemy ships, weapons and health.
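The fish-finder behaviour described above — pick the nearest school of fish and report its compass direction — can be sketched with standard great-circle formulas. This is an illustrative approximation, not Blister's actual implementation; the coordinate tuples are an assumed layout:

```python
from math import radians, degrees, sin, cos, atan2

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Compass bearing (0-360 degrees, 0 = north) from point 1 to point 2."""
    lat1, lat2 = radians(lat1), radians(lat2)
    dlon = radians(lon2 - lon1)
    x = sin(dlon) * cos(lat2)
    y = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dlon)
    return (degrees(atan2(x, y)) + 360) % 360

def nearest_school(player, schools):
    """Closest (lat, lon) school, using an equirectangular
    approximation -- adequate at street-map scales."""
    plat, plon = player
    def approx_dist2(s):
        dlat = s[0] - plat
        dlon = (s[1] - plon) * cos(radians(plat))
        return dlat * dlat + dlon * dlon
    return min(schools, key=approx_dist2)
```

The bearing would drive the on-screen direction arrow, while the distance ranking picks which school the arrow points at.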

Tourality - Currently available in Austria, Tourality is a mobile game that combines sporty outdoor activity with a virtual gaming experience. The challenge for the player is to reach geographically defined spots in reality as fast as possible, so the player's movement directly influences the progress of the game. To play Tourality the player needs a mobile phone that supports Java, a Bluetooth GPS receiver, and an internet connection (GPRS/UMTS) from the mobile network operator. A spot is a point on a virtual map that the player has to reach in reality before his/her opponents do. The player's real position is transmitted from the Bluetooth GPS receiver to the mobile phone and shown on the display, together with the positions of all participating players and the spots still to be reached.
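A minimal sketch of Tourality's core rule — the first player to physically reach an open spot claims it — might look like the following. The 20-metre capture radius and the data layout are assumptions for illustration (the game's real thresholds are not published here), and positions are simplified to metres on a local grid rather than raw GPS fixes:

```python
SPOT_RADIUS_M = 20  # assumed capture radius, for illustration only

def update_spots(spots, player, position):
    """Mark every still-open spot within SPOT_RADIUS_M of 'position'
    as won by 'player'; return the spots still to be reached."""
    x, y = position
    for spot in spots:
        if spot["winner"] is None:
            dx, dy = spot["x"] - x, spot["y"] - y
            if (dx * dx + dy * dy) ** 0.5 <= SPOT_RADIUS_M:
                spot["winner"] = player
    return [s for s in spots if s["winner"] is None]
```

Running this update on every incoming GPS fix, for every player, gives exactly the display described above: claimed spots drop off the map, and the remaining list is what the player "still has to reach".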


01 July 2007

Apple iPhone Launch


Apple’s revolutionary iPhone™ went on sale in the USA last Friday, June 29, at 6:00 p.m. local time at Apple® retail stores nationwide. iPhone is a new kind of mobile phone that introduces an entirely new user interface based on a multi-touch display and pioneering new software. It combines three products in one small and lightweight handheld device: a mobile phone, a widescreen iPod®, and the Internet in your pocket, with the best applications yet seen on a mobile phone for email, web browsing and maps. iPhone ushers in an era of software power and sophistication never before seen in a mobile device, redefining what users can do on their mobile phones.

iPhone™ will run applications created with Web 2.0 Internet standards. Developers can create Web 2.0 applications which look and behave just like the applications built into iPhone, and which can seamlessly access iPhone’s services, including making a phone call, sending an email and displaying a location in Google Maps. Third-party applications created using Web 2.0 standards can extend iPhone’s capabilities without compromising its reliability or security. Web 2.0-based applications are being embraced by leading developers because they are far more interactive and responsive than traditional web applications, and can be easily distributed over the Internet and painlessly updated by simply changing the code on the developers’ own servers.





30 June 2007

HTC Advantage

The latest cutting-edge mobile device is the HTC Advantage, a PDA with high-speed global connectivity and PC-style power. The HTC Advantage is, unfortunately, based once again on the Windows Mobile 5.0 operating system, but it comes with built-in GPS and navigation software. The main advantage of the device is that it comes as two magnetic pieces that ‘glue’ together when required: the main processing unit with a touch screen, and a keyboard.

The main processing unit features a 5-inch VGA (640x480) colour touch screen, which makes navigation noticeably easier than on other Windows Mobile 5.0 PDAs (e.g. the MIO A701, HP 6915 and HTC Universal). The second component is a fully detachable keyboard, which allows the PDA to be used as a small computing device (when not navigating) and can replace some laptop operations. When navigating, however, it is much easier to use the virtual keyboard and leave the physical keyboard in your pocket. An example of a virtual representation of the City University campus running in Pocket Cortona is shown below.

Another advantage of the HTC Advantage is that it provides excellent communication facilities: users may connect anywhere with 3G/HSDPA and Wi-Fi®, and stereo Bluetooth® 2.0 is excellent for wireless audio. The HTC Advantage offers up to 8 hours of battery life, with an 8GB hard drive and a miniSD™ card as the main storage options. The on-board 3.0-megapixel camera (plus a second VGA camera) allows for the implementation of augmented reality applications. Finally, another interesting feature of the device is its VGA output capability.
