31 October 2007

Wii Nunchuk

The Wii Nunchuk is a secondary controller that adds even more innovation to the next generation of gaming, and does it with less physical movement. Used in conjunction with the standard Wii Remote, the Nunchuk provides additional control options that certain games require. Contoured to fit a player's hand, the Nunchuk builds on the simplicity of the Wii Remote. It contains the same three-axis motion sensor found in the Wii Remote, and adds an analog stick and two buttons to assist in character movement. Many games let you control your character's movement with the Nunchuk in your left hand, while your right hand is free to execute action movements with the Wii Remote. The Nunchuk is particularly useful for games like Wii Boxing: you can use it to punch with your weaker hand, while you punch and jab with the Wii Remote in your dominant hand.

In first-person shooters, the Nunchuk carries the burden of movement, freeing you to aim and fire with a more natural motion using the Wii Remote. In a football game, you can make your quarterback elusive with the Nunchuk while you search for an open receiver to throw to with the Wii Remote. Serious gamers may even want to use two Nunchuk controllers to gain a competitive edge. Because the Wii Remote and Nunchuk are only loosely dependent on each other, players are free to hold them in whichever hand is most comfortable. Equally suitable for right- or left-handed use, the Nunchuk grants accessibility not often seen in previous game controllers. The Nunchuk also doesn't need its own power: it plugs into the Wii Remote when in use, so there's no need to worry about charging or replacing expensive batteries. Adding a Nunchuk to your Wii system will help open the door to the next level of gaming and seriously step up performance. Just be careful not to knock out your significant other, or bruise the dog, while using one (or two) Nunchuk controllers.

More information:


28 October 2007

Rapid 3D Urban Modeling Tool

A few days ago, Sarnoff Corporation unveiled a new software solution that automatically builds accurate 3D site models of large urban environments in a matter of days. MapIt!™ software combines aerial imagery with Light Detection and Ranging (LIDAR) data to generate a continuous large-area 3D site model. This information can provide military units and intelligence analysts with critical site data for an urban area as large as 800 square kilometres in around four days, as opposed to the 40 to 60 days required by current urban modelling techniques.

The tool's ability to rapidly build precise 3D models in days helps increase the military's situational awareness of an urban environment before boots hit the ground. Beyond its military use, MapIt!'s combination of the high range resolution of LIDAR with the spatial resolution of aerial images makes it well suited to users who need quick site models for activities including emergency response planning, wide-area assessments, and environmental and planning studies.
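MapIt!'s internals are proprietary, but the core fusion idea mentioned above, draping aerial image colour over a co-registered LIDAR height grid to produce a coloured 3D point set, can be sketched roughly as follows. All names and the simple grid representation are my own illustration, not Sarnoff's implementation:

```python
def colourise_height_grid(heights, image):
    """Illustrative LIDAR/imagery fusion sketch (not MapIt!'s algorithm).

    heights: 2D list of elevations in metres (rows x cols)
    image:   2D list of (r, g, b) pixels of the same shape, assumed
             already registered to the same ground coordinates
    Returns a flat list of coloured 3D points (x, y, z, r, g, b).
    """
    points = []
    for y, row in enumerate(heights):
        for x, z in enumerate(row):
            r, g, b = image[y][x]      # pixel colour at the same ground cell
            points.append((x, y, z, r, g, b))
    return points
```

The hard part of a real system is the registration step assumed away here (aligning the image to the LIDAR grid); once that is done, texturing the elevation data reduces to this per-cell lookup.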

More information:


22 October 2007

Heritage Guides for Mobile Phones

An Italian-led research project is developing a service that allows visitors to use their camera-equipped 3G mobile telephones to get a personalised multimedia guide to archaeological sites and museums. A tour of a large outdoor cultural site can be a frustrating experience if objects are not easily located, identified or placed in historical context. The IST-funded Agamemnon project addresses this with an interactive multimedia system that delivers relevant text, video, speech and pictures, including 3D reconstructions, to visitors' mobile telephones. Agamemnon tailors a visit path based on each visitor's interests, cultural knowledge and available time, and the on-screen itinerary is constantly updated as the visitor moves around the site. The system's image-recognition function allows visitors to dial in via a data line, photograph objects they are interested in and receive information about them. Agamemnon also takes voice commands.
The system's software was developed from scratch, based on a Java Enterprise backbone with JavaBean components. The project team is currently testing the research prototype at pilot sites in Paestum (Italy) and Mycenae (Greece). Agamemnon works on visitors' personal telephones, so customers don't need to rent devices such as CD or cassette players, or learn how to use them, and institutions don't have to invest in or maintain a stock of electronic devices. The system works over existing UMTS, GPRS and GSM networks, so institutions also don't have to invest in wireless networks such as WiFi. Traffic-sharing agreements between sites, museums and 3G mobile operators could bring in new revenue for cultural institutions, reducing strain on public finances, while also boosting income for network operators. In addition, it is estimated that the Agamemnon service could attract 5% more visitors per year to sites and museums.
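Agamemnon's actual planning logic isn't published here, but tailoring a visit path to a visitor's interests and available time is, at its simplest, a budgeted selection problem. The greedy sketch below is purely my own illustration of the idea (the exhibit names and scoring are hypothetical), not the project's algorithm:

```python
def tailor_itinerary(exhibits, minutes_available):
    """Greedy sketch of interest-vs-time itinerary tailoring.

    exhibits: list of (name, interest_score, visit_minutes) tuples,
              where interest_score reflects the visitor's profile.
    Picks exhibits with the best score-per-minute ratio first,
    until the visitor's time budget runs out.
    """
    ranked = sorted(exhibits, key=lambda e: e[1] / e[2], reverse=True)
    itinerary, remaining = [], minutes_available
    for name, score, minutes in ranked:
        if minutes <= remaining:
            itinerary.append(name)
            remaining -= minutes
    return itinerary
```

A real system would re-run a selection like this as the visitor moves, which is how the constantly updating on-screen itinerary described above could be realised.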

More information:


18 October 2007

AR Interfaces Using Digital Content

Yesterday afternoon I presented an invited poster entitled ‘AR Interfaces Using Digital Content’ at an event organised by the London Technology Network (LTN) at the Royal College of Obstetricians and Gynaecologists, Regent's Park, London. The theme of the event was ‘Emerging technologies for interpreting and using digital content’, aimed at extracting meaning from text, images and audio. An overview of the poster I presented is shown below, illustrating how augmented reality interfaces can use digital content to help companies and governmental organisations deliver efficient applications and advanced services.

The objective of the event was to bring together participants from academia, government and industry through seminars (presentations), showcases (posters) and networking sessions (one-to-one meetings). Numerous academic posters presented state-of-the-art research on automating digital content search and effectively integrating language, images and music, and on identifying which technologies companies are investing in and what their future outlook is. Finally, the event provided an overview of the latest challenges and new applications for Natural Language Processing, and of how to balance new device functionality with a compelling user experience.

More information:


16 October 2007

5DT Data Glove 5 Ultra

The 5DT Data Glove 5 Ultra is the world's best-selling data glove and can be used for virtual and augmented reality applications. The unit provides a wealth of features in a very comfortable package and costs around £650. The 5DT Data Glove 5 Ultra is designed to satisfy the stringent requirements of modern motion capture and animation professionals across a wide range of applications, including serious gaming. It offers comfort, ease of use, a small form factor and multiple application drivers. Its high data quality, low cross-correlation and high data rate make it ideal for realistic real-time animation.

In terms of operation, the 5DT Data Glove 5 Ultra measures the finger flexure of the user's hand (one sensor per finger). The system interfaces with the computer via a USB cable; a platform-independent serial port (RS-232) option is also available through the 5DT Data Glove Ultra Serial Interface Kit. It features 8-bit flexure resolution, extreme comfort, low drift and an open architecture. The 5DT Data Glove Ultra Wireless Kit interfaces with the computer via Bluetooth (up to 20 m range) for high-speed connectivity, runs for up to 8 hours on a single battery, and comes in right- and left-handed models.
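5DT ships its own SDK for the glove, so the following is only a hypothetical sketch of what application-side handling of the 8-bit flexure readings might look like: each finger's raw values (0–255) are calibrated against the smallest and largest readings observed while the user opens and closes the hand, then scaled into [0, 1]. The class and its behaviour are my own illustration, not the 5DT API:

```python
class FlexureCalibrator:
    """Map raw 8-bit flexure readings (0-255) to normalised [0.0, 1.0].

    Hypothetical sketch; the real 5DT SDK provides its own calibration.
    One calibrator instance tracks the observed range of one finger.
    """
    def __init__(self):
        self.lo = 255   # smallest raw value seen so far
        self.hi = 0     # largest raw value seen so far

    def update(self, raw):
        """Feed a raw reading during a calibration gesture."""
        self.lo = min(self.lo, raw)
        self.hi = max(self.hi, raw)

    def normalise(self, raw):
        """Return flexure in [0.0, 1.0]; 0 = fully open, 1 = fully bent."""
        if self.hi <= self.lo:                     # not yet calibrated
            return 0.0
        raw = max(self.lo, min(self.hi, raw))      # clamp outliers
        return (raw - self.lo) / (self.hi - self.lo)
```

Per-finger calibration of this sort is what makes the glove's low drift and 8-bit resolution usable for real-time animation, since each user's hand produces a different raw range.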

More information:


11 October 2007

Internet Map

It took two months and nearly 3 billion electronic probes for researchers to create a map of the Internet. The Internet census comes from the University of Southern California's Information Sciences Institute (ISI) in Marina del Rey, California. Over two months, ISI computers sent queries to about 2.8 billion numeric Internet Protocol (IP) addresses, which identify individual computers on the Internet. Replies came from about 187 million of those addresses, and the researchers used that data to map out where computers exist on the Internet. At one dot per address on a typical printer, the resulting map was about 9 feet by 9 feet; its top ultimately had to be taped onto the 8-foot-high ceiling. A condensed version squeezes about 65,000 addresses into a single dot, with brighter colours showing ranges where a greater number of computers exist.

The figure above shows ISI's map of the allocated address space. The layout follows Randall Munroe's hand-drawn map of allocated Internet address blocks from xkcd #195. The one-dimensional, 32-bit addresses were converted into two dimensions using a Hilbert curve. This curve keeps adjacent addresses physically near each other, and it is fractal, so one can zoom in or out to control detail. Understanding how addresses are used influences many aspects of the Internet: routers are more efficient when they serve subnets whose addresses share common prefixes; worms explore the address space at random; and individuals use more addresses as they use the net in new ways, from additional computers to mobile telephones and embedded devices.
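The one-dimensional-to-two-dimensional conversion described above can be sketched with the standard Hilbert-curve index-to-coordinate routine. The code below is my own illustration rather than ISI's software; it places each of the 256 /8 address blocks on an order-4 (16×16) curve, the granularity of the xkcd-style layout, and the function names are hypothetical:

```python
def d2xy(n, d):
    """Convert a Hilbert-curve index d into (x, y) on an n x n grid.

    n must be a power of two; d ranges over 0 .. n*n - 1.
    Consecutive indices always map to touching grid cells, which is
    why numerically adjacent IP blocks stay near each other on the map.
    """
    x = y = 0
    s = 1
    t = d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                  # rotate/flip the quadrant as needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx                  # move into the current quadrant
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def slash8_cell(ip):
    """Place an IPv4 address's /8 block on the 16x16 grid."""
    first_octet = int(ip.split(".")[0])
    return d2xy(16, first_octet)
```

For example, `slash8_cell("18.0.0.0")` returns the grid cell for the 18/8 block, and incrementing the first octet always moves to a neighbouring cell, the locality property that makes allocation patterns visible on the map.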

More information:


10 October 2007

KTN Flagship Projects Open Day

Today I presented one of the major components of the LOCUS project, the AR Interface, at an event called ‘Flagship Projects Open Day’ at the National Physical Laboratory. This full-day event presented three major research projects that have reached their conclusion: SPACE, LOCUS and AutoBAHN. A screenshot illustrating the digital compass used in the sensor-based AR solution is provided below.

Moreover, the event provided valuable insight into the outcomes of the research, as well as a forum for discussion of future work and research directions. Speakers from the other project teams presented their findings, gave demonstrations, and outlined plans to continue the work.

A draft copy of the presentation can be downloaded from here.

02 October 2007

Library Hi Tech Article

A few months ago, Library Hi Tech published an article I co-authored at City University in a special issue on 3D visualisation. The paper presents how two interactive mobile interfaces were designed and implemented following a user-centred approach. The first interface makes use of 2D digital technology, such as different representations of 2D maps and textual information; to enhance the user experience during navigation, location-aware searches may be performed to provide information about the surroundings. The second interface makes use of virtual reality (VR) and computer graphics to present 3D maps and textual information. The VR maps are also interactive and contain hyperlinks positioned in 3D space that link to either web pages or other multimedia content.

Both interfaces allow users to visualise and interact with different levels of representation of urban maps, as shown in the map interface screenshots above. An initial evaluation was performed to test the usability of the 2D interface, and the limitations of the 2D technology were recorded. To overcome these limitations and explore the potential of alternative technologies, a mobile VR interface called Virtual Navigator was prototyped and a pilot evaluation conducted. The findings suggest that, as more and more people use mobile technologies and advanced interfaces to access location-based services, prototype interfaces for personal digital assistants that address urban navigation and wayfinding are extremely beneficial.
A draft version of the article can be downloaded from here.