During June and July 2007, I co-supervised, with members of the Cogent research group and the department of Creative Computing, a research project on a mixed reality audio-visual visualisation and localisation interface. The project was developed over a six-week period by three students and operates within a room equipped with fixed-location wireless sensing devices called gumstix. These nodes are multi-modal, although the system uses only their microphones. The main objective of the project is to display a 3D representation of the audio data captured inside the room, blended with a 3D model of the real environment. The overall architecture of the sensor-based MR interface is presented below.
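As a rough illustration of how such a node might stream audio to the visualisation host, here is a minimal sketch; the wire format, host address, node identifier, and frame size below are illustrative assumptions, not the project's actual protocol:

```python
import socket
import struct
import time

# All names and values below are assumptions for illustration: the actual
# wire format, host address, and frame size of the project are not given here.
HOST = ("192.168.1.10", 9000)  # hypothetical address of the visualisation host
NODE_ID = 3                    # fixed identifier of this gumstix node
FRAME_SAMPLES = 256            # samples per transmitted frame

def send_frames(capture_frame):
    """Stream timestamped microphone frames to the host over UDP.
    `capture_frame` is a stand-in for the node's real microphone read
    call; it must return FRAME_SAMPLES signed 16-bit samples."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        samples = capture_frame()
        # Header: node id, capture timestamp, sample count (network byte order).
        header = struct.pack("!IdI", NODE_ID, time.time(), len(samples))
        payload = struct.pack(f"!{len(samples)}h", *samples)
        sock.sendto(header + payload, HOST)
```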
This project was designed to test at least some aspects of the mixed reality presentation system, using readily available sensors and display devices. MR presentation of the sound field occurs within a 3D computer model of the room in which the sensors are located, and can take a variety of forms, from a sound 'mist' to 'objects' representing individual sounds that hang in space. Computer-vision registration is achieved using ARTag and ARToolKit, with the best available marker selected according to its confidence level. Finally, for localisation, the sensors calculate the location of a sound source, and the corresponding object is then drawn at that position in 3D space.
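To illustrate the confidence-based selection step, the following sketch picks the highest-confidence detection among known markers. ARToolKit does report a per-marker confidence value, but the `Marker` record here is a hypothetical stand-in for the detection data, not the actual ARTag/ARToolKit API:

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class Marker:
    """Hypothetical detection record; stands in for the fields that
    ARTag/ARToolKit report per detected fiducial."""
    marker_id: int     # identifier of the fiducial pattern
    confidence: float  # detector confidence in [0, 1]

def select_best_marker(detections: Sequence[Marker],
                       known_ids: set,
                       min_confidence: float = 0.5) -> Optional[Marker]:
    """Return the known marker detected with the highest confidence,
    or None if no detection passes the threshold."""
    candidates = [m for m in detections
                  if m.marker_id in known_ids and m.confidence >= min_confidence]
    return max(candidates, key=lambda m: m.confidence, default=None)
```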
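The localisation algorithm itself is not detailed here. One common approach with fixed, known microphone positions is time-difference-of-arrival (TDOA) estimation via cross-correlation, followed by a search for the source position that best explains the measured delays. The sketch below illustrates that generic approach (written as a centralised computation for clarity, with assumed room dimensions and speed of sound), not necessarily the method used in the project:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, assumed room temperature

def tdoa(sig_a, sig_b, sample_rate):
    """Estimate the time difference of arrival (seconds) between two
    microphone signals from the peak of their cross-correlation.
    Positive values mean sig_a arrived later than sig_b."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / sample_rate

def localise(mic_positions, signals, sample_rate):
    """Coarse grid search for the 3D point whose predicted pairwise
    delays best match the measured TDOAs. `mic_positions` is an (N, 3)
    array of sensor locations in metres; room dimensions are assumed."""
    n = len(signals)
    measured = {(i, j): tdoa(signals[i], signals[j], sample_rate)
                for i in range(n) for j in range(i + 1, n)}
    # Candidate source positions over an assumed 5 m x 5 m x 3 m room.
    grid = np.mgrid[0:5:0.25, 0:5:0.25, 0:3:0.25].reshape(3, -1).T

    def error(p):
        dists = np.linalg.norm(mic_positions - p, axis=1)
        return sum(((dists[i] - dists[j]) / SPEED_OF_SOUND - dt) ** 2
                   for (i, j), dt in measured.items())

    return min(grid, key=error)
```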
More information and a demo video of this work can be found at: