27 May 2010

Invisible Touch for Mobile Devices

Today, the way to interact with a mobile phone is by tapping its keypad or screen with your fingers. But researchers are exploring far less limited ways of using mobile devices. They are developing a prototype interface for mobile phones that requires no touch screen, keyboard, or any other physical input device. A small video recorder and microprocessor attached to a person's clothing can capture and analyze their hand gestures, sending an outline of each gesture to a computer display. The idea is that a person could use an ‘imaginary interface’ to augment a phone conversation by tracing shapes with their fingers in the air.

Researchers have built a prototype device in which the camera is about the size of a large brooch, but they predict that within a few years, components will have shrunk, allowing for a much smaller system. The idea of interacting with computers through hand gestures is nothing new. Sony already sells EyeToy, a video camera and software that capture gestures for its PlayStation game consoles; Microsoft has developed a more sophisticated gesture-sensing system, called Project Natal, for the Xbox 360 games console. And a gesture-based research project called SixthSense, developed by researchers at MIT, uses a wearable camera to record a person's gestures and a small projector to create an ad-hoc display on any surface.


26 May 2010

Realistic Simulation of DNA Unfolding

The separation of the two DNA strands occurs in millionths of a second. Consequently, it is extremely difficult to study this phenomenon experimentally, and researchers must rely on computational simulations. After four years of fine-tuning an effective physical model and massive use of the supercomputer Mare Nostrum, researchers at IRB Barcelona and the Barcelona Supercomputing Center (BSC) have managed to produce the first realistic simulation of DNA opening at high resolution. The researchers studied a small DNA fragment of 12 base pairs (the human genome has about 3 billion base pairs) and obtained 10 million structural snapshots of how DNA unfolds.

In the process, they revealed the two main ways by which the natural folded structure moves to an unfolded state. DNA holds the genetic information of living organisms, and its double helical structure was discovered more than 50 years ago by Watson and Crick. DNA and the proteins that modify it are the most important therapeutic targets in several pathologies, and particularly in cancer. The work provides a detailed view of the mechanism through which one of the most crucial processes in DNA occurs, and opens up new prospects regarding the connection between physical properties, functionality and pharmacological effect. The ultimate objective is for such breakthroughs to turn DNA into a universal pharmacological target.

More information:

http://www.sciencedaily.com/releases/2010/05/100520093323.htm

25 May 2010

Cheap Gesture-Based Computing

MIT researchers have developed a system that could make gestural interfaces much more practical. Aside from a standard webcam, like those found in many new computers, the system uses only a single piece of hardware: a multicolored Lycra glove that could be manufactured for about a dollar. Other prototypes of low-cost gestural interfaces have used reflective or colored tape attached to the fingertips, but tape captures only 2-D information. The proposed system recovers the full 3-D configuration of the user's hand and fingers, translating gestures made with the gloved hand into the corresponding gestures of a 3-D model of the hand on screen with almost no lag time. The glove went through a series of designs, with dots and patches of different shapes and colors, but the current version is covered with 20 irregularly shaped patches that use 10 different colors. The number of colors had to be restricted so that the system could reliably distinguish the colors from each other, and from those of background objects, under a range of different lighting conditions.

The arrangement and shapes of the patches were chosen so that the front and back of the hand would be distinct but also so that collisions of similar-colored patches would be rare. For instance, the colors on the tips of the fingers could be repeated on the back of the hand, but not on the front, since the fingers would frequently be flexing and closing in front of the palm. Technically, the other key to the system is a new algorithm for rapidly looking up visual data in a database. Once a webcam has captured an image of the glove, the software crops out the background, so that the glove alone is superimposed upon a white background. Then the software drastically reduces the resolution of the cropped image, to only 40 pixels by 40 pixels. Finally, it searches through a database containing myriad 40-by-40 digital models of a hand, clad in the distinctive glove, in a range of different positions. Once it's found a match, it simply looks up the corresponding hand position. Since the system doesn't have to calculate the relative positions of the fingers, palm, and back of the hand on the fly, it's able to provide an answer in a fraction of a second.
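As a rough sketch of how such a pipeline fits together, the toy Python below quantizes pixels to the glove's reference colors, downsamples to 40-by-40, and does a brute-force nearest-neighbor lookup. The function names, the stride-based downsampling, and the pixel-disagreement distance are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def quantize_to_glove_colors(image_rgb, palette):
    """Label each pixel with the nearest reference color.
    image_rgb: (H, W, 3) float array; palette: (K, 3) array of
    the glove's reference colors (plus a background color)."""
    dists = np.linalg.norm(
        image_rgb[:, :, None, :] - palette[None, None, :, :], axis=-1)
    return dists.argmin(axis=-1)  # (H, W) map of color indices

def downsample_labels(labels, size=(40, 40)):
    """Reduce the label map to 40x40 by sampling a regular grid."""
    h, w = labels.shape
    rows = np.linspace(0, h - 1, size[0]).astype(int)
    cols = np.linspace(0, w - 1, size[1]).astype(int)
    return labels[np.ix_(rows, cols)]

def look_up_pose(tiny_labels, database):
    """database: list of (40x40 label map, hand pose) pairs rendered
    offline. Return the pose of the closest entry, where 'closest'
    here simply means the fewest disagreeing pixels."""
    best_image, best_pose = min(
        database,
        key=lambda entry: np.count_nonzero(entry[0] != tiny_labels))
    return best_pose
```

The article credits a new algorithm for rapid database lookup, so the linear scan above is only a stand-in; the underlying idea is the same, though: all the expensive pose estimation happens offline when the database is rendered, and runtime work reduces to matching.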

More information:

http://web.mit.edu/newsoffice/2010/gesture-computing-0520.html

23 May 2010

Virtual Body Transfer

Altering the normal association between touch and its visual correlate can result in the illusory perception of a fake limb as part of our own body. Thus, when touch is seen to be applied to a rubber hand while felt synchronously on the corresponding hidden real hand, an illusion of ownership of the rubber hand usually occurs. The illusion has also been demonstrated using visuomotor correlation between the movements of the hidden real hand and the seen fake hand. This type of paradigm has been used with respect to the whole body, generating out-of-the-body and body substitution illusions. However, such studies have only ever manipulated a single factor, and although they used a form of virtual reality, they have not exploited the power of immersive virtual reality (IVR) to produce radical transformations in body ownership. Researchers have shown that a first-person perspective of a life-sized virtual female body that appears to substitute for the male subjects' own bodies was sufficient to generate a body transfer illusion.

This was demonstrated subjectively by questionnaire and physiologically through heart-rate deceleration in response to a threat to the virtual body. This finding is in contrast to earlier experimental studies that assume visuotactile synchrony to be the critical contributory factor in ownership illusions. The finding was possible because IVR allowed the researchers to use a novel experimental design for this type of problem, with three independent binary factors: (i) perspective position (first or third), (ii) synchronous or asynchronous mirror reflections, and (iii) synchrony or asynchrony between felt and seen touch. The results support the notion that bottom-up perceptual mechanisms can temporarily override top-down knowledge, resulting in a radical illusion of transfer of body ownership. The research also illustrates that immersive virtual reality is a powerful tool in the study of body representation and experience, since it supports experimental manipulations that would otherwise be infeasible, with the technology being mature enough to represent human bodies and their motion.

19 May 2010

Rudimentary Computer Vision

A conventional object recognition system, when trying to discern a particular type of object in a digital image, will generally begin by looking for the object's salient features. A system built to recognize faces, for instance, might look for things resembling eyes, noses and mouths and then determine whether they have the right spatial relationships with each other. The design of such systems, however, usually requires human intuition: A programmer decides which parts of the objects are the right ones to key in on. That means that for each new object added to the system's repertoire, the programmer has to start from scratch, determining which of the object's parts are the most important. It also means that a system designed to recognize millions of different types of objects would become unmanageably large. Each object would have its own unique set of three or four parts, but the parts would look different from different perspectives, and cataloguing all those perspectives would take an enormous amount of computer memory.

Researchers developed an approach that solves both of these problems at once. Like most object-recognition systems, their system learns to recognize new objects by being ‘trained’ with digital images of labeled objects. But it doesn't need to know in advance which of the objects' features it should look for. For each labeled object, it first identifies the smallest features it can -- often just short line segments. Then it looks for instances in which these low-level features are connected to each other, forming slightly more sophisticated shapes. Then it looks for instances in which these more sophisticated shapes are connected to each other, and so on, until it's assembled a hierarchical catalogue of increasingly complex parts whose top layer is a model of the whole object. Once the system has assembled its catalogue from the bottom up, it goes through it from the top down, winnowing out all the redundancies. Even though the hierarchical approach adds new layers of information about digitally depicted objects, it ends up saving memory because different objects can share parts.
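The following toy sketch illustrates the bottom-up composition idea in Python; the pairwise-proximity rule, the data layout, and the de-duplication step are my own simplifications for illustration, not the actual learning algorithm.

```python
from itertools import combinations

def build_part_hierarchy(segments, levels=3, max_gap=2.0):
    """Compose increasingly complex parts, layer by layer.
    segments: list of ((x, y), frozenset_of_segment_ids) for the
    lowest-level features (e.g. short line segments). Each new
    layer pairs up nearby parts from the previous one."""
    layers = [segments]
    for _ in range(levels):
        previous, merged, seen = layers[-1], [], set()
        for ((ax, ay), ids_a), ((bx, by), ids_b) in combinations(previous, 2):
            if (ax - bx) ** 2 + (ay - by) ** 2 > max_gap ** 2:
                continue  # only compose parts that are close together
            members = ids_a | ids_b
            if members in seen:
                continue  # winnow duplicates: a shared part is stored once
            seen.add(members)
            center = ((ax + bx) / 2.0, (ay + by) / 2.0)
            merged.append((center, members))
        if not merged:
            break
        layers.append(merged)
    return layers  # layers[0]: segments ... layers[-1]: whole-object parts

# Example usage with three nearby line-segment features:
segments = [((0.0, 0.0), frozenset({0})), ((1.0, 0.5), frozenset({1})),
            ((1.8, 1.2), frozenset({2}))]
hierarchy = build_part_hierarchy(segments)
```

The de-duplication of identical member sets is a crude stand-in for the top-down winnowing described above, and it hints at where the memory savings come from: different objects built on the same lower layers can share parts instead of each storing its own copies.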

More information:

http://www.sciencedaily.com/releases/2010/05/100511104633.htm

14 May 2010

AR Interfaces Seminar

On Wednesday 12th May, I gave a keynote in a seminar called ‘Seminars for Success: Augmented Reality’. The seminar was held at Birmingham Science Park Aston, Birmingham, UK. With industry heavyweights such as MINI, Lego and Nissan adopting augmented reality to add a new dimension to their promotional campaigns, now is the perfect time to learn more about this nascent technology. A recent Juniper Research report has predicted that annual revenues from mobile augmented reality apps will reach £475 million by 2014, up from less than £650 thousand in 2009.

The title of my keynote was ‘Augmented Reality Interfaces’ and the focus was on both kiosk and mobile environments. In particular, two case studies were presented as representative examples of indoor and outdoor environments. The first was the ARCO project (funded by EU FP5), focused on museum kiosk environments, whereas the second was the LOCUS project (funded by EPSRC), designed specifically for mobile navigation and wayfinding. In addition, I presented various applications that have been developed at iWARG, including gaming, music, education and learning, and gesture tracking.

More information:

http://s4s-augmented-reality.eventbrite.com/

11 May 2010

3D Occupational Therapy for Children

Researchers of Tel Aviv University's Department of Occupational Therapy in the School of Health Professions are using a ‘virtual tabletop’ that ‘moves’ kids with disabilities and provides home-based treatments using virtual reality tools. Combining new 3D exercises with 2D graphical movement games already programmed into the tabletop, researchers report not only success but also enthusiasm among young patients. The virtual tabletop application appealed to children as young as three and as old as 15. The movement-oriented games allowed them to ‘make music’ and reach targets in ways that are normally neither comfortable nor fun in the therapeutic setting.

Coupled with new technology involving 3D Movement Analysis, they hope to develop this virtual tabletop-type game into new and effective therapy treatment regimes. Researchers also plan to analyze brain function using transcranial magnetic stimulation. Currently, brain function relating to motor activities is analyzed with magnetic resonance imaging (MRI). But many children are too impatient to sit in an MRI machine, so clinicians need another accurate means of analyzing movement in children with disabilities in order to develop individualized therapy regimes.

More information:

http://www.sciencedaily.com/releases/2010/04/100427171842.htm

09 May 2010

Eurographics 2010 Articles

Last Tuesday and Wednesday I presented a poster and an educational paper at the 31st Annual Conference of the European Association for Computer Graphics (Eurographics 2010). The conference took place in Norrköping, Sweden, 4-7 May 2010 and had more than 400 participants. The poster, titled ‘Procedural Generation of Urban Environments through Space and Time’, was co-authored with Jeremy Noghany and Eike Anderson.

The poster proposes a set of programmable elements that can be adjusted to accommodate buildings from a broad range of architectural styles, and that can then be incorporated into a larger engine. The educational paper, titled ‘Using Augmented Reality as a Medium to Assist Teaching in Higher Education’, was co-authored with Eike Anderson. The paper describes the use of a high-level augmented reality (AR) interface for the construction of collaborative educational applications that can be used in practice to enhance current teaching methods.
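To give a flavour of what the poster's ‘programmable elements’ could look like, here is a hypothetical sketch in Python; the class, its parameters, and the grid layout rule are invented for illustration and are not taken from the poster.

```python
from dataclasses import dataclass

@dataclass
class FacadeElement:
    """A programmable facade element; its parameters can be tuned
    to approximate different architectural styles (hypothetical)."""
    width: float           # facade width in metres
    height: float          # facade height in metres
    window_rows: int       # number of window rows
    window_cols: int       # number of window columns
    style: str = "georgian"  # illustrative style tag

    def window_positions(self):
        """Yield (x, y) centres of windows on a regular grid."""
        for row in range(self.window_rows):
            for col in range(self.window_cols):
                x = (col + 0.5) * self.width / self.window_cols
                y = (row + 0.5) * self.height / self.window_rows
                yield (x, y)

# A larger engine could instantiate and tile such elements per building:
facade = FacadeElement(width=12.0, height=9.0, window_rows=3, window_cols=4)
positions = list(facade.window_positions())
```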

A draft version of the poster can be downloaded from here, and a draft of the educational paper from here.