23 September 2011

AR Gesture Recognition

To make its business software more effective, HP recently paid $10 billion for Autonomy, a U.K. software company that specializes in machine learning. It turns out that Autonomy has also developed image-processing techniques for gesture-recognizing augmented reality (AR). AR involves layering computer-generated imagery on top of a view of the real world as seen through the camera of a smartphone or tablet computer, so someone looking at a city scene through a device could see tourist information superimposed on the view. Autonomy's new AR technology, called Aurasma, recognizes a user's hand gestures, which means a person using the app can reach out in front of the device to interact with the virtual content. Previously, interacting with AR content meant tapping the screen. One demonstration released by Autonomy creates a virtual air-hockey game on top of an empty tabletop; users play by waving their hands.
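Aurasma's actual gesture pipeline is proprietary, but the basic building block of camera-based gesture input can be sketched simply: compare successive video frames and react when enough pixels change. The function names and thresholds below are illustrative assumptions, not Autonomy's method.

```python
import numpy as np

def motion_mask(prev_frame: np.ndarray, next_frame: np.ndarray,
                threshold: int = 30) -> np.ndarray:
    """Boolean mask of pixels whose brightness changed between two grayscale frames."""
    # Widen to int16 first so the subtraction cannot wrap around uint8.
    diff = np.abs(prev_frame.astype(np.int16) - next_frame.astype(np.int16))
    return diff > threshold

def wave_detected(frames, min_changed_fraction=0.05):
    """Report a 'wave' if enough of the image changes between consecutive frames."""
    for a, b in zip(frames, frames[1:]):
        if motion_mask(a, b).mean() >= min_changed_fraction:
            return True
    return False

# Toy input: a bright "hand" blob sweeping across a dark 64x64 background.
h, w = 64, 64
frames = []
for x in range(0, 40, 10):
    frame = np.zeros((h, w), dtype=np.uint8)
    frame[20:40, x:x + 10] = 255  # the moving blob
    frames.append(frame)

print(wave_detected(frames))  # the blob moves between frames, so this prints True
```

A real system would go further, segmenting the hand and tracking its trajectory over time, but frame differencing is the usual first step.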


Autonomy's core technology lets businesses index and search data that conventional, text-based search engines struggle with, such as audio recordings of sales calls or video from surveillance cameras. Aurasma's closest competitor is Layar, a Dutch company that offers an AR platform to which others can add content. However, Layar has so far relied largely on GPS location to position content, and only recently made it possible to place virtual objects more precisely using image recognition. Layar also does not recognize users' gestures.

Although mobile phones and tablets are the best interfaces available for AR today, the experience is still somewhat clunky, since a person must hold up a device with one hand at all times. Science-fiction writers and technologists have long predicted that the technology would eventually be delivered through glasses. Recognizing hand movements would be especially useful for such a design, since there would be no touch screen or physical buttons to fall back on.
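GPS-based placement of the kind described above can be sketched in a few lines: compute the compass bearing from the device to a point of interest, compare it with the device's heading, and map the angular difference onto the camera's horizontal field of view. This is a generic illustration of the technique, not Layar's API; all names and parameters are assumptions.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(device_lat, device_lon, heading_deg, poi_lat, poi_lon,
             screen_width_px=480, fov_deg=60):
    """Horizontal pixel position for a POI label, or None if it is off-screen."""
    # Signed angle between heading and bearing, normalized to -180..180.
    offset = (bearing_deg(device_lat, device_lon, poi_lat, poi_lon)
              - heading_deg + 180) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None  # outside the camera's field of view
    return (offset / fov_deg + 0.5) * screen_width_px

# A POI due north of the device with the camera facing north
# lands in the middle of the screen.
print(screen_x(52.0, 4.0, 0.0, 52.01, 4.0))  # 240.0
```

The imprecision of this approach is exactly why image recognition matters: GPS and compass errors of a few degrees shift the label noticeably, whereas anchoring to recognized image features pins content to the scene itself.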

More information:

http://www.technologyreview.com/communications/38568/