One of Facebook’s underlying
goals for VR is to use it as a means of connecting distant people. While today
friends can talk and play in a range of social VR applications, including
Facebook Spaces, the representation of users in VR is still a caricature at
best. Recently, Oculus showed work being done on hand-tracking to bring more
intuitive control and more accurate avatars into VR. Oculus’ ‘Half Dome’ prototype is
a headset with a 140-degree field of view and a varifocal display. A
computer-vision-based hand-tracking system, trained with a self-optimizing
machine learning algorithm, achieves tracking that’s far more accurate than any
previous method for a single hand, two hands, and hand-object
interactions. Footage which appeared to show the hand-tracking in action also
seemed to capture detection of snapping gestures.
The company used a marker-based
tracking system to record hand interactions in high fidelity, then
condensed the recorded data into 2D imagery. This let them set a
convolutional neural network to the task of uniquely identifying the positions
of the markers across a large set of hand-pose imagery, effectively allowing
the system to learn what a hand should look like given an arbitrary set of
marker positions. Ostensibly, this trained system can then be fed markerless
camera input of a user’s hands and solve for their position. By the measure
Oculus labels ‘Tracking Success Rate’, the company claims to have
achieved a rather astounding 100% success rate with single-hand tracking, and
even bigger leaps over other methods for two-handed and
hand-object interactions.
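The data-condensation step described above — turning recorded 3D marker positions into 2D imagery a convolutional network can learn from — can be sketched in miniature. Everything below (the pinhole camera parameters, the Gaussian heatmap representation, and all function names) is an illustrative assumption, not Oculus’ actual pipeline; the `decode_heatmaps` step simply stands in for what a trained network would predict from markerless input.

```python
import numpy as np

def project_markers(points_3d, focal=500.0, center=(64.0, 64.0)):
    """Pinhole-project 3D marker positions (camera frame, meters) to 2D pixels."""
    x, y, z = points_3d.T
    u = focal * x / z + center[0]
    v = focal * y / z + center[1]
    return np.stack([u, v], axis=1)

def render_heatmaps(points_2d, size=128, sigma=2.0):
    """Condense marker positions into 2D imagery: one Gaussian-blob channel per marker."""
    ys, xs = np.mgrid[0:size, 0:size]
    maps = np.empty((len(points_2d), size, size))
    for i, (u, v) in enumerate(points_2d):
        maps[i] = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2 * sigma ** 2))
    return maps

def decode_heatmaps(maps):
    """Recover each marker's 2D position as the per-channel peak
    (a stand-in for what the trained CNN would output)."""
    flat = maps.reshape(len(maps), -1).argmax(axis=1)
    vs, us = np.unravel_index(flat, maps.shape[1:])
    return np.stack([us, vs], axis=1).astype(float)

# Three hypothetical markers on a hand, ~0.5 m from the camera.
markers = np.array([[0.01, 0.02, 0.50],
                    [-0.03, 0.00, 0.55],
                    [0.02, -0.04, 0.60]])
pts = project_markers(markers)
heat = render_heatmaps(pts)
decoded = decode_heatmaps(heat)  # matches pts to within one pixel
```

In a real system the heatmaps would serve as training targets: the network ingests camera frames and learns to emit these per-marker channels, after which decoding the peaks yields marker positions with no physical markers present.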