25 September 2022

Quest Headset Supports Full Body Tracking

Meta researchers are transferring the principle of hand tracking to the whole body: QuestSim can realistically animate a full-body avatar using only the sensor data from the headset and the two controllers. Instead of capturing headset and controller data together with matching body movements from scratch, the team trained the QuestSim AI on artificially generated sensor data, simulating the movements of the headset and controllers based on eight hours of motion-capture clips of 172 people. The clips included 130 minutes of walking, 110 minutes of jogging, 80 minutes of casual conversation with gestures, 90 minutes of whiteboard discussion, and 70 minutes of balancing. Training the avatars in simulation with reinforcement learning took about two days.
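To make the data-generation step concrete, here is a minimal sketch of how synthetic headset and controller signals could be derived from existing motion-capture clips. The clip format, joint names, noise model, and array shapes are illustrative assumptions, not details from Meta's paper.

```python
import numpy as np

# Illustrative sketch only: derive synthetic "headset + controller"
# signals from a motion-capture clip, as the article describes.
# Joint names, the clip format, and the noise model are assumptions.

def synthetic_sensor_stream(clip, noise_std=0.005, seed=0):
    """clip: dict mapping a joint name to a (T, 7) array holding
    position (x, y, z) and orientation quaternion (w, x, y, z)
    for each of T frames."""
    rng = np.random.default_rng(seed)
    streams = []
    # The headset roughly tracks the head; the controllers roughly
    # track the wrists. A real pipeline would apply calibrated device
    # offsets and a measured sensor-noise model; Gaussian jitter on
    # the positions serves as a simple stand-in here.
    for joint in ("head", "left_wrist", "right_wrist"):
        track = clip[joint].astype(float).copy()
        track[:, :3] += rng.normal(scale=noise_std, size=track[:, :3].shape)
        streams.append(track)
    # One (T, 21) observation stream: 3 devices x 7 values per frame.
    return np.concatenate(streams, axis=1)
```

Paired with the original full-body poses from the same clips, streams like this would yield (sparse sensor input, target motion) training examples without any new capture sessions.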

After training, QuestSim can recognize which movement a person is performing from real headset and controller data alone. It can even predict the motion of body parts for which no real-time sensor data exists, such as the legs, because those movements were part of the synthetic motion-capture training set. To keep the movements plausible, the avatar is also governed by a physics simulator.

QuestSim works for people of different sizes. However, if the avatar's proportions differ from those of the real person, the animation suffers: a tall avatar driven by a short person, for example, walks hunched over. The researchers still see potential for optimization here. Meta's research team also shows that the headset's sensor data alone, combined with AI prediction, is sufficient for a believable and physically correct animated full-body avatar.
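As a rough illustration of the runtime side, the sketch below shows a policy mapping each sparse sensor observation to full-body joint targets that a physics simulator then enforces. The policy, simulator interface, and dimensions are hypothetical placeholders, not Meta's actual system.

```python
import numpy as np

# Hypothetical runtime loop: a trained policy turns each sparse
# headset/controller observation into full-body joint targets, and a
# physics simulator keeps the resulting motion plausible. Everything
# here is a placeholder for illustration.

class StubSimulator:
    """Stand-in for a physics engine; stores the last joint targets."""
    def __init__(self, action_dim=69):
        self._pose = np.zeros(action_dim)

    def step(self, targets):
        # A real simulator would integrate dynamics under gravity,
        # contacts, and joint limits instead of copying the targets.
        self._pose = targets

    def pose(self):
        return self._pose

class StubPolicy:
    """Stand-in for the learned network (here a random linear map)."""
    def __init__(self, obs_dim=21, action_dim=69, seed=0):
        self.w = np.random.default_rng(seed).normal(
            scale=0.01, size=(obs_dim, action_dim))

    def act(self, obs):
        return obs @ self.w

def animate(policy, simulator, sensor_frames):
    for obs in sensor_frames:        # per-frame headset+controller data
        targets = policy.act(obs)    # predicted full-body joint targets
        simulator.step(targets)      # physics keeps the motion plausible
        yield simulator.pose()       # full-body avatar pose for rendering
```

For example, `list(animate(StubPolicy(), StubSimulator(), np.zeros((100, 21))))` would produce 100 placeholder avatar poses from 100 frames of sensor input.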

More information:

https://mixed-news.com/en/meta-shows-stunning-full-body-tracking-only-via-quest-headset/