Oculus have released a dedicated Unity plugin which automatically transforms a stream of spoken audio into virtual character lip movements. It seems that the substantial investment in research and development at Oculus over the last couple of years is leading to some welcome, if somewhat unexpected, developments coming out of their labs.
At Unity’s Vision VR/AR Summit 2016, the company unveiled a lip sync plugin dedicated to producing lifelike avatar mouth animations generated by analysing an audio stream. The new plugin for the Unity engine analyses a canned or live audio stream, such as live voice chat captured from a microphone, and converts it into matching lip animations for an in-world avatar.
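To illustrate the idea, here is a minimal sketch in Unity C#. It is not the plugin's actual API: the AvatarLipSync class name, the blend shape index, and the crude loudness (RMS) estimate standing in for real viseme analysis are all assumptions for illustration. It captures live microphone audio, the same kind of input the plugin can consume, and drives a single mouth blend shape from the signal's volume.

```csharp
using UnityEngine;

// Hypothetical sketch, not the Oculus plugin's API: drive a single
// "mouth open" blend shape from live microphone loudness. A real
// lip sync analysis would produce per-viseme weights instead.
public class AvatarLipSync : MonoBehaviour
{
    public SkinnedMeshRenderer mouthMesh;   // avatar mesh with a mouth blend shape
    public int mouthBlendShapeIndex = 0;    // assumed index of the mouth-open shape
    const int SampleRate = 16000;           // assumed capture rate
    AudioClip micClip;
    readonly float[] buffer = new float[1024];

    void Start()
    {
        // Record from the default microphone into a looping one-second clip.
        micClip = Microphone.Start(null, true, 1, SampleRate);
    }

    void Update()
    {
        if (micClip == null) return;

        // Read the most recent samples just behind the capture position.
        int pos = Microphone.GetPosition(null) - buffer.Length;
        if (pos < 0) return;
        micClip.GetData(buffer, pos);

        // Crude stand-in for viseme analysis: RMS loudness of the frame.
        float sum = 0f;
        foreach (float s in buffer) sum += s * s;
        float rms = Mathf.Sqrt(sum / buffer.Length);

        // Unity blend shape weights run 0..100.
        float weight = Mathf.Clamp01(rms * 20f) * 100f;
        mouthMesh.SetBlendShapeWeight(mouthBlendShapeIndex, weight);
    }
}
```

A genuine viseme-based analysis, as the Oculus plugin performs, would output weights for many distinct mouth shapes each frame rather than the single open/closed value approximated here.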
More information: