On 5 September 2018, I presented a co-authored paper at the 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games 2018), held in Würzburg, Germany. The paper, titled "A Model for Eye and Head Motion for Virtual Agents", proposed a model for generating head and eye movements during gaze shifts in virtual characters, including eyelid and eyebrow motion.
A user study with 30 participants evaluated the communicative accuracy and perceived naturalness of the model. The results showed that the model communicates gaze targets with an accuracy closely matching that of a human confederate. The implementation can be used as-is in applications where virtual characters act as idle bystanders or observers, or it can be paired with a lip synchronization solution.