The ability to capture the behavior of animals is critical for neuroscience, ecology, and many other fields. Cameras are ideal for capturing fine-grained behavior, but developing computer vision techniques to extract that behavior is challenging, even though the task seems effortless for our own visual system. One key aspect of quantifying animal behavior is pose estimation. In a lab setting, pose estimation can be assisted by placing markers on the animal's body, as in the motion-capture techniques used in movies. But as one can imagine, getting animals to wear specialized equipment is no easy task, and it is downright impossible, not to mention unethical, in the wild.
For this reason, researchers at EPFL have been pioneering markerless tracking for animals, using deep learning to teach computers to perform pose estimation without the need for physical or virtual markers. In 2018, their teams released DeepLabCut, an open-source, deep-learning animal pose estimation package that performs markerless motion capture of animals. The software has since gained significant traction in the life sciences: over 350,000 downloads and nearly 1,400 citations. In 2020, the Mathis teams released DeepLabCut-Live!, which allows researchers to give rapid feedback to the animals they are studying.
More information:
https://actu.epfl.ch/news/time-to-get-social-tracking-animals-with-deep-lear/