vmTracking: Virtual Markers Overcome Occlusion and Crowding in Multi-Animal Pose Tracking

This paper is a preprint and has not been certified by peer review.


Authors

Azechi, H.; Takahashi, S.

Abstract

Overcoming occlusion and crowding in multi-animal tracking remains challenging. To address these problems, we introduce virtual marker tracking (vmTracking). This method combines markerless multi-animal pose estimation, such as multi-animal DeepLabCut (maDLC) or Social LEAP Estimates Animal Poses (SLEAP), with single-animal tracking techniques such as DeepLabCut and LEAP. First, maDLC or SLEAP is used to create videos in which the tracking results are labeled as "virtual markers." These virtual markers then serve to maintain consistent animal identities across frames as the animals are tracked with single-animal tracking. vmTracking substantially reduces both the manual corrections and the number of annotated frames required for training, and it handles occlusion and crowding more effectively than conventional markerless multi-animal tracking, which relies on motion-state estimation. These findings have the potential to substantially enhance the precision and reliability of tracking methods used to analyze complex naturalistic and social behaviors in animals.
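The core idea of the abstract, using per-frame multi-animal predictions as "virtual markers" to keep animal identities consistent across frames, can be illustrated with a minimal sketch. The function below is not the authors' code and does not use maDLC or SLEAP; it is a hypothetical, self-contained example of one simple identity-linking strategy (greedy nearest-neighbour matching of each frame's unordered detections to the previous frame's positions), assuming each animal is reduced to a single (x, y) point per frame.

```python
# Conceptual sketch only: link unordered per-frame detections into
# identity-consistent tracks by greedy nearest-neighbour matching.
# In vmTracking, this consistency role is played by virtual markers
# produced by maDLC/SLEAP; this toy version uses raw coordinates.

def link_identities(frames):
    """frames: list of per-frame detection lists, each a list of (x, y).
    Returns one track per animal: a list of (x, y), one entry per frame."""
    # seed one track per detection in the first frame
    tracks = [[pt] for pt in frames[0]]
    for detections in frames[1:]:
        remaining = list(detections)
        for track in tracks:
            px, py = track[-1]
            # assign the unclaimed detection closest to this track's last position
            best = min(remaining, key=lambda p: (p[0] - px) ** 2 + (p[1] - py) ** 2)
            remaining.remove(best)
            track.append(best)
    return tracks
```

For example, two animals approaching each other along a line stay correctly separated as long as the frame-to-frame motion is smaller than the gap between them; in heavy crowding this greedy scheme can swap identities, which is exactly the failure mode the virtual-marker stage is meant to prevent.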
