Can Self-Supervised Neural Networks Pre-Trained on Human Speech Distinguish Animal Callers?

This paper is a preprint and has not been certified by peer review.

Authors

Eklavya Sarkar, Mathew Magimai.-Doss

Abstract

Self-supervised learning (SSL) models use only the intrinsic structure of a given signal, independent of its acoustic domain, to extract essential information from the input to an embedding space. This implies that the utility of such representations is not limited to modeling human speech alone. Building on this understanding, this paper explores the cross-transferability of SSL neural representations learned from human speech to analyze bio-acoustic signals. We conduct a caller discrimination analysis and a caller detection study on Marmoset vocalizations using eleven SSL models pre-trained with various pretext tasks. The results show that the embedding spaces carry meaningful caller information and can successfully distinguish the individual identities of Marmoset callers without fine-tuning. This demonstrates that representations pre-trained on human speech can be effectively applied to the bio-acoustics domain, providing valuable insights for future investigations in this field.
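
The pipeline the abstract describes, extracting embeddings from a frozen speech SSL model and probing them for caller identity, can be illustrated with a minimal sketch. The sketch below assumes a HuggingFace WavLM checkpoint (`microsoft/wavlm-base`) as one example of an SSL model pre-trained on human speech, mean-pooling over frame outputs, and a logistic-regression probe as the caller classifier; the file names, labels, pooling strategy, and classifier choice are illustrative assumptions, not the paper's exact setup (the paper evaluates eleven SSL models).

```python
# A minimal sketch (not the authors' code) of a frozen-embedding caller-ID pipeline:
# extract representations from a speech SSL model and fit a simple linear probe
# to discriminate marmoset callers. Paths, labels, and model choice are assumptions.

import torch
import torchaudio
from transformers import AutoFeatureExtractor, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "microsoft/wavlm-base"  # example checkpoint; any speech SSL model fits here

extractor = AutoFeatureExtractor.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()  # kept frozen: no fine-tuning, matching the setup the abstract describes

def embed(path: str) -> torch.Tensor:
    """Return one utterance-level embedding by mean-pooling frame-level outputs."""
    wav, sr = torchaudio.load(path)
    wav = torchaudio.functional.resample(wav, sr, 16_000).mean(dim=0)  # mono, 16 kHz
    inputs = extractor(wav.numpy(), sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        frames = model(**inputs).last_hidden_state  # shape (1, T, D)
    return frames.mean(dim=1).squeeze(0)            # shape (D,)

# Hypothetical labeled data: marmoset calls and the identities of their callers.
train_files = ["call_0001.wav", "call_0002.wav"]
train_callers = ["caller_A", "caller_B"]

X = torch.stack([embed(f) for f in train_files]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, train_callers)

# Predict the caller of a new, unseen vocalization.
print(probe.predict(embed("call_0003.wav").numpy().reshape(1, -1)))
```

Because the SSL model stays frozen, only the lightweight probe is trained, which is what makes this a test of whether the pre-trained embedding space itself carries caller information.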
