Journal of the Acoustical Society of America
Whale sounds, Underwater acoustics--Measurement--Data processing
A time-frequency contour extraction and classification algorithm was created to analyze humpback whale vocalizations. The algorithm automatically extracted contours of whale vocalization units by searching for gray-level discontinuities in the spectrogram images. The unit-to-unit similarity was quantified by cross-correlating the contour lines. A library of distinctive humpback units was then generated by applying an unsupervised, cluster-based learning algorithm. The purpose of this study was to provide a fast and automated feature selection tool to describe the vocal signatures of animal groups. This approach could benefit a variety of applications such as species description, identification, and evolution of song structures. The algorithm was tested on humpback whale song data recorded at various locations in Hawaii from 2002 to 2003. Results presented in this paper showed a low probability of false alarm (0%–4%) in noisy environments containing small vessels and snapping shrimp. The classification algorithm was tested on a controlled set of 30 units forming six unit types, and all the units were correctly classified. In a case study on humpback data collected in the Auau Channel, Hawaii, in 2002, the algorithm extracted 951 units, which were classified into 12 distinctive types.
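The two core steps the abstract describes, tracing a time-frequency contour for each vocalization unit and scoring unit-to-unit similarity by cross-correlating contour lines, can be illustrated with a minimal sketch. This is not the authors' implementation: the gray-level discontinuity search is replaced here by a simple per-frame spectral-peak trace, and the function names (`extract_contour`, `contour_similarity`) are hypothetical.

```python
import numpy as np

def extract_contour(spectrogram):
    """Trace a contour as the peak-energy frequency bin in each time
    frame (rows = frequency bins, columns = time frames). A simplified
    stand-in for the paper's gray-level discontinuity search."""
    return np.argmax(spectrogram, axis=0).astype(float)

def contour_similarity(c1, c2):
    """Peak of the normalized cross-correlation between two
    mean-removed contour lines; 1.0 indicates a perfect match."""
    a = c1 - c1.mean()
    b = c2 - c2.mean()
    denom = np.sqrt(np.sum(a**2) * np.sum(b**2))
    if denom == 0.0:
        return 1.0 if np.allclose(c1, c2) else 0.0
    return float(np.max(np.correlate(a, b, mode="full")) / denom)

# Illustrative contours: an upsweep compared with itself and a downsweep.
t = np.arange(64, dtype=float)
up, down = t, t[::-1]
```

In the paper's pipeline, pairwise similarity scores like these would feed the unsupervised clustering stage that builds the library of distinctive unit types.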
Ou, H., Au, W. W., Zurk, L. M., & Lammers, M. O. (2013). Automated extraction and classification of time-frequency contours in humpback vocalizations. The Journal of the Acoustical Society of America, 133(1), 301-310.