Deep Learning Applied to Animal Linguistics
(FAU Funds)
Project leader:
Project members:
Start date: April 1, 2018
End date: April 1, 2022
Acronym: DeepAL
Funding source: FAU Funds
URL:
Abstract
Deep Learning Applied to Animal Linguistics, in particular the analysis of underwater audio recordings of marine animals (killer whales):
For marine biologists, the interpretation and understanding of underwater audio recordings is essential. Such recordings allow conclusions about the behaviour, communication, and social interactions of marine animals. Despite a large number of biological studies on orca vocalizations, it is still difficult to recognize structure or semantic/syntactic significance in orca signals, and thus to derive any language and/or behavioural patterns. Due to a lack of techniques and computational tools, hundreds of hours of underwater recordings still have to be verified manually by marine biologists in order to detect potential orca vocalizations. In a post-processing step, these identified orca signals are analyzed and categorized. One of the main goals is to provide a robust method that automatically detects orca calls within underwater audio recordings. Robust detection of orca signals is the baseline for any further and deeper analysis. Call type identification and classification based on pre-segmented signals can be used to derive semantic and syntactic patterns. Combined with the associated situational video recordings and behaviour descriptions (provided by several researchers on site), this can yield information about communication (a kind of language model) and behaviours (e.g. hunting, socializing). Furthermore, orca signal detection can be used in conjunction with localization software to give researchers in the field a more efficient way of searching for the animals, as well as supporting individual recognition.
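To illustrate the detection step described above, the following is a minimal sketch of flagging candidate call segments in an audio signal. It is only a toy energy-threshold detector, not the project's actual method (ORCA-SPOT uses a deep neural network on spectrograms); the function name, parameters, and synthetic test signal are illustrative assumptions.

```python
import numpy as np

def detect_calls(signal, frame_len=1024, hop=512, threshold_db=10.0):
    """Flag frames whose energy exceeds the median noise floor by threshold_db.

    Toy stand-in for a learned detector: real detection in the project is done
    with a CNN on spectrograms, not a fixed energy threshold.
    """
    energies = []
    window = np.hanning(frame_len)
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        energies.append(np.sum(frame ** 2))
    energies_db = 10 * np.log10(np.asarray(energies) + 1e-12)
    noise_floor = np.median(energies_db)  # assume most frames are background noise
    return energies_db > noise_floor + threshold_db

# Synthetic example: 2 s of low-level noise with a 0.3 s tone burst ("call").
sr = 16000
t = np.arange(2 * sr) / sr
signal = 0.01 * np.random.default_rng(0).standard_normal(len(t))
burst = (t > 0.85) & (t < 1.15)
signal[burst] += np.sin(2 * np.pi * 4000 * t[burst])

mask = detect_calls(signal)  # boolean array, True where a call-like frame was found
```

A real pipeline would pass the flagged segments on to the call type classification stage described above.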
For more information about the DeepAL project please contact christian.bergler@fau.de.
Publications
ORCA-SPOT: An Automatic Killer Whale Sound Detection Toolkit Using Deep Learning
In: Scientific Reports 9 (2019), pp. 1-17
ISSN: 2045-2322
DOI: 10.1038/s41598-019-47335-w
BibTeX: Download
Deep Learning for Orca Call Type Identification – A Fully Unsupervised Approach
20th Annual Conference of the International Speech Communication Association: Crossroads of Speech and Language, INTERSPEECH 2019 (Graz, September 15, 2019 - September 19, 2019)
In: Gernot Kubin, Thomas Hain, Björn Schuller, Dina El Zarka, Petra Hödl (ed.): Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2019
DOI: 10.21437/Interspeech.2019-1857
BibTeX: Download
Deep Representation Learning for Orca Call Type Classification
22nd International Conference on Text, Speech, and Dialogue, TSD 2019 (Ljubljana, September 11, 2019 - September 13, 2019)
In: Kamil Ekštein (ed.): Text, Speech, and Dialogue, 22nd International Conference, TSD 2019, Ljubljana, Slovenia, September 11–13, 2019, Proceedings 2019
DOI: 10.1007/978-3-030-27947-9_23
BibTeX: Download
Segmentation, Classification, and Visualization of Orca Calls Using Deep Learning
International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (Brighton, May 12, 2019 - May 17, 2019)
In: ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2019
DOI: 10.1109/ICASSP.2019.8683785
URL: https://ieeexplore.ieee.org/abstract/document/8683785
BibTeX: Download
ORCA-CLEAN: A Deep Denoising Toolkit for Killer Whale Communication
21st Annual Conference of the International Speech Communication Association: Cognitive Intelligence for Speech Processing, INTERSPEECH 2020 (Shanghai, China, October 25, 2020 - October 29, 2020)
In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2020 2020
DOI: 10.21437/Interspeech.2020-1316
BibTeX: Download