Hendrik Schröter, M. Sc.

Academic career:
  • Since 02/2019:
    Ph.D. researcher in the speech processing group at the pattern recognition lab with a research focus on signal processing and speech enhancement.
  • 04/2016 – 12/2018:
    M.Sc. student (computer science) at Friedrich-Alexander-Universität Erlangen-Nürnberg, with focus on pattern recognition.
  • 10/2011 – 09/2014:
    B.Eng. student (mechatronic engineering) at Duale Hochschule Mannheim, working student at Schenck Process GmbH.

Professional career:

  • 10/2014 – 04/2016:
    Software developer at Schenck Process GmbH, Darmstadt, Germany.
    Tasks included, among others, database development, processing of measurement data, design and implementation of custom interfaces to customer data stores, and UI development.

2019

  • Deep Learning based Noise Reduction for Hearing Aids

    (Third Party Funds Single)

    Term: February 1, 2019 - January 31, 2022
    Funding source: Industry

    Reduction of unwanted environmental noise is an important feature of today’s hearing aids, which is why noise reduction is included in almost every commercially available device. The majority of these algorithms, however, are restricted to the reduction of stationary noises. Due to the large number of different background noises in daily situations, it is hard to heuristically cover the complete solution space of noise reduction schemes. Deep learning-based algorithms pose a possible solution to this dilemma; however, they sometimes lack robustness and applicability under the strict constraints of hearing aids.
    In this project we investigate several deep learning-based methods for noise reduction under the constraints of modern hearing aids. This involves low-latency processing as well as employing a hearing-instrument-grade filter bank. Another important aim is the robustness of the developed methods; therefore, they will be applied to real-world noise signals recorded with hearing instruments.
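    As a rough illustration of the mask-based processing chain common in deep learning noise reduction, the sketch below applies a per-bin gain in the short-time frequency domain. This is a generic, simplified example, not the project's actual pipeline: the crude Wiener-style gain derived from a noise-floor estimate stands in for the mask that a trained neural network would predict, and the plain Hann-window STFT stands in for a hearing-instrument-grade filter bank.

    ```python
    import numpy as np

    def stft(x, frame_len=256, hop=128):
        """Naive STFT with a Hann window (illustration only)."""
        win = np.hanning(frame_len)
        n_frames = 1 + (len(x) - frame_len) // hop
        frames = np.stack([x[i*hop:i*hop+frame_len] * win for i in range(n_frames)])
        return np.fft.rfft(frames, axis=-1)

    def istft(X, frame_len=256, hop=128):
        """Overlap-add inverse of the STFT above."""
        win = np.hanning(frame_len)
        frames = np.fft.irfft(X, n=frame_len, axis=-1) * win
        out = np.zeros(hop * (len(frames) - 1) + frame_len)
        norm = np.zeros_like(out)
        for i, f in enumerate(frames):
            out[i*hop:i*hop+frame_len] += f
            norm[i*hop:i*hop+frame_len] += win**2
        return out / np.maximum(norm, 1e-8)

    def denoise(x, frame_len=256, hop=128):
        """Mask-based denoising. The mask here is a simple Wiener-style gain
        computed from a percentile noise-floor estimate; in a deep learning
        system, a network would predict this mask per time-frequency bin."""
        X = stft(x, frame_len, hop)
        power = np.abs(X) ** 2
        noise = np.percentile(power, 20, axis=0, keepdims=True)  # crude noise floor
        mask = np.maximum(1.0 - noise / np.maximum(power, 1e-8), 0.0)
        return istft(X * mask, frame_len, hop)
    ```

    For a real hearing aid, the frame length and hop would be chosen to meet the low-latency requirement mentioned above, since the algorithmic delay of such a scheme is dominated by the frame length.
    
    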

2018

  • Deep Learning Applied to Animal Linguistics

    (FAU Funds)

    Term: April 1, 2018 - April 1, 2022
    Deep Learning Applied to Animal Linguistics, in particular the analysis of underwater audio recordings of marine animals (killer whales):

    For marine biologists, the interpretation and understanding of underwater audio recordings is essential. Based on such recordings, conclusions about the behaviour, communication, and social interactions of marine animals can be drawn. Despite a large number of biological studies on orca vocalizations, it is still difficult to recognize a structure or a semantic/syntactic significance of orca signals that would allow language and/or behavioural patterns to be derived. Due to a lack of techniques and computational tools, hundreds of hours of underwater recordings are still verified manually by marine biologists in order to detect potential orca vocalizations; in a post-processing step, the identified orca signals are analyzed and categorized.

    One of the main goals of this project is to provide a robust method which automatically detects orca calls within underwater audio recordings. A robust detection of orca signals is the baseline for any further and deeper analysis. Call type identification and classification based on pre-segmented signals can then be used to derive semantic and syntactic patterns. Combined with the associated situational video recordings and behaviour descriptions (provided by several researchers on site), these patterns can yield information about communication (a kind of language model) and behaviours (e.g. hunting, socializing). Furthermore, orca signal detection can be used in conjunction with localization software to give researchers in the field a more efficient way of searching for the animals, as well as to support individual recognition.

    For more information about the DeepAL project please contact christian.bergler@fau.de.
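    The detection stage described above can be sketched in a very reduced form: segment a recording into frames, compute a spectrogram, and flag frames whose in-band energy rises well above the recording's baseline. This is a hypothetical stand-in for illustration only; in the project, a trained neural network would classify each spectrogram excerpt instead of the simple energy threshold used here, and the 500 Hz to 10 kHz band is merely an assumed rough range for orca calls.

    ```python
    import numpy as np

    def detect_calls(audio, sr, frame_len=1024, hop=512, threshold_db=10.0):
        """Flag frames whose band energy rises well above the recording's
        median level. A real detector would replace the threshold with a
        trained classifier (e.g. a CNN on spectrogram excerpts)."""
        win = np.hanning(frame_len)
        n = 1 + (len(audio) - frame_len) // hop
        spec = np.stack([np.abs(np.fft.rfft(audio[i*hop:i*hop+frame_len] * win))
                         for i in range(n)])
        # Assumed band of interest for orca vocalizations.
        freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
        band = (freqs >= 500) & (freqs <= 10_000)
        energy_db = 10 * np.log10(spec[:, band].sum(axis=1) + 1e-12)
        active = energy_db > np.median(energy_db) + threshold_db
        # Merge consecutive active frames into (start, end) times in seconds.
        times = np.arange(n) * hop / sr
        segments, start = [], None
        for i, a in enumerate(active):
            if a and start is None:
                start = times[i]
            elif not a and start is not None:
                segments.append((start, times[i]))
                start = None
        if start is not None:
            segments.append((start, times[-1]))
        return segments
    ```

    The returned segments would then feed the downstream steps described above: call type classification, and alignment with video recordings and behaviour descriptions.
    
    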
