Research Projects

Current Projects

  • Improving multi-modal quantitative SPECT with Deep Learning approaches to optimize image reconstruction and extraction of medical information
    (Non-FAU Project)
    Term: April 1, 2019 - April 30, 2022
    This project develops deep learning approaches for multi-modal quantitative SPECT that optimize image reconstruction and the extraction of medical information; such improvements include noise reduction and artifact removal in the acquired SPECT data.
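
    As a toy illustration of the noise problem (not the project's actual pipeline), the sketch below simulates Poisson count noise on an invented one-dimensional projection profile and applies a simple moving-average smoother as the classical baseline a learned denoiser would aim to beat; the phantom and all parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SPECT projection: a smooth activity profile (hypothetical phantom).
true_proj = np.exp(-((np.arange(64) - 32) ** 2) / 200.0) * 100.0

# SPECT counts are Poisson-distributed, so low-count data is noisy.
noisy_proj = rng.poisson(true_proj).astype(float)

def moving_average(x, k=5):
    """Simple smoothing baseline that a learned denoiser would replace."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

denoised = moving_average(noisy_proj)

# Averaging k frames cuts the Poisson noise variance by roughly 1/k,
# at the cost of a small bias where the profile is curved.
mse_noisy = float(np.mean((noisy_proj - true_proj) ** 2))
mse_denoised = float(np.mean((denoised - true_proj) ** 2))
```

A learned reconstruction would aim for the same variance reduction without the smoothing bias.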
  • Deep Learning based Noise Reduction for Hearing Aids
    (Third Party Funds Single)
    Term: February 1, 2019 - January 31, 2022
    Funding source: Industry

    Reduction of unwanted environmental noise is an important feature of
    today’s hearing aids, which is why noise reduction is nowadays included
    in almost every commercially available device. The majority of these
    algorithms, however, is restricted to the reduction of stationary
    noises. Due to the large number of different background noises in daily
    situations, it is hard to heuristically cover the complete solution
    space of noise reduction schemes. Deep learning-based algorithms are a
    possible solution to this dilemma; however, they sometimes lack
    robustness and applicability under the strict constraints of hearing
    aids.
    In this project we investigate several deep learning-based methods for
    noise reduction under the constraints of modern hearing aids. This
    involves low-latency processing as well as the use of a hearing
    instrument-grade filter bank. Another important aim is the robustness
    of the developed methods; therefore, the methods will be applied to
    real-world noise signals recorded with hearing instruments.
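
    The low-latency constraint can be illustrated with a minimal frame-wise spectral-gating sketch (pure NumPy, all parameters invented): short 4 ms frames keep algorithmic latency low. A trained network would replace the hand-crafted mask, and a real device would use the hearing instrument-grade filter bank rather than a plain FFT:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 16000          # sample rate (Hz)
frame = 64          # 4 ms frames -> low algorithmic latency

# Hypothetical input: a tone standing in for speech, plus stationary noise.
t = np.arange(fs) / fs
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.1 * rng.standard_normal(fs)

def spectral_gate(x, frame, floor=0.05):
    """Frame-wise magnitude gating: low-magnitude (noise-dominated) bins
    are attenuated more than strong signal bins. A trained network would
    output this mask instead of the hand-crafted rule below."""
    out = np.zeros_like(x)
    for start in range(0, len(x) - frame + 1, frame):
        spec = np.fft.rfft(x[start:start + frame])
        mag = np.abs(spec)
        mask = mag / (mag + floor * np.sqrt(frame))  # crude Wiener-like mask
        out[start:start + frame] = np.fft.irfft(mask * spec, n=frame)
    return out

enhanced = spectral_gate(noisy, frame)
```

Because the mask is always below one, the gate can only attenuate; the design question the project addresses is how to attenuate the right bins robustly across non-stationary noises.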

  • Digitalization in clinical settings using graph databases
    (Non-FAU Project)
    Term: October 1, 2018 - October 1, 2021
    Funding source: Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie (since 2018)
    In clinical settings, different kinds of data are stored in different
    systems. These data are very heterogeneous, but still highly
    interconnected. Graph databases are a good fit for this kind of data:
    they contain heterogeneous "data nodes" which can be connected to each
    other. The basic question is whether and how clinical data can be used
    in a graph database and, most importantly, how clinical staff can
    profit from this approach. Possible scenarios are a graphical user
    interface that gives clinical staff easier access to required
    information, or an interface for evaluation and analysis to answer
    more complex questions (e.g., "Were there patients similar to this
    patient? How were they treated?").
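
    The idea can be sketched with a minimal in-memory graph (all node and edge names below are invented for illustration; a real deployment would use a graph database such as Neo4j with a query language like Cypher):

```python
# Heterogeneous nodes (patients, diagnoses, treatments) joined by
# labelled edges -- the shape of data a clinical graph database holds.
nodes = {
    "p1": {"type": "Patient", "age": 64},
    "p2": {"type": "Patient", "age": 61},
    "d1": {"type": "Diagnosis", "name": "stroke"},
    "t1": {"type": "Treatment", "name": "speech therapy"},
}
edges = [
    ("p1", "HAS_DIAGNOSIS", "d1"),
    ("p2", "HAS_DIAGNOSIS", "d1"),
    ("p2", "RECEIVED", "t1"),
]

def neighbors(node, relation):
    """Follow outgoing edges of one relation type."""
    return [dst for src, rel, dst in edges if src == node and rel == relation]

def similar_patients(patient):
    """Patients sharing at least one diagnosis -- the 'similar patient' query."""
    shared = set(neighbors(patient, "HAS_DIAGNOSIS"))
    return [n for n, attrs in nodes.items()
            if attrs["type"] == "Patient" and n != patient
            and shared & set(neighbors(n, "HAS_DIAGNOSIS"))]

def treatments_of(patients):
    """How were those similar patients treated?"""
    return [nodes[t]["name"] for p in patients for t in neighbors(p, "RECEIVED")]

sim = similar_patients("p1")
print(sim, treatments_of(sim))  # ['p2'] ['speech therapy']
```

The two-hop traversal (patient → shared diagnosis → other patients → their treatments) is exactly the kind of query that is awkward in relational systems but natural in a graph model.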
  • Modelling the progression of neurological diseases
    (Third Party Funds Group – Sub project)
    Overall project: Training Network on Automatic Processing of PAthological Speech
    Term: May 1, 2018 - May 1, 2021
    Funding source: Innovative Training Networks (ITN)
    The goal is to develop speech technology that allows unobtrusive
    monitoring of many kinds of neurological diseases. The state of a
    patient can degrade slowly between medical check-ups. We want to track
    the state of a patient unobtrusively, without the feeling of constant
    supervision; at the same time, the privacy of the patient has to be
    respected. We will concentrate on Parkinson's disease (PD) and thus on
    acoustic cues of changes. The algorithms should run on a smartphone,
    track acoustic changes during regular phone conversations over time,
    and thus have to be low-resource. No speech recognition will be used,
    and only some analysis parameters of the conversation are stored on
    the phone and transferred to the server.
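
    A minimal sketch of what such low-resource, privacy-preserving tracking might look like (synthetic signal; the feature choices and frame sizes are invented, not the project's actual parameters): only per-frame summaries, never the speech itself, would leave the device:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 8000  # telephone-band sample rate keeps processing low-resource

# Hypothetical one-second excerpt: an amplitude-modulated tone standing
# in for voiced speech, plus a little noise.
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 150 * t) * (1 + 0.3 * np.sin(2 * np.pi * 3 * t))
signal += 0.01 * rng.standard_normal(fs)

def frame_features(x, frame=160):  # 20 ms frames
    """Cheap per-frame descriptors (energy, zero-crossing rate).
    Only these summaries, not the audio, would be stored/transmitted."""
    feats = []
    for start in range(0, len(x) - frame + 1, frame):
        seg = x[start:start + frame]
        energy = float(np.mean(seg ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(seg))) > 0))
        feats.append((energy, zcr))
    return feats

feats = frame_features(signal)
summary = {  # per-call summary statistics tracked over months
    "mean_energy": float(np.mean([f[0] for f in feats])),
    "mean_zcr": float(np.mean([f[1] for f in feats])),
}
```

Longitudinal drift in such summaries, rather than any single measurement, is what would indicate a slowly degrading state.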
  • Deep Learning Applied to Animal Linguistics
    (FAU Funds)
    Term: April 1, 2018 - April 1, 2022
    Deep Learning Applied to Animal Linguistics in particular the analysis of underwater audio recordings of marine animals (killer whales):

    For marine biologists, the interpretation and understanding of underwater audio recordings is essential. Based on such recordings, conclusions about behaviour, communication, and social interactions of marine animals can be drawn. Despite a large number of biological studies on orca vocalizations, it is still difficult to recognize a structure or semantic/syntactic significance in orca signals that would allow language and/or behavioural patterns to be derived. Due to a lack of techniques and computational tools, hundreds of hours of underwater recordings are still verified manually by marine biologists in order to detect potential orca vocalizations; in a post-processing step, these identified orca signals are analyzed and categorized. One of the main goals is to provide a robust method which automatically detects orca calls within underwater audio recordings. A robust detection of orca signals is the baseline for any further and deeper analysis. Call type identification and classification based on pre-segmented signals can be used to derive semantic and syntactic patterns. Combined with the associated situational video recordings and behaviour descriptions (provided by several researchers on site), this can yield potential information about communication (a kind of language model) and behaviours (e.g., hunting, socializing). Furthermore, orca signal detection can be used in conjunction with localization software to provide researchers in the field with a more efficient way of searching for the animals, as well as individual recognition.

    For more information about the DeepAL project please contact christian.bergler@fau.de.
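
    The detection task can be illustrated with a crude energy-based detector on synthetic audio (signal, frame length, and threshold are all invented; the actual DeepAL detector is a learned model trained on hydrophone recordings):

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 4000

# Synthetic stand-in for an underwater recording: background noise with a
# one-second tonal burst (the "call") inserted between t = 1 s and 2 s.
audio = 0.05 * rng.standard_normal(4 * fs)
t = np.arange(fs) / fs
audio[fs:2 * fs] += 0.5 * np.sin(2 * np.pi * 800 * t)

def detect_calls(x, fs, frame=400, threshold=4.0):
    """Flag frames whose energy exceeds `threshold` times the median frame
    energy -- a hand-crafted stand-in for the learned detector. Returns
    (start_s, end_s) tuples of flagged frames."""
    energies = np.array([np.mean(x[i:i + frame] ** 2)
                         for i in range(0, len(x) - frame + 1, frame)])
    ref = np.median(energies)  # robust noise-floor estimate
    return [(i * frame / fs, (i + 1) * frame / fs)
            for i, e in enumerate(energies) if e > threshold * ref]

segments = detect_calls(audio, fs)
```

Real recordings defeat such thresholds (boat noise, overlapping calls, varying range), which is why a robust learned detector is the project's baseline for all further analysis.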

  • Development of multi-modal, multi-scale imaging framework for the early diagnosis of breast cancer
    (FAU Funds)
    Term: March 1, 2017 - June 30, 2020
    Breast cancer is the leading cause of cancer-related deaths in women and the second most common cancer worldwide. The development and progression of breast cancer is a dynamic biological and evolutionary process. It involves a composite organ system, with a transcriptome shaped by gene aberrations, epigenetic changes, the cellular biological context, and environmental influences. Breast cancer growth and response to treatment have a number of characteristics that are specific to the individual patient, for example the response of the immune system and the interaction with the neighboring tissue. The overall complexity of breast cancer is the main cause of the current, unsatisfying understanding of its development and of the patient’s therapy response. Although recent precision medicine approaches, including genomic characterization and immunotherapies, have shown clear improvements with regard to prognosis, the right treatment of this disease remains a serious challenge. The vision of the BIG-THERA team is to improve individualized breast cancer diagnostics and therapy, with the ultimate goal of extending the life expectancy of breast cancer sufferers. Our primary contribution in this regard is developing a multi-modal, multi-scale framework for the early diagnosis of the molecular sub-types of breast cancer, in a manner that supplements the clinical diagnostic workflow and enables the early identification of patients compatible with specific immunotherapeutic solutions.

Former Projects from 2017 on

  • Development of a digital therapy tool as an exercise supplement for speech disorders and facial paralysis
    (Third Party Funds Single)
    Term: June 1, 2017 - December 31, 2019
    Funding source: Bundesministerium für Wirtschaft und Technologie (BMWi)
    Dysarthria is an acquired, neurologically caused speech disorder. It primarily affects the coordination and execution of speech movements, but also facial expression. Dysarthria occurs particularly often after a stroke or traumatic brain injury, or with neurological diseases such as Parkinson's. As with all speech therapies, the treatment of dysarthria requires intensive training; lasting effects of dysarthria therapy therefore only appear after an extensive treatment programme over several weeks. So far, however, there are hardly any options for self-monitoring by patients or for therapeutic guidance in a home environment, and feedback to physicians and therapists about therapy success is rather patchy. The DysarTrain project addresses exactly this gap and aims to create an interactive, digital therapy offering for speech training so that patients can perform their exercises at home. In close coordination with physicians, therapists, and patients, suitable therapy content for the treatment of dysarthria is first selected and digitized. In a second step, a therapy platform with appropriate communication, interaction, and supervision functions is built. For carrying out the training, assistance functions and feedback mechanisms are then developed: the programme should automatically report whether an exercise was performed well and what, if anything, can still be improved. An automated evaluation of the therapy data allows physicians and therapists to individualize the form of therapy as simply as possible and to adapt it to the patient's current stage of therapy. This offering is integrated into the treatment process and evaluated together with physicians, therapists, and patients.
  • Digital Pathology - New Approaches to the Automated Image Analysis of Histologic Slides
    (Own Funds)
    Term: January 16, 2017 - January 16, 2020
    The pathologist is still the gold standard in the diagnosis of diseases in tissue slides. Being human, the pathologist can, on the one hand, flexibly adapt to the high morphological and technical variability of histologic slides, but is, on the other hand, of limited objectivity due to cognitive and visual traps. In diverse projects we are applying and validating currently available tools and solutions in digital pathology, but are also developing new solutions in automated image analysis to complement and support the pathologist, especially in areas of quantitative image analysis.
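
    A minimal example of the quantitative image analysis mentioned above (toy image and threshold are invented): thresholding dark, stained objects and counting connected components, the basic step behind tasks like nuclei counting:

```python
import numpy as np

# Toy "slide": dark blobs (stained nuclei) on a bright background.
img = np.full((20, 20), 200, dtype=np.uint8)
img[2:6, 2:6] = 50
img[10:13, 12:16] = 60
img[15:18, 3:5] = 40

def count_blobs(image, threshold=100):
    """Threshold dark objects and count 4-connected components via an
    iterative flood fill -- a minimal quantitative-analysis primitive."""
    mask = image < threshold
    seen = np.zeros_like(mask)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        count += 1                     # new, unvisited component
        stack = [(sy, sx)]
        while stack:
            y, x = stack.pop()
            if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                    and mask[y, x] and not seen[y, x]):
                seen[y, x] = True
                stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

print(count_blobs(img))  # 3 separated blobs
```

Real slides need far more than a fixed threshold (stain normalization, texture, learned segmentation), which is exactly where the automated methods under development come in.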
  • Deep Learning for Multimodal Cardiac MR Image Analysis and Quantification
    (Third Party Funds Single)
    Term: January 1, 2017 - May 1, 2020
    Funding source: Deutscher Akademischer Austauschdienst (DAAD)
    Cardiovascular diseases (CVDs) and other cardiac pathologies are the leading cause of death in Europe and the USA. Timely diagnosis and post-treatment follow-ups are imperative for improving survival rates and delivering high-quality patient care. These steps rely heavily on numerous cardiac imaging modalities, which include CT (computerized tomography), coronary angiography, and cardiac MRI. Cardiac MRI is a non-invasive imaging modality used to detect and monitor cardiovascular diseases. Consequently, quantitative assessment and analysis of cardiac images is vital for diagnosis and devising suitable treatments. The reliability of quantitative metrics that characterize cardiac function, such as myocardial deformation and ventricular ejection fraction, depends heavily on the precision of the heart chamber segmentation and quantification. In this project, we aim to investigate deep learning methods to improve the diagnosis and prognosis of CVDs.
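
    Two quantities from this description can be made concrete: the Dice overlap commonly used to score chamber segmentations against expert annotations, and the ejection fraction derived from end-diastolic and end-systolic volumes. The masks and the voxel volume below are invented toy values, not project data:

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between predicted and reference binary masks -- the
    standard score for chamber segmentation quality (1.0 = perfect)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

def ejection_fraction(edv_mask, esv_mask, voxel_ml=0.5):
    """Ejection fraction (%) from end-diastolic and end-systolic left
    ventricle masks; `voxel_ml` is a made-up volume per voxel."""
    edv = edv_mask.sum() * voxel_ml
    esv = esv_mask.sum() * voxel_ml
    return 100.0 * (edv - esv) / edv

# Toy masks standing in for a network output vs. an expert annotation.
ref = np.zeros((8, 8), dtype=bool); ref[2:6, 2:6] = True    # 16 voxels
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 3:7] = True  # shifted by one

print(dice(pred, ref))  # 12 overlapping voxels -> 0.75

esv = np.zeros((8, 8), dtype=bool); esv[3:5, 3:5] = True    # 4 voxels
print(ejection_fraction(ref, esv))  # (16 - 4) / 16 -> 75.0
```

Because the ejection fraction is a ratio of segmented volumes, even small systematic segmentation errors propagate directly into the clinical metric, which is why segmentation precision is the project's focus.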