Prof. Dr.-Ing. habil. Andreas Maier

Researcher

Department of Computer Science
Chair of Computer Science 5 (Pattern Recognition)

Room: 09.138
Martensstraße 3
91058 Erlangen
Germany

Appointments

By agreement

https://medium.com/@akmaier
ORCID iD: https://orcid.org/0000-0002-9550-5284

Prof. Dr. Andreas Maier was born on November 26, 1980 in Erlangen. He studied Computer Science, graduated in 2005, and received his PhD in 2009. From 2005 to 2009 he worked at the Pattern Recognition Lab at the Computer Science Department of the University of Erlangen-Nuremberg. His main research subject was medical signal processing of speech data. During this period, he developed the first online speech intelligibility assessment tool, PEAKS, which has been used to analyze more than 4,000 patients and control subjects to date.

From 2009 to 2010, he worked on flat-panel C-arm CT as a post-doctoral fellow at the Radiological Sciences Laboratory in the Department of Radiology at Stanford University. From 2011 to 2012 he joined Siemens Healthcare as an innovation project manager and was responsible for reconstruction topics in the Angiography and X-ray business unit.

In 2012, he returned to the University of Erlangen-Nuremberg as head of the Medical Reconstruction Group at the Pattern Recognition Lab. In 2015 he became professor and head of the Pattern Recognition Lab. Since 2016, he has been a member of the steering committee of the European Time Machine Consortium. In 2018, he was awarded an ERC Synergy Grant, “4D nanoscope”. His current research interests focus on medical imaging, image and audio processing, digital humanities, interpretable machine learning, and the use of known operators.

2021

  • ODEUROPA: Negotiating Olfactory and Sensory Experiences in Cultural Heritage Practice and Research

    (Third Party Funds Group – Sub project)

    Overall project: ODEUROPA
    Term: January 1, 2021 - December 31, 2022
    Funding source: EU - 8. Rahmenprogramm - Horizon 2020
    URL: https://odeuropa.eu/

    Our senses are gateways to the past. Although museums are slowly discovering the power of multi-sensory presentations, we lack the scientific standards, tools and data to identify, consolidate, and promote the wide-ranging role of scents and smelling in our cultural heritage. In recent years, European cultural heritage institutions have invested heavily in large-scale digitization. A wealth of object, text and image data that can be analysed using computer science techniques now exists. However, the potential olfactory descriptions, experiences, and memories that they contain remain unexplored. We recognize this as both a challenge and an opportunity. Odeuropa will apply state-of-the-art AI techniques to text and image datasets that span four centuries of European history. It will identify the vocabularies, spaces, events, practices, and emotions associated with smells and smelling. The project will curate this multi-modal information, following semantic web standards, and store the enriched data in a ‘European Olfactory Knowledge Graph’ (EOKG). We will use this data to identify ‘storylines’, informed by cultural history and heritage research, and share these with different audiences in different formats: through demonstrators, an online catalogue, toolkits and training documentation describing best-practices in olfactory museology. New, evidence-based methodologies will quantify the impact of multisensory visitor engagement. This data will support the implementation of policy recommendations for recognising, promoting, presenting and digitally preserving olfactory heritage. These activities will realize Odeuropa’s main goal: to show that smells and smelling are important and viable means for consolidating and promoting Europe’s tangible and intangible cultural heritage.

  • UtilityTwin

    (Third Party Funds Group – Overall project)

    Term: September 1, 2021 - August 31, 2024
    Funding source: Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie (StMWi) (seit 2018)

    In the UtilityTwin research project, an intelligent digital twin for arbitrary energy or water supply networks is to be researched and developed on the basis of adaptive high-resolution sensor data (down to the sub-second range) and machine learning techniques. Overall, the project combines Big Data and AI concepts in an innovative way in order to contribute to the implementation of the energy transition and to counteract climate change.

2020

  • Bereitstellung einer Infrastruktur zur Nutzung für die Ausbildung Studierender auf einem z/OS Betriebssystem der Fa. IBM

    (FAU Funds)

    Term: April 2, 2020 - March 31, 2025
    Funding source: Friedrich-Alexander-Universität Erlangen-Nürnberg
  • CT-Belichtungsautomatik mit patientenspezifischer Echtzeitberechnung der Dosisverteilung durch neuronale Netze und Minimierung der effektiven Dosis

    (Third Party Funds Single)

    Term: April 1, 2020 - March 31, 2023
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
  • Entwicklung eines Leitfadens zur dreidimensionalen zerstörungsfreien Erfassung von Manuskripten

    (Third Party Funds Single)

    Term: May 1, 2020 - April 30, 2022
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
    URL: https://gepris.dfg.de/gepris/projekt/433501541?context=projekt&task=showDetail&id=433501541&
  • Förderantrag zur Entwicklung des Kurses „Deep Learning for beginners“

    (Third Party Funds Single)

    Term: September 1, 2020 - August 31, 2021
    Funding source: Virtuelle Hochschule Bayern
  • Integratives Konzept zur personalisierten Präzisionsmedizin in Prävention, Früh-Erkennung, Therapie und Rückfallvermeidung am Beispiel von Brustkrebs

    (Third Party Funds Single)

    Term: October 1, 2020 - September 30, 2024
    Funding source: Bayerisches Staatsministerium für Gesundheit und Pflege, StMGP (seit 2018)

    Breast cancer is one of the leading causes of death in the field of oncology in Germany. For the successful care and treatment of patients with breast cancer, a high level of information for those affected is essential in order to achieve a high level of compliance with the established structures and therapies. On the one hand, the digitalisation of medicine offers the opportunity to develop new technologies that increase the efficiency of medical care. On the other hand, it can also strengthen patient compliance by improving information and patient integration through electronic health applications. Thus, a reduction in mortality and an improvement in quality of life can be achieved. Within the framework of this project, digital health programmes will be created that support and complement health care. The project aims to provide better and faster access to new diagnostic and therapeutic procedures in mainstream oncology care, to implement eHealth models for more efficient and effective cancer care, and to improve capacity for patients in oncological therapy in times of crisis (such as the SARS-CoV-2 pandemic). The Chair of Health Management is conducting the health economic evaluation and analysing the extent to which digitalisation can contribute to a reduction in the costs of treatment and care as well as to an improvement in the quality of life of breast cancer patients.

  • Intelligente MR-Diagnostik der Leber durch Verknüpfung modell- und datengetriebener Verfahren

    (Third Party Funds Single)

    Term: April 1, 2020 - March 31, 2023
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)
  • Intelligent MR Diagnosis of the Liver by Linking Model and Data-driven Processes (iDELIVER)

    (Third Party Funds Single)

    Term: August 3, 2020 - March 31, 2023
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)

    The project examines the use and further development of machine learning methods for MR image reconstruction and for the classification of liver lesions. Starting from a comparison of model- and data-driven image reconstruction methods, the two are to be systematically linked in order to enable high acceleration without sacrificing diagnostic value. In addition to the design of suitable networks, it will also be investigated whether metadata (e.g., patient age) can be incorporated into the reconstruction. Furthermore, suitable image-based classification algorithms are to be developed and the potential of direct classification on the raw data explored. In the long term, intelligent MR diagnostics can significantly increase the efficiency of MR hardware use, guarantee better patient care and provide new impulses in medical technology.
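    The linkage of model- and data-driven steps described above can be sketched as an unrolled iteration: a physics-based data-consistency update alternating with a prior step that, in the project, would be a learned network. The toy forward model, dimensions and the soft-thresholding prior below are illustrative assumptions, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16)) / np.sqrt(8)  # toy undersampled forward model (assumed)
x_true = np.zeros(16)
x_true[[3, 11]] = 1.0                          # sparse toy "image"
y = A @ x_true                                 # simulated raw data

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2         # safe gradient step (1 / spectral norm^2)

def soft_threshold(x, t):
    # Hand-made stand-in prior; a trained network would replace this step.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_recon(y, A, n_iter=20):
    x = A.T @ y
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - y)       # model-driven: enforce A x ≈ y
        x = soft_threshold(x, step * lam)      # data-driven slot (here: ISTA prior)
    return x

x_hat = unrolled_recon(y, A)

def objective(x):
    return 0.5 * np.sum((A @ x - y) ** 2) + lam * np.abs(x).sum()
```

    With the soft-thresholding stand-in this is exactly ISTA, so the objective is guaranteed not to increase over the iterations; swapping in a network keeps the same alternating structure.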

  • Molecular Assessment of Signatures ChAracterizing the Remission of Arthritis

    (Third Party Funds Single)

    Term: April 1, 2020 - September 30, 2022
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)

    MASCARA aims at a detailed molecular characterization of remission in arthritis. The project builds on the combined clinical and technical expertise of rheumatologists, radiologists, medical physicists, nuclear medicine specialists, gastroenterologists, basic-science biologists and computer scientists, and connects five academic centers in Germany. It addresses 1) the growing number of arthritis patients in remission, 2) the challenge of distinguishing effective suppression of inflammation from a cure, and 3) the limited knowledge about tissue changes in the joints of patients with arthritis. Based on preliminary data, MASCARA will investigate four key mechanistic areas (immunometabolic changes, mesenchymal tissue responses, resident immune cells and the protective function of the gut) that jointly determine the molecular state of remission. The project aims to collect synovial biopsies and subsequently analyze the tissue of patients with active arthritis and patients in remission. The tissue analyses comprise (single-cell) mRNA sequencing, mass cytometry and the measurement of immune metabolites, complemented by molecular imaging techniques such as CEST MRI and FAPI-PET. All data generated in the project will be merged and stored with the other partners' data in an existing database system. Machine learning will then be applied to the merged data to identify pattern matrices that are disease-specific and associated with disease activity.

  • Verbesserte Dual Energy Bildgebung mittels Maschinellem Lernen

    (Third Party Funds Single)

    Term: April 1, 2020 - December 31, 2020
    Funding source: Europäische Union (EU)

    The project aims to develop novel and innovative methods to improve the visualisation and use of dual energy CT (DECT) images. Compared to conventional single energy CT (SECT) scans, DECT contains a significant amount of additional quantitative information that enables tissue characterization far beyond what is possible with SECT, including material decomposition for quantification and labelling of specific materials within tissues, creation of reconstructions at different predicted energy levels, and quantitative spectral tissue characterization for tissue analysis. However, despite the many potential advantages of DECT, applications remain limited to specialized clinical settings. Some reasons are that many applications are specific to the organ under investigation, require additional manual processing or calibration, and are not easily manipulated using the standard interactive contrast visualisation windows available in clinical viewing stations. This is a significant disadvantage compared to conventional SECT.
    In this project, we propose to develop new strategies to fuse and display the additional DECT information on a single contrast scale such that it can be visualised with the same interactive tools that radiologists are used to in their clinical routine. We will investigate non-linear manifold learning techniques like Laplacian Eigenmaps and the Sammon mapping. Both allow extension using AI-based techniques like the newly developed user loss, which allows integrating users' opinions via forced-choice experiments. This will enable a novel image contrast that is compatible with the interactive window and level functions routinely used by radiologists. Furthermore, we aim at additional developments that will use deep neural networks to approximate the non-linear mapping function and to generate reconstructions that capture and display tissue-specific spectral characteristics in a readily and universally usable manner, enhancing diagnostic value.
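    The manifold-learning idea can be illustrated with a minimal Laplacian Eigenmaps sketch that embeds toy dual-energy value pairs onto a single fused contrast axis; the two "tissues", their attenuation values and the kernel width are invented for this example.

```python
import numpy as np

def laplacian_eigenmap_1d(points, sigma=0.3):
    """Map points of shape (n, d) onto one axis via Laplacian Eigenmaps."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))      # Gaussian affinities
    np.fill_diagonal(w, 0.0)
    lap = np.diag(w.sum(axis=1)) - w        # unnormalized graph Laplacian L = D - W
    _, vecs = np.linalg.eigh(lap)
    # Eigenvector 0 is constant (eigenvalue ~0); eigenvector 1 gives the
    # neighbourhood-preserving 1-D embedding.
    return vecs[:, 1]

# Toy voxels: two tissue clusters in (low-kV, high-kV) attenuation space
rng = np.random.default_rng(0)
tissue_a = rng.normal([1.0, 1.2], 0.05, size=(20, 2))
tissue_b = rng.normal([2.0, 1.5], 0.05, size=(20, 2))
embedding = laplacian_eigenmap_1d(np.vstack([tissue_a, tissue_b]))
```

    The two tissues land at opposite ends of the single fused contrast axis, which could then be windowed interactively like an ordinary grayscale image.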

2019

  • ICONOGRAPHICS: Computational Understanding of Iconography and Narration in Visual Cultural Heritage

    (FAU Funds)

    Term: April 1, 2019 - March 31, 2021

    The interdisciplinary research project Iconographics is dedicated to innovative possibilities of digital image recognition for the arts and humanities. While computer vision is often already able to identify individual objects or specific artistic styles in images, the project confronts the open problem of also digitally accessing more complex image structures and contexts. On the basis of a close interdisciplinary collaboration between Classical Archaeology, Christian Archaeology, Art History and Computer Science, as well as joint theoretical and methodological reflection, a large number of multi-layered visual works will be analyzed, compared and contextualized. The aim is to make the complex compositional, narrative and semantic structures of these images tangible for computer vision.

    Iconography and narratology are identified as challenging research questions for all subjects of the project. The iconography will be interpreted in its plot, temporality, and narrative logic. Due to their complex cultural structure, we selected four important scenes:

    1. The Annunciation of the Lord
    2. The Adoration of the Magi
    3. The Baptism of Christ
    4. Noli me tangere (Do not touch me)
  • Improving multi-modal quantitative SPECT with Deep Learning approaches to optimize image reconstruction and extraction of medical information

    (Non-FAU Project)

    Term: April 1, 2019 - April 30, 2022

    This project aims to improve multi-modal quantitative SPECT with Deep Learning approaches to optimize image reconstruction and extraction of medical information. Such improvements include noise reduction and artifact removal from data acquired in SPECT.

  • Advancing osteoporosis medicine by observing bone microstructure and remodelling using a four-dimensional nanoscope

    (Third Party Funds Single)

    Term: April 1, 2019 - March 31, 2025
    Funding source: European Research Council (ERC)
    URL: https://cordis.europa.eu/project/id/810316

    Due to Europe's ageing society, there has been a dramatic increase in the occurrence of osteoporosis (OP) and related diseases. Sufferers have an impaired quality of life, and there is a considerable cost to society associated with the consequent loss of productivity and injuries. The current understanding of this disease needs to be revolutionized, but study has been hampered by a lack of means to properly characterize bone structure, remodeling dynamics and vascular activity. This project, 4D nanoSCOPE, will develop tools and techniques to permit time-resolved imaging and characterization of bone in three spatial dimensions (both in vitro and in vivo), thereby permitting monitoring of bone remodeling and revolutionizing the understanding of bone morphology and its function.

    To advance the field, in vivo high-resolution studies of living bone are essential, but existing techniques are not capable of this. By combining state-of-the-art image processing software with innovative 'precision learning' software methods to compensate for artefacts (due, e.g., to the subject breathing or twitching), and innovative X-ray microscope hardware which together will greatly speed up image acquisition (the aim is a factor of 100), the project will enable in vivo X-ray microscopy studies of small animals (mice) for the first time. The time series of three-dimensional X-ray images will be complemented by correlative microscopy and spectroscopy techniques (with new software) to thoroughly characterize (serial) bone sections ex vivo.

    The resulting three-dimensional datasets combining structure, chemical composition, transport velocities and local strength will be used by the PIs and international collaborators to study the dynamics of bone microstructure. This will be the first time that this has been possible in living creatures, enabling an assessment of the effects on bone of age, hormones, inflammation and treatment.

  • Advancing osteoporosis medicine by observing bone microstructure and remodelling using a fourdimensional nanoscope

    (Third Party Funds Group – Sub project)

    Overall project: Advancing osteoporosis medicine by observing bone microstructure and remodelling using a fourdimensional nanoscope
    Term: April 1, 2019 - March 31, 2025
    Funding source: EU - 8. Rahmenprogramm - Horizon 2020
  • Artificial Intelligence for Reinventing European Healthcare

    (Third Party Funds Group – Sub project)

    Overall project: Artificial Intelligence for Reinventing European Healthcare
    Term: January 1, 2019 - December 31, 2019
    Funding source: EU - 8. Rahmenprogramm - Horizon 2020
  • Big Data of the Past for the Future of Europe

    (Third Party Funds Group – Sub project)

    Overall project: TIME MACHINE : BIG DATA OF THE PAST FOR THE FUTURE OF EUROPE
    Term: March 1, 2019 - February 29, 2020
    Funding source: EU - 8. Rahmenprogramm - Horizon 2020
  • Deep-Learning basierte Segmentierung und Landmarkendetektion auf Röntgenbildern für unfallchirurgische Eingriffe

    (Third Party Funds Single)

    Term: since May 6, 2019
    Funding source: Siemens AG
  • Kombinierte Iterative Rekonstruktion und Bewegungskompensation für die Optische Kohärenz Tomographie-Angiographie

    (Third Party Funds Single)

    Term: June 1, 2019 - May 31, 2021
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
  • Kommunikation und Sprache im Reich. Die Nürnberger Briefbücher im 15. Jahrhundert: Automatische Handschriftenerkennung - historische und sprachwissenschaftliche Analyse

    (Third Party Funds Single)

    Term: October 1, 2019 - September 30, 2022
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
  • PPP Brasilien 2019

    (Third Party Funds Single)

    Term: January 1, 2019 - December 31, 2020
    Funding source: Deutscher Akademischer Austauschdienst (DAAD)
  • Deep Learning based Noise Reduction for Hearing Aids

    (Third Party Funds Single)

    Term: February 1, 2019 - January 31, 2022
    Funding source: Industrie

    Reduction of unwanted environmental noise is an important feature of today’s hearing aids, which is why noise reduction is included in almost every commercially available device. The majority of these algorithms, however, are restricted to the reduction of stationary noises. Due to the large number of different background noises in daily situations, it is hard to heuristically cover the complete solution space of noise reduction schemes. Deep learning-based algorithms pose a possible solution to this dilemma; however, they sometimes lack robustness and applicability under the strict constraints of hearing aids.
    In this project we investigate several deep learning-based methods for noise reduction under the constraints of modern hearing aids. This involves low-latency processing as well as employing a hearing-instrument-grade filter bank. Another important aim is the robustness of the developed methods. Therefore, the methods will be applied to real-world noise signals recorded with hearing instruments.
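    A classical frame-wise baseline for this kind of noise reduction is magnitude masking in the frequency domain; in the project, a network would predict the mask under the latency and filter-bank constraints. Frame length, noise level and the spectral-subtraction mask rule below are illustrative assumptions.

```python
import numpy as np

def denoise_frames(frames, noise_mag, floor=0.1):
    """Apply a spectral-subtraction style magnitude mask to each frame."""
    spec = np.fft.rfft(frames, axis=-1)
    mag = np.abs(spec)
    # Attenuate bins whose magnitude is close to the noise estimate,
    # keeping a spectral floor to limit musical-noise artifacts.
    mask = np.clip(1.0 - noise_mag / np.maximum(mag, 1e-8), floor, 1.0)
    return np.fft.irfft(spec * mask, n=frames.shape[-1], axis=-1)

rng = np.random.default_rng(1)
n_frames, frame_len = 50, 64
clean = np.tile(np.sin(2 * np.pi * 8 * np.arange(frame_len) / frame_len), (n_frames, 1))
noise = 0.2 * rng.standard_normal((n_frames, frame_len))
noisy = clean + noise

# Per-bin noise estimate from (here, for simplicity, the same) noise-only material
noise_mag = np.abs(np.fft.rfft(noise, axis=-1)).mean(axis=0)
denoised = denoise_frames(noisy, noise_mag)
mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
```

    Because each frame is processed independently, the scheme adds no look-ahead, mirroring the low-latency requirement; a learned mask replaces the hand-set rule in the project.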

  • Tapping the potential of Earth Observations

    (FAU Funds)

    Term: April 1, 2019 - March 31, 2022

    The goal of the project is to analyze time series of Earth observation (EO) data with innovative deep learning methods in order to develop efficient algorithms for handling the large data volumes. The value of these EO products is further increased by advanced interpolation techniques and assimilation into geophysical models from applied mathematics.

2018

  • Automatic Intraoperative Tracking for Workflow and Dose Monitoring in X-Ray-based Minimally Invasive Surgeries

    (Third Party Funds Single)

    Term: June 1, 2018 - May 31, 2021
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)

    The goal of this project is the investigation of multimodal methods for the evaluation of interventional workflows in the operating room. This topic will be researched in an international project context with partners in Germany and Brazil (UNISINOS in Porto Alegre). Methods will be developed to analyze the processes in an OR based on signals from body-worn sensors, cameras and other modalities, such as X-ray images recorded during the surgeries. For data analysis, techniques from the fields of computer vision, machine learning and pattern recognition will be applied. The system will be integrated in such a way that both body-worn sensors developed by Portabiles and angiography systems produced by Siemens Healthcare can be included.

  • Automatisiertes Intraoperatives Tracking zur Ablauf- und Dosisüberwachung in RöntgengestütztenMinimalinvasiven Eingriffen

    (Third Party Funds Group – Sub project)

    Overall project: Automatisiertes Intraoperatives Tracking zur Ablauf- und Dosisüberwachung in RöntgengestütztenMinimalinvasiven Eingriffen
    Term: June 1, 2018 - May 31, 2021
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)
  • Deep Learning Applied to Animal Linguistics

    (FAU Funds)

    Term: April 1, 2018 - April 1, 2022

    Deep Learning Applied to Animal Linguistics, in particular the analysis of underwater audio recordings of marine animals (killer whales):

    For marine biologists, the interpretation and understanding of underwater audio recordings is essential. Based on such recordings, possible conclusions about behaviour, communication and social interactions of marine animals can be drawn. Despite a large number of biological studies on the subject of orca vocalizations, it is still difficult to recognize a structure or semantic/syntactic significance in orca signals in order to derive any language and/or behavioral patterns. Due to a lack of techniques and computational tools, hundreds of hours of underwater recordings are still manually reviewed by marine biologists in order to detect potential orca vocalizations. In a post-processing step these identified orca signals are analyzed and categorized. One of the main goals is to provide a robust and automatic method which is able to detect orca calls within underwater audio recordings. A robust detection of orca signals is the baseline for any further and deeper analysis. Call type identification and classification based on pre-segmented signals can be used to derive semantic and syntactic patterns. In connection with the associated situational video recordings and behaviour descriptions (provided by several researchers on site), these patterns can provide potential information about communication (a kind of language model) and behaviors (e.g. hunting, socializing). Furthermore, orca signal detection can be used in conjunction with localization software to provide researchers in the field with a more efficient way of searching for the animals, as well as to support individual recognition.

    For more information about the DeepAL project please contact christian.bergler@fau.de.
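    The detection step that precedes any call-type analysis can be caricatured with a simple spectrogram-energy detector: flag frames whose spectral energy clearly exceeds the background level. The synthetic signal, frame length and threshold below are invented for this sketch; the project replaces such a hand-set rule with a learned detector.

```python
import numpy as np

def detect_call_frames(signal, frame_len=256, thresh=4.0):
    """Flag frames whose spectral energy exceeds thresh x the median frame energy."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    energy = (np.abs(np.fft.rfft(frames, axis=-1)) ** 2).sum(axis=-1)
    return energy > thresh * np.median(energy)

rng = np.random.default_rng(0)
audio = 0.05 * rng.standard_normal(256 * 40)               # background noise
t = np.arange(256 * 10)
audio[256 * 15 : 256 * 25] += np.sin(2 * np.pi * 0.1 * t)  # synthetic tonal "call"
flags = detect_call_frames(audio)
```

    Detected frames would then be handed to a call-type classifier, mirroring the detection-then-categorization pipeline described above.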

  • Analysis of Defects on Solar Power Cells

    (Third Party Funds Group – Sub project)

    Overall project: iPV 4.0: Intelligente vernetzte Produktion mittels Prozessrückkopplung entlang des Produktlebenszyklus von Solarmodulen
    Term: August 1, 2018 - July 31, 2021
    Funding source: Bundesministerium für Wirtschaft und Technologie (BMWi)

    Over the last decade, a large number of solar power plants have been installed in Germany. To ensure high performance, it is necessary to detect defects early. Therefore, it is required to control the quality of the solar cells during the production process, as well as to monitor the installed modules. Since manual inspections are expensive, a large degree of automation is required.
    This project aims to develop a new approach to automatically detect and classify defects on solar power cells and to estimate their impact on performance. Further, big data methods will be applied to identify circumstances that increase the probability of a cell becoming defective. As a result, it will be possible to reject cells in production that have a high likelihood of becoming defective.
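    The final step, predicting a cell's probability of becoming defective from production data, can be sketched as a logistic-regression model fitted by gradient descent; the two process features and the toy labels below are invented for this example, and the project would use real production measurements and richer models.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-cell production features (e.g. standardized firing
# temperature and wafer thickness) -- invented for this sketch.
n = 200
X = rng.standard_normal((n, 2))
w_true = np.array([2.0, -1.5])                 # hidden "true" effect of each feature
p_defect = 1 / (1 + np.exp(-(X @ w_true)))
y = (rng.random(n) < p_defect).astype(float)   # 1 = cell became defective

# Fit logistic regression by gradient descent on the log-loss
w = np.zeros(2)
for _ in range(500):
    pred = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (pred - y) / n

# Fraction of cells whose defect outcome the model predicts correctly
acc = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == (y > 0.5)).mean()
```

    Thresholding the predicted probability then gives the reject/accept decision mentioned in the text; in practice the threshold would be chosen from the cost of false rejects versus field failures.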

  • Digitalization in clinical settings using graph databases

    (Non-FAU Project)

    Term: since October 1, 2018
    Funding source: Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie (StMWi) (seit 2018)

    In clinical settings, different data is stored in different systems. These data are very heterogeneous, but still highly interconnected. Graph databases are a good fit for this kind of data: they contain heterogeneous "data nodes" which can be connected to each other. The basic question is whether and how clinical data can be used in a graph database and, most importantly, how clinical staff can profit from this approach. Possible scenarios are a graphical user interface for clinical staff for easier access to required information, or an interface for evaluation and analysis to answer more complex questions (e.g., "Were there patients similar to this one? How were they treated?").
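    The "similar patients" question can be illustrated on a toy in-memory property graph; the node labels, relationship types and the shared-diagnosis similarity rule are invented stand-ins for a real graph database such as Neo4j.

```python
# Nodes carry a label plus properties; edges are (source, type, target) triples.
nodes = {
    "p1": {"label": "Patient", "age": 54},
    "p2": {"label": "Patient", "age": 57},
    "p3": {"label": "Patient", "age": 33},
    "d1": {"label": "Diagnosis", "code": "I25.1"},
    "d2": {"label": "Diagnosis", "code": "J45.9"},
    "t1": {"label": "Treatment", "name": "stent"},
}
edges = [
    ("p1", "HAS_DIAGNOSIS", "d1"),
    ("p2", "HAS_DIAGNOSIS", "d1"),
    ("p3", "HAS_DIAGNOSIS", "d2"),
    ("p2", "RECEIVED", "t1"),
]

def similar_patients(patient):
    """Patients sharing at least one diagnosis node with `patient`."""
    diagnoses = {dst for src, rel, dst in edges
                 if src == patient and rel == "HAS_DIAGNOSIS"}
    return {src for src, rel, dst in edges
            if rel == "HAS_DIAGNOSIS" and dst in diagnoses and src != patient}

def treatments_of(patient):
    """Treatment names reached from a patient via RECEIVED edges."""
    return {nodes[dst]["name"] for src, rel, dst in edges
            if src == patient and rel == "RECEIVED"}
```

    Answering the question from the text then becomes a two-hop traversal: `{t for p in similar_patients("p1") for t in treatments_of(p)}` collects how patients similar to p1 were treated.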

  • Entwicklung eines Modellrepositoriums und einer Automatischen Schriftarterkennung für OCR-D

    (Third Party Funds Single)

    Term: July 1, 2018 - December 31, 2019
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
  • Critical Catalogue of Luther Portraits (1519-1530)

    (Third Party Funds Group – Overall project)

    Term: June 1, 2018 - May 31, 2021
    Funding source: andere Förderorganisation
    URL: https://www.gnm.de/forschung/projekte/luther-bildnisse/

    Quite a number of portraits of Martin Luther, known as a media star of the 16th century, can be found in today’s museums and libraries. However, how many portraits actually exist, and which of them are contemporary or date from a later period? Unlike his works, the variety of contemporary portraits (paintings and prints) has so far been neither completely collected nor critically analyzed. Thus, a joint project of the FAU, the Germanisches Nationalmuseum (GNM) in Nuremberg and the Technology Arts Sciences (TH Köln) was initiated. The goal of the interdisciplinary project, covering art history, art technology, reformation history and computer science, is the creation of a critical catalogue of Luther portraits (1519-1530). In particular, the issues of authenticity, dating of the artworks and their historical usage context, as well as the potential existence of serial production processes, will be investigated.

  • Laboranalyse von Degradationsmechanismen unter beschleunigter Alterung und Entwicklung geeigneter feldtauglicher bildgebender Detektionsverfahren und Entwicklung und Evaluation eines Algorithmus zur Fehlerdetektion und Prognostizierung der Ausfallwahrscheinlichkeit

    (Third Party Funds Group – Overall project)

    Term: August 1, 2018 - July 31, 2021
    Funding source: Bundesministerium für Wirtschaft und Technologie (BMWi)
  • Medical Image Processing for Interventional Applications

    (Third Party Funds Single)

    Term: January 1, 2018 - December 31, 2018
    Funding source: Virtuelle Hochschule Bayern
  • Moderner Zugang zu historischen Quellen

    (Third Party Funds Group – Sub project)

    Overall project: Moderner Zugang zu historischen Quellen
    Term: March 1, 2018 - February 28, 2021
    Funding source: andere Förderorganisation
  • Radiologische und Genomische Datenanalyse zur Verbesserung der Brustkrebstherapie

    (Third Party Funds Single)

    Term: January 1, 2018 - December 31, 2019
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)
  • Similarity learning for art analysis

    (Third Party Funds Group – Sub project)

    Overall project: Critical Catalogue of Luther Portraits (1519-1530)
    Term: June 1, 2018 - February 28, 2021
    Funding source: andere Förderorganisation
    URL: https://www.gnm.de/forschung/forschungsprojekte/luther-bildnisse/

    The analysis of the similarity of portraits is an important issue for many disciplines such as art history or the digital humanities, as it might, for instance, give hints concerning serial production processes, authenticity, or the temporal and contextual classification of artworks.
    In the project, algorithms will first be developed for cross-genre and multi-modal registration of portraits, to overlay digitized paintings and prints as well as paintings acquired with different imaging systems such as visible light photography and infrared reflectography. Then, methods will be developed to objectively analyze the portraits according to their similarity.
    This project is part of a joint project of the FAU, the Germanisches Nationalmuseum (GNM) in Nuremberg and the Technology Arts Sciences (TH Köln) in Cologne. The goal of the interdisciplinary project, covering art history, art technology, reformation history and computer science, is the creation of a critical catalogue of Luther portraits (1519-1530).
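    One standard objective similarity measure for such multi-modal material is histogram-based mutual information, sketched below on synthetic grayscale arrays; the image sizes, the inverted "second modality" and the bin count are illustrative assumptions.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two grayscale images."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                     # joint intensity distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginals
    nz = pxy > 0                                # avoid log(0)
    return (pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum()

rng = np.random.default_rng(0)
photo = rng.random((64, 64))
infrared = 1.0 - photo                                   # same "scene", inverted contrast
unrelated = rng.permutation(photo.ravel()).reshape(64, 64)
mi_related = mutual_information(photo, infrared)
mi_unrelated = mutual_information(photo, unrelated)
```

    A registration algorithm would shift or warp one image to maximize this score; unlike plain correlation, mutual information stays high even when the two modalities map the same content to very different intensities.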

2017

  • 3-D Multi-Contrast CINE Cardiac Magnetic Resonance Imaging

    (Non-FAU Project)

    Term: October 1, 2017 - September 30, 2020
  • Deep Learning for Multi-modal Cardiac MR Image Analysis and Quantification

    (Third Party Funds Single)

    Term: January 1, 2017 - May 1, 2020
    Funding source: Deutscher Akademischer Austauschdienst (DAAD)

    Cardiovascular diseases (CVDs) and other cardiac pathologies are the leading cause of death in Europe and the USA. Timely diagnosis and post-treatment follow-ups are imperative for improving survival rates and delivering high-quality patient care. These steps rely heavily on numerous cardiac imaging modalities, including CT (computerized tomography), coronary angiography and cardiac MRI. Cardiac MRI is a non-invasive imaging modality used to detect and monitor cardiovascular diseases. Consequently, quantitative assessment and analysis of cardiac images is vital for diagnosis and devising suitable treatments. The reliability of quantitative metrics that characterize cardiac function, such as myocardial deformation and ventricular ejection fraction, depends heavily on the precision of the heart chamber segmentation and quantification. In this project, we aim to investigate deep learning methods to improve the diagnosis and prognosis of CVDs.
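    Two of the quantitative metrics named above can be made concrete in a few lines: the Dice overlap commonly used to judge chamber segmentation quality, and the ejection fraction computed from end-diastolic and end-systolic volumes. The toy masks and volumes are invented for this sketch.

```python
import numpy as np

def dice(seg, gt):
    """Dice overlap between predicted and reference chamber masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0

def ejection_fraction(edv, esv):
    """EF = (EDV - ESV) / EDV, in percent."""
    return 100.0 * (edv - esv) / edv

gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True              # toy reference left-ventricle mask
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 3:7] = True            # prediction shifted by one column
score = dice(pred, gt)
ef = ejection_fraction(edv=120.0, esv=50.0)  # toy volumes in ml, typical range
```

    Since EF is a ratio of segmented volumes, small per-slice segmentation errors propagate directly into the clinical metric, which is why segmentation precision is stressed in the text.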

  • Development of multi-modal, multi-scale imaging framework for the early diagnosis of breast cancer

    (FAU Funds)

    Term: March 1, 2017 - June 30, 2020

    Breast cancer is the leading cause of cancer-related deaths in women and the second most common cancer worldwide. The development and progression of breast cancer is a dynamic biological and evolutionary process. It involves a composite organ system, with a transcriptome shaped by gene aberrations, epigenetic changes, the cellular biological context, and environmental influences. Breast cancer growth and response to treatment have a number of characteristics that are specific to the individual patient, for example the response of the immune system and the interaction with the neighboring tissue. The overall complexity of breast cancer is the main cause of the current, unsatisfying understanding of its development and of the patient's therapy response. Although recent precision medicine approaches, including genomic characterization and immunotherapies, have shown clear improvements with regard to prognosis, finding the right treatment for this disease remains a serious challenge. The vision of the BIG-THERA team is to improve individualized breast cancer diagnostics and therapy, with the ultimate goal of extending the life expectancy of breast cancer patients. Our primary contribution in this regard is developing a multi-modal, multi-scale framework for the early diagnosis of the molecular sub-types of breast cancer, in a manner that supplements the clinical diagnostic workflow and enables the early identification of patients compatible with specific immunotherapeutic solutions.

  • Digital Pathology - New Approaches to the Automated Image Analysis of Histologic Slides

    (Own Funds)

    Term: since January 16, 2017

    The pathologist is still the gold standard in the diagnosis of diseases in tissue slides. Being human, the pathologist can flexibly adapt to the high morphological and technical variability of histologic slides, but is of limited objectivity due to cognitive and visual traps.

    In diverse projects we are applying and validating currently available tools and solutions in digital pathology, but are also developing new solutions in automated image analysis to complement and support the pathologist, especially in areas of quantitative image analysis.

  • Integrative 'Big Data Modeling' for the development of novel therapeutic approaches for breast cancer

    (FAU Funds)

    Term: January 1, 2017 - December 31, 2019

    Breast cancer is the leading cause of cancer death in women, the second most common cancer worldwide, and the fifth most common cause of cancer-related deaths. The development and progression of breast cancer is a dynamic biological and evolutionary process. It involves a complex organ system whose transcriptome is shaped by gene aberrations, epigenetic changes, the cellular biological context, and environmental influences. Breast cancer growth and response to treatment have a number of characteristics that are specific to the individual patient, for example the response of the immune system and the interaction with the neighboring tissue. The overall complexity of breast cancer is the main cause of the current, unsatisfying understanding of its development and of the patient's therapy response. Although recent precision medicine approaches, including genomic characterization and immunotherapies, have shown clear improvements in prognosis, the right treatment of this disease remains a major challenge. The vision of the BIG-THERA team is to improve individualized breast cancer diagnosis and therapy, with the goal of extending the life expectancy of these patients.

    The BIG-THERA team has set itself the following goals:

    • to improve methods for non-invasive early detection and therapy monitoring based on magnetic resonance imaging (MRI)
    • to elucidate the interplay between the immune system and cancer growth in order to separate immunologically distinct breast cancer subtypes for immunotherapy design
    • to develop new strategies for the immunophenotyping of tumors using nanomedical techniques
    • to resolve ethical challenges associated with the new advances in breast cancer research
    • to optimize therapeutic decisions using big-data datasets and information from in vitro, in vivo, and in silico OMICs studies, imaging, and modeling.
  • Joint Iterative Reconstruction and Motion Compensation for Optical Coherence Tomography
    Angiography

    (Third Party Funds Single)

    Term: since July 24, 2017
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    Optical coherence tomography (OCT) is a non-invasive 3-D optical imaging modality that is a standard of care in ophthalmology [1,2]. Since the introduction of Fourier-domain OCT [3], dramatic increases in imaging speed became possible, enabling 3-D volumetric data to be acquired. Typically, a region of the retina is scanned line by line, where each scanned line acquires a cross-sectional image or a B-scan. Since B-scans are acquired in milliseconds, slices extracted along a scan line, or the fast scan axis, are barely affected by motion. In contrast, slices extracted orthogonally to scan lines, i.e. in slow scan direction, are affected by various types of eye motion occurring throughout the full, multi-second volume acquisition time. The most relevant types of eye movements during acquisition are (micro-)saccades, which can introduce discontinuities or gaps between B-scans, and slow drifts, which cause small, slowly changing distortion [4]. Additional eye motion is caused by pulsatile blood flow, respiration and head motion. Despite ongoing advances in instrument scanning speed [5,6], typical volume acquisition times have not decreased. Instead, the additional scanning speed is used for dense volumetric scanning or wider fields of view [7]. OCT angiography (OCTA) [8-11] multiplies the required number of scans by at least two, and even more scans are needed to accommodate recent developments in blood flow speed estimation which are based on multiple interscan times [12,13]. As a consequence, there is an ongoing need for improvement in motion compensation, especially in pathology [14-16].

    We develop novel methods for retrospective motion correction of OCT volume scans of the anterior and posterior eye, as well as wide-field imaging. Our algorithms are clinically usable due to their suitability for patients with limited fixation capability and increased amounts of motion, their fast processing speed, and their high accuracy in both alignment and motion correction. By merging multiple accurately aligned scans, image quality can be increased substantially, enabling the inspection of novel features.

  • Verbesserte Charakterisierung des Versagensverhaltens von Blechwerkstoffen durch den Einsatz von Mustererkennungsmethoden

    (Third Party Funds Single)

    Term: April 1, 2017 - March 31, 2019
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

2016

  • Intraoperative brain shift compensation and point-based vascular registration

    (Non-FAU Project)

    Term: May 1, 2016 - October 31, 2019
  • CODE

    (Third Party Funds Single)

    Term: December 1, 2016 - November 30, 2017
    Funding source: Industrie
  • Computer-based motive assessment

    (Own Funds)

    Term: since January 1, 2016

    The standard method of measuring motives -- coding imaginative stories for motivational themes -- places a heavy burden on researchers in terms of the time and personnel invested in this task. We collaborate with colleagues from linguistics and computer science on the development of computer-based, automated procedures for assessing implicit motivational needs in written text. To this end, we use standard psycholinguistic procedures such as the Linguistic Inquiry and Word Count software as well as sophisticated pattern recognition approaches.
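    The core idea of dictionary-based word counting can be sketched in a few lines. The categories and word lists below are invented for illustration and are not the actual LIWC dictionaries:

```python
# Hypothetical category dictionaries (invented, not LIWC's).
ACHIEVEMENT = {"win", "succeed", "goal", "master", "achieve"}
AFFILIATION = {"friend", "together", "love", "share", "belong"}

def motive_scores(text):
    """Relative frequency of category words in a text, LIWC-style."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    n = max(len(words), 1)
    return {
        "achievement": sum(w in ACHIEVEMENT for w in words) / n,
        "affiliation": sum(w in AFFILIATION for w in words) / n,
    }

scores = motive_scores("We love to share and win together.")
print(scores)  # affiliation 3/7, achievement 1/7
```

    Such closed-vocabulary counts are simple and transparent, which is why they serve as a baseline that learned pattern recognition approaches are compared against.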

  • Digital, Semantic and Physical Analysis of Media Integrity

    (Third Party Funds Single)

    Term: May 24, 2016 - May 23, 2017
    Funding source: Industrie
  • Iterative Rekonstruktionsmethoden mit Fokus auf abdominelle MR-Bildgebung

    (Third Party Funds Single)

    Term: December 1, 2016 - April 30, 2017
    Funding source: Siemens AG
  • Medical Image Processing for Diagnostic Applications

    (Third Party Funds Single)

    Term: June 1, 2016 - May 31, 2017
    Funding source: Virtuelle Hochschule Bayern
  • Modelbasierte Röntgenbildgebung

    (Third Party Funds Single)

    Term: February 1, 2016 - January 31, 2019
    Funding source: Siemens AG
  • Nichtrigide Registrierung von 3D DSA mit präoperativen Volumendaten, um intraoperativen Brainshift bei offender Schädel-OP zu korrigieren

    (Third Party Funds Single)

    Term: June 1, 2016 - May 31, 2019
    Funding source: Siemens AG
  • Nutzung von Rohdaten-Redundanzen in der Kegelstrahl-CT

    (Third Party Funds Single)

    Term: May 1, 2016 - April 30, 2019
    Funding source: Siemens AG
  • Predictive Prevention and personalized Interventional Stroke Therapy

    (Third Party Funds Group – Sub project)

    Overall project: Predictive Prevention and personalized Interventional Stroke Therapy
    Term: January 1, 2016 - December 31, 2018
    Funding source: EU - 8. Rahmenprogramm - Horizon 2020
  • Quantifizierung der Fett-Säuren-Zusammensetzung in der Leber sowie Optimierung der zugehörigen Akquisitions- und Rekonstruktionstechniken

    (Third Party Funds Single)

    Term: June 15, 2016 - June 14, 2019
    Funding source: Siemens AG
  • Quantitative diagnostic dual energy CT with atlas-based prior knowledge

    (Third Party Funds Single)

    Term: January 1, 2016 - May 31, 2019
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    During the last decade, dual energy CT (DECT) became widely available in clinical routine. Offered by all major CT manufacturers, with differing hardware and software concepts, the DECT applications are alike: the acquired DECT data are used to conduct two- or multi-material decompositions, e.g. to separate iodine and bone from soft tissue or to quantify contrast agent and fat, to classify or characterize tissue, to increase contrast (CNR maximization), or to suppress contrast (artifact reduction). The applications are designed to work with certain organs, and the user needs to take care to invoke the correct application and to interpret its output only in the appropriate organ or anatomical region; e.g., interpreting the output of a kidney stone application in organs other than the kidney will yield a wrong classification. To obtain quantitative results, the applications require setting patient-specific parameters. To calibrate these, the user is asked to place ROIs in predefined anatomical regions. Since this is time-consuming, users are often tempted to use the default settings instead of optimizing them. Here, we want to develop a DECT atlas to utilize its anatomical (and functional) information for context-sensitive DECT imaging and material decomposition, and to be able to automatically calibrate the open parameters without the need for user interaction. To improve quantification, the initial images shall not be reconstructed separately but rather undergo a rawdata-based decomposition before being converted into the image domain. A dedicated user interface shall be developed to provide the new context-sensitive DECT information - such as automatically decomposing each organ into different but reasonable basis materials - and to display it to the reader in a convenient way. Similarly, user-placed ROIs shall trigger a context-sensitive statistical evaluation of the ROI's contents and provide it to the user.
    This will help to quantify the iodine uptake in a tumor or a lesion, to separate it from fat or calcium components, to estimate its blood supply, etc. Since the DECT data display the contrast uptake only at a given instance in time, and since this contrast depends on patient-specific factors such as the cardiac output, we plan to normalize the contrast uptake with the help of the dual energy information contained in the atlas. This will minimize inter- and intra-patient effects and increase reproducibility. In addition, organ-specific material scores shall be developed that quantify a patient's material composition on an organ-by-organ basis. The new methods (DECT atlas, material decomposition, ...) shall be tested and evaluated using phantom and patient studies, and shall be optimized accordingly.
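    For illustration, an image-domain two-material decomposition amounts to solving a small linear system per voxel: two measured attenuation values, two unknown material fractions. The attenuation responses below are invented, and this is the simple image-domain variant, not the rawdata-based decomposition proposed in the project:

```python
import numpy as np

# Invented responses of the two basis materials (iodine, soft tissue)
# at the low- and high-kV measurements, in arbitrary units.
M = np.array([[50.0, 30.0],    # low-kV response of each basis material
              [25.0, 28.0]])   # high-kV response of each basis material

def decompose(low_kv, high_kv):
    """Solve M @ fractions = measurements for the two material fractions."""
    return np.linalg.solve(M, np.array([low_kv, high_kv]))

# A voxel containing 0.2 parts iodine and 0.8 parts soft tissue:
measured = M @ np.array([0.2, 0.8])
fractions = decompose(*measured)
print(fractions)  # recovers approximately [0.2, 0.8]
```

    Context-sensitive decomposition, as proposed here, would let the atlas choose a different, organ-appropriate matrix M per anatomical region instead of one fixed material pair.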

  • Studie zum Thema "Defektspalten/Defektreihen"

    (Third Party Funds Single)

    Term: August 1, 2016 - January 31, 2017
    Funding source: Industrie
  • Weiterentwicklung in der interferometrischen Röntgenbildgebung

    (Third Party Funds Single)

    Term: July 1, 2016 - June 30, 2019
    Funding source: Siemens AG
  • Workshop "Mobile eye imaging and remote diagnosis based on the captured image"

    (Third Party Funds Single)

    Term: October 10, 2016 - October 14, 2016
    Funding source: Industrie
  • Zusammenarbeit auf dem Gebiet der 3D-Modellierung von Koronararterien

    (Third Party Funds Single)

    Term: June 13, 2016 - December 31, 2017
    Funding source: Siemens AG
  • Zusammenarbeit auf dem Gebiet der Navigationsunterstützung in röhrenförmigen Strukturen

    (Third Party Funds Single)

    Term: June 1, 2016 - May 31, 2019
    Funding source: Siemens AG

2015

  • Auto ASPECTS

    (Third Party Funds Single)

    Term: December 1, 2015 - May 31, 2016
    Funding source: Industrie
  • Bildverbesserung der 4D DSA und Flußquantifizierung mittels 4D DSA

    (Third Party Funds Single)

    Term: June 15, 2015 - June 14, 2017
    Funding source: Siemens AG
  • Helical 3-D X-ray Dark-field Imaging

    (Third Party Funds Single)

    Term: April 1, 2015 - March 31, 2019
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
    The dark-field signal of an X-ray phase-contrast system captures the small-angle scattering of microscopic structures. It is acquired using a Talbot-Lau interferometer together with a conventional X-ray source and a conventional X-ray detector. Interestingly, the measured intensity of the dark-field signal depends on the orientation of the microstructure. Using algorithms from tomographic image reconstruction, it is possible to recover these structure orientations. The size of the imaged structures can be considerably smaller than the resolution of the X-ray detector used. Hence, it is possible to investigate structural properties of - for example - bones or soft tissue at an unprecedented level of detail. Existing methods for 3-D dark-field reconstruction require sampling in all three spatial dimensions; for practical use, this procedure is infeasible. The goal of the proposed project is to develop a system and a method for 3-D reconstruction of structure orientations based on measurements from a practical imaging trajectory. To this end, we propose to use a helical trajectory, a concept that has been applied in conventional CT imaging with tremendous success. As a result, it will be possible for the first time to compute dark-field volumes from a practically feasible, continuous imaging trajectory. The trajectory does not require multiple rotations of the object or the patient and avoids unnecessarily long path lengths. The project will be conducted in cooperation between the experimental physics and computer science departments.

    The project is composed of six parts:

    • A: Development of a 3-D cone-beam scattering projection model
    • B: Development of reconstruction algorithms for a helical dark-field imaging system
    • C: Evaluation and optimization of the reconstruction methods towards clinical applications
    • D: Design of an experimental helical imaging system
    • E: Setup of the helical imaging system
    • F: Evaluation and optimization of the system performance

    Parts A to C will be performed by the computer science department. Parts D to F will be conducted by the experimental physics department.
  • Endovaskuläre Versorgung von Aortenaneurysmen

    (Third Party Funds Single)

    Term: December 1, 2015 - November 30, 2018
    Funding source: Industrie
  • Feature-basierte Bildregistrierung für interventionelle Anwendungen

    (Third Party Funds Single)

    Term: July 1, 2015 - June 30, 2018
    Funding source: Siemens AG
  • Forschungskostenzuschuss Dr. Huang, Xiaolin

    (Third Party Funds Single)

    Term: June 1, 2015 - May 31, 2017
    Funding source: Alexander von Humboldt-Stiftung
  • Kalibrierung von Time-of-Flight Kameras

    (Third Party Funds Single)

    Term: October 1, 2015 - March 31, 2017
    Funding source: Stiftungen
  • Consistency Conditions for Artifact Reduction in Cone-beam CT

    (Third Party Funds Single)

    Term: since January 1, 2015
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
    Tomographic reconstruction is the enabling technology for a wide range of transmission-based 3D imaging modalities, most notably X-ray Computed Tomography (CT). The first CT scanners, built in the 1970s, used parallel geometries. To speed up acquisition, systems soon moved to fan-beam geometries and much faster rotation. Today's CT systems rotate four times per second and use a cone-beam geometry. This is fast enough to cover even complex organ motion such as the beating heart. However, there exists a large class of specialized CT systems that are not able to perform such fast scans. Systems using a flat-panel detector, as employed in C-arm angiography systems, on-board imaging systems in radiation therapy, or mobile C-arm systems, face mechanical challenges as they were mainly built to perform 2D imaging. About 15 years ago, flat-detector scanners were enabled to acquire three-dimensional data. 3D imaging on these systems, however, is challenging due to their slower acquisition speed of between five seconds and one minute and a small field-of-view (FOV) with a diameter of 25 to 40 cm. These drawbacks stem from the scanners' design as highly specialized modalities. In contrast to other disadvantages of flat-panel detectors, such as increased X-ray scattering and limited dynamic range, they will not be remedied by hardware evolution in the foreseeable future; for example, faster motion is impossible because of the risk of collisions in the operating room. As a result, Flat-Detector Computed Tomography (FDCT) will continue to be more susceptible to artifacts in the reconstructed image due to motion and truncation. The goal of this project is to extend existing data consistency conditions that can be practically used for FDCT to remedy intrinsic weaknesses of FDCT imaging, most importantly motion and truncation. Our goal is practical applicability on clinical data.
    Thus, the new algorithms will be tested on physical phantom and patient data acquired on real FDCT scanners of our project partners. Our long-term vision behind this project is to find a concise and complete formulation of all redundancies within FDCT projection data for general trajectories and to fully exploit them in the reconstruction process. Redundancies are inherent to every FDCT scan done in today's clinical practice, but they are entirely ignored as a source of information. Data consistency conditions require no additional effort during acquisition and only little prior knowledge, such as the object extent, or even no assumptions about the underlying object, unlike, for example, total variation regularization in iterative reconstruction. Hence, they rely solely on information which is naturally present in the data.
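    The simplest example of such a redundancy is that, in an idealized parallel-beam geometry, every projection must integrate to the same total mass (the zeroth-order Helgason-Ludwig consistency condition); a truncated detector violates it. A minimal sketch, far from the cone-beam FDCT setting targeted by the project:

```python
import numpy as np

# Parallel-beam projections at 0 and 90 degrees are just axis sums.
def projection_0(img):
    return img.sum(axis=0)

def projection_90(img):
    return img.sum(axis=1)

phantom = np.zeros((64, 64))
phantom[20:44, 24:40] = 1.0   # simple rectangular object, total mass 384

p0, p90 = projection_0(phantom), projection_90(phantom)

# Consistency condition: the total mass must agree across all views.
print(np.isclose(p0.sum(), p90.sum()))       # True
# A laterally truncated detector violates the condition and is detectable.
print(np.isclose(p0[:30].sum(), p90.sum()))  # False
```

    Higher-order moment conditions and their cone-beam generalizations constrain the projections much more tightly, which is what makes them usable for motion and truncation correction.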
  • Kontrast- und katheterbasierte 3D-/2D-Registrierung

    (Third Party Funds Single)

    Term: January 1, 2015 - September 30, 2015
    Funding source: Siemens AG
  • Segmentierung von MR-Daten in der Herzbildgebung zur Verwendung bei Interventionen an Angiographiegeräten

    (Third Party Funds Single)

    Term: October 1, 2015 - September 30, 2018
    Funding source: Siemens AG
  • Verbesserung Freiraumerkennung/fusion im Grid: Odometrie aus Umgebungssensoren

    (Third Party Funds Single)

    Term: June 12, 2015 - May 31, 2016
    Funding source: Industrie
  • Weight-Bearing Imaging of the Knee Using C-Arm CT

    (Third Party Funds Single)

    Term: April 1, 2015 - February 28, 2019
    Funding source: National Institutes of Health (NIH)

2014

  • 4D Herzbildgebung

    (Third Party Funds Single)

    Term: February 1, 2014 - January 31, 2018
    Funding source: Siemens AG
  • Automatic classification and image analysis of confocal laser endomicroscopy images

    (Own Funds)

    Term: since October 1, 2014

    The goal of this project is to detect cancerous tissue in confocal laser endomicroscopy (CLE) images of the oral cavity and the vocal cords. The current treatment of these diseases is a histological analysis of a specimen followed by surgical resection, which has a rather high long-term survival rate, or radiation therapy, with a lower survival rate. An early detection of cancerous tissue could lead to a lower complication rate in further treatment, as well as a better overall prognosis for patients. Furthermore, an in-vivo diagnosis during surgery could narrow down the area for the necessary surgical excision, which is especially beneficial for cancer of the vocal cords.

    For this reason, we are applying pattern recognition methods to facilitate and support diagnosis. We were able to show that these achieve high accuracies on CLE images.
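    One classical pattern-recognition pipeline for such a task combines hand-crafted texture features with a simple classifier. The sketch below is purely hypothetical - invented features, synthetic data, and a nearest-mean classifier - and does not reproduce the project's actual method:

```python
import numpy as np

def features(patch):
    """First-order intensity and gradient statistics as a toy feature vector."""
    g = patch.astype(float)
    gx, gy = np.gradient(g)
    return np.array([g.mean(), g.std(), np.abs(gx).mean() + np.abs(gy).mean()])

def nearest_mean_fit(X, y):
    """One centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_mean_predict(model, x):
    """Assign the class whose centroid is closest in feature space."""
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))

# Synthetic stand-ins: darker/smoother "healthy" vs. brighter/noisier "lesion".
rng = np.random.default_rng(0)
healthy = [rng.normal(0.3, 0.05, (16, 16)) for _ in range(20)]
lesion = [rng.normal(0.7, 0.15, (16, 16)) for _ in range(20)]
X = np.array([features(p) for p in healthy + lesion])
y = np.array([0] * 20 + [1] * 20)

model = nearest_mean_fit(X, y)
print(nearest_mean_predict(model, features(np.full((16, 16), 0.72))))  # 1
```

    In practice, deep learned features and stronger classifiers replace these hand-crafted statistics, but the overall train/predict structure is the same.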

  • Magnetresonanz am Herzen

    (Third Party Funds Single)

    Term: March 1, 2014 - June 30, 2017
    Funding source: Siemens AG

2013

  • Bewegungskompensation für Überlagerungen in der interventionellen C-Bogen Bildgebung

    (Third Party Funds Single)

    Term: June 1, 2013 - November 30, 2016
    Funding source: Siemens AG

2012

  • RTG 1773: Heterogeneous Image Systems, Project C1

    (Third Party Funds Group – Sub project)

    Overall project: GRK 1773: Heterogene Bildsysteme
    Term: October 1, 2012 - March 31, 2017
    Funding source: DFG / Graduiertenkolleg (GRK)
    Especially in aging populations, osteoarthritis (OA) is one of the leading causes of disability and functional decline of the body. Yet the causes and progression of OA, particularly in the early stages, remain poorly understood. Current OA imaging measures require long scan times and are logistically challenging. Furthermore, they are often insensitive to early changes of the tissue.

    The overarching goal of this project is the development of a novel computed tomography imaging system allowing for an analysis of the knee cartilage and menisci under weight-bearing conditions. The articular cartilage deformation under different weight-bearing conditions reveals information about abnormal motion patterns, which can be an early indicator for arthritis. This can help to detect the medical condition at an early stage.

    To allow for a scan in standing or squatting position, we opted for a C-arm CT device that can be almost arbitrarily positioned in space. The standard application area for C-arm CT is in the interventional suite, where it usually acquires images using a vertical trajectory around the patient. For the recording of the knees in this project, a horizontal trajectory has been developed.

    Scanning in standing or squatting position makes an analysis of the knee joint under weight-bearing conditions possible. However, it will also lead to involuntary motion of the knees during the scan. The motion will result in artifacts in the reconstruction that reduce the diagnostic image quality. Therefore, the goal of this project is to estimate the patient motion during the scan to reduce these artifacts. One approach is to compute the motion field of the knee using surface cameras and use the result for motion correction. Another possible approach is the design and evaluation of a biomechanical model of the knee using inertial sensors to compensate for movement.

    After the correction of the motion artifacts, the reconstructed volume is used for the segmentation and quantitative analysis of the knee joint tissue. This will give information about the risk or the progression of an arthrosis disease.


2010

  • Automatische Analyse von Lautbildungsstörungen bei Kindern und Jugendlichen mit Lippen-Kiefer-Gaumenspalten (LKG)

    (Third Party Funds Single)

    Term: April 1, 2010 - March 31, 2013
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
    Objective, validated, and simple methods for evaluating the speech disorders of patients with cleft lip and palate have so far been lacking. In clinical routine, articulation disorders are usually assessed by subjective, auditory evaluation, which is only of limited use for clinical and, above all, scientific purposes. Automatic speech analysis, as used in speech recognition systems, has already proven to be an objective method for the global assessment of voice disorders, namely for quantifying intelligibility. Preliminary work showed that this also transfers to speech recordings of children with cleft lip and palate. In this project, a method is developed and validated for automatically distinguishing and quantifying typical articulation disorders, such as hypernasality, displaced articulation, and altered articulatory tension, in children and adolescents with cleft lip and palate. This forms the basis for determining the influence of these disorders on intelligibility and for assessing the outcome quality of different therapeutic concepts.
