ODEUROPA: Negotiating Olfactory and Sensory Experiences in Cultural Heritage Practice and Research
(Third Party Funds Group – Sub project)
Overall project: ODEUROPA
Term: January 1, 2021 - December 31, 2022
Funding source: EU - 8th Framework Programme - Horizon 2020
Our senses are gateways to the past. Although museums are slowly discovering the power of multi-sensory presentations, we lack the scientific standards, tools and data to identify, consolidate, and promote the wide-ranging role of scents and smelling in our cultural heritage. In recent years, European cultural heritage institutions have invested heavily in large-scale digitization. A wealth of object, text and image data that can be analysed using computer science techniques now exists. However, the potential olfactory descriptions, experiences, and memories that they contain remain unexplored. We recognize this as both a challenge and an opportunity. Odeuropa will apply state-of-the-art AI techniques to text and image datasets that span four centuries of European history. It will identify the vocabularies, spaces, events, practices, and emotions associated with smells and smelling. The project will curate this multi-modal information, following semantic web standards, and store the enriched data in a ‘European Olfactory Knowledge Graph’ (EOKG). We will use this data to identify ‘storylines’, informed by cultural history and heritage research, and share these with different audiences in different formats: through demonstrators, an online catalogue, toolkits and training documentation describing best practices in olfactory museology. New, evidence-based methodologies will quantify the impact of multisensory visitor engagement. This data will support the implementation of policy recommendations for recognising, promoting, presenting and digitally preserving olfactory heritage. These activities will realize Odeuropa’s main goal: to show that smells and smelling are important and viable means for consolidating and promoting Europe’s tangible and intangible cultural heritage.
Intelligent MR Diagnosis of the Liver by Linking Model and Data-driven Processes (iDELIVER)
(Third Party Funds Single)
Term: August 3, 2020 - March 31, 2023
Funding source: Bundesministerium für Bildung und Forschung (BMBF)
The project examines the use and further development of machine learning methods for MR image reconstruction and for the classification of liver lesions. Starting from a comparison of model-driven and data-driven image reconstruction methods, the two approaches are to be systematically linked in order to enable high acceleration without sacrificing diagnostic value. In addition to the design of suitable networks, it will also be investigated whether metadata (e.g. the patient's age) can be incorporated into the reconstruction. Furthermore, suitable image-based classification algorithms are to be developed, and the potential of direct classification on the raw data is to be explored. In the long term, intelligent MR diagnostics can significantly increase the efficiency of MR hardware use, guarantee better patient care and provide new impulses in medical technology.
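As a rough illustration of how a model-driven data-consistency step can be interleaved with a data-driven prior, the following sketch unrolls a gradient scheme on a toy 1-D problem. The forward model, the signal, and the hand-crafted stand-in for a learned denoiser are illustrative assumptions, not the project's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((48, n)) / np.sqrt(n)    # undersampled forward model
x_true = np.zeros(n)
x_true[10:20] = 1.0                              # simple piecewise "image"
y = A @ x_true + 0.01 * rng.standard_normal(48)  # accelerated, noisy data

def learned_prior(x):
    # Stand-in for a trained denoising network: simple neighbour averaging.
    return 0.5 * x + 0.25 * (np.roll(x, 1) + np.roll(x, -1))

x = np.zeros(n)
for _ in range(150):                  # unrolled iterations
    x = x - 0.2 * A.T @ (A @ x - y)   # model-driven data-consistency step
    x = learned_prior(x)              # data-driven regularization step

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

In a trained unrolled network, the prior (and possibly the step sizes) would be learned from data, while the data-consistency step keeps the result tied to the measured k-space samples.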
Provision of an infrastructure for student training on an IBM z/OS operating system
(FAU Funds)
Term: April 2, 2020 - March 31, 2025
Funding source: Friedrich-Alexander-Universität Erlangen-Nürnberg
Molecular Assessment of Signatures ChAracterizing the Remission of Arthritis
(Third Party Funds Single)
Term: April 1, 2020 - September 30, 2022
Funding source: Bundesministerium für Bildung und Forschung (BMBF)
MASCARA aims at a detailed molecular characterization of remission in arthritis. The project builds on the combined clinical and technical experience of rheumatologists, radiologists, medical physicists, nuclear medicine specialists, gastroenterologists, basic-science biologists and computer scientists, and connects five academic specialist centres in Germany. The project addresses 1) the growing number of arthritis patients in remission, 2) the challenge of distinguishing effective suppression of inflammation from a cure, and 3) the limited knowledge about tissue changes in the joints of patients with arthritis. Based on preliminary data, MASCARA will investigate four key mechanistic areas (immunometabolic changes, mesenchymal tissue responses, resident immune cells and the protective function of the gut) that jointly determine the molecular state of remission. The project involves collecting synovial biopsies and subsequently analysing the tissue of patients with active arthritis and patients in remission. The tissue analyses comprise (single-cell) mRNA sequencing and mass cytometry as well as the measurement of immune metabolites, complemented by molecular imaging techniques such as CEST MRI and FAPI-PET. All data generated in the project will be merged and stored together with the data of the other partners in an existing database system. Merging the data is intended to identify, with the help of machine learning, disease-specific pattern matrices associated with disease activity.
Deep-learning-based segmentation and landmark detection in X-ray images for trauma surgery interventions
(Third Party Funds Single)
Term: since May 6, 2019
Funding source: Siemens AG
Improving multi-modal quantitative SPECT with Deep Learning approaches to optimize image reconstruction and extraction of medical information
(Non-FAU Project)
Term: April 1, 2019 - April 30, 2022
This project aims to improve multi-modal quantitative SPECT with Deep Learning approaches to optimize image reconstruction and extraction of medical information. Such improvements include noise reduction and artifact removal from data acquired in SPECT.
Advancing osteoporosis medicine by observing bone microstructure and remodelling using a four-dimensional nanoscope
(Third Party Funds Single)
Term: April 1, 2019 - March 31, 2025
Funding source: European Research Council (ERC)
Due to Europe's ageing society, there has been a dramatic increase in the occurrence of osteoporosis (OP) and related diseases. Sufferers have an impaired quality of life, and there is a considerable cost to society associated with the consequent loss of productivity and injuries. The current understanding of this disease needs to be revolutionized, but study has been hampered by a lack of means to properly characterize bone structure, remodeling dynamics and vascular activity. This project, 4D nanoSCOPE, will develop tools and techniques to permit time-resolved imaging and characterization of bone in three spatial dimensions (both in vitro and in vivo), thereby permitting monitoring of bone remodeling and revolutionizing the understanding of bone morphology and its function.
To advance the field, in vivo high-resolution studies of living bone are essential, but existing techniques are not capable of this. By combining state-of-the-art image processing software with innovative 'precision learning' software methods to compensate for artefacts (due e.g. to the subject breathing or twitching), and innovative X-ray microscope hardware which together will greatly speed up image acquisition (the aim is a factor of 100), the project will enable in vivo X-ray microscopy studies of small animals (mice) for the first time. The time series of three-dimensional X-ray images will be complemented by correlative microscopy and spectroscopy techniques (with new software) to thoroughly characterize (serial) bone sections ex vivo.
The resulting three-dimensional datasets combining structure, chemical composition, transport velocities and local strength will be used by the PIs and international collaborators to study the dynamics of bone microstructure. This will be the first time that this has been possible in living creatures, enabling an assessment of the effects on bone of age, hormones, inflammation and treatment.
Deep Learning based Noise Reduction for Hearing Aids
(Third Party Funds Single)
Term: February 1, 2019 - January 31, 2022
Funding source: Industry
Reduction of unwanted environmental noises is an important feature of today’s hearing aids, which is why noise reduction is nowadays included in almost every commercially available device. The majority of these algorithms, however, is restricted to the reduction of stationary noises. Due to the large number of different background noises in daily situations, it is hard to heuristically cover the complete solution space of noise reduction schemes. Deep learning-based algorithms pose a possible solution to this dilemma, however, they sometimes lack robustness and applicability in the strict context of hearing aids.
In this project we investigate several deep learning-based methods for noise reduction under the constraints of modern hearing aids. This involves low-latency processing as well as employing a hearing-instrument-grade filter bank. Another important aim is the robustness of the developed methods; therefore, the methods will be applied to real-world noise signals recorded with hearing instruments.
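A minimal sketch of the general idea (not the project's actual algorithm): short frames keep latency low, and a per-band gain computed in a filter bank attenuates noise-dominated bands. Here the gain comes from a crude noise estimate over an assumed noise-only lead-in; in the project, a deep network would predict such gains instead.

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 500 * t)              # target signal
clean[:3200] = 0.0                               # assumed noise-only lead-in
noisy = clean + 0.3 * rng.standard_normal(fs)    # stationary noise

frame, hop = 128, 64                             # 8 ms frames -> low latency
win = np.sqrt(np.hanning(frame))                 # analysis/synthesis window

# Noise spectrum estimated from the noise-only lead-in (a sketch assumption).
noise_mag = np.mean([np.abs(np.fft.rfft(win * noisy[i:i + frame]))
                     for i in range(0, 3200 - frame, hop)], axis=0)

out = np.zeros(len(noisy) + frame)
for i in range(0, len(noisy) - frame, hop):
    spec = np.fft.rfft(win * noisy[i:i + frame])
    # Per-band gain: attenuate bands near the noise floor, keep strong bands.
    gain = np.clip(1.0 - 2.0 * noise_mag / (np.abs(spec) + 1e-9), 0.05, 1.0)
    out[i:i + frame] += win * np.fft.irfft(gain * spec)
denoised = out[:len(noisy)]

def snr_db(ref, sig):
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((sig - ref) ** 2))
```

The square-root Hann window is applied at both analysis and synthesis so that the 50 % overlap-add reconstructs the signal almost exactly when all gains are one.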
Magnetic Resonance Imaging Contrast Synthesis
(Non-FAU Project)
Term: since January 1, 2019
Research project in cooperation with Siemens Healthineers, Erlangen
A Magnetic Resonance Imaging (MRI) exam typically consists of several MR pulse sequences that yield different image contrasts. Each pulse sequence is parameterized through multiple acquisition parameters that influence MR image contrast, signal-to-noise ratio, acquisition time, and/or resolution.
Depending on the clinical indication, different contrasts are required by the radiologist to make a reliable diagnosis. This complexity leads to high variations of sequence parameterizations across different sites and scanners, impacting MR protocoling, AI training, and image acquisition.
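How acquisition parameters shape contrast can be illustrated with the idealized spin-echo signal equation S = PD · (1 − e^(−TR/T1)) · e^(−TE/T2). The tissue values below are rough, literature-style numbers chosen for illustration only.

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Idealized spin-echo signal: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Approximate relative proton density and relaxation times (ms) at 1.5 T,
# for illustration only.
wm = dict(pd=0.7, t1=600.0, t2=80.0)     # white matter
gm = dict(pd=0.8, t1=950.0, t2=100.0)    # grey matter

# Short TR/TE -> T1-weighted; long TR/TE -> T2-weighted.
t1w = {k: spin_echo_signal(**v, tr=500.0, te=15.0)
       for k, v in [("wm", wm), ("gm", gm)]}
t2w = {k: spin_echo_signal(**v, tr=4000.0, te=100.0)
       for k, v in [("wm", wm), ("gm", gm)]}
```

With these parameters the tissue contrast inverts between the two settings: white matter appears brighter than grey matter on the T1-weighted image, and darker on the T2-weighted image, which is exactly the parameter dependence a contrast-synthesis model must capture.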
MR Image Synthesis
The aim of this project is to develop a deep learning-based approach to generate synthetic MR images conditioned on various acquisition parameters (repetition time, echo time, image orientation). This work can support radiologists and technologists during the parameterization of MR sequences by previewing the yielded MR contrast, can serve as a valuable tool for radiology training, and can be used for customized data generation to support AI training.
MR Image-to-Image Translations
As MR acquisition time is expensive, and re-scans due to motion corruption or a premature scan end for claustrophobic patients may be necessary, a method to synthesize missing or corrupted MR image contrasts from existing MR images is required. Thus, this project aims to develop an MR contrast-aware image-to-image translation method, enabling us to synthesize missing or corrupted MR images with adjustable image contrast. Additionally, it can be used as an advanced data augmentation technique to synthesize different contrasts for the training of AI applications in MRI.
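The principle behind contrast-aware translation can be illustrated analytically: given two spin-echo contrasts acquired with different echo times (and long TR), the signal model can be inverted for M0 and T2, and an image at any third echo time synthesized. The project learns such mappings with neural networks; this noise-free toy inversion merely shows why the mapping is well-posed. All numbers are made up.

```python
import numpy as np

# Hypothetical ground-truth tissue maps for a tiny 2x2 "image".
m0 = np.array([[1.0, 0.9], [0.8, 1.1]])
t2 = np.array([[80.0, 100.0], [60.0, 120.0]])   # ms

def acquire(te):
    """Idealized long-TR spin-echo acquisition: S = M0 * exp(-TE/T2)."""
    return m0 * np.exp(-te / t2)

s1, s2 = acquire(20.0), acquire(80.0)           # two measured contrasts

# Invert the signal model pixel-wise, then synthesize a missing TE=120 ms.
t2_est = (80.0 - 20.0) / np.log(s1 / s2)
m0_est = s1 * np.exp(20.0 / t2_est)
s3_synth = m0_est * np.exp(-120.0 / t2_est)
```

With noisy data and arbitrary source/target contrasts this inversion becomes ill-conditioned, which is where a learned image-to-image translation model takes over.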
Digitalization in clinical settings using graph databases
(Non-FAU Project)
Term: since October 1, 2018
Funding source: Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie (StMWi) (seit 2018)
In clinical settings, different data is stored in different systems. These data are very heterogeneous, but still highly interconnected. Graph databases are a good fit for this kind of data: they contain heterogeneous "data nodes" which can be connected to each other. The central question is whether and how clinical data can be used in a graph database, and most importantly how clinical staff can profit from this approach. Possible scenarios are a graphical user interface for clinical staff for easier access to required information, or an interface for evaluation and analysis to answer more complex questions (e.g., "Were there patients similar to this patient? How were they treated?").
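The "similar patients" scenario can be sketched with a toy in-memory property graph. The node labels, properties and relationship types below are hypothetical; a production system would use a graph database (e.g. Neo4j with Cypher queries) rather than Python dictionaries.

```python
# Toy property graph: heterogeneous, labelled nodes connected by typed edges.
nodes = {
    "p1": {"label": "Patient", "age": 54},
    "p2": {"label": "Patient", "age": 61},
    "p3": {"label": "Patient", "age": 47},
    "d1": {"label": "Diagnosis", "name": "pneumonia"},
    "t1": {"label": "Treatment", "name": "antibiotic A"},
}
edges = [
    ("p1", "HAS_DIAGNOSIS", "d1"),
    ("p2", "HAS_DIAGNOSIS", "d1"),
    ("p2", "RECEIVED", "t1"),
]

def neighbors(node, rel):
    """All targets reachable from `node` via relationship type `rel`."""
    return [dst for src, r, dst in edges if src == node and r == rel]

def similar_patients(patient):
    """Patients sharing at least one diagnosis with the given patient."""
    shared = set(neighbors(patient, "HAS_DIAGNOSIS"))
    return sorted({src for src, r, dst in edges
                   if r == "HAS_DIAGNOSIS" and dst in shared and src != patient})

def treatments_of(patients):
    """Treatment names received by any of the given patients."""
    return sorted({nodes[t]["name"] for p in patients
                   for t in neighbors(p, "RECEIVED")})
```

The question "Were there patients similar to this patient? How were they treated?" then becomes a two-hop graph traversal: `treatments_of(similar_patients("p1"))`.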
Deep Learning Applied to Animal Linguistics
(FAU Funds)
Term: April 1, 2018 - April 1, 2022
Deep Learning Applied to Animal Linguistics, in particular the analysis of underwater audio recordings of marine animals (killer whales):
For marine biologists, the interpretation and understanding of underwater audio recordings is essential. Based on such recordings, conclusions can be drawn about the behaviour, communication and social interactions of marine animals. Despite a large number of biological studies on orca vocalizations, it is still difficult to recognize a structure or semantic/syntactic significance in orca signals that would allow language and/or behavioural patterns to be derived. Due to a lack of techniques and computational tools, hundreds of hours of underwater recordings are still manually reviewed by marine biologists in order to detect potential orca vocalizations. In a post-processing step these identified orca signals are analyzed and categorized. One of the main goals is to provide a robust method which is able to automatically detect orca calls within underwater audio recordings. A robust detection of orca signals is the baseline for any further and deeper analysis. Call type identification and classification based on pre-segmented signals can be used to derive semantic and syntactic patterns. Combined with the associated situational video recordings and behaviour descriptions (provided by several researchers on site), this can yield information about communication (a kind of language model) and behaviours (e.g. hunting, socializing). Furthermore, orca signal detection can be used in conjunction with localization software to give researchers in the field a more efficient way of finding the animals, as well as supporting individual recognition.
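A deliberately simple sketch of the detection baseline: frame-wise energy compared against a robust noise threshold flags candidate call regions in a synthetic recording. The project targets learned detectors that are far more robust than this; all signal parameters here are made up.

```python
import numpy as np

fs = 8000
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(2)
signal = 0.1 * rng.standard_normal(len(t))          # ocean background noise
call = (t >= 0.8) & (t < 1.3)                       # ground-truth call window
signal[call] += 0.5 * np.sin(2 * np.pi * 1200 * t[call])  # tonal "call"

frame, hop = 256, 128
starts = list(range(0, len(signal) - frame, hop))
energies = np.array([np.mean(signal[i:i + frame] ** 2) for i in starts])

# Median is a robust estimate of the noise floor as long as calls are sparse.
threshold = 3.0 * np.median(energies)
detected = energies > threshold
det_times = np.array(starts)[detected] / fs         # detected frame onsets (s)
```

A learned classifier would replace the energy threshold, operating on spectrogram patches instead of raw frame energy, but the segment-then-classify pipeline is the same.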
For more information about the DeepAL project please contact firstname.lastname@example.org.
Former Projects from 2017 on
Modelling the progression of neurological diseases
(Third Party Funds Group – Sub project)
Overall project: Training Network on Automatic Processing of PAthological Speech
Term: since May 1, 2018
Funding source: Innovative Training Networks (ITN)
The goal is to develop speech technology that allows unobtrusive monitoring of many kinds of neurological diseases. The state of a patient can degrade slowly between medical check-ups. We want to track the state of a patient unobtrusively, without the feeling of constant supervision, while at the same time respecting the patient's privacy. We will concentrate on Parkinson's disease (PD) and thus on acoustic cues of changes. The algorithms should run on a smartphone and track acoustic changes during regular phone conversations over time, and thus have to be low-resource. No speech recognition will be used; only some analysis parameters of the conversation are stored on the phone and transferred to the server.
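To illustrate the "low-resource, parameters-only" idea, the sketch below reduces a synthetic conversation signal to a few compact summary parameters such as a pause ratio. The frame length and the energy-based voice-activity threshold are illustrative assumptions, not the project's actual features.

```python
import numpy as np

fs = 8000
t = np.arange(3 * fs) / fs
speech = np.sin(2 * np.pi * 220 * t)         # stand-in for voiced speech
speech[fs:2 * fs] = 0.0                      # a one-second pause

frame = 80                                   # 10 ms frames
frames = speech[: len(speech) // frame * frame].reshape(-1, frame)
energy = np.mean(frames ** 2, axis=1)
voiced = energy > 0.01                       # crude voice-activity decision

# Only compact summary parameters like these would ever leave the phone;
# the raw audio is discarded, which preserves privacy.
pause_ratio = 1.0 - voiced.mean()
mean_energy = float(energy[voiced].mean())
```

Tracking such parameters over many calls yields a longitudinal trend (e.g. increasing pause ratio) without storing or transmitting any speech content.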
Machine Learning Applications in Magnetic Resonance Imaging beyond Image Acquisition and Interpretation
(Non-FAU Project)
Term: since September 1, 2017
Research project in cooperation with Siemens Healthineers, Erlangen
Magnetic Resonance Imaging (MRI) is an important but complex imaging modality in current radiology. Artificial intelligence (AI) can play an important role in accelerating MR sequence acquisition as well as supporting image interpretation and diagnosis. However, there are also opportunities beyond image acquisition and interpretation where AI can play a vital role in optimizing the clinical workflow and decreasing costs.
One critical workflow step for an MRI exam is protocoling, i.e., selecting an adequate imaging protocol under consideration of the ordered procedure, clinical indication, and medical history. Due to the complexity of MRI exams and the heterogeneity of MR protocols, this is a nontrivial task. The aim of this project is to analyze and quantify challenges complicating a robust approach for automated protocoling, and propose solutions to these challenges.
Moreover, reporting and documentation is a crucial step in the radiology workflow. We have therefore automated the selection of billing codes from modality log data for an MRI exam. Integrated into the clinical environment, this work has the potential to free the technologist from a non-value adding administrative task, enhance the MRI workflow, and prevent coding errors.
Joint Iterative Reconstruction and Motion Compensation for Optical Coherence Tomography
(Third Party Funds Single)
Term: since July 24, 2017
Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
Optical coherence tomography (OCT) is a non-invasive 3-D optical imaging modality that is a standard of care in ophthalmology [1,2]. Since the introduction of Fourier-domain OCT, dramatic increases in imaging speed became possible, enabling 3-D volumetric data to be acquired. Typically, a region of the retina is scanned line by line, where each scanned line acquires a cross-sectional image, or B-scan. Since B-scans are acquired in milliseconds, slices extracted along a scan line, or the fast scan axis, are barely affected by motion. In contrast, slices extracted orthogonally to scan lines, i.e. in the slow scan direction, are affected by various types of eye motion occurring throughout the full, multi-second volume acquisition time. The most relevant types of eye movements during acquisition are (micro-)saccades, which can introduce discontinuities or gaps between B-scans, and slow drifts, which cause small, slowly changing distortions. Additional eye motion is caused by pulsatile blood flow, respiration and head motion. Despite ongoing advances in instrument scanning speed [5,6], typical volume acquisition times have not decreased. Instead, the additional scanning speed is used for dense volumetric scanning or wider fields of view. OCT angiography (OCTA) [8–11] multiplies the required number of scans by at least two, and even more scans are needed to accommodate recent developments in blood flow speed estimation which are based on multiple interscan times [12,13]. As a consequence, there is an ongoing need for improvement in motion compensation, especially in pathology [14–16].
We develop novel methods for retrospective motion correction of OCT volume scans of the anterior and posterior eye, and widefield imaging. Our algorithms are clinically usable due to their suitability for patients with limited fixation capabilities and increased amount of motion, due to their fast processing speed, and their high accuracy, both in terms of alignment and motion correction. By merging multiple accurately aligned scans, image quality can be increased substantially, enabling the inspection of novel features.
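The core alignment step can be illustrated on a toy 1-D profile: a displacement between two scans is estimated from the peak of their FFT-based circular cross-correlation and then undone. This is only a sketch of the principle; the project's algorithms handle far more general, non-rigid motion across full volumes.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
reference = rng.standard_normal(n)                 # one slow-axis profile
true_shift = 7                                     # motion between two scans
moved = np.roll(reference, true_shift) + 0.05 * rng.standard_normal(n)

# Circular cross-correlation via FFT; the peak location encodes the shift.
corr = np.fft.irfft(np.fft.rfft(reference) *
                    np.conj(np.fft.rfft(moved)), n=n)
estimated_shift = (-int(np.argmax(corr))) % n

corrected = np.roll(moved, -estimated_shift)       # motion-corrected scan
```

Once scans are accurately aligned in this fashion, multiple registered acquisitions can be merged to raise image quality, as described above.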
Development of a digital therapy tool as an exercise supplement for speech disorders and facial paralysis
(Third Party Funds Single)
Term: June 1, 2017 - December 31, 2019
Funding source: Bundesministerium für Wirtschaft und Technologie (BMWi)
Dysarthria is an acquired, neurologically caused speech disorder. It primarily affects the coordination and execution of speech movements, but also facial expression. Dysarthria occurs particularly frequently after a stroke or traumatic brain injury, or in neurological diseases such as Parkinson's disease.
As in all speech therapies, the treatment of dysarthria requires intensive training. Lasting effects of dysarthria therapy therefore only set in after an extensive treatment programme lasting several weeks. So far, however, patients have hardly any means of self-monitoring or therapeutic guidance in a home environment. Feedback to physicians and therapists about therapy success is also rather patchy.
This is where the DysarTrain project comes in: it aims to create an interactive, digital therapy offering for speech training so that patients can perform their exercises at home. In close coordination with physicians, therapists and patients, suitable therapy content for the treatment of dysarthria is first selected and digitized. In a second step, a therapy platform with appropriate communication, interaction and supervision functions is built. Assistance functions and feedback mechanisms are then developed for carrying out the training. The programme is intended to automatically report whether an exercise was performed well and, where applicable, what can still be improved. Automated evaluation of the therapy data allows physicians and therapists to individualize the therapy as simply as possible and adapt it to the patient's current stage of therapy. This offering will be integrated into the treatment process and evaluated together with physicians, therapists and patients.
Development of multi-modal, multi-scale imaging framework for the early diagnosis of breast cancer
(FAU Funds)
Term: March 1, 2017 - June 30, 2020
Breast cancer is the leading cause of cancer related deaths in women, the second most common cancer worldwide. The development and progression of breast cancer is a dynamic biological and evolutionary process. It involves a composite organ system, with transcriptome shaped by gene aberrations, epigenetic changes, the cellular biological context, and environmental influences. Breast cancer growth and response to treatment has a number of characteristics that are specific to the individual patient, for example the response of the immune system and the interaction with the neighboring tissue. The overall complexity of breast cancer is the main cause for the current, unsatisfying understanding of its development and the patient’s therapy response. Although recent precision medicine approaches, including genomic characterization and immunotherapies, have shown clear improvements with regard to prognosis, the right treatment of this disease remains a serious challenge. The vision of the BIG-THERA team is to improve individualized breast cancer diagnostics and therapy, with the ultimate goal of extending the life expectancy of breast cancer sufferers. Our primary contribution in this regard is developing a multi-modal, multi-scale framework for the early diagnosis of the molecular sub-types of breast cancer, in a manner that supplements the clinical diagnostic workflow and enables the early identification of patients compatible with specific immunotherapeutic solutions.
Digital Pathology - New Approaches to the Automated Image Analysis of Histologic Slides
(Own Funds)
Term: since January 16, 2017
The pathologist is still the gold standard in the diagnosis of diseases in tissue slides. Being human, the pathologist can on the one hand flexibly adapt to the high morphological and technical variability of histologic slides, but on the other hand is of limited objectivity due to cognitive and visual traps.
In diverse projects we are applying and validating currently available tools and solutions in digital pathology, but are also developing new solutions in automated image analysis to complement and support the pathologist, especially in areas of quantitative image analysis.
Deep Learning for Multi-modal Cardiac MR Image Analysis and Quantification
(Third Party Funds Single)
Term: January 1, 2017 - May 1, 2020
Funding source: Deutscher Akademischer Austauschdienst (DAAD)
Cardiovascular diseases (CVDs) and other cardiac pathologies are the leading cause of death in Europe and the USA. Timely diagnosis and post-treatment follow-ups are imperative for improving survival rates and delivering high-quality patient care. These steps rely heavily on numerous cardiac imaging modalities, which include CT (computed tomography), coronary angiography and cardiac MRI. Cardiac MRI is a non-invasive imaging modality used to detect and monitor cardiovascular diseases. Consequently, quantitative assessment and analysis of cardiac images is vital for diagnosis and devising suitable treatments. The reliability of quantitative metrics that characterize cardiac function, such as myocardial deformation and ventricular ejection fraction, depends heavily on the precision of the heart chamber segmentation and quantification. In this project, we aim to investigate deep learning methods to improve the diagnosis and prognosis for CVDs.
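As an example of a quantitative metric derived from chamber segmentation, the ejection fraction follows directly from the segmented end-diastolic and end-systolic volumes, EF = (EDV − ESV) / EDV. The masks and voxel size below are hypothetical; in practice the masks would come from a trained segmentation network.

```python
import numpy as np

# Hypothetical binary left-ventricle segmentations at end-diastole (ED)
# and end-systole (ES) on a tiny voxel grid.
ed_mask = np.zeros((16, 16, 8), dtype=bool)
es_mask = np.zeros((16, 16, 8), dtype=bool)
ed_mask[4:12, 4:12, 2:6] = True          # 8*8*4 = 256 voxels
es_mask[6:10, 6:10, 3:5] = True          # 4*4*2 = 32 voxels

voxel_volume_ml = 0.001 * 1.5 * 1.5 * 2.0  # 1.5 x 1.5 x 2.0 mm voxels, in ml

edv = ed_mask.sum() * voxel_volume_ml      # end-diastolic volume
esv = es_mask.sum() * voxel_volume_ml      # end-systolic volume
ejection_fraction = (edv - esv) / edv
```

Because EF is a ratio of segmented volumes, even small systematic segmentation errors propagate directly into the clinical metric, which is why segmentation precision is emphasized above.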