Research Projects

Current Projects

  • Research on handwriting analysis, object tracking and segmentation based on machine learning


    (Own Funds)
    Term: November 16, 2023 - December 1, 2024

    As a key issue in the first step of digitizing scanned documents, this project will focus on line segmentation and text recognition. Line segmentation can be regarded as instance segmentation or polygon detection. In this work, we will first assess the performance of our recently proposed models, AMD-HookNet and HookFormer. After a comparison with the current state-of-the-art line segmentation methods, deeper research based on these two baseline models is required. Both architectural improvements and novel global-local interaction strategies will be investigated. Furthermore, the text recognition technique will be developed as a unified, end-to-end, segmentation-free approach that avoids the unnecessary two-phase recognition problem.
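
    A minimal sketch of the segmentation-free idea, assuming PyTorch: a small convolutional-recurrent recognizer trained with CTC loss reads a whole text line without prior character segmentation. Layer sizes and the character-set size are illustrative placeholders, not the AMD-HookNet/HookFormer architectures.

    ```python
    # Minimal sketch of a segmentation-free text-line recognizer trained with CTC
    # (hypothetical layer sizes; not the project's actual models).
    import torch
    import torch.nn as nn

    class CTCLineRecognizer(nn.Module):
        def __init__(self, num_chars: int, hidden: int = 256):
            super().__init__()
            # Convolutional feature extractor collapses the image height,
            # leaving a horizontal sequence of feature vectors.
            self.backbone = nn.Sequential(
                nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d((1, None)),          # (B, 128, 1, W')
            )
            self.rnn = nn.LSTM(128, hidden, bidirectional=True, batch_first=True)
            self.head = nn.Linear(2 * hidden, num_chars + 1)   # +1 for the CTC blank

        def forward(self, line_images):                   # (B, 1, H, W)
            feats = self.backbone(line_images).squeeze(2).permute(0, 2, 1)  # (B, W', 128)
            seq, _ = self.rnn(feats)
            return self.head(seq).log_softmax(-1)         # (B, W', num_chars + 1)

    model = CTCLineRecognizer(num_chars=80)
    images = torch.randn(4, 1, 64, 512)                   # dummy grayscale text lines
    log_probs = model(images).permute(1, 0, 2)            # CTC expects (T, B, C)
    targets = torch.randint(1, 81, (4, 20))               # dummy character indices
    input_lengths = torch.full((4,), log_probs.size(0), dtype=torch.long)
    target_lengths = torch.full((4,), 20, dtype=torch.long)
    loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
    loss.backward()
    ```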

    This project will investigate state-of-the-art algorithms for achieving accurate and stable object tracking and segmentation. Nowadays, Siamese trackers dominate the tracking field; their balance of fast inference speed and relatively high performance has caught researchers' attention. However, Siamese trackers mostly rely on large-scale offline training to learn a generic representation for an arbitrary given target object, which ignores the target-context relationship between adjacent frames. In addition, both CNNs and ViTs are used as feature extractors, while the combination of local fine-grained and global coarse representations is still unexplored. We will implement CNN- and ViT-based improvements on a baseline tracker (TransT or a pure ViT) and then evaluate them on several well-known public datasets to validate their effectiveness.
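
    The core Siamese matching step can be sketched as follows (a hedged PyTorch illustration, not the project's baseline tracker): the template features act as a depthwise correlation kernel over the search-region features, yielding a similarity map; attention-based trackers such as TransT replace this correlation with transformer attention.

    ```python
    # Minimal sketch of the Siamese matching step via depthwise cross-correlation
    # (stand-in backbone; illustrative only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    backbone = nn.Sequential(                 # stand-in CNN feature extractor
        nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
    )

    def depthwise_xcorr(search_feat, template_feat):
        # Each template channel correlates with its own search channel (groups = B*C).
        b, c, h, w = search_feat.shape
        kernel = template_feat.reshape(b * c, 1, *template_feat.shape[2:])
        out = F.conv2d(search_feat.reshape(1, b * c, h, w), kernel, groups=b * c)
        return out.reshape(b, c, out.shape[-2], out.shape[-1])

    template = torch.randn(2, 3, 128, 128)    # exemplar crop around the target
    search = torch.randn(2, 3, 256, 256)      # larger search region in the next frame
    response = depthwise_xcorr(backbone(search), backbone(template))
    print(response.shape)                     # (2, 128, H_out, W_out) similarity map
    ```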


  • AI-refined thermo-hydraulic model for the improvement of the efficiency and quality of water supply


    (Third Party Funds Single)
    Term: November 1, 2023 - October 31, 2026
    Funding source: Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie (StMWi) (seit 2018)

    The United Nations' Sustainable Development Goals have made improving quality of life and access to clean drinking water a political priority. However, in recent decades, the water cycle in Bavaria has also been significantly affected by climate change. Two important aspects of daily drinking water supply and distribution are the assurance of water quality and the increase in usage efficiency. To enhance the resilience and capacity of the water supply in general, numerical simulation, data integration, and artificial intelligence (AI) are necessary. In this project, we aim to develop an AI-refined thermo-hydraulic model using heterogeneous data sources from a Bavarian water supply network. Hybrid AI methods are employed to model the complex relationship between water and soil temperature. The resulting model will serve as the basis for various real-world applications such as leak detection, anomaly recognition, and monitoring of drinking water quality, with the overarching goal of increasing the efficiency and quality of the water supply while simultaneously helping to contain the impact of climate change on the drinking water supply.
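
    As a rough illustration of the hybrid idea (all formulas, features, and data below are invented assumptions, not the project's model), a simple physical baseline for water temperature can be corrected by a data-driven model trained on its residuals:

    ```python
    # Minimal sketch of a hybrid physical + data-driven temperature model
    # (synthetic data and an assumed relaxation-law baseline; illustrative only).
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 2000
    soil_temp = rng.uniform(2, 20, n)          # degC along the pipe
    flow_rate = rng.uniform(0.5, 5.0, n)       # l/s
    residence_time = rng.uniform(0.1, 12.0, n) # hours in the network

    # Physical baseline: water temperature relaxes towards soil temperature.
    baseline = 10 + (soil_temp - 10) * (1 - np.exp(-residence_time / 6.0))
    measured = baseline + 0.3 * np.sin(flow_rate) + rng.normal(0, 0.2, n)  # synthetic "sensor" data

    features = np.column_stack([soil_temp, flow_rate, residence_time])
    correction = GradientBoostingRegressor().fit(features, measured - baseline)

    hybrid_prediction = baseline + correction.predict(features)
    print(np.mean(np.abs(hybrid_prediction - measured)))   # residual error of the hybrid model
    ```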

  • Großräumige automatische Segmentierung der Kalbungsfront und Analyse der frontalen Ablation arktischer Gletscher mit Hilfe von Synthetic-Aperture-Radar-Bildsequenzen


    (Third Party Funds Single)
    Term: October 1, 2023 - September 30, 2026
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
  • Erfassung von Vögeln und Meeressäugetieren in Luftbildsequenzen mittels Verfahren der künstlichen Intelligenz


    (Third Party Funds Single)
    Term: July 1, 2023 - June 30, 2026
    Funding source: Bundesministerien
  • Self-Supervised Learning on Chest X-Rays to improve classification and localization


    (Non-FAU Project)
    Term: March 1, 2023 - March 1, 2026

    Chest X-Rays (CXR) serve as crucial diagnostic tools for pulmonary and cardiothoracic diseases, generating millions of images daily, a number on the rise due to decreasing acquisition costs. However, there's a pronounced scarcity of radiologists to interpret these images. Traditionally, CXR research has centered on enhancing classification accuracy, often achieving state-of-the-art results. Despite progress, there remain rare and intricate findings challenging for both human radiologists and AI systems to diagnose. Our investigation focuses on leveraging self-supervised image-text models to enhance the classification and localization of diverse findings. These self-supervised models eliminate the need for annotations, enabling the Deep Learning system to effectively learn from extensive public and private datasets.
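
    One common self-supervised image-text objective is a CLIP-style symmetric contrastive loss over paired image and report embeddings; the sketch below (PyTorch, with placeholder encoders and embedding size) illustrates the loss only, not the project's specific model.

    ```python
    # Minimal sketch of contrastive image-text pretraining (CLIP-style) for CXR/report pairs.
    import torch
    import torch.nn.functional as F

    def contrastive_loss(image_emb, text_emb, temperature=0.07):
        image_emb = F.normalize(image_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)
        logits = image_emb @ text_emb.t() / temperature     # pairwise similarities
        targets = torch.arange(logits.size(0))              # matching pairs on the diagonal
        # Symmetric cross-entropy: image-to-text and text-to-image retrieval.
        return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

    image_emb = torch.randn(8, 512, requires_grad=True)     # batch of CXR embeddings (placeholder encoder output)
    text_emb = torch.randn(8, 512, requires_grad=True)      # embeddings of the paired report sentences
    loss = contrastive_loss(image_emb, text_emb)
    loss.backward()
    ```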

  • An AI-based framework for visualizing and analyzing massive amounts of 4D tomography data for beamline end users


    (Third Party Funds Group – Overall project)
    Term: March 1, 2023 - February 28, 2026
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)
    URL: https://foerderportal.bund.de/foekat/jsp/SucheAction.do?actionMode=view&fkz=05D23WE1

    Synchrotron tomography is characterized by extremely brilliant X-rays, which enable almost artifact-free imaging. Furthermore, very high resolution can be achieved by using special X-ray optics, and the special design of synchrotron facilities also allows fast in-situ experiments, i.e. 4D tomography. The combination of these features enables high-resolution computed tomography of objects where conventional laboratory CT fails. At the same time, however, this also produces enormous amounts of data that are generally unprocessable by end users, pushing even the operators of synchrotrons to their limits.

    The goal of the KI4D4E project is to develop AI-based methods that end users can apply to process the enormous amounts of data in such 4D CT measurements. This includes improving image quality through artifact reduction, as well as reducing the data volume and making the data accessible to end users to help them interpret the results.

    The project focuses on the topics of artifact reduction, segmentation and visualization of large 4D data sets. The resulting methods should be applicable to data from both photon and neutron sources.

  • Maschinelles Lernen und Datenanalyse für heterogene, artübergreifende Daten (X02)


    (Third Party Funds Group – Sub project)
    Overall project: SFB 1540: Erforschung der Mechanik des Gehirns (EBM): Verständnis, Engineering und Nutzung mechanischer Eigenschaften und Signale in der Entwicklung, Physiologie und Pathologie des zentralen Nervensystems
    Term: January 1, 2023 - December 31, 2026
    Funding source: DFG / Sonderforschungsbereich (SFB)

    X02 uses the image data and mechanical measurements generated within EBM to develop deep learning methods that transfer knowledge across species. In silico and in vitro analyses will provide considerably more specific data than in vivo experiments, especially for human tissue. To exploit insights from data-rich experiments, we will develop transfer learning algorithms for heterogeneous data. In this way, machine learning can be made usable even under severely data-limited conditions. The goal is to enable a holistic understanding of image data and mechanical measurements across species boundaries.

  • Temporally resolved 3-D retinal blood flow quantification using advanced motion correction and signal reconstruction in optical coherence tomography angiography


    (Third Party Funds Single)
    Term: since November 15, 2022
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    Optical coherence tomography (OCT) produces volumetric 3-D images of tissue with micrometer resolution by scanning a laser beam and measuring the amplitude and time delay of backscattered light. OCT has had a major impact on ophthalmology and has become a standard imaging modality for diagnosis, for monitoring disease progression and treatment response, and for investigating the pathogenesis of diseases such as diabetic retinopathy, age-related macular degeneration, and glaucoma. The recent development of OCT angiography (OCTA) has dramatically accelerated basic and clinical research. OCTA performs depth-resolved (3-D) imaging of the retinal microvasculature by repeatedly imaging the same retinal location and detecting the motion contrast of moving blood cells. Compared to conventional approaches based on injected contrast agents, OCTA has the advantage of being non-invasive, so imaging can be performed at every patient visit, enabling longitudinal studies.

    However, OCTA also has some limitations. Because repeated imaging is required to detect blood flow, acquisition times are long and the data can be distorted by eye motion and image artifacts, which complicates quantitative longitudinal analysis. OCTA algorithms can detect the presence of blood flow, but they are limited in their ability to resolve subtle changes in flow that may be early signs of disease. Temporal variations in flow caused by the cardiac cycle or by the functional response of the retina are difficult to study.

    We propose to develop a new framework for OCTA that enables capillary-level motion correction, differentiates blood flow velocities, and allows analysis on multiple time scales (4-D OCTA). The ability to go beyond visualizing the microvasculature and to assess flow and its temporal variations enables the assessment of subtle impairments of microvascular perfusion as well as of the cardiac cycle and the response to functional stimulation. Combined with vascular structural imaging, these advances promise to provide new disease markers at earlier disease stages, to allow more accurate measurement of disease progression and treatment response in pharmaceutical trials, and to contribute to elucidating the pathogenesis of retinal diseases.

  • 'Werck der bücher': Transitions, experimentation, and collaboration in reprographic technologies, 1440–1470


    (Third Party Funds Single)
    Term: June 1, 2022 - May 31, 2025
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
  • UtilityTwin


    (Third Party Funds Group – Overall project)
    Term: September 1, 2021 - August 31, 2024
    Funding source: Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie (StMWi) (seit 2018)

    In the UtilityTwin research project, an intelligent digital twin for any energy or water supply network is to be researched and developed on the basis of adaptive high-resolution sensor data (down to the sub-second range) and machine learning techniques. Overall, the technology concepts of Big Data and AI are to be combined in an innovative way in this research project in order to make positive contributions to the implementation of the energy transition and to counteract climate change.

  • SmartCT - Erforschung und Entwicklung von Methoden der Künstlichen Intelligenz für ein autonomes Roboter-CT System zur 3D-Digitalisierung beliebiger Objekte


    (Third Party Funds Group – Overall project)
    Term: June 1, 2021 - May 31, 2024
    Funding source: Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie (StMWi) (seit 2018)
    URL: https://www.pinterguss.de/forschung-entwicklung/smart-ct.html

    The SmartCT project develops and applies AI methods that enable robotic CT systems to digitize the outer and inner structures of arbitrary objects autonomously. The data generated in this way form the basis of novel, innovative, data-driven business models in many areas such as product development, production, trade, maintenance, security, and recycling.
    Robotic CT systems can digitize arbitrary objects (vehicle components, aircraft wings, battery cells, parcels, etc.) non-destructively. However, these systems are highly complex and so far can only be operated and deployed by experts with considerable time and effort; they are therefore currently used only by large companies. The AI methods developed in SmartCT are intended to make it possible to digitize arbitrary objects efficiently and economically, so that the advantages of this technology also become fully accessible to small and medium-sized enterprises. At the same time, the project will lastingly increase the acceptance of robot-based CT systems in industry.

  • Integratives Konzept zur personalisierten Präzisionsmedizin in Prävention, Früh-Erkennung, Therapie und Rückfallvermeidung am Beispiel von Brustkrebs - DigiOnko


    (Third Party Funds Single)
    Term: October 1, 2020 - September 30, 2024
    Funding source: Bayerisches Staatsministerium für Gesundheit und Pflege, StMGP (seit 2018)
  • Integratives Konzept zur personalisierten Präzisionsmedizin in Prävention, Früh-Erkennung, Therapie und Rückfallvermeidung am Beispiel von Brustkrebs


    (Third Party Funds Single)
    Term: October 1, 2020 - September 30, 2024
    Funding source: Bayerisches Staatsministerium für Gesundheit und Pflege, StMGP (seit 2018)

    Breast cancer is one of the leading causes of death in the field of oncology in Germany. For the successful care and treatment of patients with breast cancer, a high level of information for those affected is essential in order to achieve a high level of compliance with the established structures and therapies. On the one hand, the digitalisation of medicine offers the opportunity to develop new technologies that increase the efficiency of medical care. On the other hand, it can also strengthen patient compliance by improving information and patient integration through electronic health applications. Thus, a reduction in mortality and an improvement in quality of life can be achieved. Within the framework of this project, digital health programmes are going to be created that support and complement health care. The project aims to provide better and faster access to new diagnostic and therapeutic procedures in mainstream oncology care, to implement eHealth models for more efficient and effective cancer care, and to improve capacity for patients in oncological therapy in times of crisis (such as the SARS-CoV-2 pandemic). The Chair of Health Management is conducting the health economic evaluation and analysing the extent to which digitalisation can contribute to a reduction in the costs of treatment and care as well as to an improvement in the quality of life of breast cancer patients.

  • Bereitstellung einer Infrastruktur zur Nutzung für die Ausbildung Studierender auf einem z/OS Betriebssystem der Fa. IBM


    (FAU Funds)
    Term: April 2, 2020 - March 31, 2025
    Funding source: Friedrich-Alexander-Universität Erlangen-Nürnberg
  • Advancing osteoporosis medicine by observing bone microstructure and remodelling using a four-dimensional nanoscope


    (Third Party Funds Group – Sub project)
    Overall project: Advancing osteoporosis medicine by observing bone microstructure and remodelling using a four-dimensional nanoscope
    Term: April 1, 2019 - March 31, 2025
    Funding source: EU - 8. Rahmenprogramm - Horizon 2020
  • Advancing osteoporosis medicine by observing bone microstructure and remodelling using a four-dimensional nanoscope


    (Third Party Funds Single)
    Term: April 1, 2019 - March 31, 2025
    Funding source: European Research Council (ERC)
    URL: https://cordis.europa.eu/project/id/810316

    Due to Europe's ageing society, there has been a dramatic increase in the occurrence of osteoporosis (OP) and related diseases. Sufferers have an impaired quality of life, and there is a considerable cost to society associated with the consequent loss of productivity and injuries. The current understanding of this disease needs to be revolutionized, but study has been hampered by a lack of means to properly characterize bone structure, remodeling dynamics and vascular activity. This project, 4D nanoSCOPE, will develop tools and techniques to permit time-resolved imaging and characterization of bone in three spatial dimensions (both in vitro and in vivo), thereby permitting monitoring of bone remodeling and revolutionizing the understanding of bone morphology and its function.

    To advance the field, in vivo high-resolution studies of living bone are essential, but existing techniques are not capable of this. By combining state-of-the-art image processing software with innovative 'precision learning' software methods to compensate for artefacts (due e.g. to the subject breathing or twitching) and innovative X-ray microscope hardware, which together will greatly speed up image acquisition (the aim is a factor of 100), the project will enable in vivo X-ray microscopy studies of small animals (mice) for the first time. The time series of three-dimensional X-ray images will be complemented by correlative microscopy and spectroscopy techniques (with new software) to thoroughly characterize (serial) bone sections ex vivo.

    The resulting three-dimensional datasets combining structure, chemical composition, transport velocities and local strength will be used by the PIs and international collaborators to study the dynamics of bone microstructure. This will be the first time that this has been possible in living creatures, enabling an assessment of the effects on bone of age, hormones, inflammation and treatment.

Former Projects from 2017 on

  • Artificial Intelligence as a Market Participant – Implications for Antitrust Law


    (FAU Funds)
    Term: January 15, 2023 - January 14, 2024

    Introduction: Antitrust laws (also known as competition laws) are designed to encourage strong competition and to protect consumers from predatory commercial practices. The primary goals of antitrust law are to ensure the functioning of markets and to guarantee fair competition. A prominent example of an antitrust violation is illegal price fixing. By definition, this is an agreement between competitors that fixes prices or other competitive conditions and thus violates the principle of pricing through free market forces. A typical feature of illegal price fixing is verifiable communication (written or verbal) between human market participants. However, in the age of artificial intelligence and e-commerce, the definition and the detection of this illegal practice face new challenges, as collusive behaviors that violate antitrust laws, such as coordinated pricing, can be partially or fully automated [1]. Furthermore, the communication between market participants can be both overt and covert. Finally, market participants can be artificial agents, which might be affected by perverse instantiation [2]. In other words, new technological possibilities are available to disguise illegal pricing policies and business practices.

    Recent research, mainly from an economic and jurisprudential point of view, concludes that the intensive application of AI algorithms in e-commerce will increase the extent of known forms of anticompetitive behavior [3][4]. However, the question of whether and to what extent collusive behavior can emerge from AI itself (an as yet unknown form of anticompetitive behavior) is rarely addressed. Feasibility studies and comprehensive analyses comprising the implementation of AI methods and the validation of the derived hypotheses have not been conducted so far. Therefore, the main goals of this project are to investigate the possibilities of collusive behavior stimulated by, or emerging from, AI algorithms on digital marketplaces and to derive consequences for antitrust law and competition policy. To the best of our knowledge, this is the first research project in the field of Antitrust and AI (AAI) that focuses on the mathematical and algorithmic perspective of the question to what extent the use of AI methods facilitates collusive behavior in the digital economy.

    Objectives: In order to validate the hypothesis that AI algorithms are able to develop and communicate collusive behavior on digital marketplaces in both overt and covert fashion, comprehensive emulators of online marketplaces in different setups will be implemented. Furthermore, the different communication channels (both overt and covert) of digital marketplaces will be identified and understood, as this is highly relevant to the detection of collusive practices. Finally, different online trading scenarios utilizing AI algorithms will be established, and the impact on antitrust law and competition policy will be derived. In total, the main aspects of the intended DFG application can be summarized as follows:

    1. As the research topic belongs to a highly interdisciplinary field, a comprehensive literature review is necessary to define the problem space of the research and is of great importance for conducting the subsequent experiments successfully. Therefore, a comprehensive literature review covering antitrust law, game theory, artificial intelligence and cyber security will be conducted.

    2. The first step of the implementation is the holistic emulation of the digital marketplace. The market emulator should be able to emulate the digital market following various rules (e.g., Cournot vs. collusive competition) and at different sizes (i.e., with different numbers of market participants). Moreover, state-of-the-art algorithms for dynamic pricing should be replicated and integrated into the market emulator as well.

    3. A further aspect of this project is the communication mechanism in the era of e-commerce and AI. The known forms of collusion mostly utilize overt communication. However, covert communication channels (i.e., channels that were not originally designed for communication and are therefore hard to detect [5][6]) pose additional vulnerabilities of online marketplaces. The mechanisms and capacities of covert channels facilitating collusive behavior (e.g., illegal price fixing) should be investigated with the implemented market emulator.

    4. Finally, artificial agents for pricing different products should be proposed and implemented following different competition models and market complexities, aiming at answering the central research question of this project, i.e., under which capabilities and conditions collusive behavior emerges among artificial agents by themselves. This step can be achieved using reinforcement learning techniques. Technical opportunities and challenges for discriminating between collusive and non-collusive behavior potentially emerging from the artificial agents should be explored as well.

    The entire project will be supervised by experts from three disciplines. Prof. Jochen Hoffmann (Chair of Private Business Law) will support this research project with his knowledge and expertise in antitrust law, Prof. Felix Freiling (Chair of Cyber Security) will advise on the aspects related to covert communication and cyber security, and Prof. Andreas Maier (Pattern Recognition Lab) will mentor this project from the AI point of view.

    [1] Maier A., Bayer S. Künstliche Intelligenz als Marktteilnehmer – Technische Möglichkeiten. Mohr Verlag. Submitted, unpublished.

    [2] Bostrom N. Superintelligence: Paths, Dangers, Strategies. Minds & Machines 25, pp. 285–289 (2015).

    [3] Petit N. Antitrust and artificial intelligence: A research agenda. Journal of European Competition Law and Practice, Vol. 8, Issue 6, pp. 361–362. Oxford University Press (2017).

    [4] Beneke F., Mackenrodt M. Remedies for algorithmic tacit collusion. Journal of Antitrust Enforcement, Vol. 9, Issue 1, pp. 152–176 (2021).

    [5] Eßer H.-G., Freiling F.C. Kapazitätsmessung eines verdeckten Zeitkanals über HTTP. Univ. Mannheim, Technischer Bericht TR-2005-10, November 2005.

    [6] Freiling F.C., Schinzel S. Detecting Hidden Storage Side Channel Vulnerabilities in Networked Applications. In: Camenisch J., Fischer-Hübner S., Murayama Y., Portmann A., Rieder C. (eds) Future Challenges in Security and Privacy for Academia and Industry. SEC 2011. IFIP Advances in Information and Communication Technology, vol 354. Springer, Berlin, Heidelberg (2011).

  • Font Group Recognition for Improved OCR


    (Third Party Funds Single)
    Term: August 1, 2021 - August 1, 2023
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    Although OCR-D made huge progress in the last project phase in providing OCR for early printed books, it still faces two major problems: the huge variety of the material makes it extremely challenging to use generic OCR models, yet selecting specific models is not possible because the sheer amount of material prevents a fully automatic workflow. This situation is further complicated by the lack of appropriate OCR training data. Current data sets consist overwhelmingly of texts in Fraktur, especially from the 19th century. This completely neglects the large typographic variety displayed by printing in the three previous centuries. Therefore, and in response to the demand from SLUB Dresden and ULB Halle, we propose to improve the current situation significantly by 1) fine-tuning our font group recognition system to such a degree that it can be used at character level; 2) transcribing more specific OCR training data for the 16th–18th century, which includes popular fonts such as Schwabacher, other bastarda types and old Fraktur styles; 3) training font-specific OCR models as well as integrated models that recognise both typeface and text simultaneously. This approach has ensured in other contexts that the network performs better on both individual tasks, as we can thus reduce overfitting during training. This project will improve OCR quality significantly, especially for books in non-Fraktur fonts. It will also provide a training data set of very high quality that can be reused in the long term. Finally, the project will provide a more fine-grained font recognition tool that, beyond enabling font-specific OCR, also has important applications in text attribute recognition and layout analysis.

  • ODEUROPA: Negotiating Olfactory and Sensory Experiences in Cultural Heritage Practice and Research


    (Third Party Funds Group – Sub project)
    Overall project: ODEUROPA
    Term: January 1, 2021 - December 31, 2022
    Funding source: EU - 8. Rahmenprogramm - Horizon 2020
    URL: https://odeuropa.eu/

    Our senses are gateways to the past. Although museums are slowly discovering the power of multi-sensory presentations, we lack the scientific standards, tools and data to identify, consolidate, and promote the wide-ranging role of scents and smelling in our cultural heritage. In recent years, European cultural heritage institutions have invested heavily in large-scale digitization. A wealth of object, text and image data that can be analysed using computer science techniques now exists. However, the potential olfactory descriptions, experiences, and memories that they contain remain unexplored. We recognize this as both a challenge and an opportunity. Odeuropa will apply state-of-the-art AI techniques to text and image datasets that span four centuries of European history. It will identify the vocabularies, spaces, events, practices, and emotions associated with smells and smelling. The project will curate this multi-modal information, following semantic web standards, and store the enriched data in a ‘European Olfactory Knowledge Graph’ (EOKG). We will use this data to identify ‘storylines’, informed by cultural history and heritage research, and share these with different audiences in different formats: through demonstrators, an online catalogue, toolkits and training documentation describing best-practices in olfactory museology. New, evidence-based methodologies will quantify the impact of multisensory visitor engagement. This data will support the implementation of policy recommendations for recognising, promoting, presenting and digitally preserving olfactory heritage. These activities will realize Odeuropa’s main goal: to show that smells and smelling are important and viable means for consolidating and promoting Europe’s tangible and intangible cultural heritage.

  • Förderantrag zur Entwicklung des Kurses „Deep Learning for beginners“


    (Third Party Funds Single)
    Term: September 1, 2020 - August 31, 2021
    Funding source: Virtuelle Hochschule Bayern
  • Intelligent MR Diagnosis of the Liver by Linking Model and Data-driven Processes (iDELIVER)


    (Third Party Funds Single)
    Term: August 3, 2020 - March 31, 2023
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)

    The project examines the use and further development of machine learning methods for MR image reconstruction and for the classification of liver lesions. Starting from a comparison of model- and data-driven image reconstruction methods, these are to be systematically linked in order to enable high acceleration without sacrificing diagnostic value. In addition to the design of suitable networks, research should also be carried out to determine whether metadata (e.g. the age of the patient) can be incorporated into the reconstruction. Furthermore, suitable classification algorithms on an image basis are to be developed, and the potential of direct classification on the raw data is to be explored. In the long term, intelligent MR diagnostics can significantly increase the efficiency of use of MR hardware, guarantee better patient care and set new impulses in medical technology.

  • From Micro To Macro: Multiscale Multimodal Data Analysis for Breast Cancer Research


    (Third Party Funds Single)
    Term: May 4, 2020 - May 5, 2023
    Funding source: Industrie

    From Micro To Macro: Multiscale Multimodal Data Analysis for Breast Cancer Research

  • Entwicklung eines Leitfadens zur dreidimensionalen zerstörungsfreien Erfassung von Manuskripten


    (Third Party Funds Single)
    Term: May 1, 2020 - April 30, 2022
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
    URL: https://gepris.dfg.de/gepris/projekt/433501541?context=projekt&task=showDetail&id=433501541&

    In the course of massive digitization, a large part of libraries' archived documents are currently being converted into electronic formats. However, digitization is also reaching its limits. Scanning robots cannot digitize documents whose condition, due to natural ageing or external influences, prohibits conventional, optically based processing. Our own preliminary work has shown that the three imaging techniques X-ray Computed Tomography, Phase Contrast X-ray Computed Tomography and Terahertz Imaging are suitable for providing non-invasive insights into such documents, allow the acquisition of digital imaging information and are capable of re-enabling an efficient automated process to digitize cultural heritage documents.

    This research project is the first to develop a concrete digitization strategy or method for such documents. This structured evaluation will be based on a quality value that allows statements to be made about the expected result of digitization with one of the mentioned modalities for certain historical materials. From this, the most suitable imaging procedure can be determined. Based on these findings, a guideline for the digitization of fragile documents will be developed to predict the quality, feasibility and possible damage before a scan. In addition, algorithms will be developed that virtually process the generated data and make them readable for the human eye.

    Three concrete goals will be pursued to carry out the research project. By evaluating the modalities for selected historical materials, the most appropriate procedure for a specific document is to be identified. At the end of the project, a guide will be made available, and the possibilities of each modality will be demonstrated by specifying material combinations and relevant parameters. It will be possible to test the variation of the recording parameters and to display exemplary results using the generated database. This also makes it possible to calculate a quality value. The basis for such a guide is the evaluation of the three modalities for relevant materials. For this purpose, realistic test specimens are produced. Both the scan quality and resolution as well as possible damage to the document must be considered. The guide will then be used to identify the most suitable procedure for a specific document. This statement is based on the mentioned quality value, which will also be used to predict the optimal digitization modality and the quality for an unknown document.

    The evaluation of several modalities as well as the development of algorithms are to be seen as the central challenges of the research project. It will become possible to store endangered holdings in a digital format without destroying their structure through manual intervention. In the second funding phase, a multi-modal solution will be investigated in which the disadvantages and limitations of individual modalities are compensated by combining several modalities.

  • Verbesserte Dual Energy Bildgebung mittels Maschinellem Lernen


    (Third Party Funds Single)
    Term: April 1, 2020 - December 31, 2020
    Funding source: Europäische Union (EU)

    The project aims to develop novel and innovative methods to improve the visualisation and use of dual energy CT (DECT) images. Compared to conventional single energy CT (SECT) scans, DECT contains a significant amount of additional quantitative information that enables tissue characterization far beyond what is possible with SECT, including material decomposition for quantification and labelling of specific materials within tissues, creation of reconstructions at different predicted energy levels, and quantitative spectral tissue characterization for tissue analysis. However, despite the many potential advantages of DECT, applications remain limited to specialized clinical settings. Some reasons are that many applications are specific to the organ under investigation, require additional manual processing or calibration, and cannot easily be manipulated using the standard interactive contrast visualisation windows available in clinical viewing stations. This is a significant disadvantage compared to conventional SECT.
    In this project, we propose to develop new strategies to fuse and display the additional DECT information on a single contrast scale such that it can be visualised with the same interactive tools that radiologists are used to in their clinical routine. We will investigate non-linear manifold learning techniques like Laplacian Eigenmaps and the Sammon Mapping. Both allow extension using AI-based techniques like the newly developed user loss, which allows integrating users' opinions using forced-choice experiments. This will allow a novel image contrast that is compatible with the interactive window and level functions routinely used by radiologists. Furthermore, we aim at additional developments that will use deep neural networks to approximate the non-linear mapping function and to generate reconstructions that capture and display tissue-specific spectral characteristics in a readily and universally usable manner for enhancing diagnostic value.
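
    As a hedged illustration of the manifold-learning step, scikit-learn's SpectralEmbedding (an implementation of Laplacian Eigenmaps) can map dual-energy voxel pairs to a single scalar that is then windowed like an ordinary CT value; the synthetic data, neighbourhood size and rescaling below are assumptions for demonstration only.

    ```python
    # Minimal sketch: Laplacian Eigenmaps fuse dual-energy voxel pairs into one display value.
    import numpy as np
    from sklearn.manifold import SpectralEmbedding

    rng = np.random.default_rng(0)
    low_kv = rng.normal(50, 15, size=5000)            # HU-like values, low-energy image
    high_kv = 0.8 * low_kv + rng.normal(0, 5, 5000)   # correlated high-energy values
    voxels = np.stack([low_kv, high_kv], axis=1)      # (N, 2) dual-energy feature pairs

    embedding = SpectralEmbedding(n_components=1, n_neighbors=15)
    scalar = embedding.fit_transform(voxels)[:, 0]    # one fused value per voxel

    # Rescale to an 8-bit-like range so standard window/level tools apply.
    fused = np.interp(scalar, (scalar.min(), scalar.max()), (0, 255))
    print(fused.shape, fused.min(), fused.max())
    ```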

  • Molecular Assessment of Signatures ChAracterizing the Remission of Arthritis


    (Third Party Funds Single)
    Term: April 1, 2020 - September 30, 2022
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)

    MASCARA aims at a detailed molecular characterization of remission in arthritis. The project builds on the combined clinical and technical expertise of rheumatologists, radiologists, medical physicists, nuclear medicine specialists, gastroenterologists, basic-science biologists and computer scientists, and connects five academic centers in Germany. The project addresses 1) the growing number of arthritis patients in remission, 2) the challenge of distinguishing an effective suppression of inflammation from a cure, and 3) the limited knowledge about tissue changes in the joints of patients with arthritis. Based on preliminary data, MASCARA will investigate four key mechanistic areas (immunometabolic changes, mesenchymal tissue responses, resident immune cells and the protective function of the gut) that together determine the molecular state of remission. The project aims to collect synovial biopsies and perform subsequent tissue analysis in patients with active arthritis and patients in remission. The tissue analyses comprise (single-cell) mRNA sequencing, mass cytometry and the measurement of immune metabolites, complemented by molecular imaging techniques such as CEST MRI and FAPI PET. All data generated in the project will be merged and stored together with the data of the other partners in an existing database system. Machine learning will then be applied to the merged data to identify pattern matrices that are disease-specific and associated with disease activity.

  • Intelligente MR-Diagnostik der Leber durch Verknüpfung modell- und datengetriebener Verfahren


    (Third Party Funds Single)
    Term: April 1, 2020 - March 31, 2023
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)
  • Automatic exposure control (AEC) for CT based on neural network-driven patient-specific real-time assessment of dose distributions and minimization of the effective dose


    (Third Party Funds Single)
    Term: April 1, 2020 - March 31, 2023
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    Modern diagnostic CT systems comprise a variety of measures to keep patient dose at a minimum. Of particular importance is the tube current modulation (TCM) technique, which automatically adapts the tube current for each projection so as to minimize the tube current-time product (mAs product) for a given image quality. This can also be regarded as maximizing the image quality for a given mAs product. TCM performs a modulation depending on the angular position of the X-ray tube and on the z-position of the scan. Measures related to TCM are the automated choice of the mean tube current and of the optimal tube voltage. These three dose reduction methods are also known under the term automatic exposure control (AEC).

    As of today, however, the AEC does not minimize the actual patient dose and thereby the actual patient risk. It rather minimizes surrogates thereof. The surrogate of TCM is the mAs product. The surrogate used to automatically select the tube voltage is the CTDI value or the dose-length product (DLP). A direct minimization of the weighted summed organ dose values, and thereby the patient risk, is currently not practicable due to a) the very high computation times of Monte Carlo dose calculation algorithms and b) the lack of a reliable segmentation of the radiation-sensitive and risk-relevant organs.

    We therefore plan to use artificial neural networks to solve the above-mentioned problems and to realize a new AEC which is capable of directly minimizing patient dose and risk instead of using surrogate parameters. A first neural network will convert the patient topogram(s) into a coarse estimate of the CT volume. In cases where only a single topogram is available, information on the table height will be used for this estimation. A second neural network will segment the relevant organs. We can here partially use prior work from a previous DFG project (KA 1678/20, LE 2763/2, MA 4898/5). A third network will use further scan parameters (table increment, pitch value, rotation time, collimation, tube voltage, …) to compute the expected dose distribution per projection. This, together with the segmentation of the organs, shall be used to compute the effective dose (or risk) of the patient per projection. A minimization algorithm will then find the optimal tube current curve that minimizes patient risk at a given image quality or that maximizes image quality at a given patient risk.

    To evaluate our deep AEC algorithm, diagnostic CT data will be collected. The data will be retrospectively converted to the desired tube current curve by adding noise to the raw data, followed by another reconstruction. Experienced radiologists will then perform a blinded study in which they read images produced without AEC, with the conventional AEC as used today, and with our new deep AEC algorithm.
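
    The final optimization step could, in principle, look like the following sketch: given (here invented) per-projection dose-per-mAs and noise-variance factors, a constrained optimizer finds the tube current curve that minimizes the weighted dose while keeping a simple noise budget. This is an illustrative stand-in for the network-driven quantities described above, not the project's actual algorithm.

    ```python
    # Minimal sketch of risk-minimizing tube current modulation under a noise budget.
    import numpy as np
    from scipy.optimize import minimize

    n_proj = 180
    rng = np.random.default_rng(1)
    d = rng.uniform(0.5, 2.0, n_proj)        # effective dose per mAs for projection j (invented)
    v = rng.uniform(0.8, 1.2, n_proj)        # noise variance contribution per 1/mAs (invented)
    noise_budget = np.sum(v / 1.0)           # quality of a flat 1 mAs curve as reference

    def dose(I):
        return np.sum(d * I)

    constraints = [{"type": "ineq", "fun": lambda I: noise_budget - np.sum(v / I)}]
    bounds = [(0.1, 10.0)] * n_proj
    result = minimize(dose, x0=np.ones(n_proj), bounds=bounds, constraints=constraints)

    I_opt = result.x                          # risk-minimizing tube current curve
    print(dose(np.ones(n_proj)), "->", dose(I_opt))
    ```
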
  • Kommunikation und Sprache im Reich. Die Nürnberger Briefbücher im 15. Jahrhundert: Automatische Handschriftenerkennung - historische und sprachwissenschaftliche Analyse.


    (Third Party Funds Single)
    Term: October 1, 2019 - September 30, 2022
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
  • Kommunikation und Sprache im Reich. Die Nürnberger Briefbücher im 15. Jahrhundert: Automatische Handschriftenerkennung - historische und sprachwissenschaftliche Analyse


    (Third Party Funds Single)
    Term: October 1, 2019 - September 30, 2022
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
  • Kombinierte Iterative Rekonstruktion und Bewegungskompensation für die Optische Kohärenz Tomographie-Angiographie


    (Third Party Funds Single)
    Term: June 1, 2019 - May 31, 2021
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
  • Deep-Learning basierte Segmentierung und Landmarkendetektion auf Röntgenbildern für unfallchirurgische Eingriffe


    (Third Party Funds Single)
    Term: since May 6, 2019
    Funding source: Siemens AG
  • Improving multi-modal quantitative SPECT with Deep Learning approaches to optimize image reconstruction and extraction of medical information


    (Non-FAU Project)
    Term: April 1, 2019 - April 30, 2022

    This project aims to improve multi-modal quantitative SPECT with Deep Learning approaches to optimize image reconstruction and extraction of medical information. Such improvements include noise reduction and artifact removal from data acquired in SPECT.

  • ICONOGRAPHICS: Computational Understanding of Iconography and Narration in Visual Cultural Heritage


    (FAU Funds)
    Term: April 1, 2019 - March 31, 2021

    The interdisciplinary research project Iconographics is dedicated to innovative possibilities of digital image recognition for the arts and humanities. While computer vision is already often able to identify individual objects or specific artistic styles in images, the project is confronted with the open problem of also opening up the more complex image structures and contexts digitally. On the basis of a close interdisciplinary collaboration between Classical Archaeology, Christian Archaeology, Art History and the Computer Sciences, as well as joint theoretical and methodological reflection, a large number of multi-layered visual works will be analyzed, compared and contextualized. The aim is to make the complex compositional, narrative and semantic structures of these images tangible for computer vision.

    Iconography and narratology are identified as challenging research questions for all subjects of the project. The iconography will be interpreted in terms of its plot, temporality and narrative logic. Due to its complex cultural structure, we selected four important scenes:

    1. The Annunciation of the Lord
    2. The Adoration of the Magi
    3. The Baptism of Christ
    4. Noli me tangere (Do not touch me)
  • Big Data of the Past for the Future of Europe


    (Third Party Funds Group – Sub project)
    Overall project: TIME MACHINE : BIG DATA OF THE PAST FOR THE FUTURE OF EUROPE
    Term: March 1, 2019 - February 29, 2020
    Funding source: EU - 8. Rahmenprogramm - Horizon 2020
  • Deep Learning based Noise Reduction for Hearing Aids


    (Third Party Funds Single)
    Term: February 1, 2019 - January 31, 2023
    Funding source: Industrie
     

    Reduction of unwanted environmental noises is an important feature of today’s hearing aids, which is why noise reduction is nowadays included in almost every commercially available device. The majority of these algorithms, however, is restricted to the reduction of stationary noises. Due to the large number of different background noises in daily situations, it is hard to heuristically cover the complete solution space of noise reduction schemes. Deep learning-based algorithms pose a possible solution to this dilemma, however, they sometimes lack robustness and applicability in the strict context of hearing aids.
    In this project we investigate several deep learning-based methods for noise reduction under the constraints of modern hearing aids. This involves low-latency processing as well as the use of a hearing-instrument-grade filter bank. Another important aim is the robustness of the developed methods; therefore, the methods will be applied to real-world noise signals recorded with hearing instruments.
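
    A minimal sketch of such a method, assuming PyTorch and an invented number of filter-bank bands: a causal (unidirectional) recurrent network estimates a per-band gain mask frame by frame, which keeps the algorithmic latency at a single frame.

    ```python
    # Minimal sketch of a causal, frame-wise gain-mask estimator on filter-bank magnitudes.
    import torch
    import torch.nn as nn

    class CausalMaskEstimator(nn.Module):
        def __init__(self, n_bands: int = 48, hidden: int = 64):
            super().__init__()
            self.gru = nn.GRU(n_bands, hidden, batch_first=True)   # causal: no future frames
            self.out = nn.Sequential(nn.Linear(hidden, n_bands), nn.Sigmoid())

        def forward(self, noisy_mag, state=None):
            h, state = self.gru(noisy_mag, state)     # (B, T, hidden)
            gain = self.out(h)                         # per-band gains in [0, 1]
            return gain * noisy_mag, state             # enhanced magnitudes + GRU state

    model = CausalMaskEstimator()
    frame = torch.rand(1, 1, 48)                       # one filter-bank frame at a time
    state = None
    for _ in range(5):                                 # streaming, frame-wise inference
        enhanced, state = model(frame, state)
    print(enhanced.shape)                              # (1, 1, 48)
    ```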

  • PPP Brasilien 2019


    (Third Party Funds Single)
    Term: January 1, 2019 - December 31, 2020
    Funding source: Deutscher Akademischer Austauschdienst (DAAD)
  • Magnetic Resonance Imaging Contrast Synthesis


    (Non-FAU Project)
    Term: since January 1, 2019

    Research project in cooperation with Siemens Healthineers, Erlangen

    A Magnetic Resonance Imaging (MRI) exam typically consists of several MR pulse sequences that yield different image contrasts. Each pulse sequence is parameterized through multiple acquisition parameters that influence MR image contrast, signal-to-noise ratio, acquisition time, and/or resolution.

    Depending on the clinical indication, different contrasts are required by the radiologist to make a reliable diagnosis. This complexity leads to high variations of sequence parameterizations across different sites and scanners, impacting MR protocoling, AI training, and image acquisition.

    MR Image Synthesis

    The aim of this project is to develop a deep learning-based approach to generate synthetic MR images conditioned on various acquisition parameters (repetition time, echo time, image orientation). This work can support radiologists and technologists during the parameterization of MR sequences by previewing the yielded MR contrast, can serve as a valuable tool for radiology training, and can be used for customized data generation to support AI training.

    MR Image-to-Image Translations

    As MR acquisition time is expensive, and re-scans due to motion corruption or a premature scan end for claustrophobic patients may be necessary, a method to synthesize missing or corrupted MR image contrasts from existing MR images is required. Thus, this project aims to develop an MR contrast-aware image-to-image translation method, enabling us to synthesize missing or corrupted MR images with adjustable image contrast. Additionally, it can be used as an advanced data augmentation technique to synthesize different contrasts for the training of AI applications in MRI.
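
    A minimal sketch of the contrast-aware translation idea, with an invented toy architecture: the target acquisition parameters (e.g. normalized TE and TR) are broadcast over the image grid and concatenated to the source image before a small convolutional encoder-decoder.

    ```python
    # Minimal sketch of conditioning an image-to-image network on acquisition parameters.
    import torch
    import torch.nn as nn

    class ContrastTranslator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1 + 2, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, image, te, tr):
            b, _, h, w = image.shape
            # Broadcast the (normalized) target acquisition parameters over the image grid.
            cond = torch.stack([te, tr], dim=1).view(b, 2, 1, 1).expand(b, 2, h, w)
            return self.net(torch.cat([image, cond], dim=1))

    model = ContrastTranslator()
    source = torch.randn(2, 1, 128, 128)                 # e.g. acquired source-contrast slices
    te = torch.tensor([0.1, 0.5])                        # normalized target echo times
    tr = torch.tensor([0.3, 0.8])                        # normalized target repetition times
    synthetic = model(source, te, tr)
    print(synthetic.shape)                               # (2, 1, 128, 128)
    ```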

  • Artificial Intelligence for Reinventing European Healthcare


    (Third Party Funds Group – Sub project)
    Overall project: Artificial Intelligence for Reinventing European Healthcare
    Term: January 1, 2019 - December 31, 2019
    Funding source: EU - 8. Rahmenprogramm - Horizon 2020
  • Digitalization in clinical settings using graph databases


    (Non-FAU Project)
    Term: since October 1, 2018
    Funding source: Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie (StMWi) (seit 2018)

    In clinical settings, different data are stored in different systems. These data are very heterogeneous, but still highly interconnected. Graph databases are a good fit for this kind of data: they contain heterogeneous "data nodes" which can be connected to each other. The basic question is now whether and how clinical data can be used in a graph database, and most importantly how clinical staff can profit from this approach. Possible scenarios are a graphical user interface for clinical staff for easier access to required information, or an interface for evaluation and analysis to answer more complex questions (e.g., "Were there patients similar to this patient? How were they treated?").
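
    A hedged sketch of the "similar patients" scenario using the Neo4j Python driver; the node labels, relationship types and the similarity criterion (shared diagnoses) are hypothetical and do not reflect an actual clinical schema.

    ```python
    # Minimal sketch: query a graph database for patients with overlapping diagnoses.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    query = """
    MATCH (p:Patient {id: $patient_id})-[:HAS_DIAGNOSIS]->(d:Diagnosis)
          <-[:HAS_DIAGNOSIS]-(other:Patient)-[:RECEIVED]->(t:Treatment)
    RETURN other.id AS patient, count(DISTINCT d) AS shared_diagnoses,
           collect(DISTINCT t.name) AS treatments
    ORDER BY shared_diagnoses DESC
    LIMIT 10
    """

    with driver.session() as session:
        for record in session.run(query, patient_id="12345"):
            print(record["patient"], record["shared_diagnoses"], record["treatments"])

    driver.close()
    ```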

  • Laboranalyse von Degradationsmechanismen unter beschleunigter Alterung und Entwicklung geeigneter feldtauglicher bildgebender Detektionsverfahren und Entwicklung und Evaluation eines Algorithmus zur Fehlerdetektion und Prognostizierung der Ausfallwahrscheinlichkeit


    (Third Party Funds Group – Overall project)
    Term: August 1, 2018 - July 31, 2021
    Funding source: Bundesministerium für Wirtschaft und Technologie (BMWi)
  • Analysis of Defects on Solar Power Cells


    (Third Party Funds Group – Sub project)
    Overall project: iPV 4.0: Intelligente vernetzte Produktion mittels Prozessrückkopplung entlang des Produktlebenszyklus von Solarmodulen
    Term: August 1, 2018 - July 31, 2021
    Funding source: Bundesministerium für Wirtschaft und Technologie (BMWi)

    Over the last decade, a large number of solar power plants have been installed in Germany. To ensure a high performance, it is necessary to detect defects early. Therefore, it is required to control the quality of the solar cells during the production process, as well as to monitor the installed modules. Since manual inspections are expensive, a large degree of automation is required.
    This project aims to develop a new approach to automatically detect and classify defects on solar power cells and to estimate their impact on the performance. Further, big data methods will be applied to identify circumstances that increase the probability of a cell becoming defective. As a result, it will be possible to reject cells during production that have a high likelihood of becoming defective.
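
    A minimal sketch of the classification part, assuming PyTorch/torchvision and an invented defect taxonomy: a small CNN is fine-tuned on electroluminescence (EL) cell images.

    ```python
    # Minimal sketch of defect classification on EL cell images (illustrative classes and data).
    import torch
    import torch.nn as nn
    from torchvision import models

    classes = ["intact", "micro_crack", "inactive_area"]   # hypothetical defect taxonomy

    model = models.resnet18(weights=None)                  # pretrained weights could be loaded instead
    model.fc = nn.Linear(model.fc.in_features, len(classes))

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    # One dummy training step on random stand-in images (replace with a real DataLoader).
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, len(classes), (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(float(loss))
    ```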

  • Entwicklung eines Modellrepositoriums und einer Automatischen Schriftarterkennung für OCR-D


    (Third Party Funds Single)
    Term: July 1, 2018 - December 31, 2019
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
  • Similarity learning for art analysis


    (Third Party Funds Group – Sub project)
    Overall project: Critical Catalogue of Luther Portraits (1519-1530)
    Term: June 1, 2018 - February 28, 2021
    Funding source: andere Förderorganisation
    URL: https://www.gnm.de/forschung/forschungsprojekte/luther-bildnisse/

    The analysis of the similarity of portraits is an important issue for many sciences such as art history or digital humanities, as for instance it might give hints concerning serial production processes, authenticity or temporal and contextual classification of the artworks.
    In the project, algorithms will first be developed for cross-genre and multi-modal registration of portraits, in order to overlay digitized paintings and prints as well as paintings acquired with different imaging systems such as visible-light photography and infrared reflectography. Then, methods will be developed to objectively analyze the portraits according to their similarity.
    This project is part of a joint project of the FAU, the Germanisches Nationalmuseum (GNM) in Nuremberg and the Technology Arts Sciences (TH Köln) in Cologne. Goal of the interdisciplinary project covering art history, art technology, reformation history and computer science is the creation of a critical catalogue of Luther portraits (1519-1530).
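
    A hedged sketch of one standard way to register two portrait reproductions (OpenCV keypoints plus a RANSAC homography); the file names are placeholders, and the project's actual registration methods may differ.

    ```python
    # Minimal sketch: keypoint-based registration of two portrait images for overlay.
    import cv2
    import numpy as np

    painting = cv2.imread("luther_painting.png", cv2.IMREAD_GRAYSCALE)   # placeholder file
    print_img = cv2.imread("luther_print.png", cv2.IMREAD_GRAYSCALE)     # placeholder file

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(painting, None)
    kp2, des2 = sift.detectAndCompute(print_img, None)

    # Ratio-test matching, then a robust homography from the surviving matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the painting into the coordinate frame of the print for overlay/comparison.
    registered = cv2.warpPerspective(painting, H, (print_img.shape[1], print_img.shape[0]))
    cv2.imwrite("registered_overlay_input.png", registered)
    ```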

  • Critical Catalogue of Luther Portraits (1519-1530)


    (Third Party Funds Group – Overall project)
    Term: June 1, 2018 - May 31, 2021
    Funding source: andere Förderorganisation
    URL: https://www.gnm.de/forschung/projekte/luther-bildnisse/

    Quite a number of portraits of Martin Luther – known as the media star of the 16th century – can be found in today's museums and libraries. But how many portraits actually exist, and which of them are contemporary or in fact date from a later period? Unlike his works, the variety of contemporary portraits (paintings and prints) has so far been neither completely collected nor critically analyzed. Thus, a joint project of the FAU, the Germanisches Nationalmuseum (GNM) in Nuremberg and the Technology Arts Sciences (TH Köln) was initiated. The goal of the interdisciplinary project, covering art history, art technology, reformation history and computer science, is the creation of a critical catalogue of Luther portraits (1519-1530). In particular, the issues of authenticity, the dating of artworks and their historical usage context, as well as the potential existence of serial production processes, will be investigated.

  • Automatisiertes Intraoperatives Tracking zur Ablauf- und Dosisüberwachung in Röntgengestützten Minimalinvasiven Eingriffen


    (Third Party Funds Group – Sub project)
    Overall project: Automatisiertes Intraoperatives Tracking zur Ablauf- und Dosisüberwachung in Röntgengestützten Minimalinvasiven Eingriffen
    Term: June 1, 2018 - May 31, 2021
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)
  • Automatic Intraoperative Tracking for Workflow and Dose Monitoring in X-Ray-based Minimally Invasive Surgeries


    (Third Party Funds Single)
    Term: June 1, 2018 - May 31, 2021
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)

    The goal of this project is the investigation of multimodal methods for the evaluation of interventional workflows in the operation room. This topic will be researched in an international project context with partners in Germany and in Brazil (UNISINOS in Porto Alegre). Methods will be developed to analyze the processes in an OR based on signals from body-worn sensors, cameras and other modalities like X-ray images recorded during the surgeries. For data analysis, techniques from the field of computer vision, machine learning and pattern recognition will be applied. The system will be integrated in a way that body-worn sensors developed by Portabiles as well as angiography systems produced by Siemens Healthcare can be included alongside.

  • Modelling the progression of neurological diseases


    (Third Party Funds Group – Sub project)
    Overall project: Training Network on Automatic Processing of PAthological Speech
    Term: since May 1, 2018
    Funding source: Innovative Training Networks (ITN)

    The goal is to develop speech technology that allows unobtrusive monitoring of many kinds of neurological diseases. The state of a patient can degrade slowly between medical check-ups. We want to track the state of a patient unobtrusively, without the feeling of constant supervision. At the same time, the privacy of the patient has to be respected. We will concentrate on Parkinson's disease (PD) and thus on acoustic cues of change. The algorithms should run on a smartphone, track acoustic changes during regular phone conversations over time, and thus have to be low-resource. No speech recognition will be used, and only some analysis parameters of the conversation are stored on the phone and transferred to the server.
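
    A minimal sketch of the low-resource idea, using librosa for illustration: only a handful of summary parameters per call are computed and stored, no transcript or raw audio. The chosen features, thresholds and file name are assumptions, not the project's actual parameter set.

    ```python
    # Minimal sketch: compute a few privacy-preserving acoustic summary parameters per call.
    import numpy as np
    import librosa

    signal, sr = librosa.load("call_excerpt.wav", sr=8000, mono=True)   # placeholder recording

    f0, voiced_flag, _ = librosa.pyin(signal, fmin=60, fmax=400, sr=sr)  # frame-wise pitch
    rms = librosa.feature.rms(y=signal)[0]                               # frame-wise energy
    voiced = f0[voiced_flag]

    call_summary = {
        "median_f0_hz": float(np.median(voiced)),
        "f0_variability_hz": float(np.std(voiced)),          # reduced variability can indicate monotone speech
        "energy_variability": float(np.std(rms)),
        "speech_fraction": float(np.mean(rms > 0.5 * rms.mean())),
    }
    print(call_summary)   # only these few numbers would leave the phone
    ```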

  • Deep Learning Applied to Animal Linguistics


    (FAU Funds)
    Term: April 1, 2018 - April 1, 2022

    Deep Learning Applied to Animal Linguistics, in particular the analysis of underwater audio recordings of marine animals (killer whales):

    For marine biologists, the interpretation and understanding of underwater audio recordings is essential. Based on such recordings, possible conclusions about behaviour, communication and social interactions of marine animals can be made. Despite a large number of biological studies on the subject of orca vocalizations, it is still difficult to recognize a structure or semantic/syntactic significance of orca signals in order to derive any language and/or behavioral patterns. Due to a lack of techniques and computational tools, hundreds of hours of underwater recordings are still manually verified by marine biologists in order to detect potential orca vocalizations. In a post-processing step these identified orca signals are analyzed and categorized. One of the main goals is to provide a robust and automatic method which is able to detect orca calls within underwater audio recordings. A robust detection of orca signals is the baseline for any further and deeper analysis. Call type identification and classification based on pre-segmented signals can be used to derive semantic and syntactic patterns. In connection with the associated situational video recordings and behaviour descriptions (provided by several researchers on site), this can provide potential information about communication (a kind of language model) and behaviors (e.g. hunting, socializing). Furthermore, orca signal detection can be used in conjunction with localization software in order to provide researchers in the field with a more efficient way of searching for the animals, as well as with individual recognition.
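
    A minimal sketch of the detection baseline, assuming librosa and PyTorch with invented spectrogram settings: audio segments are converted to mel spectrograms and scored by a small CNN for the presence of an orca call. This is an illustrative placeholder for the actual DeepAL detection models.

    ```python
    # Minimal sketch of a spectrogram-based binary detector (orca call vs. noise).
    import numpy as np
    import librosa
    import torch
    import torch.nn as nn

    def to_melspec(wave, sr=44100):
        mel = librosa.feature.melspectrogram(y=wave, sr=sr, n_fft=2048, hop_length=512, n_mels=64)
        return torch.tensor(librosa.power_to_db(mel), dtype=torch.float32)

    detector = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(32, 1),              # logit: orca call present?
    )

    segment = np.random.randn(44100 * 2).astype(np.float32)   # 2 s of (dummy) hydrophone audio
    spec = to_melspec(segment).unsqueeze(0).unsqueeze(0)        # (1, 1, 64, frames)
    prob = torch.sigmoid(detector(spec))
    print(float(prob))   # sliding this over long recordings yields candidate call regions
    ```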

    For more information about the DeepAL project, please contact christian.bergler@fau.de.

  • Moderner Zugang zu historischen Quellen


    (Third Party Funds Group – Sub project)
    Overall project: Moderner Zugang zu historischen Quellen
    Term: March 1, 2018 - February 28, 2021
    Funding source: andere Förderorganisation
  • Medical Image Processing for Interventional Applications


    (Third Party Funds Single)
    Term: January 1, 2018 - December 31, 2018
    Funding source: Virtuelle Hochschule Bayern
  • Radiologische und Genomische Datenanalyse zur Verbesserung der Brustkrebstherapie


    (Third Party Funds Single)
    Term: January 1, 2018 - December 31, 2019
    Funding source: Bundesministerium für Bildung und Forschung (BMBF)
  • 3-D Multi-Contrast CINE Cardiac Magnetic Resonance Imaging


    (Non-FAU Project)
    Term: October 1, 2017 - September 30, 2020
  • Machine Learning Applications in Magnetic Resonance Imaging beyond Image Acquisition and Interpretation


    (Non-FAU Project)
    Term: since September 1, 2017

    Research project in cooperation with Siemens Healthineers, Erlangen

    Magnetic Resonance Imaging (MRI) is an important but complex imaging modality in current radiology. Artificial intelligence (AI) can play an important role in accelerating MR sequence acquisition as well as in supporting image interpretation and diagnosis. However, there are also opportunities beyond image acquisition and interpretation where AI can play a vital role in optimizing the clinical workflow and decreasing costs.

    Automated Protocoling

    One critical workflow step for an MRI exam is protocoling, i.e., selecting an adequate imaging protocol taking into account the ordered procedure, the clinical indication, and the medical history. Due to the complexity of MRI exams and the heterogeneity of MR protocols, this is a nontrivial task. The aim of this project is to analyze and quantify the challenges that complicate a robust approach to automated protocoling, and to propose solutions to these challenges.
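
    As a hedged illustration only (the order texts and protocol labels below are invented), one way to frame such a protocoling aid is as text classification over the order information:

    ```python
    # Toy protocol recommender: map free-text order information
    # (procedure, indication, history) to a protocol label.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    orders = [
        "MRI brain, suspected MS, prior optic neuritis",
        "MRI knee, meniscal tear after sports injury",
    ]
    protocols = ["brain_demyelination_protocol", "knee_routine_protocol"]  # hypothetical labels

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(orders, protocols)

    print(model.predict(["MRI brain, new demyelinating lesions?"]))
    ```

    In practice, the heterogeneity of MR protocols and exams mentioned above is exactly what such a naive formulation glosses over.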

    Automated Billing

    Moreover, reporting and documentation are crucial steps in the radiology workflow. We have therefore automated the selection of billing codes from modality log data for an MRI exam. Integrated into the clinical environment, this work has the potential to free the technologist from a non-value-adding administrative task, enhance the MRI workflow, and prevent coding errors.
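
    A minimal sketch of the idea, assuming the modality log has already been parsed into per-series records; the field names and billing codes are hypothetical:

    ```python
    # Toy mapping from parsed modality log entries of one exam to a billing code.
    def billing_code(log_entries):
        """log_entries: list of dicts such as {"region": "brain", "contrast": True}."""
        regions = {entry["region"] for entry in log_entries}
        with_contrast = any(entry.get("contrast", False) for entry in log_entries)

        if len(regions) > 1:
            return "MRI_MULTI_REGION"
        if with_contrast:
            return "MRI_SINGLE_REGION_WITH_CONTRAST"
        return "MRI_SINGLE_REGION_PLAIN"

    print(billing_code([{"region": "brain", "contrast": True}]))  # MRI_SINGLE_REGION_WITH_CONTRAST
    ```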

  • Joint Iterative Reconstruction and Motion Compensation for Optical Coherence Tomography Angiography


    (Third Party Funds Single)
    Term: August 1, 2017 - July 31, 2019
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)

    Optical coherence tomography (OCT) is a non-invasive 3-D optical imaging modality that is a standard of care in ophthalmology [1,2]. Since the introduction of Fourier-domain OCT [3], dramatic increases in imaging speed became possible, enabling 3-D volumetric data to be acquired. Typically, a region of the retina is scanned line by line, where each scanned line acquires a cross-sectional image or a B-scan. Since B-scans are acquired in milliseconds, slices extracted along a scan line, or the fast scan axis, are barely affected by motion. In contrast, slices extracted orthogonally to scan lines, i.e. in the slow scan direction, are affected by various types of eye motion occurring throughout the full, multi-second volume acquisition time. The most relevant types of eye movements during acquisition are (micro-)saccades, which can introduce discontinuities or gaps between B-scans, and slow drifts, which cause small, slowly changing distortion [4]. Additional eye motion is caused by pulsatile blood flow, respiration and head motion. Despite ongoing advances in instrument scanning speed [5,6], typical volume acquisition times have not decreased. Instead, the additional scanning speed is used for dense volumetric scanning or wider fields of view [7]. OCT angiography (OCTA) [8-11] multiplies the required number of scans by at least two, and even more scans are needed to accommodate recent developments in blood flow speed estimation which are based on multiple interscan times [12,13]. As a consequence, there is an ongoing need for improvement in motion compensation, especially in pathology [14-16].

    We develop novel methods for retrospective motion correction of OCT volume scans of the anterior and posterior eye, as well as for widefield imaging. Our algorithms are clinically usable because they are suitable for patients with limited fixation capabilities and an increased amount of motion, offer fast processing, and achieve high accuracy in both alignment and motion correction. By merging multiple accurately aligned scans, image quality can be increased substantially, enabling the inspection of novel features.
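
    The snippet below is not the project's algorithm, only a minimal sketch of the general idea under a simplifying assumption (purely translational motion between neighboring B-scans): rigidly align successive B-scans along the slow axis and average co-registered volumes to raise image quality.

    ```python
    # Naive slow-axis motion correction by pairwise rigid registration of B-scans,
    # followed by averaging of two co-registered volumes.
    import numpy as np
    from scipy.ndimage import shift
    from skimage.registration import phase_cross_correlation

    def align_slow_axis(volume):
        """volume: (n_bscans, depth, width) array; returns a roughly aligned copy."""
        corrected = volume.astype(float)
        for i in range(1, corrected.shape[0]):
            offset, _, _ = phase_cross_correlation(corrected[i - 1], corrected[i])
            corrected[i] = shift(corrected[i], offset, order=1, mode="nearest")
        return corrected

    def merge(volume_a, volume_b):
        # Averaging multiple aligned scans reduces noise and can fill saccade gaps.
        return 0.5 * (align_slow_axis(volume_a) + align_slow_axis(volume_b))
    ```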

  • Development of a digital therapy tool as an exercise supplement for speech disorders and facial paralysis


    (Third Party Funds Single)
    Term: June 1, 2017 - December 31, 2019
    Funding source: Bundesministerium für Wirtschaft und Technologie (BMWi)

    Dysarthria is an acquired, neurologically caused speech disorder. It primarily affects the coordination and execution of speech movements, but also facial expression. Dysarthria occurs particularly often after a stroke or traumatic brain injury, or with neurological diseases such as Parkinson's.

    As with all speech therapies, the treatment of dysarthria requires intensive training. Lasting therapeutic effects therefore only set in after an extensive treatment program lasting several weeks. So far, however, there are hardly any options for patients to monitor themselves, nor for therapeutic guidance in a home environment. Feedback to physicians and therapists about the success of the therapy is also rather patchy.

    This is exactly where the DysarTrain project comes in: it aims to create an interactive, digital therapy offering for speech training so that patients can perform their exercises at home. In close coordination with physicians, therapists, and patients, suitable therapy content for the treatment of dysarthria is first selected and digitized. In a second step, a therapy platform with appropriate communication, interaction, and supervision functions is set up. Assistance functions and feedback mechanisms are then developed for carrying out the training: the program should automatically report whether an exercise was performed well and, where applicable, what can still be improved. An automated evaluation of the therapy data allows physicians and therapists to individualize the form of therapy as simply as possible and to adapt it to the patient's current stage of therapy. This offering is integrated into the treatment process and evaluated together with physicians, therapists, and patients.

  • Verbesserte Charakterisierung des Versagensverhaltens von Blechwerkstoffen durch den Einsatz von Mustererkennungsmethoden


    (Third Party Funds Single)
    Term: April 1, 2017 - March 31, 2019
    Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
  • Development of multi-modal, multi-scale imaging framework for the early diagnosis of breast cancer


    (FAU Funds)
    Term: March 1, 2017 - June 30, 2020

    Breast cancer is the leading cause of cancer-related deaths in women and the second most common cancer worldwide. The development and progression of breast cancer is a dynamic biological and evolutionary process. It involves a composite organ system, with a transcriptome shaped by gene aberrations, epigenetic changes, the cellular biological context, and environmental influences. Breast cancer growth and response to treatment have a number of characteristics that are specific to the individual patient, for example the response of the immune system and the interaction with the neighboring tissue. The overall complexity of breast cancer is the main cause of the currently unsatisfactory understanding of its development and of the patient's therapy response. Although recent precision medicine approaches, including genomic characterization and immunotherapies, have shown clear improvements with regard to prognosis, the right treatment of this disease remains a serious challenge. The vision of the BIG-THERA team is to improve individualized breast cancer diagnostics and therapy, with the ultimate goal of extending the life expectancy of breast cancer patients. Our primary contribution in this regard is developing a multi-modal, multi-scale framework for the early diagnosis of the molecular sub-types of breast cancer, in a manner that supplements the clinical diagnostic workflow and enables the early identification of patients compatible with specific immunotherapeutic solutions.

  • Digital Pathology - New Approaches to the Automated Image Analysis of Histologic Slides


    (Own Funds)
    Term: since January 16, 2017

    The pathologist is still the gold standard in the diagnosis of diseases in tissue slides. Being human, the pathologist is on the one hand able to adapt flexibly to the high morphological and technical variability of histologic slides, but on the other hand of limited objectivity due to cognitive and visual traps.

    In diverse projects, we are applying and validating currently available tools and solutions in digital pathology, but we are also developing new solutions in automated image analysis to complement and support the pathologist, especially in areas of quantitative image analysis.
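
    As a small, hedged example of quantitative image analysis on a histologic slide (real pipelines would add stain normalization and learned models), nuclei-like regions in a stained patch can be counted by thresholding and connected-component labeling:

    ```python
    # Toy nuclei count on an H&E-stained image patch.
    from skimage import io, color, filters, measure, morphology

    def count_nuclei(patch_path, min_size=30):
        rgb = io.imread(patch_path)
        gray = color.rgb2gray(rgb)

        # Nuclei appear dark on a bright background; Otsu picks a global threshold.
        mask = gray < filters.threshold_otsu(gray)
        mask = morphology.remove_small_objects(mask, min_size=min_size)

        labels = measure.label(mask)
        return int(labels.max())  # number of connected nuclei-like regions

    # print(count_nuclei("he_patch.png"))
    ```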

  • Deep Learning for Multi-modal Cardiac MR Image Analysis and Quantification


    (Third Party Funds Single)
    Term: January 1, 2017 - May 1, 2020
    Funding source: Deutscher Akademischer Austauschdienst (DAAD)

    Cardiovascular diseases (CVDs) and other cardiac pathologies are the leading cause of death in Europe and the USA. Timely diagnosis and post-treatment follow-ups are imperative for improving survival rates and delivering high-quality patient care. These steps rely heavily on numerous cardiac imaging modalities, including CT (computed tomography), coronary angiography, and cardiac MRI. Cardiac MRI is a non-invasive imaging modality used to detect and monitor cardiovascular diseases. Consequently, quantitative assessment and analysis of cardiac images is vital for diagnosis and for devising suitable treatments. The reliability of quantitative metrics that characterize cardiac function, such as myocardial deformation and ventricular ejection fraction, depends heavily on the precision of the heart chamber segmentation and quantification. In this project, we aim to investigate deep learning methods to improve the diagnosis and prognosis for CVDs.
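
    As a worked example of one of the metrics mentioned above, the following computes the left-ventricular ejection fraction from binary end-diastolic and end-systolic segmentations; the masks and voxel volume are toy values:

    ```python
    # Ejection fraction from segmentation masks: EF = 100 * (EDV - ESV) / EDV.
    import numpy as np

    def ejection_fraction(ed_mask, es_mask, voxel_volume_ml):
        """ed_mask/es_mask: binary LV blood-pool masks at end-diastole/end-systole."""
        edv = ed_mask.sum() * voxel_volume_ml   # end-diastolic volume (ml)
        esv = es_mask.sum() * voxel_volume_ml   # end-systolic volume (ml)
        return 100.0 * (edv - esv) / edv        # EF in percent

    # Toy masks: 80 vs. 30 voxels of 1 ml each -> EF = 100 * (80 - 30) / 80 = 62.5 %
    ed = np.zeros((10, 10, 10), dtype=bool); ed[:8, :, :1] = True
    es = np.zeros((10, 10, 10), dtype=bool); es[:3, :, :1] = True
    print(ejection_fraction(ed, es, voxel_volume_ml=1.0))  # 62.5
    ```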