Latent Space Modeling for Event Detection in Power Grid Data
This project explores how latent representations learned from raw grid waveforms can reveal underlying structure and enable early detection of abnormal events. By modeling high-frequency voltage and current signals, we aim to distinguish critical disturbances from normal behavior with minimal delay.
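As a purely illustrative baseline (not part of the project description), the sketch below learns a linear latent space from windows of normal waveform data and flags windows with unusually high reconstruction error as candidate events. The file names, window length, component count, and threshold are placeholder assumptions; NumPy and scikit-learn are assumed to be available.

```python
import numpy as np
from sklearn.decomposition import PCA

def make_windows(signal, window=256, stride=64):
    """Slice a 1-D waveform into overlapping windows (rows of a 2-D array)."""
    idx = np.arange(0, len(signal) - window + 1, stride)
    return np.stack([signal[i:i + window] for i in idx]), idx

# Hypothetical data: 'normal' holds disturbance-free recordings, 'stream' is new data.
normal = np.load("normal_voltage.npy")    # placeholder file name
stream = np.load("incoming_voltage.npy")  # placeholder file name

X_train, _ = make_windows(normal)
X_test, starts = make_windows(stream)

# Fit a low-dimensional latent space on normal behaviour only.
pca = PCA(n_components=8).fit(X_train)

# Anomaly score = reconstruction error after projecting through the latent space.
recon = pca.inverse_transform(pca.transform(X_test))
score = np.mean((X_test - recon) ** 2, axis=1)

# Flag windows whose error exceeds a threshold calibrated on normal data.
train_err = np.mean((X_train - pca.inverse_transform(pca.transform(X_train))) ** 2, axis=1)
threshold = np.percentile(train_err, 99)
events = starts[score > threshold]
print(f"{len(events)} candidate event windows")
```

A deep latent model (e.g. an autoencoder) could replace the PCA step, but the detection logic based on reconstruction error would look the same.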
Report Generation in Pathology Using Whole Slide Images (WSIs)
This project focuses on developing methods for processing large-scale digital pathology datasets and extracting meaningful features from whole slide images to support automated report generation. Emphasis is placed on efficient handling of gigapixel image data and preparing it for use in vision-language models for clinical applications.
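Purely as an illustration of what "efficient handling of gigapixel image data" can look like in practice, the sketch below tiles a slide into patches and extracts one feature vector per tissue patch. The OpenSlide Python bindings, a torchvision ResNet-18 backbone, the slide path, patch size, and the crude background filter are all assumptions, not project specifications.

```python
import numpy as np
import openslide                      # OpenSlide Python bindings
import torch
from torchvision import models, transforms

# Placeholder slide path; real WSIs are typically .svs/.ndpi/.tiff files.
slide = openslide.OpenSlide("example_slide.svs")
level = 1                             # arbitrarily chosen downsampled pyramid level
width, height = slide.level_dimensions[level]
patch = 256

# Frozen ImageNet backbone as a stand-in feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()
to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

features = []
ds = int(slide.level_downsamples[level])
with torch.no_grad():
    for y in range(0, height - patch, patch):
        for x in range(0, width - patch, patch):
            # read_region expects level-0 coordinates, hence the downsample factor.
            tile = slide.read_region((x * ds, y * ds), level, (patch, patch)).convert("RGB")
            if np.array(tile).mean() > 230:   # crude white-background filter
                continue
            features.append(backbone(to_tensor(tile).unsqueeze(0)).squeeze(0))

bag = torch.stack(features)           # one feature vector per tissue patch
print(bag.shape)                      # e.g. (n_patches, 512) for a ResNet-18 backbone
```

Such a "bag" of patch features is a common intermediate representation before feeding the slide into a vision-language model for report generation.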
Few-Shot Adaptation of Generalist Vision Models for Gastrointestinal Medical Image Analysis
Advancing Lung Imaging Assessment in Nuclear Medicine (Master Thesis)
Molecular imaging of lung ventilation and perfusion enables functional assessment that is clinically useful for managing pulmonary diseases. This project focuses on developing and evaluating new methods for the advanced visualization and automated quantification of three-dimensional lung imaging in nuclear medicine.
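Purely as an illustrative sketch of what "automated quantification" can mean here, the snippet below computes per-region count shares and a simple ventilation/perfusion (V/Q) ratio from two co-registered 3-D volumes and a lung-label mask. The file names, the right/left region split, and the use of plain NumPy arrays are assumptions rather than details taken from the project.

```python
import numpy as np

# Hypothetical inputs: co-registered 3-D count volumes and an integer lung-label
# mask (e.g. 1 = right lung, 2 = left lung). File names are placeholders.
ventilation = np.load("ventilation_spect.npy")
perfusion = np.load("perfusion_spect.npy")
lung_mask = np.load("lung_labels.npy")

def regional_shares(volume, mask, labels=(1, 2)):
    """Sum counts per labelled lung region and express them as percentages."""
    totals = {lab: float(volume[mask == lab].sum()) for lab in labels}
    grand_total = sum(totals.values())
    return {lab: 100.0 * t / grand_total for lab, t in totals.items()}

v_share = regional_shares(ventilation, lung_mask)
q_share = regional_shares(perfusion, lung_mask)

# Per-region ventilation/perfusion ratio from the relative count shares.
for lab in v_share:
    print(f"region {lab}: V={v_share[lab]:.1f}%  Q={q_share[lab]:.1f}%  "
          f"V/Q={v_share[lab] / q_share[lab]:.2f}")
```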
Your profile and skills:
- You have programming proficiency with Python
- Familiarity with medical imaging and image processing is a plus
- You work analytically, in a structured and quality-conscious manner
- You are able to work independently and enjoy a collaborative team environment
- You have excellent communication skills in English
Please send your transcript of records, your CV, and a short motivation letter explaining why you are interested in the topic to maximilian.reymann@fau.de.
Evaluating Large Language Models Using Gameplay (ClemBench)
Deep Learning-Based Prostate Cancer Grading from Whole-Slide Images
Exploring Species-level Similarity in Bayesian Stimulus Priors of Artificial Intelligent Agents
Deep Learning-based Classification of Body Regions in Intraoperative X-Ray Images
Automated Patient Positioning (MRI) using nnUNet
Diffusion Transformers for CT Artifact Compensation
Computed Tomography (CT) is one of the most important modalities in modern medical imaging, providing invaluable cross-sectional anatomical information for diagnosis, treatment planning, and disease monitoring. Despite its widespread utility, the quality of CT images can be significantly degraded by artifacts arising from physical limitations, patient-related factors, or system imperfections. These artifacts, manifesting as streaks, blurring, or distortions, can obscure critical diagnostic details, potentially leading to misinterpretations and compromising patient care. While traditional iterative reconstruction and early deep learning methods offer partial solutions, they often struggle with complex artifact patterns or may introduce new inconsistencies.

Recently, diffusion models have emerged as a powerful generative paradigm, achieving remarkable success in image synthesis and restoration by progressively denoising an image starting from a pure noise distribution. Concurrently, Transformer architectures, with their ability to capture long-range dependencies via self-attention, have shown promise in a wide range of vision tasks. This thesis investigates the potential of Diffusion Transformers for comprehensive CT artifact compensation. By combining the iterative refinement of diffusion models with the global contextual understanding of Transformers, the work aims to develop a robust framework that mitigates a wide range of CT artifacts, thereby enhancing image quality and improving diagnostic reliability. The research covers the design, implementation, and rigorous evaluation of such a model, including comparison against existing state-of-the-art techniques.
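To make the core training idea concrete, here is a heavily simplified, purely illustrative sketch rather than the thesis implementation: a small Transformer denoiser trained with the standard DDPM noise-prediction objective on a random placeholder batch. PyTorch is assumed as the framework; all architecture sizes, the noise schedule, and the synthetic data are placeholder choices, and conditioning on the artifact-corrupted input is deliberately omitted.

```python
import torch
import torch.nn as nn

class TinyDiT(nn.Module):
    """Minimal Transformer denoiser: patch tokens + timestep embedding -> predicted noise."""
    def __init__(self, img=64, patch=8, dim=128, depth=4, heads=4):
        super().__init__()
        self.patch, n_tok = patch, (img // patch) ** 2
        self.embed = nn.Linear(patch * patch, dim)            # patchify via linear projection
        self.pos = nn.Parameter(torch.zeros(1, n_tok, dim))   # learned positional embedding
        self.t_embed = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, patch * patch)

    def forward(self, x, t):
        B, _, H, W = x.shape
        p = self.patch
        tokens = x.unfold(2, p, p).unfold(3, p, p).reshape(B, -1, p * p)
        h = self.embed(tokens) + self.pos \
            + self.t_embed(t.float().view(B, 1) / 1000.0).unsqueeze(1)
        out = self.head(self.blocks(h))                        # predicted noise per patch
        return out.reshape(B, H // p, W // p, p, p).permute(0, 1, 3, 2, 4).reshape(B, 1, H, W)

# Standard DDPM noise schedule and training objective (predict the added noise).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

model = TinyDiT()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch: in the thesis this would be artifact-free CT slices,
# typically conditioned on the corresponding artifact-corrupted reconstructions.
x0 = torch.randn(8, 1, 64, 64)
t = torch.randint(0, T, (8,))
noise = torch.randn_like(x0)
a = alphas_bar[t].view(-1, 1, 1, 1)
x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise                 # forward diffusion sample

opt.zero_grad()
loss = nn.functional.mse_loss(model(x_t, t), noise)
loss.backward()
opt.step()
print(f"denoising loss: {loss.item():.4f}")
```

At inference time the learned denoiser would be applied iteratively, starting from noise (or a noised version of the corrupted image) and refining toward an artifact-reduced reconstruction; that sampling loop and any conditioning scheme are left out of this sketch.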