Index

Neural Network Implementation of Reaction-Diffusion Equations for Tumor Growth Modeling Using Stochastic Differential Equations

Retrieval Augmented Generation for Medical Question Answering

Project Seminar: Reproduce Research Results

In this seminar, students will engage in reproducing state-of-the-art scientific results with two main objectives. Firstly, students will work on projects that are close to current state-of-the-art research, and secondly, they will develop essential competencies in reproducing and critically analyzing scientific results. The projects will be tailored to match each student’s interests in terms of methodology and application, while the task requirements and grading criteria will be standardized across the board. The outcome of this project will contribute to the scientific community by providing a report on the state of reproducibility within the field.

The seminar will begin with a series of lectures. Students will initially evaluate publications from leading conferences in the field, focusing on their reproducibility, to gather comprehensive insights and understand the challenges involved. Typically, the evaluation will concentrate on publications from top-tier international conferences, such as CVPR and MICCAI. The specific conferences of focus may change each semester and will be announced at the start of the semester.

Students will have the option to choose from varying degrees of reproduction effort, ranging from attempting to reproduce a single result from a paper to fully implementing an entire paper. Depending on the complexity of the chosen task, students may analyze one or multiple publications.

Peer feedback and exchanges within small groups will form part of the seminar, although all reproduction efforts and deliverables will be individual work.

If you are interested, please join the first lecture on 16.10.2024 at 8.15 am in lecture hall H4 (Martensstraße 1, 91058 Erlangen).

Course registration opens on October 16, 2024, and closes on October 20, 2024. The StudOn link and password will be shared during the first lecture. Registration will be handled on a first-come, first-served basis.

Real-World Constrained Parameter Space Analysis for Rigid Head Motion Simulation

Description

In recent years, the application of deep learning techniques to medical image analysis and image quality enhancement has proven highly useful. One area where deep learning models have shown particularly promising results is patient motion estimation in CT scans [1], [2].

Deep learning models depend heavily on the quality and diversity of the underlying training data, but well-annotated datasets in which the patient motion throughout the whole scan is known are scarce. This is typically overcome by generating synthetic data: motion-free clinical acquisitions are corrupted with simulated patient motion by altering the relevant components of the projection matrices. In the case of head CT scans, the rigid patient motion can be parameterized as a 6DOF trajectory over all acquisition frames, typically by applying Gaussian motion or, for more complex patterns, B-splines. However, these simulated patterns often fall short of mimicking real head motion observed in clinical settings, in particular because they lack complex spatiotemporal correlations. To provide more realistic training samples, it is necessary to define a real-world-constrained parameter space that respects correlations, time dependencies, and anatomical boundaries. This allows neural networks to generalize better to real-world data.
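As a minimal sketch of this kind of motion simulation (assuming NumPy and SciPy; the amplitude values and frame counts are illustrative placeholders, not values from the thesis), a 6DOF trajectory can be generated either as frame-wise Gaussian perturbations or as a smooth B-spline through random control nodes:

    import numpy as np
    from scipy.interpolate import make_interp_spline

    def gaussian_motion(n_frames, std=(0.5, 0.5, 0.5, 1.0, 1.0, 1.0), seed=0):
        """Independent Gaussian perturbation per frame for the 6 DOF
        (rotations in degrees, translations in mm). Amplitudes are illustrative."""
        rng = np.random.default_rng(seed)
        return rng.normal(scale=std, size=(n_frames, 6))

    def spline_motion(n_frames, n_nodes=8, amplitude=(1.0,) * 6, seed=0):
        """Smooth motion: draw a few random control nodes per DOF and
        interpolate them with a cubic B-spline over all acquisition frames."""
        rng = np.random.default_rng(seed)
        t_nodes = np.linspace(0, n_frames - 1, n_nodes)
        t_full = np.arange(n_frames)
        nodes = rng.normal(scale=amplitude, size=(n_nodes, 6))
        return np.stack(
            [make_interp_spline(t_nodes, nodes[:, d], k=3)(t_full) for d in range(6)],
            axis=1,
        )

    traj = spline_motion(n_frames=500)  # shape (500, 6): one rigid transform per frame

In such a simulation pipeline, each row of the trajectory would then be converted into a rigid transformation and multiplied into the corresponding projection matrix.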

This thesis aims to perform a comprehensive analysis of the parameter space of rigid (6DOF) head motion patterns obtained from measurements with an in-house optical tracking system integrated into a C-arm CT scanner at Siemens Healthineers in Forchheim. By analyzing the spatiotemporal correlations and constraints in the 6DOF parameter space, lower-dimensional underlying structures may be uncovered. Clustering techniques can be incorporated to further reveal sub-manifolds in the 6DOF space and to distinguish different classes of motion, such as breathing or nodding. Finally, a Variational Autoencoder (or a similar generative model) should be trained with the goal of providing annotated synthetic datasets with realistic motion patterns.
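As a rough illustration of the intended parameter-space analysis (assuming scikit-learn and NumPy, and a hypothetical file of fixed-length 6DOF motion windows; the window length, component count, and cluster count are placeholders), dimensionality reduction and clustering could look as follows:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Hypothetical input: N motion windows, each with T frames of 6 DOF parameters.
    windows = np.load("tracked_motion_windows.npy")   # shape (N, T, 6), placeholder file
    features = windows.reshape(len(windows), -1)      # flatten each window to one vector

    # Look for lower-dimensional structure in the 6DOF parameter space.
    pca = PCA(n_components=10)
    embedded = pca.fit_transform(features)
    print("explained variance:", pca.explained_variance_ratio_.round(3))

    # Group windows into candidate motion classes (e.g., breathing vs. nodding).
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embedded)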

 

[1] A. Preuhs et al., “Appearance Learning for Image-Based Motion Estimation in Tomography,” IEEE Transactions on Medical Imaging, vol. 39, no. 11, pp. 3667–3678, Nov. 2020

[2] Z. Chen, Q. Li, and D. Wu, “Estimate and compensate head motion in non-contrast head CT scans using partial angle reconstruction and deep learning,” Medical Physics, vol. 51, pp. 3309–3321, 2024

Design and Dataset Generation of Scanning Objects for CT Trajectory Optimization

Abstract:
This master thesis addresses the need for effective validation and optimization of computed tomography (CT) scan trajectories, which is crucial for industrial applications. The research focuses on designing and automating the creation of 3D scanning objects that can be used to systematically test and verify the performance of trajectory optimization algorithms. The central research question explores how to design such objects using tools like Blender while ensuring that the resulting test scenarios are both efficient and scalable. A key goal is to generate a comprehensive dataset of these scanning objects, enabling the evaluation and comparison of various trajectory optimization methods.

Research Objectives:
1. Designing Scanning Objects: Establish a method for creating 3D objects in Blender that specifically target challenges faced in CT trajectory optimization, such as irregular geometries, material contrasts, and complex edge structures. These objects will serve as benchmarks for evaluating trajectory algorithms.

2. Dataset Creation for Trajectory Evaluation: One of the core deliverables of this thesis is to generate a standardized dataset of 3D objects. This dataset will enable comprehensive evaluation and comparison of different CT trajectory optimization algorithms, using metrics such as scan efficiency, image quality (measured by SSIM, PSNR), and artifact reduction.

3. Trajectory Optimization Validation: Evaluate CT trajectory optimization methods using the generated dataset. Simulate scan trajectories and validate algorithm performance based on the reconstructed image quality and the optimization of scan time. Metrics such as structural similarity, noise reduction, and coverage of scan angles will be analyzed (see the evaluation sketch after this list).
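As a rough sketch of how reconstructions obtained with different trajectories could be scored against a reference (assuming scikit-image and NumPy; the volume file names are placeholders), the image-quality metrics mentioned above can be computed as follows:

    import numpy as np
    from skimage.metrics import structural_similarity, peak_signal_noise_ratio

    # Hypothetical inputs: reference reconstruction and a reconstruction obtained
    # with an optimized trajectory, both as 3D volumes of identical shape.
    reference = np.load("reference_volume.npy")
    candidate = np.load("optimized_trajectory_volume.npy")

    data_range = reference.max() - reference.min()

    # Volume-wide quality metrics used to rank trajectory optimization methods.
    ssim = structural_similarity(reference, candidate, data_range=data_range)
    psnr = peak_signal_noise_ratio(reference, candidate, data_range=data_range)
    print(f"SSIM: {ssim:.3f}, PSNR: {psnr:.2f} dB")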

Leveraging Large Language Models for Scanner-Compatible CT Protocol Generation

Neural Network-Based Classification of Dynamic Clouds: Integrating Video Analysis and Time Series Monitoring Data

Real-Time Traffic Sign Detection for Smart Data Logging

Deep Learning for Geo-Referencing Historical Utility Documents With Geographical Features

Abstract:

The digitization of industries has spurred significant advancements across sectors, including utilities responsible for essential services such as heating and water supply. Because many utility systems were developed before the digital era, they hold immense potential for optimization through digital representation. Accurate mapping of their extensive underground pipeline networks is key to improving operational efficiency. However, this digitization presents challenges, primarily because extracting geographic information from historical planning documents is difficult, as the infrastructure itself remains buried underground.

In this work, we propose a two-stage deep-learning framework to extract geographic information from historical utility planning records and facilitate the digital representation of utility networks. In the first stage, we frame the problem as a geo-location classification task, using a Convolutional Neural Network (CNN) to classify OpenStreetMap images into specific geographic regions covered by the utility network. In the second stage, we address the scarcity of annotated data by applying a style-transfer technique to historical documents containing geographic features, converting them into a format similar to OpenStreetMap images. This enables classification of these documents with the trained CNN. We will evaluate the method on real-world utility data.
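As a minimal sketch of the first-stage classifier (assuming PyTorch and a recent torchvision; the ResNet-18 backbone, number of regions, and tile size are illustrative assumptions rather than the method prescribed by the thesis):

    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    NUM_REGIONS = 20  # placeholder: number of geographic regions covered by the network

    # Standard ResNet-18 backbone with its final layer replaced for region classification.
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, NUM_REGIONS)

    # Preprocessing applied to OpenStreetMap tiles and, in the second stage,
    # to style-transferred historical documents rendered in a map-like appearance.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(images, region_labels):
        """One optimization step on a batch of map tiles and their region labels."""
        optimizer.zero_grad()
        loss = criterion(model(images), region_labels)
        loss.backward()
        optimizer.step()
        return loss.item()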


This thesis is part of the “UtilityTwin” project.

Automatic Speaker Anonymization Using Diffusion Models