Index

Generation of IEC 61131-3 SFCs conditioned on textual user intents and existing sequences

3D CT Image Visualization using Blender

Introduction:

This project aims to develop a streamlined pipeline for 3D CT image visualization using Blender and Bioxel Nodes. You will create a step-by-step process to import, process, and render medical imaging data, resulting in high-quality scientific visualizations. This 5 ECTS project will strengthen your technical skills and your ability to visualize complex medical data.

[Image gallery]

Source: https://omoolab.github.io/BioxelNodes/0.1.x/


Prospective candidates are warmly invited to send their CV and transcript to yipeng.sun@fau.de.


CBCT to CT Translation Using Deep Learning

Neural Network Implementation of Reaction-Diffusion Equations for Tumor Growth Modeling Using Stochastic Differential Equations

Enhancing Small Language Models with Retrieval-Augmented Generation for Medical Question Answering

Project Seminar: Reproduce Research Results

In this seminar, students will engage in reproducing state-of-the-art scientific results with two main objectives. Firstly, students will work on projects that are close to current state-of-the-art research, and secondly, they will develop essential competencies in reproducing and critically analyzing scientific results. The projects will be tailored to match each student’s interests in terms of methodology and application, while the task requirements and grading criteria will be standardized across the board. The outcome of this project will contribute to the scientific community by providing a report on the state of reproducibility within the field.

The seminar will begin with a series of lectures. Students will initially evaluate publications from leading conferences in the field, focusing on their reproducibility, to gather comprehensive insights and understand the challenges involved. Typically, the evaluation will concentrate on publications from top-tier international conferences, such as CVPR and MICCAI. The specific conferences of focus may change each semester and will be announced at the start of the semester.

Students will have the option to choose from varying degrees of reproduction effort, ranging from attempting to reproduce a single result from a paper to fully implementing an entire paper. Depending on the complexity of the chosen task, students may analyze one or multiple publications.

Peer feedback and exchanges within small groups will form part of the seminar, although all reproduction efforts and deliverables will be individual work.

If you are interested, please join the first lecture on October 16, 2024, at 8:15 am in lecture hall H4 (Martensstraße 1, 91058 Erlangen).

Course registration opens on October 16, 2024, and will close on October 20, 2024. The StudOn link and password will be shared during the first lecture. Registration will follow a first-come, first-served basis.

Real-World Constrained Parameter Space Analysis for Rigid Head Motion Simulation

Description

In recent years, the application of deep learning techniques to medical image analysis and image quality enhancement has proven to be a useful tool. One critical area where deep learning models have shown promising results is patient motion estimation in CT scans [1], [2].

Deep learning models depend heavily on the quality and diversity of the underlying training data, but well-annotated datasets, in which the patient motion throughout the whole scan is known, are sparse. This is typically overcome by generating synthetic data, where motion-free clinical acquisitions are corrupted with simulated patient motion by altering the relevant components of the projection matrices. In the case of head CT scans, the rigid patient motion can be parameterized by a 6DOF trajectory over all acquisition frames. This is typically done by applying Gaussian motion or, for more complex patterns, B-splines. However, these simulated patterns often fall short of mimicking real head motion observed in clinical settings, in particular by lacking complex spatiotemporal correlations. To provide more realistic training samples, it is necessary to define a real-world constrained parameter space that respects correlations, time dependencies, and anatomical boundaries. This allows neural networks to generalize better to real-world data.
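The two simulation strategies mentioned above (per-frame Gaussian motion and smoother B-spline patterns) can be sketched as follows. This is a minimal illustration using NumPy/SciPy; the parameter ordering, units, and magnitudes are assumptions for the sketch, not values prescribed by the thesis:

```python
import numpy as np
from scipy.interpolate import make_interp_spline
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

def gaussian_motion(n_frames, sigmas):
    """Per-frame i.i.d. Gaussian perturbation of the 6 rigid parameters
    (assumed order: tx, ty, tz in mm; rx, ry, rz in degrees)."""
    return rng.normal(0.0, sigmas, size=(n_frames, 6))

def spline_motion(n_frames, n_ctrl, sigmas, degree=3):
    """Smoother pattern: Gaussian control points interpolated by a B-spline."""
    t_ctrl = np.linspace(0.0, 1.0, n_ctrl)
    ctrl = rng.normal(0.0, sigmas, size=(n_ctrl, 6))   # one row per control point
    spline = make_interp_spline(t_ctrl, ctrl, k=degree)
    return spline(np.linspace(0.0, 1.0, n_frames))     # (n_frames, 6)

def rigid_matrix(params):
    """4x4 homogeneous rigid transform from one 6DOF parameter vector."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", params[3:], degrees=True).as_matrix()
    T[:3, 3] = params[:3]
    return T

# One 6DOF offset per projection frame; a 3x4 projection matrix P would then
# be corrupted per frame as P @ rigid_matrix(traj[i]).
traj = spline_motion(n_frames=360, n_ctrl=8,
                     sigmas=np.array([1.0, 1.0, 1.0, 0.5, 0.5, 0.5]))
```

The B-spline variant already yields temporally smooth trajectories, but neither variant captures the cross-parameter correlations of real head motion, which is exactly the gap this thesis addresses.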

This thesis aims to perform a comprehensive analysis of the parameter space of rigid (6DOF) head motion patterns obtained from measurements with an in-house optical tracking system integrated into a C-arm CT scanner at Siemens Healthineers in Forchheim. By analyzing the spatiotemporal correlations and constraints in the 6DOF parameter space, lower-dimensional underlying structures might be uncovered. Clustering techniques can be incorporated to further reveal sub-manifolds in the 6DOF space, as well as to distinguish different classes of motion such as breathing or nodding. A Variational Autoencoder (or a similar generative model) should then be trained with the goal of providing annotated synthetic datasets with realistic motion patterns.
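As a first step toward uncovering such lower-dimensional structure, one could run a PCA over the stacked 6DOF samples and inspect how much variance a few principal axes capture. The sketch below is illustrative only; the data layout (one row per tracked frame) and the toy data are assumptions:

```python
import numpy as np

def pca_explained_variance(samples):
    """samples: (n, 6) array of 6DOF motion parameters, one row per frame.
    Returns the fraction of total variance captured by each principal axis,
    in descending order."""
    X = samples - samples.mean(axis=0)          # center each parameter
    _, s, _ = np.linalg.svd(X, full_matrices=False)
    var = s ** 2                                # singular values -> variances
    return var / var.sum()

# Toy data with strong cross-parameter correlations: 2 hidden degrees of
# freedom mixed into 6 observed parameters, plus a little noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 6))
samples = latent @ mixing + 0.01 * rng.normal(size=(500, 6))
ratios = pca_explained_variance(samples)
# ratios[:2].sum() is close to 1: the 6DOF data is effectively 2-dimensional.
```

Real tracked trajectories would of course be analyzed per time window rather than per frame, and nonlinear methods (or the VAE's latent space itself) could replace PCA where the sub-manifolds are curved.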


[1] A. Preuhs et al., “Appearance Learning for Image-Based Motion Estimation in Tomography,” IEEE Transactions on Medical Imaging, vol. 39, no. 11, pp. 3667–3678, Nov. 2020.

[2] Z. Chen, Q. Li, and D. Wu, “Estimate and compensate head motion in non-contrast head CT scans using partial angle reconstruction and deep learning,” Medical Physics, vol. 51, pp. 3309–3321, 2024.

FVR-ADNeRF: Attention-Driven NeRFs for Few-View Reconstruction to enable CT Trajectory Optimization

Leveraging Large Language Models for Scanner-Compatible CT Protocol Generation

Dynamic Cloud Classification through Neural Networks: Integrating Video Analysis and PV Monitoring Data