Index

Leveraging data for improved contrastive loss in CXR classification

Scene detection – External

In collaboration with an external partner.

3D detection of Abdominal Trauma in CT images using cross-attention

Aphasia Assessment with Speech and Language

Prerequisites:

Deep learning

Pattern Recognition/Pattern Analysis

 

Ideally:

Speech and Language Understanding

 

If you are interested, contact paula.andrea.perez@fau.de with the subject [SAGI-MT] Aphasia Master Thesis.

Attach your transcripts, CV, and a one-paragraph summary of an idea you have related to aphasia detection with speech and language processing.

Automated Testing of LLM-driven Conversational Systems in the In-Car Domain

[MT: Pratik Raut] Advanced Techniques for Base Station Deployment Planning for Localization

Advanced Techniques for Base Station Deployment Planning for Localization

The need for accurate localization of User Equipment (UE) has grown significantly in modern wireless
communication networks. This thesis addresses the problem of optimizing Base Station (BS) placement in
complex environments to enhance localization accuracy. Traditional methods often overlook the impact of
real-world environmental features such as building geometry and user distribution, leading to suboptimal
planning decisions [1].
This research proposes a novel approach that incorporates environmental data and signal propagation
characteristics into the planning process. The methodology involves simulating realistic environments using
raytracing techniques and modeling the network using a GPU-accelerated simulation framework. The goal
is to evaluate localization performance for given layouts and suggest improved deployment strategies [1].
In addition, the work explores a reinforcement learning–based optimization framework, where an intelligent
agent iteratively refines BS positions to minimize localization error. Key factors such as Time-Of-Arrival
(TOA), channel impulse responses, and user positions are leveraged to assess and improve system
performance [1].
The outcomes of this thesis include insights into how BS configurations affect localization in urban or
obstructed areas and a systematic framework for data-driven deployment planning.
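
Since TOA measurements are the key input to the localization step, the following minimal sketch (assuming NumPy and SciPy; all positions and noise values are illustrative) shows how a UE position could be estimated from TOA-based range estimates to several BSs via nonlinear least squares. Averaging this estimation error over sampled user positions gives a simple score for a candidate BS layout.

```python
import numpy as np
from scipy.optimize import least_squares

C = 3e8  # speed of light in m/s

def estimate_position(bs_positions, toa, x0=None):
    """Estimate a UE position from TOA measurements to several base stations.

    bs_positions: (N, 3) array of BS coordinates in meters.
    toa:          (N,) array of first-arrival delays in seconds.
    Returns the least-squares position estimate (3,).
    """
    ranges = C * np.asarray(toa)          # convert delays to range estimates
    if x0 is None:
        x0 = bs_positions.mean(axis=0)    # start from the centroid of the BSs

    def residuals(p):
        return np.linalg.norm(bs_positions - p, axis=1) - ranges

    return least_squares(residuals, x0).x

# Example: four BSs and noisy TOAs for a user at (10, 20, 1.5)
bs = np.array([[0.0, 0.0, 25.0], [100.0, 0.0, 25.0],
               [50.0, 80.0, 25.0], [0.0, 80.0, 25.0]])
ue = np.array([10.0, 20.0, 1.5])
toa = np.linalg.norm(bs - ue, axis=1) / C + np.random.normal(0, 1e-9, size=len(bs))
print(estimate_position(bs, toa))
```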

 

Main Objectives:

  • Analyze the impact of building geometry on localization accuracy in complex deployment scenarios.
  • Compare the performance of brute-force planning methods with a reinforcement learning–based optimization framework for BS placement.

Proposed Steps:

  • Create a 3D building map in Blender to serve as input to Sionna RT, a raytracing tool.
  • Place BSs at all candidate locations and implement raytracing-based propagation simulations within a GPU-accelerated framework (see the sketch after this list).
  • Design and train a deep reinforcement learning agent that iteratively refines BS positions to minimize localization error.
  • Benchmark localization performance for both brute-force and RL-optimized BS layouts.
  • Evaluate and compare deployment configurations against criteria including positioning accuracy.
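
As a starting point for the first two steps, the sketch below outlines how a candidate BS layout could be evaluated, assuming the Sionna RT 0.x Python API (load_scene, Transmitter, Receiver, PlanarArray, compute_paths); the scene file name, array settings, and all positions are placeholders. The Blender-exported scene is loaded, candidate BSs and sampled user positions are added, ray-traced paths are computed, and a TOA estimate per BS/UE pair is read off as the earliest path delay of the channel impulse response.

```python
import numpy as np
from sionna.rt import load_scene, Transmitter, Receiver, PlanarArray

# Load the Blender-exported scene (placeholder file name)
scene = load_scene("scenes/city_block.xml")

# Simple isotropic single-element antennas at BS (tx) and UE (rx) side
array = PlanarArray(num_rows=1, num_cols=1, vertical_spacing=0.5,
                    horizontal_spacing=0.5, pattern="iso", polarization="V")
scene.tx_array = array
scene.rx_array = array

# Candidate BS layout and sampled user positions (placeholders)
bs_positions = [[0.0, 0.0, 25.0], [120.0, 40.0, 25.0], [60.0, 90.0, 25.0]]
ue_positions = [[10.0, 20.0, 1.5], [80.0, 60.0, 1.5]]

for i, p in enumerate(bs_positions):
    scene.add(Transmitter(name=f"bs-{i}", position=p))
for j, p in enumerate(ue_positions):
    scene.add(Receiver(name=f"ue-{j}", position=p))

# Ray tracing: compute propagation paths and channel impulse responses
paths = scene.compute_paths(max_depth=3)
a, tau = paths.cir()          # complex path gains and delays

# TOA estimate per UE/BS pair: delay of the earliest valid path
# (invalid paths carry negative delays; exact tensor layout depends on the Sionna version)
tau = tau.numpy()
toa = np.where(tau >= 0, tau, np.inf).min(axis=-1)
print("TOA estimates [s]:", toa)
```

A brute-force planner would repeat this evaluation for every candidate layout, whereas the RL agent would use the resulting localization error as a (negative) reward signal while iteratively refining the BS positions.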

 

Reference
[1] J. Al-Tahmeesschi, M. Talvitie, H. López-Benítez, H. Ahmadi, and L. Ruotsalainen, “Multi-Objective Deep Reinforcement Learning for 5G Base Station Placement to Support Localisation for Future Sustainable Traffic,” in Proc. IEEE 97th Vehicular Technology Conference (VTC2023-Spring), Florence, Italy, Jun. 2023, pp. 1–5.

Generative Modeling of Fluence Maps for Radiotherapy Planning

Multi-Task Deep Learning for Parkinson’s Disease: Classification and Severity Estimation via Smartwatch Data

Dual Domain Swin Transformer for Sparse-View CT Reconstruction

The resolution of medical images inherently limits the diagnostic value of clinical image acquisitions. Obtaining high-resolution images with tomographic imaging modalities such as Computed Tomography (CT) requires high radiation doses, which pose health risks to living subjects.

The main focus of this thesis is to develop a unified deep learning pipeline for enhancing the spatial resolution of low-dose CT scans by refining both the sinogram (projection) domain and the reconstructed image domain. Leveraging the Swin Transformer architecture, the proposed approach aims to generate high-resolution (HR) scans with improved anatomical detail preservation, while significantly reducing radiation dose requirements.
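
To illustrate the dual-domain idea, below is a minimal PyTorch skeleton; it is a sketch only, with conv blocks standing in for Swin Transformer stages (which could, for example, be built from a custom Swin encoder or MONAI's SwinUNETR) and a simple interpolation standing in for the differentiable reconstruction operator (in a real pipeline, a filtered backprojection / backprojection layer). A projection-domain network first refines the sinogram, the reconstruction operator maps it to image space, and an image-domain network restores anatomical detail.

```python
import torch
import torch.nn as nn

class RefineBlock(nn.Module):
    """Stand-in for a Swin Transformer refinement stage (illustrative only)."""
    def __init__(self, channels=1, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.GELU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )
    def forward(self, x):
        return x + self.net(x)  # residual refinement

class DualDomainNet(nn.Module):
    """Sinogram refinement -> reconstruction operator -> image refinement."""
    def __init__(self, recon_op):
        super().__init__()
        self.sino_net = RefineBlock()   # projection-domain branch
        self.img_net = RefineBlock()    # image-domain branch
        self.recon_op = recon_op        # e.g. differentiable FBP, supplied externally

    def forward(self, sinogram):
        sino_refined = self.sino_net(sinogram)
        image = self.recon_op(sino_refined)
        return self.img_net(image)

# Placeholder reconstruction operator for shape checking only
recon_op = lambda s: nn.functional.interpolate(
    s, size=(128, 128), mode="bilinear", align_corners=False)

model = DualDomainNet(recon_op)
out = model(torch.randn(1, 1, 60, 180))  # sparse-view sinogram: 60 views x 180 detector bins
print(out.shape)                         # torch.Size([1, 1, 128, 128])
```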

Deep learning-based boundary segmentation for the detection of a retinal biomarker in volume-fused high resolution OCT

Some of the main causes of vision loss are eye diseases such as age-related macular degeneration (AMD), diabetic retinopathy, and glaucoma. Detecting these conditions early is critical, and one of the main imaging modalities used in ophthalmology is optical coherence tomography (OCT). This thesis uses high-resolution OCT images acquired at the New England Eye Center, Boston, MA. Existing motion correction and image fusion methods are used to generate high-quality volumetric OCT data (Ploner et al., 2024).

Building upon this data, this master thesis includes the development of boundary segmentation for multiple retinal layers, with a specific focus on the anterior boundary of the ellipsoid zone. Additionally, the segmentation will be integrated into a pipeline for automated quantification of a biomarker.

The main tasks are:
● Evaluation of a promising new architecture for boundary segmentation, with particular consideration given to the Vision Transformer (Dosovitskiy et al., 2020); a minimal boundary-regression formulation is sketched after this list
● Development and evaluation of a method for automated quantification of an eye disease biomarker based on the segmented boundaries
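
As a minimal illustration of the regression-based boundary segmentation formulation used in He et al. (2019)-style methods, the sketch below (PyTorch; the small CNN encoder is a placeholder that could be swapped for a Vision Transformer backbone, and all sizes are illustrative) predicts one depth value per boundary for every A-scan of a B-scan.

```python
import torch
import torch.nn as nn

class BoundaryRegressor(nn.Module):
    """Toy boundary-regression head: for each image column (A-scan), predict the
    depth (row index) of each retinal boundary. The conv encoder is a placeholder."""
    def __init__(self, num_boundaries=3, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.GELU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.GELU(),
        )
        # Collapse the depth axis, keep the lateral axis: one value per boundary per column
        self.head = nn.Conv1d(feat, num_boundaries, kernel_size=1)

    def forward(self, bscan):            # bscan: (B, 1, depth, width)
        f = self.encoder(bscan)          # (B, feat, depth, width)
        f = f.mean(dim=2)                # pool over depth -> (B, feat, width)
        return self.head(f)              # (B, num_boundaries, width) boundary positions

model = BoundaryRegressor()
pred = model(torch.randn(2, 1, 256, 512))  # two B-scans, 256 depth samples x 512 A-scans
print(pred.shape)                          # torch.Size([2, 3, 512])
```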

Special attention will be given to the following aspects:
● Label efficiency, achieved either through task-specific pretraining or by utilizing a relevant foundational model, such as those proposed by Morano et al. (2025)
● Utilization of 3D data

The resulting model will be evaluated against the ground truth of a held-out test set and compared with existing U-Net-based boundary regression methods, such as those of He et al. (2019) and Karbole et al. (2024). The evaluation uses common regression metrics such as mean squared error (MSE), mean absolute error (MAE), and root mean squared error (RMSE).
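
For reference, these metrics could be computed per boundary from predicted and ground-truth boundary positions as in this small sketch (NumPy; shapes, units, and noise values are illustrative):

```python
import numpy as np

def boundary_metrics(pred, gt):
    """Per-boundary regression metrics between predicted and ground-truth boundary
    positions, both shaped (num_boundaries, num_ascans), in pixels (convert to µm
    with the axial pixel spacing if desired)."""
    err = pred - gt
    return {
        "MAE":  np.abs(err).mean(axis=1),
        "MSE":  (err ** 2).mean(axis=1),
        "RMSE": np.sqrt((err ** 2).mean(axis=1)),
    }

gt = np.random.rand(3, 512) * 256                    # toy ground-truth boundaries
pred = gt + np.random.normal(0, 1.5, gt.shape)       # toy predictions
print(boundary_metrics(pred, gt))
```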

The aim of this thesis is to contribute a model for the segmentation of retinal layer boundaries in OCT images, laying the groundwork for the automated quantification of a biomarker for AMD. This thesis shall provide a step towards earlier diagnosis, better monitoring of disease progression and improved clinical workflows.

References
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., & Houlsby, N. (2020, October 22). An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale. arXiv.org.
He, Y., Carass, A., Liu, Y., Jedynak, B. M., Solomon, S. D., Saidha, S., Calabresi, P. A., & Prince, J. L. (2019). Fully convolutional boundary regression for retina OCT segmentation. Lecture Notes in Computer Science, 120–128.
Morano, J., Fazekas, B., Sükei, E., Fecso, R., Emre, T., Gumpinger, M., Faustmann, G., Oghbaie, M., Schmidt-Erfurth, U., & Bogunović, H. (2025, June 10). MIRAGE: Multimodal foundation model and benchmark for comprehensive retinal OCT image analysis. arXiv.org.
Karbole, W., Ploner, S. B., Won, J., Marmalidou, A., Takahashi, H., Waheed, N. K., Fujimoto, J. G., & Maier, A. (2024). 3D deep learning-based boundary regression of an age-related retinal biomarker in high resolution OCT. In Informatik aktuell (pp. 350–355).
Ploner, S. B., Won, J., Takahashi, H., Karbole, W., Yaghy, A., Marmalidou, A., Schottenhamml, J., Waheed, N. K., Fujimoto, J. G., & Maier, A. (2024, May 5–9). A reliable, fully-automatic pipeline for 3D motion correction and volume fusion enables investigation of smaller and lower-contrast OCT features [Conference presentation]. Investigative Ophthalmology & Visual Science, 65(7), ARVO E-Abstract 2794904.