Index

Reinforcement Learning for Adaptive Protection in Power Grids

This thesis explores the use of reinforcement learning to improve protection strategies in power grids with a high penetration of renewable energy. Conventional relay schemes often fail under the changing fault conditions caused by inverter-based distributed energy resources (DERs), and this work investigates how adaptive, data-driven control can overcome these limitations. A simulated environment based on DIgSILENT PowerFactory enables a direct comparison between traditional protection schemes and learning-based approaches.
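
As an illustration only, the sketch below shows what a minimal, Gymnasium-style interface for such an adaptive protection environment could look like. The class name, observation layout, action set, and reward are hypothetical placeholders; in the actual thesis the loop would be coupled to DIgSILENT PowerFactory rather than to random samples.

    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class AdaptiveProtectionEnv(gym.Env):
        # Toy interface: observe local current/voltage measurements, choose a relay setting group.
        def __init__(self, n_measurements=6, n_settings=4):
            self.observation_space = spaces.Box(low=0.0, high=10.0,
                                                shape=(n_measurements,), dtype=np.float32)
            self.action_space = spaces.Discrete(n_settings)

        def reset(self, *, seed=None, options=None):
            super().reset(seed=seed)
            obs = self.observation_space.sample()  # placeholder for a simulated pre-fault state
            return obs, {}

        def step(self, action):
            # Placeholder dynamics: a PowerFactory co-simulation would apply the chosen setting,
            # simulate the fault, and report whether the relay tripped correctly and how fast.
            obs = self.observation_space.sample()
            reward = 1.0 if action == 0 else -1.0  # stand-in for a selectivity/speed-based reward
            return obs, reward, True, False, {}    # one fault event per episode in this toy setup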

Deep Learning for Fault Detection, Classification, and Localization in Power Systems

Tasks:

  • Design and benchmark deep learning models (CNNs, RNNs, Transformers) for fault detection, classification, and localization in high-voltage power systems (a minimal model sketch follows this task list).

  • Work with high-resolution time-series data (current/voltage signals from simulations).

  • Investigate advanced concepts like knowledge distillation, transfer learning, and multi-task learning.

  • Analyze robustness to data scarcity, sensor dropout, and noise.

  • (Optional) Extend the pipeline for real-time or distributed inference.

  • (Optional) Co-author a scientific paper based on your results.
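
As a non-authoritative starting point, the following sketch shows one possible multi-task baseline in PyTorch: a small 1D CNN backbone with separate heads for detection, classification, and localization. All layer sizes, channel counts, and head definitions are illustrative assumptions, not a prescribed architecture.

    import torch
    import torch.nn as nn

    class FaultMultiTaskCNN(nn.Module):
        def __init__(self, in_channels=6, n_fault_types=10):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),               # global pooling over the time axis
            )
            self.detect = nn.Linear(64, 1)             # fault / no fault (logit)
            self.classify = nn.Linear(64, n_fault_types)  # fault type logits
            self.localize = nn.Linear(64, 1)           # e.g. distance to fault (regression)

        def forward(self, x):                          # x: (batch, channels, time)
            z = self.backbone(x).squeeze(-1)
            return self.detect(z), self.classify(z), self.localize(z)

    # Toy usage: 8 windows, 6 signal channels, 2048 samples each.
    model = FaultMultiTaskCNN()
    x = torch.randn(8, 6, 2048)
    det_logit, type_logits, location = model(x)

A joint objective could then weight the three task losses, e.g. binary cross-entropy for detection, cross-entropy for the fault type, and an L1/L2 loss for the location estimate.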

Requirements:

  • Strong programming skills in PyTorch

  • Experience with training deep learning models

  • Ability to attend in-person meetings

  • Bonus: Interest in signal processing, ML robustness, or time-series analysis

Application:

Send your application with the subject “Application Fault DL Thesis + your full name” to julian.oelhaf@fau.de and include:

  • Curriculum Vitae (CV)

  • Short motivation letter (max. one page)

  • Transcript of records

 

This topic can also be conducted as a smaller project (e.g., research or programming project) instead of a full thesis.

Reinforcement Learning for Coordinated Protection in Power Grids

Tasks:

  • Develop and evaluate reinforcement learning (RL) strategies to coordinate protection elements (e.g., circuit breakers, relays) in high-voltage transmission grids.

  • Design grid scenarios (e.g., multi-faults, communication delays, islanding) and simulate them using synthetic fault data.

  • Train RL agents to minimize fault impact and improve restoration behavior (a minimal training-loop sketch follows this task list).

  • Analyze robustness under different operating conditions and topologies.

  • (Optional) Investigate hybrid RL + rule-based schemes or curriculum learning.

  • (Optional) Contribute to a research publication based on your results.
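
For orientation, the sketch below outlines a minimal policy-gradient (REINFORCE) update for a discrete coordination action. The observation size, action count, and episode format are assumptions; the environment that produces the episodes (e.g., a PowerFactory or synthetic-fault simulator) is not shown.

    import torch
    import torch.nn as nn

    class Policy(nn.Module):
        # Maps a grid observation to a categorical distribution over coordination actions
        # (e.g. which breaker/relay group to operate); all sizes are illustrative.
        def __init__(self, obs_dim=12, n_actions=8):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_actions))

        def forward(self, obs):
            return torch.distributions.Categorical(logits=self.net(obs))

    policy = Policy()
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

    def reinforce_update(episode, gamma=0.99):
        # episode: list of (obs, action, reward) with obs a float tensor of shape (obs_dim,),
        # action a scalar tensor sampled from the policy, reward a float (e.g. negative fault impact).
        returns, g = [], 0.0
        for _, _, r in reversed(episode):
            g = r + gamma * g
            returns.insert(0, g)
        loss = 0.0
        for (obs, action, _), g in zip(episode, returns):
            loss = loss - policy(obs).log_prob(action) * g
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Toy usage with a fabricated one-step episode:
    obs = torch.randn(12)
    action = policy(obs).sample()
    reinforce_update([(obs, action, -1.0)])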

Requirements:

  • Solid experience with PyTorch

  • Experience training deep learning models

  • Ability to attend in-person meetings

  • Bonus: Background in electrical engineering, power systems, or control theory

Application:

Send your application with the subject “Application RL Protection Thesis + your full name” to julian.oelhaf@fau.de and include:

  • Curriculum Vitae (CV)

  • Short motivation letter (max. one page)

  • Transcript of records

 

This topic can also be conducted as a smaller project (e.g., research or programming project) instead of a full thesis.

Device Detection for Improved Guidance in Minimally Invasive Interventions

Evolving Universal Datasets: Cross-Architecture Generalization via Evolutionary Distillation

The proliferation of large-scale datasets has been central to the success of modern deep learning, yet it presents significant challenges in terms of computational cost, training time, and data privacy. These issues are particularly acute in applications like Neural Architecture Search (NAS), where repeated training is time-consuming. Dataset distillation offers a compelling solution by synthesizing small, information-rich datasets that act as efficient, privacy-preserving proxies for the originals. However, the practical utility of current distillation methods is severely hampered by a critical flaw: poor cross-architecture generalization. Datasets distilled for one network architecture often fail when used to train a different one, limiting their use as universal training assets.

This thesis aims to directly confront this generalization challenge by proposing a novel distillation framework based on an Evolutionary Algorithm (EA). We posit that conventional gradient-based optimization methods are prone to finding solutions overfitted to a single model’s inductive biases. In contrast, an EA can perform a more global search for a truly architecture-agnostic dataset. The core contribution of this work is a new fitness function that explicitly rewards generalization. By evaluating a candidate dataset’s performance across a diverse portfolio of architectures, our evolutionary search is driven to discover a compact dataset that captures universal features. This objective is further refined by incorporating gradient matching principles and full training epoch evaluations, ensuring the resulting dataset is not only generalizable but also effective for training robust models.
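
To make the fitness idea concrete, the following sketch (with hypothetical function and variable names) scores a candidate synthetic dataset by briefly training each architecture in a small portfolio on it and averaging the resulting accuracy on held-out real data. Gradient-matching terms and longer training evaluations could be folded into this score as described above.

    import torch
    import torch.nn.functional as F

    def fitness(synthetic_x, synthetic_y, architectures, val_x, val_y, steps=50):
        # architectures: list of zero-argument callables, each building a different model,
        # e.g. [lambda: SmallCNN(), lambda: SmallMLP()] (names hypothetical).
        scores = []
        for make_model in architectures:
            model = make_model()
            opt = torch.optim.SGD(model.parameters(), lr=0.01)
            for _ in range(steps):                     # short inner training on the candidate set
                opt.zero_grad()
                loss = F.cross_entropy(model(synthetic_x), synthetic_y)
                loss.backward()
                opt.step()
            with torch.no_grad():                      # accuracy on held-out real data measures transfer
                acc = (model(val_x).argmax(dim=1) == val_y).float().mean().item()
            scores.append(acc)
        # Averaging (optionally penalizing the spread) rewards datasets that generalize
        # across architectures instead of overfitting the inductive bias of a single model.
        return sum(scores) / len(scores)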

Deep Learning-based Orientation Estimation in Intraoperative X-Ray Images

A Reasoning Agent for Chest X-ray with Memory

Automated Leptomeningeal Collateral Scoring in Acute Ischemic Stroke Using Deep Learning

Implant Object Detection in Intraoperative X-Ray Images


 

Thesis Start:
October 2025 or later

Your Profile and Skills:

  • Successful completion of courses from our lab: (Advanced) Deep Learning / Pattern Recognition / Pattern Analysis
  • Proficiency in Python programming and experience with PyTorch
  • Fundamental knowledge about medical imaging and image processing
  • Strong analytical, structured, and quality-oriented working style
  • Ability to work independently while enjoying a collaborative team environment
  • Strong communication skills in English

Application:
Please send your transcript of records, CV, and a short motivation letter explaining why you are interested in the topic exclusively to joshua.scheuplein@fau.de
Note: Applications not following these requirements will not be considered!

Uncertainty Estimation on Semantic Segmentation for Microscopy Data

In microscopy, many common data analysis tasks rely on an initial semantic segmentation step. Microscopy data are highly diverse, so this segmentation may fail on out-of-distribution (OOD) inputs. For users to know whether downstream tasks are feasible or accurate, the quality of the semantic segmentation step must be assessed. This can be done through uncertainty estimation of the predictions, either at the image or the pixel level. To address this, we are conducting detailed research on uncertainty estimation methods across four key categories: deterministic methods, Bayesian neural networks (BNNs), ensembles, and test-time augmentation (TTA). This work aims to explore both well-established and emerging methods for uncertainty estimation in semantic segmentation applied to microscopy data.
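
As one concrete example from the TTA category, the sketch below estimates per-pixel uncertainty as the predictive entropy over flip-augmented forward passes. The model interface and the choice of flips as augmentations are assumptions made purely for illustration.

    import torch

    def tta_uncertainty(model, image):
        # image: tensor of shape (1, C, H, W); returns the mean softmax map and per-pixel entropy.
        model.eval()
        probs = []
        with torch.no_grad():
            for dims in ([], [-1], [-2], [-1, -2]):          # identity + horizontal/vertical flips
                x = torch.flip(image, dims=dims) if dims else image
                p = torch.softmax(model(x), dim=1)           # (1, n_classes, H, W)
                p = torch.flip(p, dims=dims) if dims else p  # map the prediction back
                probs.append(p)
        mean_p = torch.stack(probs).mean(dim=0)
        entropy = -(mean_p * torch.log(mean_p.clamp_min(1e-8))).sum(dim=1)  # (1, H, W)
        return mean_p, entropy

High entropy regions indicate pixels where the augmented predictions disagree, which can serve as a proxy for segmentation quality on OOD microscopy images.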