This is a standalone project (10 ECTS) focused on reproducible reinforcement learning and paper-driven implementation.
You will re-implement an IEEE-published RL method [1] and evaluate it on a realistic, safety-critical control problem.
The application domain is power grids, used purely as a real-world benchmark for reinforcement learning.
No prior power-systems background is required.
- Implement a Q-learning-based control method from a research paper (state/action design, reward shaping, constraints).
- Validate the implementation on a benchmark setup (reproducibility, metrics, sanity checks).
- Apply the method to real data from a 20 kV distribution grid.
- Optional: extend the same RL framework towards distance protection (concept + first prototype, time permitting).
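To give a sense of the core implementation task, here is a minimal tabular Q-learning sketch. The toy environment, state/action spaces, and reward below are illustrative placeholders only, not the protection-coordination setup from [1]; the project will replace them with the paper's design.

```python
import numpy as np

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain environment (placeholder dynamics)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))

    def step(s, a):
        # Toy dynamics: action 1 moves one step toward the goal state,
        # action 0 jumps to a random state. Reward 1 on reaching the goal.
        s_next = min(s + 1, n_states - 1) if a == 1 else int(rng.integers(n_states))
        reward = 1.0 if s_next == n_states - 1 else 0.0
        return s_next, reward, s_next == n_states - 1

    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = step(s, a)
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return Q

Q = q_learning()
```

After training, the greedy policy derived from `Q` should prefer the "move toward goal" action; the real project adds the paper's state/action encoding, reward shaping, and constraint handling on top of this skeleton.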
Who should apply
- Computer science or related background.
- Good Python skills (NumPy/Pandas, Git).
- Basic knowledge of machine learning or reinforcement learning.
- Interest in implementing and evaluating methods from scientific papers.
- Able to attend the weekly in-person meeting in Erlangen (Mondays, 14:00).
Apply
Send one PDF to julian.oelhaf@fau.de with the subject:
"Application | Project (10 ECTS) | Reproducible RL on Power Grids | <Your Full Name>"
Email body (max. 200 words): a short motivation and your earliest start date.
Attach as one PDF: CV, transcript (dated), optional code links.
📌 Incomplete applications will not be considered.
References
[1] H. C. Kılıçkıran, B. Kekezoglu, and N. G. Paterakis, "Reinforcement Learning for Optimal Protection Coordination," IEEE SEST, 2018.
[2] D. Wu, X. Zheng, D. Kalathil, and L. Xie, "Nested Reinforcement Learning-Based Control for Protective Relays in Power Distribution Systems," IEEE CDC, 2019.
[3] D. Wu, D. Kalathil, M. M. Begovic, K. Q. Ding, and L. Xie, "Deep Reinforcement Learning-Based Robust Protection in DER-Rich Distribution Grids," IEEE Open Access Journal of Power and Energy, 2022.