Index

Learning Slide Level Representations for Inflammatory Skin Disease Classification with Pathology Foundation Models and Graph Neural Networks

Enhancing explainability of Time Series Forecasting in Smart Infrastructures

Background:

Time-series forecasting guides decisions in finance, energy, supply chains, and healthcare. As automated systems spread, organizations need not only accurate predictions but also uncertainty quantification, i.e., a measure of model confidence, to enable risk-aware choices. Understanding forecast uncertainty builds on time-series fundamentals, the distinction between aleatoric (data-inherent) and epistemic (model-related) components, and probabilistic modeling; Bayesian frameworks capture both data and parameter uncertainty. Explaining this uncertainty helps non-technical stakeholders trust forecasts, supports adoption, and enables risk-aware actions, which motivates both its estimation and its communication.
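To make the aleatoric/epistemic distinction concrete, below is a minimal Monte Carlo dropout sketch. It assumes a hypothetical network with dropout layers whose forward pass returns a per-step mean and variance; the function name and interface are illustrative, not part of the proposed method.

```python
import torch

def decompose_uncertainty(model, x, n_samples=50):
    # Illustrative sketch: assumes `model(x)` returns a (mean, variance)
    # pair per forecast step and that the model contains dropout layers.
    model.train()  # keep dropout active so each pass samples weights
    means, variances = [], []
    with torch.no_grad():
        for _ in range(n_samples):
            mu, var = model(x)
            means.append(mu)
            variances.append(var)
    means, variances = torch.stack(means), torch.stack(variances)
    aleatoric = variances.mean(dim=0)  # average predicted data noise
    epistemic = means.var(dim=0)       # disagreement across weight samples
    return aleatoric, epistemic
```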

Observed Gap and Motivation: Despite rapid progress in probabilistic time series forecasting, much of the existing work focuses either on post-hoc explanations of forecasts or on generating quantile-based outputs that do not distinguish between aleatoric and epistemic uncertainty and do not explicitly model the underlying sources of uncertainty. This restricts the ability to disentangle model uncertainty from data-related variability and to explain why certain forecasts are more uncertain than others. This lack of interpretability has practical consequences, and addressing it is therefore critical. A method that quantifies uncertainty while attributing it to meaningful drivers would let users understand not only how uncertain a prediction is but also why that uncertainty arises.

Research Objectives: Based on these gaps, this thesis focuses on two core objectives:
1. Investigate whether traditionally deterministic models, such as N-HiTS and TimesNet, can be adapted with quantile-based loss functions (see the sketch after this list) to provide uncertainty estimates without compromising predictive performance.
2. Develop an approach for explaining forecast uncertainty by analyzing how covariates influence the aleatoric component of the predictive distribution, while keeping the epistemic component intact.
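As referenced in objective 1, a minimal sketch of the quantile (pinball) loss that could replace a deterministic model's MSE objective follows; the tensor shapes and the three example quantiles are assumptions for illustration.

```python
import torch

def pinball_loss(y_true, y_pred, quantiles):
    # y_pred: (batch, horizon, n_quantiles), one output head per quantile.
    losses = []
    for i, q in enumerate(quantiles):
        err = y_true - y_pred[..., i]
        losses.append(torch.max(q * err, (q - 1) * err))
    return torch.stack(losses).mean()

# e.g. train an N-HiTS- or TimesNet-style model with
# loss = pinball_loss(target, forecast, quantiles=[0.1, 0.5, 0.9])
```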
Together, these objectives aim to advance uncertainty estimation from purely descriptive intervals toward explanations that reveal the factors driving uncertainty, enabling more interpretable and trustworthy forecasting systems.
Outcomes: (i) A framework for the practical use of traditionally deterministic models for uncertainty estimation and calibration. (ii) A novel approach for inherently explaining the uncertainty of a time series forecast based on its covariates.
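For context on outcome (ii), a simple post-hoc baseline for covariate-level uncertainty attribution is sketched below: permutation importance applied to the predicted interval width rather than to forecast error. The function `predict_quantiles`, the covariate layout, and the 80% interval are assumptions; this is a point of reference, not the inherent explanation approach the thesis proposes.

```python
import numpy as np

def uncertainty_attribution(predict_quantiles, X, covariate_names, n_repeats=10):
    # Assumed interface: predict_quantiles(X) -> (q10, q90) arrays.
    # X: (samples, covariates); scores how much shuffling each covariate
    # changes the mean 80% prediction-interval width.
    rng = np.random.default_rng(0)
    lo, hi = predict_quantiles(X)
    base_width = np.mean(hi - lo)
    scores = {}
    for j, name in enumerate(covariate_names):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break this covariate's information
            lo_p, hi_p = predict_quantiles(Xp)
            deltas.append(abs(np.mean(hi_p - lo_p) - base_width))
        scores[name] = float(np.mean(deltas))
    return scores  # larger score => covariate drives more interval width
```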


This thesis is part of the “UtilityTwin” project. The proposed work will be conducted in close collaboration with Siemens AG (Smart Infrastructure), ensuring both academic relevance and industrial applicability.

Robust Tampered Text Detection in Document Images Using Multimodal Deep Learning

The goal of this thesis is to develop a high-accuracy deep learning model for detecting tampered text in document images. This includes manipulations such as word replacement, copy-paste edits, and layout-based alterations. The focus is on building a multimodal architecture that combines visual layout features and semantic textual content to improve detection accuracy and robustness across diverse document types and manipulation styles.
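One way such a fusion could look is sketched below: a classification head that concatenates a visual embedding (e.g., from a CNN over the document image) with a text embedding (e.g., from a language model over the OCR output). The class name, encoder choices, and dimensions are assumptions, not the thesis architecture.

```python
import torch
import torch.nn as nn

class MultimodalTamperDetector(nn.Module):
    # Illustrative late-fusion head: visual and textual embeddings are
    # assumed to be produced by upstream encoders (not shown here).
    def __init__(self, vis_dim=256, txt_dim=256, hidden=128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(vis_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # logits: genuine vs. tampered
        )

    def forward(self, vis_feat, txt_feat):
        return self.fuse(torch.cat([vis_feat, txt_feat], dim=-1))
```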

Synthetic Data Generation and Deep Learning-Based Object Detection and Segmentation for Interventional Devices in Cardiac and Neurovascular Fluoroscopy

AI-Driven Structured Reporting for Breast MRI Radiological Reports: Leveraging LLMs for Automated Label Extraction

Analyzing Methods for Efficient Language Model Adaptation with Domain-Specific Selective Layer Expansion

MetaMorph: A Unified Framework with Modular Designs for Joint Affine and Deformable Medical Image Registration

Multimodal Extraction of Lot-Level Metadata from Auction Catalogues using OCR and Vision Language Models

MasterThesis_AlishaMund

Topology-Aware Edge-Map Enhancement of Scanning Electron Microscope Images

Unsupervised Learning for Detection of Rare Driving Scenarios