Index
Investigating Word Class Representation in LLMs Using "Probes"
Detection of Birds and Marine Mammals in Aerial Image Sequences using Artificial Intelligence Methods
Detecting birds and marine mammals in aerial images makes it possible to monitor the evolution of their populations over time. As manual annotation is tedious, reliable automatic methods based on artificial intelligence are highly desirable. This task differs from many standard object detection settings due to the high resolution of the images (18 megapixels for the considered dataset) and the small size of the animals (some cover less than 50 square pixels). In addition, changing waves and reflections on the water surface increase the difficulty of the task.
This thesis will focus on two main points. First, training, evaluating, and comparing standard object detection methods such as Faster R-CNN. Second, replicating the method presented in “POLO – Point-based, multi-class animal detection” and evaluating its performance on the considered dataset. The evaluation will also include an analysis of possible links between accuracy and image quality (e.g., image luminosity or the amount of waves). If time allows, tracking animals across multiple frames will be attempted.
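A common way to handle very small objects in such high-resolution imagery is to split each frame into overlapping tiles before running a standard detector on them. A minimal sketch of that tiling step (the tile size, overlap, and the 5472 × 3648 frame dimensions are illustrative assumptions, not taken from the proposal):

```python
# Sketch: compute overlapping tile coordinates for a large aerial frame,
# so a standard detector (e.g. Faster R-CNN) can run on detector-sized crops.
# Tile size and overlap are illustrative assumptions.

def tile_coordinates(width, height, tile=1024, overlap=128):
    """Return (x0, y0, x1, y1) boxes covering the full frame with overlap."""
    stride = tile - overlap
    boxes = []
    for y0 in range(0, max(height - overlap, 1), stride):
        for x0 in range(0, max(width - overlap, 1), stride):
            x1 = min(x0 + tile, width)
            y1 = min(y0 + tile, height)
            # shift border tiles back inside the frame so all tiles are full-sized
            boxes.append((max(x1 - tile, 0), max(y1 - tile, 0), x1, y1))
    return boxes

# Example: an 18-megapixel frame (assumed 5472 x 3648 pixels)
tiles = tile_coordinates(5472, 3648)
```

The overlap is chosen larger than the expected animal size so every animal appears whole in at least one tile; detections from neighbouring tiles would then be merged, e.g. with non-maximum suppression.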
Wind Power Forecasting through Probabilistic Machine Learning Models
Wind power is a clean, renewable energy source that is increasingly used for electricity generation. However, because wind speed fluctuates, integrating large amounts of wind power into electrical grids poses challenges to grid stability and introduces uncertainty. This project addresses the problem with probabilistic models that predict a distribution of possible outcomes rather than a single value. The primary goal is to develop and evaluate various ML models for forecasting wind power generation over different time horizons. Utilizing weather data, such as wind speed, together with the power output of wind farms, the project seeks to identify the features most important for both short-term and long-term forecasts.
Objectives
● Train different probabilistic machine learning models that predict a distribution of possible outcomes for wind power.
● Perform data analysis and identify the features that are important for forecasting wind power.
● Evaluate the different ML models to determine which provides the best wind power forecasts.
● Forecast wind power generation over short-term and long-term horizons.
● Compare short-term and long-term forecasting and investigate which features are weighted most strongly in each.
● Investigate to what extent the forecasting horizon influences the effectiveness of different ML techniques on various data sources.
Dataset: https://data.open-power-system-data.org/time_series/
● Data Collection: Collect historical weather data, such as wind speed and direction, along with the power output of wind farms.
● Data Preprocessing: Clean the data to address missing values and outliers, and apply normalisation.
● Model Development:
1. Start with baseline models such as feed-forward neural networks.
2. Apply Long Short-Term Memory (LSTM) and Temporal Fusion Transformer (TFT) models, which are well-suited to probabilistic wind power forecasting over short-term horizons.
3. Combine several models in an ensemble to improve predictions.
● Model Training and Validation: Train the models on temporal wind power data and validate them using cross-validation.
● Performance Evaluation: Assess forecasting quality with metrics that quantify prediction accuracy, e.g., the Root Mean Square Error (RMSE) for point forecasts and the Continuous Ranked Probability Score (CRPS) for probabilistic forecasts.
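As a concrete illustration of the probabilistic metric mentioned above, the CRPS of a sample-based (ensemble) forecast can be estimated directly from the samples via the standard empirical estimator CRPS = E|X − y| − ½·E|X − X′|. A minimal sketch (the ensemble values are made-up illustration data, not from the project dataset):

```python
import random

def crps_ensemble(samples, observation):
    """Empirical CRPS of a sample-based forecast (lower is better):
    CRPS = E|X - y| - 0.5 * E|X - X'|."""
    n = len(samples)
    term1 = sum(abs(x - observation) for x in samples) / n
    term2 = sum(abs(a - b) for a in samples for b in samples) / (n * n)
    return term1 - 0.5 * term2

# Illustrative example: an ensemble forecast of wind power (MW) vs. an observation
random.seed(0)
ensemble = [50 + random.gauss(0, 5) for _ in range(200)]
score = crps_ensemble(ensemble, 52.0)
```

For a degenerate ensemble (all members identical) the CRPS reduces to the absolute error, so it can be compared directly against deterministic baselines scored with MAE.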
Large Language Models for Knowledge Management in Engineering Projects
Identification of failure detection patterns in log files of Computer Tomography systems
Differentially Private Federated Learning for Multilabel Classification of Chest Radiographs
Data Augmentation for Artwork Object Detection via Latent Diffusion Models
Enhancing Retrieval-Augmented Generation Systems with Fine-Tuned Language Models for Dynamic Technical Documentation
Generation of IEC 61131-3 SFCs conditioned on textual user intents and existing sequences
Real-World Constrained Parameter Space Analysis for Rigid Head Motion Simulation
Description
In recent years, the application of deep learning techniques to medical image analysis tasks and image quality enhancement has proven to be a useful tool. One critical area where deep learning models have shown promising results is for patient motion estimation in CT scans [1],[2].
Deep learning models depend strongly on the quality and diversity of the underlying training data, but well-annotated datasets, in which the patient motion throughout the whole scan is known, are scarce. This is typically overcome by generating synthetic data, where motion-free clinical acquisitions are corrupted with simulated patient motion by altering the relevant components of the projection matrices. In the case of head CT scans, the rigid patient motion can be parameterized by a 6DOF trajectory over all acquisition frames. This is typically done by applying Gaussian motion or, for more complex patterns, B-splines. However, these simulated patterns often fall short of mimicking real head motion observed in clinical settings, especially by lacking complex spatiotemporal correlations. To provide more realistic training samples, it is necessary to define a real-world constrained parameter space that respects correlations, time dependencies, and anatomical boundaries. This allows neural networks to generalize better to real-world data.
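The Gaussian baseline simulation mentioned above can be sketched as smoothed random noise per degree of freedom. In this sketch the frame count, amplitudes, and moving-average smoothing are illustrative assumptions, not the thesis method:

```python
import random

def simulate_6dof_trajectory(n_frames=360, amplitude=(1.0,) * 6, window=15, seed=0):
    """Return n_frames 6-tuples (rx, ry, rz, tx, ty, tz): a rigid head-motion
    trajectory (rotations in degrees, translations in mm) as smoothed Gaussian noise."""
    rng = random.Random(seed)
    per_dof = []
    for dof in range(6):
        noise = [rng.gauss(0.0, amplitude[dof]) for _ in range(n_frames)]
        # moving-average smoothing introduces temporal correlation within each DOF
        smoothed = [
            sum(noise[max(0, i - window):i + window + 1])
            / len(noise[max(0, i - window):i + window + 1])
            for i in range(n_frames)
        ]
        per_dof.append(smoothed)
    return list(zip(*per_dof))  # one 6-tuple per acquisition frame

traj = simulate_6dof_trajectory()
```

Note that this baseline treats the six parameters independently; the cross-DOF correlations and anatomical constraints discussed above are exactly what it fails to capture, which motivates learning the parameter space from real tracking data instead.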
This thesis aims to perform a thorough analysis of the parameter space of rigid (6DOF) head motion patterns, obtained from measurements with an in-house optical tracking system integrated into a C-arm CT scanner at Siemens Healthineers in Forchheim. By analyzing the spatiotemporal correlations and constraints in the 6DOF parameter space, lower-dimensional underlying structures might be uncovered. Clustering techniques can be incorporated to further reveal sub-manifolds in the 6DOF space and to distinguish different classes of motion, such as breathing, nodding, etc. A Variational Autoencoder (or a similar generative model) should be trained with the goal of providing annotated synthetic datasets with realistic motion patterns.
[1] A. Preuhs et al., “Appearance Learning for Image-Based Motion Estimation in Tomography,” in IEEE Transactions on Medical Imaging, vol. 39, no. 11, pp. 3667-3678, Nov. 2020
[2] Z. Chen, Q. Li, and D. Wu, “Estimate and compensate head motion in non-contrast head CT scans using partial angle reconstruction and deep learning,” in Medical Physics, vol. 51, pp. 3309–3321, 2024