
Transformers vs. Convolutional Networks for 3D segmentation in industrial CT data

The current state of the art for segmentation in industrial CT is still dominated by CNNs; transformer-based models are only sparsely used.
This project therefore compares the semantic segmentation performance of transformers (which incorporate global context into the segmentation), pure convolutional neural networks (which rely on local context), and combined CNN-transformer approaches (such as this one: https://doi.org/10.1186/s12911-023-02129-z) on an industrial CT dataset of shoes, as in this study: https://doi.org/10.58286/27736
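
To make the distinction concrete, the following minimal PyTorch sketch contrasts the two kinds of context on a toy 32³ volume: a 3D convolution only aggregates a small local neighbourhood per voxel, while a transformer encoder lets every patch token attend to every other one. The shapes, patch size, and layers are illustrative placeholders, not the architectures that will actually be compared (those would be full segmentation networks such as 3D U-Nets, transformer-based models, or the hybrid approach linked above).

```python
import torch
import torch.nn as nn

# Toy 3D CT volume: batch of 1, single channel, 32^3 voxels.
volume = torch.randn(1, 1, 32, 32, 32)

# Local context: a 3D convolution only "sees" a small neighbourhood per voxel
# (here a 3x3x3 receptive field), the building block of CNN segmenters.
local_branch = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1),
)
local_features = local_branch(volume)          # (1, 16, 32, 32, 32)

# Global context: split the volume into 8^3-voxel patches, embed each patch as a
# token, and let self-attention relate every patch to every other patch.
patch = 8
tokens = volume.unfold(2, patch, patch).unfold(3, patch, patch).unfold(4, patch, patch)
tokens = tokens.contiguous().view(1, -1, patch ** 3)   # (1, 64 tokens, 512 dims)
embed = nn.Linear(patch ** 3, 256)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=2,
)
global_features = encoder(embed(tokens))       # (1, 64, 256), every token attends to all others

print(local_features.shape, global_features.shape)
```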

Only available as Bachelor's thesis / Research Project

Understanding Odor Descriptors through Advanced NLP Models and Semantic Scores

Generation of Clinical Text Reports from Chest X-Ray Images

Latent Diffusion Model for CT Synthesis

Latent diffusion models are among the most successful generative models in modern computer vision research. By modeling the generative process as iterative image denoising, diffusion models can generate realistic, high-quality images and have shown capabilities superior to GAN-based models. In medical imaging, computed tomography (CT) is a well-researched imaging modality that is also widely used in clinical practice. In this project, we will investigate the feasibility of modern diffusion models for the task of CT synthesis.
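
As a concrete starting point, the sketch below shows the DDPM-style training step that (latent) diffusion models build on: noise a sample at a random timestep and train a network to predict that noise. The tiny convolutional denoiser, slice size, and linear noise schedule are placeholders only; a real latent diffusion model would run this process on autoencoder latents with a timestep-conditioned U-Net.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)     # cumulative product \bar{alpha}_t

# Placeholder denoiser; a real (latent) diffusion model would use a U-Net on
# autoencoder latents and condition it on the timestep via embeddings.
denoiser = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.SiLU(), nn.Conv2d(32, 1, 3, padding=1)
)

def training_step(x0):
    """One DDPM-style training step: add noise at a random t, predict the noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    eps = torch.randn_like(x0)
    a_bar = alphas_bar[t].view(b, 1, 1, 1)
    # Forward process: x_t = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * eps
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    eps_pred = denoiser(x_t)
    return F.mse_loss(eps_pred, eps)

# Toy "CT slice" batch (in a latent diffusion model this would be the latent code).
loss = training_step(torch.randn(4, 1, 64, 64))
print(loss.item())
```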

Explainable Predictive Maintenance: Forecasting and Anomaly Detection of Diagnostic Trouble Codes for Truck Fleet Management

Abstract:

Predictive Maintenance involves monitoring a vehicle’s Diagnostic Trouble Codes (DTCs) to identify potential anomalies before they escalate into major problems, enabling maintenance teams to proactively conduct necessary repairs or maintenance and prevent critical breakdowns. 

This thesis explores and compares data analytics and machine learning methods for finding patterns and abnormalities in order to forecast the next DTC in the sequence (with a specific emphasis on predicting Suspect Parameter Number (SPN) and Failure Mode Identifier (FMI) codes), and applies anomaly detection methods to assess how dangerous the predicted DTC is. It also aims to make the forecasting model interpretable using Explainable AI techniques, so that maintenance professionals have a clear understanding of the underlying factors influencing its predictions.

The dataset is provided by Elektrobit Automotive GmbH and contains tabular time series data.
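
Since the exact structure of the Elektrobit data is not specified here, the following sketch assumes each DTC event has already been tokenized into an integer code representing its (SPN, FMI) pair, and shows one possible baseline forecaster: a small GRU sequence model that predicts a distribution over the next code. Vocabulary size, sequence length, and synthetic inputs are hypothetical.

```python
import torch
import torch.nn as nn

# Assumption (not from the dataset spec): each DTC event is mapped to an integer
# token representing its (SPN, FMI) pair; a truck's history is a token sequence.
VOCAB_SIZE = 500        # hypothetical number of distinct (SPN, FMI) codes
SEQ_LEN = 32

class NextDTCForecaster(nn.Module):
    """Predicts a distribution over the next DTC token given the recent history."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):                     # tokens: (batch, seq_len)
        emb = self.embed(tokens)                   # (batch, seq_len, embed_dim)
        _, h_n = self.rnn(emb)                     # h_n: (1, batch, hidden_dim)
        return self.head(h_n.squeeze(0))           # logits over the next DTC token

model = NextDTCForecaster(VOCAB_SIZE)
history = torch.randint(0, VOCAB_SIZE, (8, SEQ_LEN))      # 8 synthetic sequences
next_code = torch.randint(0, VOCAB_SIZE, (8,))
loss = nn.CrossEntropyLoss()(model(history), next_code)
print(loss.item())
```

Transformer-based sequence models or tabular time-series architectures could be swapped in for the GRU when comparing architectures (Objective 2).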

Research Objectives 

  1. Investigating strategies for enhancing predictive maintenance models through effective data pre-processing, feature selection, and handling of imbalanced data.
  2. Comparing various model architectures for effective forecasting of DTCs.
  3. Designing and evaluating anomaly detection strategies to distinguish between dangerous and non-dangerous forecasted DTCs (a minimal scoring sketch follows this list).
  4. Assessing Explainable AI approaches for improving the interpretability of the DTC forecasting models.
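
As referenced in Objective 3, one possible way to score how unusual (and hence potentially dangerous) a forecasted DTC is would be to train an unsupervised detector on features of historical, non-critical fleet behaviour. The features below (code frequency, recency, co-occurrence count) are purely illustrative assumptions, as is the choice of an Isolation Forest.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Illustrative, hypothetical features per DTC: code frequency in the fleet,
# hours since last occurrence, and number of co-occurring codes.
historical = rng.normal(loc=[5.0, 100.0, 2.0], scale=[2.0, 30.0, 1.0], size=(1000, 3))

detector = IsolationForest(contamination=0.05, random_state=0).fit(historical)

forecasted = np.array([[5.2, 95.0, 2.0],      # looks like routine behaviour
                       [0.1, 2.0, 9.0]])      # rare code, very recent, many co-occurrences
scores = detector.decision_function(forecasted)   # lower = more anomalous
labels = detector.predict(forecasted)             # -1 = anomaly, 1 = normal
print(scores, labels)
```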

Thesis Outline

The thesis involves the following key steps: 

  • Step 1: Literature review and theoretical framework development. 
  • Step 2: Data pre-processing and analysis. 
  • Step 3: Design and develop model architectures for our use case. 
  • Step 4: Build Explainable AI based framework for the models. 
  • Step 5: Evaluate and compare the results of the models.
  • Step 6: Thesis writing and final presentation preparation.

Through an in-depth exploration of data analytics and machine learning, this thesis seeks to elevate predictive maintenance by investigating effective strategies, model architectures, anomaly detection, and Explainable AI for Diagnostic Trouble Codes. The theoretical framework, grounded in a comprehensive literature review, will guide the study’s key steps, leading to actionable insights for proactive vehicle maintenance.

A Comparative Analysis of Loss Functions in Deep Learning-Based Inverse Problems

Introduction:

In recent years, deep learning has emerged as a transformative force in the realm of image processing, particularly in addressing inverse problems such as denoising and artifact reduction in medical imaging. This research aims to systematically investigate the impact of various loss functions on deep learning-based solutions for inverse problems, with a focus on low-dose Computed Tomography (CT) imaging.

Low-dose CT, while beneficial in reducing radiation exposure, often suffers from increased noise and artifacts, adversely affecting image quality and diagnostic reliability. Traditional denoising techniques, although effective to some extent, struggle to maintain a balance between noise reduction and the preservation of crucial image details. Deep learning, especially Convolutional Neural Networks (CNNs), has shown promising results in surpassing these traditional methods, offering enhanced image reconstruction with remarkable fidelity.

However, the choice of loss function in training deep learning models is critical and often dictates the quality of the reconstructed images. Commonly used loss functions like Mean Squared Error (MSE) or Structural Similarity Index (SSIM) have their limitations and may not always align well with human perceptual quality. This research proposes to explore and compare a variety of loss functions, including novel and hybrid formulations, to evaluate their efficacy in enhancing image quality, reducing noise, and eliminating artifacts in low-dose CT images.
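
As a concrete example of such a hybrid formulation, the sketch below mixes pixel-wise MSE with a structural (1 - SSIM) term. To keep it short, the SSIM here is computed from global image statistics rather than the usual local Gaussian windows, and the weighting alpha is an arbitrary placeholder; a full comparison would use a windowed SSIM and tune the weights.

```python
import torch

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """SSIM from global image statistics (the standard metric uses local
    sliding windows; this simplification keeps the sketch short)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(unbiased=False), y.var(unbiased=False)
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

def hybrid_loss(pred, target, alpha=0.7):
    """Alpha-weighted mix of pixel-wise MSE and a structural (1 - SSIM) term."""
    mse = torch.mean((pred - target) ** 2)
    return alpha * mse + (1 - alpha) * (1 - global_ssim(pred, target))

# Toy low-dose / normal-dose slice pair with intensities in [0, 1].
pred = torch.rand(1, 1, 128, 128, requires_grad=True)
target = torch.rand(1, 1, 128, 128)
loss = hybrid_loss(pred, target)
loss.backward()
print(loss.item())
```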

Attention Artifact! Misalignment and artifact detection using deep learning and augmentation


Developing and Evaluating Image Similarity Metrics for Enhanced Classification Performance in 2D Datasets

Work description
This thesis focuses on the development and evaluation of novel image similarity metrics tailored for 2D datasets, aiming to improve the effectiveness of classification algorithms. By integrating active learning methods, the research seeks to refine these metrics dynamically through iterative feedback and validation. The work involves extensive testing and validation across diverse 2D image datasets, ensuring robustness and applicability in varied scenarios.

The following questions should be considered:

  • What metrics can effectively quantify the variance in a training dataset? (One candidate metric is sketched after this list.)
  • How does the variance within a training set impact the neural network’s ability to generalize to new, unseen data?
  • What is the optimal balance of diversity and specificity in a training dataset to maximize NN performance?
  • How can training datasets be curated to include a beneficial level of variance without compromising the quality of the neural network’s output?
  • What methodologies can be implemented to systematically adjust the variance in training data and evaluate its impact on NN generalization?
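
One candidate metric for the first question above, sketched under the assumption that each training image has already been mapped to a feature embedding (e.g. by a pretrained CNN), is the mean pairwise cosine distance over the dataset:

```python
import torch
import torch.nn.functional as F

def mean_pairwise_cosine_distance(features):
    """Dataset-variance score: average cosine distance over all image pairs.
    `features` is an (N, D) tensor, e.g. embeddings from a pretrained CNN."""
    z = F.normalize(features, dim=1)          # unit-norm rows
    sim = z @ z.T                             # (N, N) cosine similarities
    n = z.shape[0]
    off_diag = sim[~torch.eye(n, dtype=torch.bool)]
    return (1.0 - off_diag).mean()            # ~0 = near-duplicate set, ~1 = very diverse

# Two synthetic "datasets": near-duplicate embeddings vs. widely spread embeddings.
tight = torch.randn(1, 256).repeat(100, 1) + 0.01 * torch.randn(100, 256)
spread = torch.randn(100, 256)
print(mean_pairwise_cosine_distance(tight))   # close to 0
print(mean_pairwise_cosine_distance(spread))  # close to 1
```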

Prerequisites
Applicants should have a solid background in machine learning and deep learning, with strong technical skills in Python and experience with PyTorch. Candidates should also possess the capability to work independently and have a keen interest in exploring the theoretical aspects of neural network training.

For your application, please send your transcript of record.

Detectability Index Reimplementation for CT Images Using PyTorch

Work description
This project focuses on reimplementing the Detectability Index for evaluating individual CT projections, with the goal of improving the performance and adaptability of existing Python-based algorithms using PyTorch. The selected candidate will delve into the current code, identify performance bottlenecks, and propose innovative solutions to optimize efficiency. A further aim is to minimize package dependencies in order to ensure code longevity and maintainability. A minimal PyTorch sketch of one possible frequency-domain formulation is given after the question list below.

The following questions should be considered:

  • How can the existing Python code be improved with PyTorch for better performance and adaptability?
  • Where do the current code’s performance bottlenecks lie, and how can these be addressed?
  • How can the usage of external packages be minimized to ensure the code’s longevity?
  • What innovative approaches can be implemented to enhance the Detectability Index calculation?
  • How can the updated algorithm be validated for effectiveness and efficiency?
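
Because the exact observer model and task function used in the existing code are not specified here, the sketch below only illustrates how one common frequency-domain formulation, a non-prewhitening (NPW) detectability index with an image-domain noise-power spectrum, maps onto a handful of dependency-light torch operations. The Gaussian MTF, white NPS, and uniform task function are toy placeholders, and frequency-bin area factors are omitted.

```python
import torch

def npw_detectability(mtf, nps, task):
    """Non-prewhitening (NPW) detectability index d' on a 2D frequency grid.
    Assumes `nps` is the image-domain noise-power spectrum and `task` is the
    task function W_task; frequency-bin area factors are omitted here."""
    s2 = (task ** 2) * (mtf ** 2)              # |W_task * MTF|^2 on the grid
    return torch.sqrt(s2.sum() ** 2 / (s2 * nps).sum())

# Toy inputs: Gaussian-shaped MTF, white NPS, uniform task function.
n = 128
fx, fy = torch.meshgrid(torch.fft.fftfreq(n), torch.fft.fftfreq(n), indexing="ij")
freq = torch.sqrt(fx ** 2 + fy ** 2)
mtf = torch.exp(-(freq / 0.2) ** 2)
nps = torch.full((n, n), 1e-3)
task = torch.ones(n, n)
print(npw_detectability(mtf, nps, task))
```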

Prerequisites
Candidates should possess strong skills in Python and PyTorch, with the ability to quickly understand and improve upon existing code. A background in computational imaging or related fields, along with a problem-solving mindset, is essential.

For your application, please send your transcript of record.

Deep Learning-Driven Approaches for Optimizing Accuracy and Inference Speed in Compact Segmentation Models on Edge Devices