Index
Disentangling Visual Attributes for Inherently Interpretable Medical Image Classification
Project description:
Interpretability is essential when deep neural networks are applied to critical scenarios such as medical image processing. Current gradient-based [1] and counterfactual image-based [2] interpretability approaches can only provide information about where the evidence is; we also want to know what the evidence is. In this master thesis project, we will build an inherently interpretable classification method. This classifier will learn disentangled features that are semantically meaningful and can, in the future, be mapped to related clinical concepts.
This project is based on a previously proposed visual feature attribution method [3], which can generate a class-relevant attribution map for a given input disease image. We will extend this method to generate class-relevant shape variations and design an inherently interpretable classifier that uses only the disentangled features (class-relevant intensity and shape variations). The method can be further extended by disentangling additional semantically meaningful and causally independent features such as texture, shape, and background, as in [4].
References
[1] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pages 618–626, 2017.
[2] Cher Bass, Mariana da Silva, Carole Sudre, Petru-Daniel Tudosiu, Stephen M Smith, and Emma C Robinson. Icam: Interpretable classification via disentangled representations and feature attribution mapping. arXiv preprint arXiv:2006.08287, 2020.
[3] Christian F Baumgartner, Lisa M Koch, Kerem Can Tezcan, Jia Xi Ang, and Ender Konukoglu. Visual feature attribution using wasserstein gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8309–8319, 2018.
[4] Axel Sauer and Andreas Geiger. Counterfactual generative networks. arXiv preprint arXiv:2101.06046, 2021.
MR automated image quality assessment
Virtual contrast enhancement of breast MRI using deep learning
Temporal Information in Glacier Front Segmentation Using a 3D Conditional Random Field
Evaluation of Different Loss Functions for Highly Unbalanced Segmentation
Network analysis of soluble factor-mediated autocrine and paracrine circuits in melanoma immunotherapy
Interpolation of ARAMIS Grids and Analysis of Numerical Stability on Deep Learning Methods
Reinforcement Learning in Finance – Add and adapt the DDQN to an existing Reinforcement Learning Framework
Spike Detection in Gradient Coils of MR Scanners using Artificial Intelligence
Introduction
Spikes, also known as the herringbone artifact, are a well-known artifact in MR imaging. They are caused by malfunctioning hardware components that produce unwanted sparks, and they lead to degraded image quality; it is therefore important to eliminate their cause. A common source is the gradient coils, which produce rapidly changing magnetic fields with high amplitude. The aim of this thesis is to develop a deep-learning-based spike detection algorithm operating on multi-channel k-space data.
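As a rough illustration of the detection task (not part of the proposed pipeline), spikes show up as isolated high-magnitude outliers in raw k-space, so a naive non-learned baseline could flag a slice whenever any sample is an extreme outlier relative to a robust noise estimate. The threshold `factor` below is a hypothetical tuning parameter; in practice the strong low-frequency content near the k-space center would also need to be handled.

```python
import numpy as np

def detect_spike(kspace, factor=15.0):
    """Flag a k-space slice as spike-corrupted if any sample's magnitude
    is an extreme outlier w.r.t. a robust (median/MAD) noise estimate.

    kspace: complex 2D array of raw k-space samples.
    factor: hypothetical outlier threshold in MAD units.
    """
    mag = np.abs(kspace)
    med = np.median(mag)
    mad = np.median(np.abs(mag - med)) + 1e-12  # avoid division by zero
    return bool(np.max((mag - med) / mad) > factor)
```

A learned classifier is expected to outperform such a heuristic, e.g. on spikes that are weak relative to the signal, which is the motivation for the deep-learning approach pursued here.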
Methods and data
For this work, anonymized clinical data in TWIX format, provided by Siemens Healthineers, are used.
The dataset contains more than 90 recordings from more than 15 scanners, measured with a variety
of different sequences. The recordings are annotated by one expert with a binary label per slice (or
per partition for 3D recordings) indicating whether a spike is present.
The goal of this thesis is to create a deep learning pipeline for classifying the presence of
spikes. This includes comparing different preprocessing techniques and neural network architectures
(e.g., Res-blocks [1], Inception modules [2], and Dense-blocks [3]) in terms of their performance on
the classification task. In addition, their computational performance will be evaluated.
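To make the architectural comparison concrete: a residual block [1] computes y = ReLU(F(x) + x), where F is a small stack of convolutions and the identity skip connection eases optimization of deep networks. A minimal NumPy sketch, simplified to 1x1 convolutions for brevity (an illustration, not the thesis implementation):

```python
import numpy as np

def conv1x1(x, w):
    # Pointwise "convolution": mixes channels at every spatial location.
    # x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)
    return np.einsum('oc,chw->ohw', w, x)

def res_block(x, w1, w2):
    """Minimal residual block: y = ReLU(conv(ReLU(conv(x))) + x)."""
    h = np.maximum(conv1x1(x, w1), 0)  # first conv + ReLU
    h = conv1x1(h, w2)                 # second conv
    return np.maximum(h + x, 0)        # identity skip, then ReLU
```

Inception modules [2] instead run parallel convolutions of different kernel sizes and concatenate them, while Dense-blocks [3] concatenate each layer's output to all subsequent layers; the thesis will compare these designs empirically.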
Evaluation
The following aspects will be evaluated:
- Different preprocessing methods (e.g., dimensionality reduction, feature extraction, data augmentation) will be implemented and compared w.r.t. the classification performance
- Different model architectures (e.g., Res-blocks, Inception modules, Dense-blocks) will be implemented and compared w.r.t. the classification performance
- The classification performance will be evaluated with different metrics and the model's decision will be investigated with different attribution methods
- The chosen architecture will be analyzed and optimized w.r.t. its computational performance
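One candidate preprocessing step for multi-channel k-space (a common MRI baseline, not prescribed by this proposal) is to reconstruct a coil-combined magnitude image via a per-coil inverse 2D FFT followed by a root-sum-of-squares over the coil dimension:

```python
import numpy as np

def rss_image(kspace):
    """Multi-channel k-space (coils, H, W) -> magnitude image (H, W).

    Per-coil inverse 2D FFT, then root-sum-of-squares coil combination.
    """
    coil_imgs = np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1)),
                             axes=(-2, -1))
    return np.sqrt((np.abs(coil_imgs) ** 2).sum(axis=0))
```

Whether the classifier performs better on such image-domain inputs or directly on raw k-space (where spikes are spatially localized) is exactly the kind of question the preprocessing comparison will address.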
References
[1] Kaiming He, et al. Deep residual learning for image recognition. In Proceedings of the IEEE
conference on computer vision and pattern recognition, pages 770–778, 2016.
[2] Christian Szegedy, et al. Going deeper with convolutions. In Proceedings of the IEEE
conference on computer vision and pattern recognition, pages 1–9, 2015.
[3] Gao Huang, et al. Densely connected convolutional networks. In Proceedings of the IEEE
conference on computer vision and pattern recognition, pages 4700–4708, 2017.