Index

Alzheimer’s Disease and Depression: A Bias Analysis and Machine Learning Investigation

Alzheimer’s disease (AD) is one of the most common neurodegenerative disorders and places a heavy burden on both individuals and society. Patients not only suffer from dementia but frequently also from depression, which can accelerate cognitive decline. Because AD and depression share several symptoms, detecting depression in Alzheimer’s patients is extremely challenging. Several studies have used subsets of the DementiaBank database and employed different audio embeddings to detect depressive AD patients. However, such embeddings can be biased by non-clinical factors.

Controlled CBCT Projection Generation Using Conditional Score-Based Diffusion Models

Improving Breast Abnormality Analysis in Mammograms using CycleGAN


Deep learning for brain metastases growth prediction

Deep Learning Reconstruction for Accelerated Water-Fat Magnetic Resonance Imaging

Parallel imaging reconstructs MR images from undersampled multi-channel k-space data, enabling accelerated MR imaging at high image quality. Reconstruction techniques aim to correct for the artifacts associated with undersampling. One widely used reconstruction method is SENSE, which exploits coil sensitivity encoding: the image in every channel is modeled as the product of a high-resolution image and a smooth coil sensitivity map. The main goal of this thesis is to develop a deep learning image reconstruction based on SENSE that accelerates MR imaging and corrects for aliasing in accelerated water-fat imaging.
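
To make the SENSE signal model above concrete, here is a minimal numpy sketch of the forward operation that maps an image to undersampled multi-channel k-space. The toy array sizes, the random test data, and the simple every-other-line sampling mask are illustrative assumptions only, not the setup used in this thesis.

```python
import numpy as np

def sense_forward(image, coil_maps, mask):
    """Toy SENSE forward model: weight the image with each coil sensitivity map,
    transform to k-space, and keep only the sampled k-space locations."""
    coil_images = coil_maps * image[None, ...]   # (n_coils, H, W) channel images
    kspace = np.fft.fft2(coil_images)            # FFT over the two image axes
    return kspace * mask[None, ...]              # zero out unsampled lines

# Illustrative toy setup: 8 coils, 128x128 image, every second phase-encoding line kept (R = 2)
rng = np.random.default_rng(0)
image = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
coil_maps = rng.standard_normal((8, 128, 128)) + 1j * rng.standard_normal((8, 128, 128))
mask = np.zeros((128, 128))
mask[::2, :] = 1.0
undersampled_kspace = sense_forward(image, coil_maps, mask)
```

A learned reconstruction would invert this forward model, i.e., recover the image from `undersampled_kspace` given the coil sensitivity maps.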

Projection Domain Metal Segmentation with Epipolar Consistency using Known Operator Learning

Implementation of an automated optical inspection (AOI) system for the automatic visual inspection of an enclosure assy DC distribution

The aim of this master’s thesis is the development of an effective automated optical inspection (AOI) system for the so-called enclosure Assy, which is assembled and inspected at the Medical Electronics department of Siemens Healthcare in Erlangen, Germany. This AOI system should not only contribute to digitizing production but also provide relief and support for production employees.

Evaluation of a Pixel-wise Regression Model Solving a Segmentation Task and a Deep Learning Model with the Matthew’s Correlation Coefficient as an Early Stopping Criterion


With global sea level rising and the mass loss of polar ice sheets as its main cause, it becomes increasingly important to enhance ice dynamics modeling. A fundamental piece of information for this is the calving front position (CFP) of glaciers. Traditionally, the CFP has been delineated manually, which is a subjective, tedious, and expensive task. Consequently, there has been considerable effort to automate this process. Gourmelon et al. [1] introduce the first publicly available benchmark dataset for calving front delineation on synthetic aperture radar (SAR) imagery, dubbed CaFFe. The dataset consists of SAR imagery and two corresponding labels: one separating the calving front from the background and the other delineating different landscape regions. However, this thesis only considers methods using the former. As there are many different approaches to calving front delineation, the question arises which method provides the best performance. Hence, the aim of this thesis is to evaluate the code of the following two papers [2], [3] on the CaFFe benchmark dataset and compare their performance with the baselines provided by Gourmelon et al. [1].

 

  • Paper 1: Davari et al. [2] reformulate the segmentation problem as a pixel-wise regression task: a convolutional neural network (CNN) is optimized to predict a distance map that assigns to every pixel its distance to the calving front, and a second U-Net subsequently extracts the glacier calving front line from this map.
  • Paper 2: Davari et al. [3] propose a deep learning model with the Matthews correlation coefficient (MCC) as an early stopping criterion to counter the extreme class imbalance of this problem. Moreover, a distance-map-based binary cross-entropy (BCE) loss function is introduced to add context about the regions that matter most for segmentation (a small illustration follows below).

To make a fair and reasonable comparison, the hyperparameters of each model will be optimized on the CaFFe benchmark dataset and the model weights will be re-trained on CaFFe’s train set. The evaluation will be conducted on the provided test set, and the metrics introduced by Gourmelon et al. [1] will be used for the comparison.
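
The following minimal sketch illustrates the two core ingredients mentioned above: a distance map derived from a binary calving-front mask (the regression target of [2] and the weighting idea behind the distance-map-based BCE of [3]) and an MCC-based early-stopping check. The use of scipy/scikit-learn, the patience rule, and all numeric values are illustrative assumptions, not the papers’ implementations.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from sklearn.metrics import matthews_corrcoef

def distance_map(front_mask):
    """Euclidean distance of every pixel to the nearest calving-front pixel
    (a regression target / loss weighting instead of a hard binary label)."""
    return distance_transform_edt(front_mask == 0)

def should_stop(val_mcc_history, patience=10):
    """Stop when the validation MCC has not improved for `patience` epochs
    (the patience value is a placeholder)."""
    if len(val_mcc_history) <= patience:
        return False
    return max(val_mcc_history[-patience:]) <= max(val_mcc_history[:-patience])

# Toy per-epoch MCC computation on flattened binary predictions
y_true = np.array([0, 0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0])
epoch_mcc = matthews_corrcoef(y_true, y_pred)
```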

 

References

[1] N. Gourmelon, T. Seehaus, M. Braun, A. Maier and V. Christlein, “Calving Fronts and Where to Find Them: A Benchmark Dataset and Methodology for Automatic Glacier Calving Front Extraction from SAR Imagery,” Earth Syst. Sci. Data Discuss. [preprint], 2022, https://doi.org/10.5194/essd-2022-139, in review.

[2] A. Davari, C. Baller, T. Seehaus, M. Braun, A. Maier and V. Christlein, “Pixelwise Distance Regression for Glacier Calving Front Detection and Segmentation,” in IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-10, 2022, Art no. 5224610, doi: 10.1109/TGRS.2022.3158591.

[3] A. Davari et al., “On Mathews Correlation Coefficient and Improved Distance Map Loss for Automatic Glacier Calving Front Segmentation in SAR Imagery,” in IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-12, 2022, Art no. 5213212, doi: 10.1109/TGRS.2021.3115883.

 

Anatomical Landmark Detection for Pancreatic Vessels in Computed Tomography

Thesis Description

Pancreatic cancer remains one of the most lethal forms of cancer, with a five-year survival rate of approximately 6% [1]. As a consequence, clinically motivated visualizations of the vessel system around the pancreas are indispensable to guide interventions and support early diagnosis. For this purpose, computed tomography (CT) presents a non-invasive modality that allows for the rapid, reliable, and accurate visualization of the vasculature around the pancreas [2]. However, prior to such investigations, a cumbersome manual detection of the vessels of interest is necessary. In this work, the automated anatomical landmark detection of pancreatic vessels in CT is of major interest. Once the start and end point of each vessel have been detected, an existing path tracing algorithm can be initialized to determine the vessel’s pathway, which facilitates visual guidance for physicians.
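
Purely as an illustration of this last step (the path tracer itself already exists and is not developed in this thesis), the following sketch traces a route between two detected landmarks through a 2D cost map using scikit-image. The cost map, which in practice could be derived from a vesselness filter, and the coordinates are toy values.

```python
import numpy as np
from skimage.graph import route_through_array

# Toy 2D cost map: low cost along the "vessel", high cost elsewhere.
cost = np.full((64, 64), 100.0)
cost[32, :] = 1.0                              # a horizontal toy vessel

start = (32, 5)                                # detected start landmark (row, col)
end = (32, 60)                                 # detected end landmark

# The minimum-cost path through the cost map connects the two landmarks.
path, total_cost = route_through_array(cost, start, end, fully_connected=True)
print(len(path), total_cost)
```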

Hitherto, existing anatomical landmark detection models predominantly rely on fully convolutional neural networks (FCNNs). There are two main CNN-based approaches for anatomical landmark detection. First, the landmark coordinates can be regressed directly from the input image. Recent approaches demonstrate high-performing landmark detection by combining YOLO-based object detectors and ResNets for hierarchical coordinate regression [3, 4]. However, this involves a complex image-to-coordinate mapping that exhibits limited performance, particularly when dealing with volumetric medical image data [5]. Moreover, these methods work either on 2D data or on sliced 3D data, which fails to capture spatial context in all three dimensions. Second, the landmarks can be retrieved from predicted segmentation heatmaps with a subsequent post-processing of the heatmaps into coordinates [5, 6]. This approach harnesses the exceptional success of CNN-based image-to-image models in the medical segmentation realm. Ronneberger et al. [7] laid the foundation for numerous U-shaped segmentation models, which are also used for anatomical landmark detection [6]. In 2021, Isensee et al. [8] introduced nnU-Net, which serves as a baseline model due to its remarkable performance and automatic configuration. Additionally, Baumgartner et al. [9] extended the nnU-Net segmentation framework and present a framework specialized for medical object detection, named nnDetection. However, despite the excellent performance of FCNN-based models, they fail to learn explicit global semantic information owing to the intrinsic locality of convolutions. Consequently, Vision Transformers (ViTs) can be employed to better capture long-range dependencies and resolve ambiguities in the anatomical landmark detection task. The work of Tang et al. [10] introduces Swin-UNETR, a ViT-based segmentation architecture that uses Swin-Transformer modules [11] for 3D medical image data. As a result, the comparison of ViT and CNN approaches, including direct coordinate regression as well as the segmentation of landmark heatmaps, for the detection of vessels around the pancreas is of principal importance for this work.
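
As a concrete illustration of the heatmap-based formulation discussed above, the following sketch encodes a 3D landmark as a Gaussian heatmap (a typical training target) and decodes a predicted heatmap back to a voxel coordinate via its maximum. The sigma, the volume size, and the simple arg-max decoding are illustrative choices rather than the methods of the cited works.

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=3.0):
    """Encode a single 3D landmark as a Gaussian blob (a typical training target)."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    dist_sq = (zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2
    return np.exp(-dist_sq / (2.0 * sigma ** 2))

def heatmap_to_landmark(heatmap):
    """Decode a predicted heatmap into a voxel coordinate (simple arg-max;
    a local center of mass would be a smoother alternative)."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

# Illustrative 64^3 volume with a landmark at voxel (20, 30, 40)
target = gaussian_heatmap((64, 64, 64), (20, 30, 40), sigma=3.0)
print(heatmap_to_landmark(target))   # -> (20, 30, 40)
```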

To conclude, the overall goal of this thesis is to find a robust anatomical landmark detection model for the start and end points of pancreatic blood vessels. Firstly, at least two state-of-the-art segmentation models are implemented and evaluated for the given landmark detection task. Based on the literature review, the U-shaped FCNN approaches nnU-Net and nnDetection as well as the Swin-UNETR, a promising ViT-based model, are investigated. Optionally, a direct coordinate regression model is implemented (e.g., YARLA [3]). Then, all models are fine-tuned and extended to optimally solve the pancreatic vessel detection problem. This could involve the integration of prior knowledge of the vasculature, enhancing the pre- and/or post-processing, or evolving the model architectures themselves.

Summary:
1. Preprocess volumetric 3D CT data and vessel centerline annotations
2. Investigate appropriate landmark detection models for volumetric 3D medical imaging data
3. Implementation and evaluation of baseline models from the literature, including CNNs, ViTs, and coordinate regressors
(a) nnU-Net / nnDetection: U-Net-based segmentation frameworks
(b) Swin-UNETR: Swin-Transformer image encoder with a CNN-based decoder
(c) YARLA: YOLO + ResNet approach for anatomical landmark regression
4. Improving the baseline models by
(a) Incorporating prior knowledge (spatial configuration context, segmentation maps, ...)
(b) Combining a pre-trained Swin-UNETR encoder with a Swin-Unet [12] decoder
(c) Improving the post-processing of segmented heatmaps

 

References

[1] Milena Ilic and Irena Ilic. Epidemiology of pancreatic cancer. World Journal of Gastroenterology, 22:9694–
9705, 2016.
[2] Vinit Baliyan, Khalid Shaqdan, Sandeep Hedgire, and Brian Ghoshhajra. Vascular computed tomography
angiography technique and indications. Cardiovascular Diagnosis and Therapy, 9, 8 2019.
[3] Alexander Tack, Bernhard Preim, and Stefan Zachow. Fully automated assessment of knee alignment from full-leg X-rays employing a “YOLOv4 and Resnet landmark regression algorithm” (YARLA): Data from the osteoarthritis initiative. Computer Methods and Programs in Biomedicine, 205, 2021.
[4] Mohammed A. Al-Masni, Woo-Ram Kim, Eung Yeop Kim, Young Noh, and Dong-Hyun Kim. A two cascaded network integrating regional-based YOLO and 3D-CNN for cerebral microbleeds detection. International Conference of the IEEE Engineering in Medicine and Biology Society, 42:1055–1058, 6 2020.
[5] Christian Payer, Darko Štern, Horst Bischof, and Martin Urschler. Integrating spatial configuration into heatmap regression based CNNs for landmark localization. Medical Image Analysis, 54:207–219, 5 2019.
[6] Heqin Zhu, Qingsong Yao, Li Xiao, and S. Kevin Zhou. You only learn once: Universal anatomical landmark
detection. Medical Image Computing and Computer Assisted Intervention-MICCAI, 24:85–95, 9 2021.
[7] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image
segmentation. Medical Image Computing and Computer-Assisted Intervention-MICCAI, 18:234–241, 10
2015.
[8] Fabian Isensee, Paul F. Jaeger, Simon A.A. Kohl, Jens Petersen, and Klaus H. Maier-Hein. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 18:203–211, 2 2021.
[9] Michael Baumgartner, Paul F. Jaeger, Fabian Isensee, and Klaus H. Maier-Hein. nnDetection: A self-configuring method for medical object detection. Medical Image Computing and Computer-Assisted Intervention-MICCAI, pages 530–539, 6 2021.
[10] Yucheng Tang, Dong Yang, Wenqi Li, et al. Self-supervised pre-training of Swin Transformers for 3D medical image analysis. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20730–20740, 2022.
[11] Ze Liu, Yutong Lin, Yue Cao, et al. Swin transformer: Hierarchical vision transformer using shifted
windows. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages
10012–10022, 2021.
[12] Hu Cao, Yueyue Wang, Joy Chen, et al. Swin-unet: Unet-like pure transformer for medical image segmentation.
European Conference on Computer Vision, 17:205–218, 10 2022.

Topology-aware Geometric Deep Learning for Labeling Major Cerebral Arteries

Thesis Description

Computed Tomography Angiography (CTA) continues to be the most widely used imaging modality to visualize the intracranial vasculature for cerebrovascular disease diagnosis. It is commonly applied in stroke assessment to determine the underlying cause of the condition. [3] For the success of treatment approaches such as a catheter intervention, radiologists need to quickly and precisely identify the exact location of the affected artery segments. Streamlining this process with the help of machine vision could save precious time for the patient. A crucial step towards partly automating the stroke assessment procedure is the accurate labeling of the vessels. Of particular interest is the Circle of Willis, which connects the main cerebral arteries in a ring-like arrangement and provides the blood supply for all major brain regions.

There have been multiple attempts to create a reliable classifier for the cerebral vasculature, with similar techniques employed for coronary arteries. The objective is to precisely match artery segments with their anatomical designation. In most of these methods, the input is a 3D CTA or MRA scan of the skull or, respectively, the heart. The first type of model is the convolutional neural network (CNN), e.g. the U-Net, which works directly on the images. [1] These models often have a large number of parameters, which can make training difficult and slow. In an effort to reduce the amount of data and separate out the valuable information, other methods extract centerline points and the radii of the arteries from the CTA images. The resulting point cloud can then be processed using a PointNet++. [4] However, neither of these models incorporates prior knowledge of the topological structure of the vessels. Another approach constructs a graph from the centerline points and applies a graph convolutional network (GCN). [5] Here, the bifurcations of the vessels serve as the nodes of the graph, while the remaining points yield features of the adjacent edges that represent the segments between two bifurcations. This model utilizes the connectivity of the arteries, but faces challenges when dealing with incomplete or missing segments and connections, which are especially common in patients who have suffered a stroke. In an effort to incorporate both local topology information and the global context of the vessel graph, the Topology-Aware Graph Network (TaG-Net) combines a PointNet++ and a GCN. [6] It uses a PointNet++ layer to encode features for each centerline point, which are subsequently fed into a graph convolutional layer. In the original paper, every point along the centerlines serves as a vertex of the input graph. However, this results in a high number of nodes and edges, which presents a challenge for effective message passing within the GCN layer. It remains unclear whether reducing the complexity of the graph could improve this method.
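
As an illustration of the graph representation described above (bifurcations as nodes, the centerline points between them summarized into edge features), here is a minimal networkx sketch. The function name, the toy coordinates, and the particular edge features are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np
import networkx as nx

def build_vessel_graph(bifurcations, segments):
    """bifurcations: {node_id: (x, y, z)}; segments: list of
    (node_a, node_b, centerline_points, radii) describing the vessel piece
    connecting two bifurcations."""
    graph = nx.Graph()
    for node_id, position in bifurcations.items():
        graph.add_node(node_id, pos=np.asarray(position, dtype=float))
    for node_a, node_b, points, radii in segments:
        points = np.asarray(points, dtype=float)
        length = float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())
        graph.add_edge(node_a, node_b,
                       length=length,                      # segment length as edge feature
                       mean_radius=float(np.mean(radii)))  # average radius along the segment
    return graph

# Toy example: two bifurcations connected by one short segment
bifurcations = {0: (0.0, 0.0, 0.0), 1: (10.0, 0.0, 0.0)}
segments = [(0, 1, [(0, 0, 0), (5, 0, 0), (10, 0, 0)], [1.2, 1.1, 1.0])]
vessel_graph = build_vessel_graph(bifurcations, segments)
```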

The overall goal of this thesis is to find a robust classifier for the accurate labeling of the main cerebral arteries. In a first step, the labels of the vessel graphs from approximately 170 CTA scans, which have been annotated by a heuristic algorithm [2], need to be corrected manually. Secondly, a PointNet++ as well as a GCN and a TaG-Net model will be implemented as baseline methods. Furthermore, modifications to the graph structure of the sample data will be made to better exploit the message-passing capabilities of the GCN. For the graph convolutional network, this may involve employing an autoencoder to generate informative edge features. In the case of the TaG-Net, the number of vertices can be reduced by selecting only the bifurcations as nodes and encoding the remaining points as edge features. Additionally, data augmentation techniques such as introducing missing or incomplete vessel segments, as well as adding corruption and noise to the data, could improve the robustness of the classifier. All models will be fine-tuned and their performance evaluated.
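
A rough illustration of how such graph-specific augmentation could look, continuing the networkx representation sketched above: the drop probability and noise scale are placeholder values, not choices made in this thesis.

```python
import numpy as np
import networkx as nx

def augment_vessel_graph(graph, drop_prob=0.1, noise_std=0.5, rng=None):
    """Return an augmented copy of a vessel graph: randomly drop edges to
    simulate missing or incomplete segments and jitter node positions to
    mimic noisy centerline extraction (both values are placeholders)."""
    if rng is None:
        rng = np.random.default_rng()
    augmented = graph.copy()
    # Simulate missing segments, as frequently observed in stroke patients.
    to_drop = [edge for edge in augmented.edges if rng.random() < drop_prob]
    augmented.remove_edges_from(to_drop)
    # Add coordinate noise to the bifurcation positions.
    for _, data in augmented.nodes(data=True):
        if "pos" in data:
            data["pos"] = np.asarray(data["pos"], dtype=float) + rng.normal(0.0, noise_std, size=3)
    return augmented

# Usage on a trivial two-node graph (a full vessel graph would work the same way)
toy = nx.Graph()
toy.add_node(0, pos=(0.0, 0.0, 0.0))
toy.add_node(1, pos=(10.0, 0.0, 0.0))
toy.add_edge(0, 1, length=10.0, mean_radius=1.1)
augmented = augment_vessel_graph(toy, drop_prob=0.5)
```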

Summary:
1. Improvement of existing vessel segment annotation
2. Implementation and testing of baseline models from literature (PointNet++, GCN, TaG-Net)
3. Improving TaG-Net by exploiting the graph properties of the vessel trees
(a) Restructuring the vessel graph to reduce complexity
(b) Graph-specific data augmentation
4. Fine-tuning and evaluating the models

 


References
[1] Yi Lv, Weibin Liao, Wenjin Liu, Zhensen Chen, and Xuesong Li. A Deep-Learning-based Framework for
Automatic Segmentation and Labelling of Intracranial Artery. IEEE International Symposium on Biomedical
Imaging (ISBI), 2023.
[2] Leonhard Rist, Oliver Taubmann, Florian Thamm, Hendrik Ditt, Michael Suehling, and Andreas Maier.
Bifurcation matching for consistent cerebral vessel labeling in CTA of stroke patients. International Journal
of Computer Assisted Radiology and Surgery, 2022.
[3] Peter D. Schellinger, Gregor Richter, Martin Koehrmann, and Arnd Doerfler. Noninvasive Angiography
(Magnetic Resonance and Computed Tomography) in the Diagnosis of Ischemic Cerebrovascular Disease.
Cerebrovascular Diseases, pages 16–23, 2007.
[4] Jannik Sobisch, Ziga Bizjak, Aichi Chien, and Ziga Spiclin. Automated intracranial vessel labeling with
learning boosted by vessel connectivity, radii and spatial context. Medical Image Computing and Computer
Assisted Intervention MICCAI, 2020.
[5] Han Yang, Xingjian Zhen, Ying Chi, Lei Zhang, and Xian-Sheng Hua. CPR-GCN: Conditional
Partial-Residual Graph Convolutional Network in Automated Anatomical Labeling of Coronary Arteries.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[6] Linlin Yao, Zhong Xue, Yiqiang Zhan, Lizhou Chen, Yuntian Chen, Bin Song, Qian Wang, Feng Shi, and
Dinggang Shen. TaG-Net: Topology-Aware Graph Network for Vessel Labeling. Imaging Systems for GI
Endoscopy, and Graphs in Biomedical Image Analysis, pages 108–117, 2022.