Index

Topology-aware Geometric Deep Learning for Labeling Major Cerebral Arteries

Thesis Description

Computed Tomography Angiography (CTA) continues to be the most widely used imaging modality to visualize
the intracranial vasculature for cerebrovascular disease diagnosis. It is commonly applied in stroke assessment
to determine the underlying cause of the condition. [3] For the success of treatment approaches like a catheter
intervention, radiologists need to quickly and precisely identify the exact location of the affected artery segments.
Streamlining the process with the help of machine vision could save precious time for the patient. A
crucial step for partly automating the stroke assessment procedure is the accurate labeling of the vessels. Of
particular interest is the Circle of Willis, which connects the main cerebral arteries in a ring-like arrangement
and provides the blood supply for all major brain regions.

There have been multiple attempts to create a reliable classifier for cerebral vasculature, with similar techniques
employed for coronary arteries. The objective is to precisely match artery segments with their anatomical
designation. In most of these methods, the input consists of a 3D CTA or MRA scan of the skull or the heart,
respectively. The first type of model is the convolutional neural network (CNN), e.g. a U-Net, which works
directly on the images. [1] These models often have a large number of parameters, which can make training
difficult and slow. In an effort to reduce the amount of data and separate out the valuable information, other
methods extract centerline points and the radii of the arteries from the CTA images. The resulting point cloud
can then be processed using a PointNet++. [4] However, neither of these models incorporates prior knowledge of the
topological structure of the vessels. Another approach involves the construction of a graph from the centerline
points and applying a graph convolutional network (GCN). [5] Here, the bifurcations of the vessels serve as the
nodes of the graph, while the remaining points yield features of the adjacent edges that represent the segments
between two bifurcations. This model utilizes the connectivity of the arteries, but faces challenges when dealing
with incomplete or missing segments and connections, which are especially common in patients who have
suffered a stroke. In an effort to incorporate local topology information and global context of the vessel graph,
the Topology-Aware Graph Network (TaG-Net) combines a PointNet++ and a GCN. [6] It uses a PointNet++
layer to encode features for each centerline point, which are subsequently fed into a graph convolutional layer. In
the original paper, every point along the centerlines serves as a vertex in the input graph. However, this results
in a high number of nodes and edges, presenting a challenge for effective message passing within the GCN layer.
It remains unclear whether reducing the complexity of the graph could improve this method.
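
To illustrate the kind of complexity reduction in question, the following sketch collapses a point-level centerline graph (one node per centerline point with its 3D coordinate and radius) into a bifurcation-level graph, in which chains of degree-2 points are summarized into edge features. networkx is used here merely as an illustrative tooling choice, and the attribute names ('pos', 'radius') are assumptions, not part of the datasets described in this proposal.

    import numpy as np
    import networkx as nx

    def collapse_centerline_graph(G):
        """Collapse a point-level centerline graph to bifurcation level.

        G: nx.Graph with one node per centerline point and the (assumed)
        attributes 'pos' (3D coordinate) and 'radius'. Nodes of degree != 2
        (bifurcations and endpoints) are kept; every chain of degree-2 points
        between them becomes a single edge with summary features. Chains are
        assumed to terminate at a kept node (no isolated cycles).
        """
        keep = {n for n in G.nodes if G.degree[n] != 2}
        H = nx.MultiGraph()  # MultiGraph: the Circle of Willis contains cycles
        H.add_nodes_from((n, dict(G.nodes[n])) for n in keep)

        seen = set()  # chains already converted into edges
        for start in keep:
            for nbr in G.neighbors(start):
                chain, prev, cur = [start], start, nbr
                while cur not in keep:  # walk along the degree-2 chain
                    chain.append(cur)
                    prev, cur = cur, next(n for n in G.neighbors(cur) if n != prev)
                chain.append(cur)
                key = frozenset(chain[1:-1]) or frozenset(chain)
                if key in seen:
                    continue
                seen.add(key)
                pts = np.stack([G.nodes[n]["pos"] for n in chain])
                radii = np.array([G.nodes[n]["radius"] for n in chain])
                H.add_edge(
                    chain[0], chain[-1],
                    length=float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()),
                    mean_radius=float(radii.mean()),
                    n_points=len(chain),
                )
        return H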

The overall goal of this thesis is to find a robust classifier for the accurate labeling of the main cerebral arteries.
In the first step, the labels of the vessel graphs from approximately 170 CTA scans, which have been annotated
by a heuristic algorithm [2], need to be corrected manually. Secondly, a PointNet++, a GCN, and a TaG-Net
model will be implemented as baseline methods. Furthermore, modifications to the graph structure of the
sample data will be made to better exploit the message-passing capabilities of the GCN.
For the graph convolutional network, this may involve employing an autoencoder to generate informative edge
features. In the case of the TaG-Net, reducing the number of vertices can be achieved by selecting only the
bifurcations as nodes and encoding the remaining points as edge features. Additionally, data augmentation
techniques such as introducing missing or incomplete vessel segments, as well as adding corruption and noise
to the data, could improve the robustness of the classifier. All models will be fine-tuned and their performance
evaluated.
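
The graph-specific augmentation mentioned above could, as a minimal sketch, drop random vessel segments to mimic the incomplete vasculature of stroke patients and jitter the node coordinates to mimic acquisition noise. The sketch operates on the bifurcation-level MultiGraph from the previous snippet, and the default drop probability and noise level are arbitrary placeholders rather than tuned values.

    import numpy as np

    def augment_vessel_graph(H, p_drop=0.1, sigma_mm=0.5, rng=None):
        """Return an augmented copy of a bifurcation-level vessel graph.

        Randomly removes whole segments (edges) to simulate missing or
        occluded vessels and adds Gaussian noise to the bifurcation
        coordinates. p_drop and sigma_mm are illustrative defaults.
        """
        rng = rng or np.random.default_rng()
        A = H.copy()

        # drop whole vessel segments with probability p_drop
        to_drop = [e for e in A.edges(keys=True) if rng.random() < p_drop]
        A.remove_edges_from(to_drop)

        # jitter the bifurcation positions (assumed attribute 'pos', in mm)
        for n in A.nodes:
            A.nodes[n]["pos"] = A.nodes[n]["pos"] + rng.normal(0.0, sigma_mm, size=3)
        return A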

Summary:
1. Improvement of existing vessel segment annotation
2. Implementation and testing of baseline models from literature (PointNet++, GCN, TaG-Net)
3. Improving TaG-Net by exploiting the graph properties of the vessel trees
(a) Restructuring the vessel graph to reduce complexity
(b) Graph-specific data augmentation
4. Fine-tuning and evaluating the models

 

 

References
[1] Yi Lv, Weibin Liao, Wenjin Liu, Zhensen Chen, and Xuesong Li. A Deep-Learning-based Framework for
Automatic Segmentation and Labelling of Intracranial Artery. IEEE International Symposium on Biomedical
Imaging (ISBI), 2023.
[2] Leonhard Rist, Oliver Taubmann, Florian Thamm, Hendrik Ditt, Michael Suehling, and Andreas Maier.
Bifurcation matching for consistent cerebral vessel labeling in CTA of stroke patients. International Journal
of Computer Assisted Radiology and Surgery, 2022.
[3] Peter D. Schellinger, Gregor Richter, Martin Koehrmann, and Arnd Doerfler. Noninvasive Angiography
(Magnetic Resonance and Computed Tomography) in the Diagnosis of Ischemic Cerebrovascular Disease.
Cerebrovascular Diseases, pages 16–23, 2007.
[4] Jannik Sobisch, Ziga Bizjak, Aichi Chien, and Ziga Spiclin. Automated intracranial vessel labeling with
learning boosted by vessel connectivity, radii and spatial context. Medical Image Computing and Computer
Assisted Intervention MICCAI, 2020.
[5] Han Yang, Xingjian Zhen, Ying Chi, Lei Zhang, and Xian-Sheng Hua. CPR-GCN: Conditional
Partial-Residual Graph Convolutional Network in Automated Anatomical Labeling of Coronary Arteries.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[6] Linlin Yao, Zhong Xue, Yiqiang Zhan, Lizhou Chen, Yuntian Chen, Bin Song, Qian Wang, Feng Shi, and
Dinggang Shen. TaG-Net: Topology-Aware Graph Network for Vessel Labeling. Imaging Systems for GI
Endoscopy, and Graphs in Biomedical Image Analysis, pages 108–117, 2022.

Similarity Learning for Writer Identification

Automated lung cancer lesion segmentation in 18F-FDG PET/CT

18F-FDG PET/CT is routinely employed as a valuable clinical tool for the non-invasive staging of lung cancer patients. In particular, the presence and location of tumor-harboring lesions are key determinants of lung cancer stage, prognosis, and optimal treatment. Recently, deep learning algorithms have shown promising results for the automated identification of sites suspicious for tumor in 18F-FDG PET/CT for different cancer types and have the potential
to support physicians in accurate image assessment. Nevertheless, a limited per-lesion accuracy for primary tumors and lymph nodes in patients with lung cancer has been reported. The aim of this thesis is to develop a deep learning algorithm for improved automated
detection and delineation of lung cancer lesions in 18F-FDG PET/CT.
In particular, the Master’s thesis covers the following aspects:
1. Exploration of state-of-the-art deep learning architectures for automatic segmentation of lesions in lung cancer PET/CT medical images.
2. Implementation of a deep learning architecture and training with different parameters to achieve high-accuracy segmentation results (a minimal architecture sketch is given after this list).
3. Evaluation of the impact of different PET image quality characteristics on the performance of the deep learning algorithm by varying parameters of the PET reconstruction algorithm and simulating lower count rates.
4. Modification of the architecture or the loss function to make the deep learning algorithm more robust to variations in PET image quality.
5. Comparison of the performance and accuracy with other methods available in the literature.
6. Generation of artificial data with similar anatomic location and classification as originally annotated by a specialized physician (optional).
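
As a possible starting point for the architecture in item 2, the co-registered PET and CT volumes could enter a 3D segmentation network as two input channels. The sketch below uses MONAI's generic 3D U-Net, which is one assumed framework choice rather than a prescribed one, and all channel sizes, strides, and patch dimensions are placeholders.

    import torch
    from monai.networks.nets import UNet

    # Two input channels (co-registered PET and CT), two output classes
    # (background vs. lesion); all hyperparameters are placeholders.
    model = UNet(
        spatial_dims=3,
        in_channels=2,
        out_channels=2,
        channels=(16, 32, 64, 128, 256),
        strides=(2, 2, 2, 2),
        num_res_units=2,
    )

    # Forward pass on a random patch of shape (batch, channels, D, H, W)
    dummy = torch.randn(1, 2, 96, 96, 96)
    logits = model(dummy)  # shape: (1, 2, 96, 96, 96)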

Object Consistency GAN for Object Detection Pretraining

Machine learning based analysis of parts-of-speech in EEG data

Extraction of Treatment Margins from CT Scans for Evaluation of Lung Tumor Cryoablation

Thesis Description

Among all cancer types, lung cancer is responsible for the most deaths [1]. Cryoablation is a promising minimally
invasive method for treating lung cancer [2]. During percutaneous cryoablation, one or more probes are advanced
into the lung. Subsequently, a cycle of freezing and thawing using argon gas achieves cell death [3]. Using
computed tomography (CT) images, the radiologist plans the type, number, and placement of probes based on
the tumor location and the expected geometry of the ice ball forming around each probe, as provided by the
manufacturer.
The key parameter for assessing treatment success is the margin created by the ablation around the tumor,
which is compared with the desired safety margin. Margins of 2-10 mm [4] are required for eradication, depending
on tumor origin and type. The minimum safety margin required for eradication depends on the extent of
microscopic tumor extension beyond the tumor visible on CT.
Determining the margin is not a straightforward task, since it requires comparing CT scans taken before the
procedure to CT scans taken weeks or months later. Also, the ice ball forming during the procedure obscures
the tumor on subsequent CT scans. So far, radiologists evaluate treatment success in a binary yes/no manner
by mentally mapping 2D slices of pre- and post-procedure CT scans onto each other to estimate the treatment margins.
The goal of this thesis is to build an algorithm that evaluates treatment margins objectively and quantitatively,
leveraging readily available 3D CT imaging datasets. This algorithm may facilitate the early detection
of treatment failures in ex-post quality assurance and may ultimately also help estimate margins during the
procedure (e.g. to help decide for or against the addition of a probe).
From a technical point of view, the pre- and post-cryoablation 3D CT volumes of the lung have to be aligned
(registration task), and the tumors and ablation zones have to be either given, i.e. manually annotated, or
automatically generated (segmentation task), in order to compute and visualize the geometric margins.
Similar tools [5, 6] have been developed for microwave ablation, which achieves cell death through high temperatures;
there, distortion of the tumor and surrounding tissue due to dehydration makes the registration of pre-
and post-ablation lung CT volumes difficult [7]. During cryoablation, dehydration does not occur
and tissue distortion is not noticeable. However, breathing is still expected to cause non-rigid deformation of
the volumes. Classical registration (e.g. SimpleElastix [8]) could be combined with unsupervised deep learning
approaches (e.g. VoxelMorph [9]) to achieve the desired registration.
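
A minimal registration sketch, assuming the SimpleElastix build of SimpleITK is available, could chain a rigid pre-alignment with a B-spline stage for the breathing-induced deformation; the file names are placeholders and the parameter maps are library defaults rather than settings tuned for lung CT.

    import SimpleITK as sitk  # assumes the SimpleElastix build of SimpleITK

    # Placeholder file names for the pre- and post-cryoablation CT volumes
    fixed = sitk.ReadImage("pre_ablation_ct.nii.gz")
    moving = sitk.ReadImage("post_ablation_ct.nii.gz")

    elastix = sitk.ElastixImageFilter()
    elastix.SetFixedImage(fixed)
    elastix.SetMovingImage(moving)

    # Rigid pre-alignment followed by a non-rigid B-spline stage
    params = sitk.VectorOfParameterMap()
    params.append(sitk.GetDefaultParameterMap("rigid"))
    params.append(sitk.GetDefaultParameterMap("bspline"))
    elastix.SetParameterMap(params)

    elastix.Execute()
    registered_post = elastix.GetResultImage()
    sitk.WriteImage(registered_post, "post_ablation_ct_registered.nii.gz")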
To automatically segment tumors and ablation zones, a small convolutional neural network (CNN) could
be trained using the difference of the pre- and post-procedure scans as prior positional information. To ensure
correct and time-efficient segmentation, a quality assurance step could be introduced in which a radiologist can
correct suggested segmentations.
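
One way to realize the positional prior is to stack the pre-procedure scan, the registered post-procedure scan, and their difference as input channels of such a CNN. The sketch below assumes the volumes are already aligned numpy arrays in Hounsfield units, and the clipping window is an illustrative choice rather than a validated setting.

    import numpy as np

    def build_segmentation_input(pre_ct, post_ct_registered, clip=(-1000.0, 400.0)):
        """Stack pre, registered post, and difference volumes as CNN channels.

        pre_ct, post_ct_registered: aligned 3D numpy arrays (Hounsfield units).
        Returns a float32 array of shape (3, D, H, W) scaled to roughly [0, 1].
        """
        pre = np.clip(pre_ct, *clip)
        post = np.clip(post_ct_registered, *clip)
        diff = post - pre  # highlights the ablation zone relative to baseline

        def normalize(x):
            return (x - x.min()) / (x.max() - x.min() + 1e-8)

        return np.stack([normalize(pre), normalize(post), normalize(diff)]).astype(np.float32)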
To calculate the geometrical margin around the tumor volume, its parallel shifted surface is constructed
using a Euclidean distance transform. The volumes of the tumor and the ablation zone should be visualized
by highlighting areas violating the targeted minimum margin and indicating proximity to blood vessels which
can act as thermal sinks [10].
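
One possible formulation of the margin computation, equivalent to measuring how far each tumor voxel lies inside (or outside) the ablation zone, uses the Euclidean distance transform from scipy; the mask and spacing arguments are placeholders for the registered segmentations.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def minimal_ablation_margin(tumor_mask, ablation_mask, spacing_mm):
        """Compute the minimal ablation margin in mm.

        tumor_mask, ablation_mask: boolean 3D arrays on the same registered grid.
        spacing_mm: voxel spacing, e.g. (z, y, x) in mm.
        Returns (minimal margin, signed distance map). A positive margin means
        every tumor voxel lies at least that far inside the ablation zone;
        zero or negative values indicate tumor outside the ablated region.
        """
        # distance to the ablation-zone boundary, measured inside and outside
        dist_inside = distance_transform_edt(ablation_mask, sampling=spacing_mm)
        dist_outside = distance_transform_edt(~ablation_mask, sampling=spacing_mm)
        signed_dist = dist_inside - dist_outside  # > 0 inside, < 0 outside

        minimal_margin = float(signed_dist[tumor_mask].min())
        return minimal_margin, signed_dist  # the signed map can drive visualization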
To analyze the connections between clinical outcomes and pre/post CT imaging, applying end-to-end deep
learning would be the most desirable approach. However, since the amount of both labeled and unlabeled data is very
limited (approx. 50/300), machine learning methods could be applied to medically sensible features (e.g. margins)
derived from the tumor/ablation zone geometries. Alternatively, a small CNN could be trained on these
geometries directly instead of the full scans.
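
For the outcome analysis on the roughly 50 labeled cases, a cross-validated classifier on such derived features (e.g. minimal margin, tumor volume) would be a sensible baseline; the sketch below uses scikit-learn as an assumed tooling choice, with randomly generated placeholders standing in for the real feature matrix and outcome labels.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Placeholders: one row per treated lesion with derived features such as
    # [minimal margin (mm), tumor volume (ml), ablation-zone volume (ml)], and
    # a binary outcome (e.g. local tumor progression yes/no).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    y = rng.integers(0, 2, size=50)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")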

Summary:
1. Register CT volumes
2. Segment tumors and ablation zones
3. Calculate and visualize margins and other features
4. Investigate relationships of features to outcomes of procedure

References

[1] Amanda McIntyre and Apar Kishor Ganti. Lung cancer: a global perspective. Journal of Surgical Oncology,
115(5):550–554, 2017.
[2] Constantinos T. Sofocleous, Panagiotis Sideras, Elena N. Petre, and Stephen B. Solomon. Ablation for the
management of pulmonary malignancies. American Journal of Roentgenology, 197(4), 2011.
[3] Thierry de Baere, Lambros Tselikas, David Woodrum, et al. Evaluating cryoablation of metastatic lung tumors
in patients: safety and efficacy, the ECLIPSE trial, interim analysis at 1 year. Journal of Thoracic Oncology,
10(10):1468–1474, 2015.
[4] Impact of ablative margin on local tumor progression after radiofrequency ablation for lung metastases
from colorectal carcinoma: supplementary analysis of a phase II trial (MLCSG-0802). Journal of Vascular
and Interventional Radiology, 2022.
[5] Marco Solbiati, Riccardo Muglia, S. Nahum Goldberg, et al. A novel software platform for volumetric
assessment of ablation completeness. International Journal of Hyperthermia, 36(1):336–342, 2019. PMID:
30729818.
[6] Raluca-Maria Sandu, Iwan Paolucci, Simeon J. S. Ruiter, et al. Volumetric quantitative ablation margins
for assessment of ablation completeness in thermal ablation of liver tumors. Frontiers in Oncology, 11,
2021.
[7] Christopher L. Brace, Teresa A. Diaz, J. Louis Hinshaw, and Fred T. Lee. Tissue contraction caused by
radiofrequency and microwave ablation: A laboratory study in liver and lung. Journal of Vascular and
Interventional Radiology, pages 1280–1286, Aug 2010.
[8] Kasper Marstal, Floris Berendsen, Marius Staring, and Stefan Klein. SimpleElastix: A user-friendly, multilingual
library for medical image registration. In Proceedings of the IEEE conference on computer vision
and pattern recognition workshops, pages 134–142, 2016.
[9] Guha Balakrishnan, Amy Zhao, Mert R Sabuncu, John Guttag, and Adrian V Dalca. VoxelMorph: a
learning framework for deformable medical image registration. IEEE transactions on medical imaging,
38(8):1788–1800, 2019.
[10] P. David Sonntag, J. Louis Hinshaw, Meghan G. Lubner, Christopher L. Brace, and Fred T. Lee. Thermal
ablation of lung tumors. Surgical Oncology Clinics of North America, 20(2):369–387, Aug 2011.

Detecting and Transcribing Annotations in Printed Auction Catalogs using Combined Object Detection and Handwritten Text Recognition

Image Segmentation and Detection of Imperfections for the Evaluation of Welding Seams using Neural Networks

Classification of Detector Artifacts in Angiographic Imaging using Neural Networks

Improving Instance Localization for Object Detection Pretraining