Index

Extraction of Treatment Margins from CT Scans for Evaluation of Lung Tumor Cryoablation

Thesis Description

Among all cancer types, lung cancer is responsible for the most deaths [1]. Cryoablation is a promising minimally
invasive method for treating lung cancer [2]. During percutaneous cryoablation, one or more probes are advanced
into the lung. Subsequently, a cycle of freezing and thawing using argon gas achieves cell death [3]. Using
computed tomography (CT) images, the radiologist plans the type, number, and placement of probes based
on the tumor location and the expected geometry of the ice ball forming around each probe, as specified by
the manufacturer.
The key quantity for assessing treatment success is the margin created by the ablation around the tumor,
which is compared with the desired safety margin. Margins of 2–10 mm are required for eradication, depending
on tumor origin and type [4]; the minimum safety margin depends on the extent of microscopic tumor extension
beyond the tumor visible on CT.
Determining the margin is not a straightforward task, since it requires comparing CT scans taken before the
procedure with CT scans taken weeks or months later. In addition, the ice ball forming during the procedure
obscures the tumor on subsequent CT scans. To date, radiologists evaluate treatment success in a binary yes/no
manner by mentally mapping 2D slices of pre- and post-procedure CT scans onto each other to estimate
treatment margins.
The goal of this thesis is to build an algorithm that evaluates treatment margins objectively and quantitatively,
leveraging readily available 3D CT imaging datasets. This algorithm may facilitate the early detection
of treatment failures in ex-post quality assurance and may ultimately also help estimate margins during the
procedure (e.g., to help decide for or against the addition of a probe).
From a technical point of view, the pre- and post-cryoablation 3D CT volumes of the lung have to be aligned
(registration task), and tumors and ablation zones have to be either given, i.e., manually annotated, or automatically
generated (segmentation task) in order to compute and visualize geometrical margins.
Similar tools [5, 6] have been developed for microwave ablation, which achieves cell death with high temperatures;
there, tissue distortion of the tumor and surrounding tissue due to dehydration makes registration of pre-
and post-ablation CT volumes of the lung difficult [7]. During cryoablation, dehydration does not occur
and tissue distortion is not noticeable. However, breathing is still expected to cause non-rigid deformation of
the volumes. Classical registration (e.g., SimpleElastix [8]) could be combined with unsupervised deep learning
approaches (e.g., VoxelMorph [9]) to achieve the desired registration.
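As a minimal sketch of the classical stage, the following assumes a SimpleElastix-enabled SimpleITK build
(e.g., the SimpleITK-SimpleElastix package); the file names are placeholders, and a learned model such as
VoxelMorph could subsequently refine the deformation.

# Sketch: rigid pre-alignment plus a B-spline stage for the
# breathing-induced non-rigid deformation (SimpleElastix).
import SimpleITK as sitk

fixed = sitk.ReadImage("post_ablation_ct.nii.gz")   # post-procedure scan
moving = sitk.ReadImage("pre_ablation_ct.nii.gz")   # pre-procedure scan

elastix = sitk.ElastixImageFilter()
elastix.SetFixedImage(fixed)
elastix.SetMovingImage(moving)

params = sitk.VectorOfParameterMap()
params.append(sitk.GetDefaultParameterMap("rigid"))
params.append(sitk.GetDefaultParameterMap("bspline"))
elastix.SetParameterMap(params)

elastix.Execute()
sitk.WriteImage(elastix.GetResultImage(), "pre_warped_to_post.nii.gz")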
To automatically segment tumors and ablation zones, a small convolutional neural network (CNN) could
be trained using the difference of the pre- and post-procedure scans as prior positional information. To ensure
correct and time-efficient segmentation, a quality assurance step could be introduced in which a radiologist
can correct the suggested segmentations.
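As an illustrative sketch only (architecture, class list, and tensor sizes are assumptions, not a tested design),
a small PyTorch CNN could take the registered pre scan, the post scan, and their difference as input channels,
giving the network the positional prior mentioned above:

import torch
import torch.nn as nn

class SmallSegNet(nn.Module):
    # Toy 3D CNN: 3 input channels (pre, post, difference),
    # per-voxel logits for background / tumor / ablation zone.
    def __init__(self, in_ch=3, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, n_classes, 1),
        )

    def forward(self, pre, post):
        # Stack pre, post, and difference along the channel axis.
        x = torch.stack([pre, post, post - pre], dim=1)  # (B, 3, D, H, W)
        return self.net(x)

model = SmallSegNet()
pre = torch.randn(1, 32, 64, 64)   # toy registered pre volume (B, D, H, W)
post = torch.randn(1, 32, 64, 64)  # toy post volume
logits = model(pre, post)          # (1, 3, 32, 64, 64)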
To calculate the geometrical margin around the tumor volume, its parallel-shifted surface is constructed
using a Euclidean distance transform. The volumes of the tumor and the ablation zone should be visualized,
highlighting areas that violate the targeted minimum margin and indicating proximity to blood vessels, which
can act as thermal sinks [10].
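A minimal sketch of the margin computation, assuming binary NumPy masks on the post-procedure grid and
the CT voxel spacing in mm (function and variable names are illustrative):

import numpy as np
from scipy import ndimage

def min_margin_mm(tumor, ablation, spacing):
    # Signed distance (mm) to the ablation-zone boundary:
    # positive inside the zone, negative outside, with
    # anisotropic voxel sizes handled via "sampling".
    dist_out = ndimage.distance_transform_edt(~ablation, sampling=spacing)
    dist_in = ndimage.distance_transform_edt(ablation, sampling=spacing)
    signed = np.where(ablation, dist_in, -dist_out)
    # Evaluate on the tumor surface; a negative minimum means
    # part of the tumor lies outside the ablation zone.
    surface = tumor & ~ndimage.binary_erosion(tumor)
    return signed[surface].min()

The parallel-shifted tumor surface at a target margin is then simply the corresponding level set of the tumor
mask's own distance map, which also yields the voxels to highlight as margin violations.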
To analyze the connections between clinical outcomes and pre/post CT imaging, applying end-to-end deep
learning would be the most desirable approach. However, since the amount of both labeled and unlabeled data
is very limited (approximately 50 labeled / 300 unlabeled cases), machine learning methods could instead be
applied to medically sensible features (e.g., margins) derived from the tumor/ablation-zone geometries.
Alternatively, a small CNN could be trained on these geometries directly instead of on the full scans.
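Given the small cohort, a cross-validated classical model on such features is a plausible baseline; in the sketch
below the feature matrix, outcome labels, and sample counts are toy stand-ins, not project data:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))      # e.g. min margin, tumor volume, zone volume
y = rng.integers(0, 2, size=50)   # e.g. local tumor progression yes/no

clf = make_pipeline(StandardScaler(), LogisticRegression())
# Cross-validation is essential at this sample size.
print(cross_val_score(clf, X, y, cv=5).mean())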

Summary:
1. Register CT volumes
2. Segment tumors and ablation zones
3. Calculate and visualize margins and other features
4. Investigate relationships of features to outcomes of procedure

References

[1] Amanda McIntyre and Apar Kishor Ganti. Lung cancer: a global perspective. Journal of Surgical Oncology,
115(5):550–554, 2017.
[2] Constantinos T. Sofocleous, Panagiotis Sideras, Elena N. Petre, and Stephen B. Solomon. Ablation for the
management of pulmonary malignancies. American Journal of Roentgenology, 197(4), 2011.
[3] Thierry de Baère, Lambros Tselikas, David Woodrum, et al. Evaluating cryoablation of metastatic lung
tumors in patients - safety and efficacy: the ECLIPSE trial - interim analysis at 1 year. Journal of Thoracic
Oncology, 10(10):1468–1474, 2015.
[4] Impact of ablative margin on local tumor progression after radiofrequency ablation for lung metastases
from colorectal carcinoma: supplementary analysis of a Phase II trial (MLCSG-0802). Journal of Vascular
and Interventional Radiology, 2022.
[5] Marco Solbiati, Riccardo Muglia, S. Nahum Goldberg, et al. A novel software platform for volumetric
assessment of ablation completeness. International Journal of Hyperthermia, 36(1):336–342, 2019. PMID:
30729818.
[6] Raluca-Maria Sandu, Iwan Paolucci, Simeon J. S. Ruiter, et al. Volumetric quantitative ablation margins
for assessment of ablation completeness in thermal ablation of liver tumors. Frontiers in Oncology, 11,
2021.
[7] Christopher L. Brace, Teresa A. Diaz, J. Louis Hinshaw, and Fred T. Lee. Tissue contraction caused by
radiofrequency and microwave ablation: a laboratory study in liver and lung. Journal of Vascular and
Interventional Radiology, pages 1280–1286, August 2010.
[8] Kasper Marstal, Floris Berendsen, Marius Staring, and Stefan Klein. SimpleElastix: A user-friendly,
multi-lingual library for medical image registration. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition Workshops, pages 134–142, 2016.
[9] Guha Balakrishnan, Amy Zhao, Mert R. Sabuncu, John Guttag, and Adrian V. Dalca. VoxelMorph: A
learning framework for deformable medical image registration. IEEE Transactions on Medical Imaging,
38(8):1788–1800, 2019.
[10] P. David Sonntag, J. Louis Hinshaw, Meghan G. Lubner, Christopher L. Brace, and Fred T. Lee. Thermal
ablation of lung tumors. Surgical Oncology Clinics of North America, 20(2):369–387, 2011.

Detecting and Transcribing Annotations in Printed Auction Catalogs using Combined Object Detection and Handwritten Text Recognition

Image Segmentation and Detection of Imperfections for the Evaluation of Welding Seams using Neural Networks

Classification of Detector Artifacts in Angiographic Imaging using Neural Networks

Super-short Scans in Bone XRM Acquisitions

Sparse-angle CT Super Resolution using Known Operators

Convolutional LSTM for Multi-organ Segmentation on CT and MR Images in Abdominal Region

Improving Instance Localization for Object Detection Pretraining

Detecting workflow-states of an MR examination using semantic segmentation of synthetic 3D point-clouds

Future medical scanners will become more autonomous and situation-aware (scene understanding). Intelligent algorithms for scene understanding need training data and, if possible, this data needs to be available early in the development process. Synthetic data can be valuable for this purpose.
The data used in this thesis is generated using Augmented Reality (AR) glasses and RGB-D sensors. The user of the AR glasses runs through a virtual MR examination while being recorded by a system of RGB-D sensors. The system that the user operates in AR is a digital twin of a real MR scanner [1]. The user manipulates coils, cushions, headphones, and any other required accessories and interacts with an avatar of a patient. Virtual 3D point clouds are generated from the AR scene, and real 3D point clouds are recorded with the RGB-D sensors. These point cloud data sets are later fused, resulting in a synthetic 3D depth data set of the whole MR examination scene.
The aim of this thesis is to develop and train algorithms for scene understanding and analysis based on these synthetic data sets. First, semantic segmentation of the synthetic 3D point clouds using deep learning techniques shall be applied, and scene descriptors shall be designed and developed. The goal is to detect the elements of the provided scenes (e.g., operator, patient, magnet, coils) in synthetic and real-world data. The network will be trained using synthetic 3D data (3D point clouds); both synthetic 3D data and real 3D data from a real system will be used for testing the approach. As a reference, the work of Nie et al. [2] will be used.
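As an illustrative sketch only (layer sizes and the class list are assumptions, not the thesis architecture), a minimal PointNet-style network for per-point semantic labels could look as follows:

import torch
import torch.nn as nn

class PointSegNet(nn.Module):
    # Toy per-point segmentation: local point features plus a
    # max-pooled global scene code, classified per point.
    def __init__(self, n_classes=5):  # e.g. operator, patient, magnet, coil, other
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Conv1d(128 + 128, 128, 1), nn.ReLU(),
            nn.Conv1d(128, n_classes, 1),
        )

    def forward(self, xyz):            # xyz: (B, 3, N) point coordinates
        feat = self.local(xyz)         # (B, 128, N) per-point features
        glob = feat.max(dim=2, keepdim=True).values   # global scene code
        glob = glob.expand(-1, -1, feat.shape[2])
        return self.head(torch.cat([feat, glob], dim=1))  # (B, classes, N)

logits = PointSegNet()(torch.randn(2, 3, 1024))  # toy synthetic point clouds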
This thesis shall furthermore discuss how workflow-state detectors can be designed to detect the different states of an MR examination (e.g., idle, patient preparation, coil fixation) based on the results of the semantic segmentation and scene description.
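Such a detector could start as simple rules over the per-point label counts; the states, class names, and thresholds below are illustrative assumptions only:

import numpy as np

def detect_state(labels, names=("operator", "patient", "coil")):
    # labels: per-point class indices from the segmentation network.
    counts = {n: int(np.sum(labels == i)) for i, n in enumerate(names)}
    if counts["patient"] == 0:
        return "idle"
    if counts["coil"] > 0 and counts["operator"] > 0:
        return "coil fixation"
    return "patient preparation"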
Finally, a discussion about the usefulness, advantages, and challenges of synthetic data will be provided.

[1] MAGNETOM Free.Max, Siemens Healthineers. https://www.siemens-healthineers.com/magnetic-resonance-imaging/high-v-mri/magnetom-free-max

[2] Yinyu Nie, Ji Hou, Xiaoguang Han, and Matthias Nießner. RfD-Net: Point scene understanding by semantic instance reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4606–4616, 2021. doi:10.1109/CVPR46437.2021.00458.

Automatic Pathological Speech Intelligibility Assessment Using Speech Disentanglement Without Bottleneck Information