Index

3D Segmentation of metal objects based on Cone-Beam CT Projection Images for Metal Artefact Removal

 

Computed Tomography (CT) imaging from intraoperative mobile C-arms is commonly used to validate tool and implant placement during surgery. As the majority of tools and implants are composed of metal, physical effects such as beam hardening, photon scattering, and high absorption induce artefacts in the volume domain. These metal artefacts arise from a loss of signal in the projection images that is not accounted for in standard reconstruction algorithms. Metal Artefact Reduction (MAR) techniques rely on an accurate segmentation of the metal volume [1], [2]. This first segmentation step is commonly based on thresholding in the volume domain, which makes it prone to errors induced by the metal artefacts themselves. This thesis investigates an end-to-end trainable segmentation model that produces 3D metal masks from the 2D projection data of a 3D cone-beam scan of a Cios Spin system. The robustness against metal artefacts shall be evaluated and compared to common volume-domain metal segmentation approaches.
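For reference, the common volume-domain baseline that this thesis competes against can be sketched as a simple Hounsfield-unit threshold; the threshold value, minimum component handling, and toy volume below are illustrative assumptions, not part of any specific MAR pipeline:

```python
import numpy as np

def metal_mask(volume_hu, threshold_hu=3000.0):
    # Voxels above the threshold are treated as metal. The threshold is an
    # illustrative value; real systems tune it per scanner and protocol,
    # and streak artefacts around implants make this step unreliable.
    return volume_hu > threshold_hu

# Synthetic example: a small metal cube inside a soft-tissue volume.
vol = np.full((32, 32, 32), 40.0)   # soft tissue, roughly 40 HU
vol[10:14, 10:14, 10:14] = 8000.0   # metal implant
mask = metal_mask(vol)
```

In artefact-corrupted reconstructions, dark and bright streaks push voxel values across any fixed threshold, which is exactly the failure mode the projection-domain approach is meant to avoid.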

 

Low-Dose Helical CBCT Denoising by Domain Filtering with Deep Reinforcement Learning, Improved by a Neural Ordinary Differential Equations Approach

In previous research, we developed a reinforcement-learning-based method for denoising cone-beam CT. The method applies denoisers in both the sinogram and the reconstructed-image domain; the denoisers are bilateral filters whose sigma parameters are tuned by a convolutional agent.
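The bilateral filtering step can be sketched as follows; the brute-force implementation and the fixed parameter values below are illustrative assumptions (in the method above, the sigmas are chosen by the trained agent rather than fixed):

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Brute-force bilateral filter for a 2-D image.

    Each output pixel is a weighted average over a local window; the
    weights combine spatial closeness (sigma_s) and intensity similarity
    (sigma_r), the two parameters the convolutional agent would tune.
    """
    img = np.asarray(img, dtype=np.float64)
    padded = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma_s ** 2))
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            similarity = np.exp(
                -(window - img[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            weights = spatial * similarity
            out[i, j] = (weights * window).sum() / weights.sum()
    return out

# Flat patch with additive noise; filtering should reduce the noise level.
rng = np.random.default_rng(0)
noisy = 0.5 + rng.normal(0.0, 0.05, size=(24, 24))
smoothed = bilateral_filter(noisy)
```

The intensity-similarity term is what lets the filter smooth noise while preserving edges, which is why per-region sigma tuning matters for sharpness-dependent CT noise.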

Recent research has shown that neural ODEs can improve the speed of convergence of neural network training. Neural ODEs have been applied to tasks that can be modelled by differential equations, such as fluid mechanics, and have also been extended to classical deep learning tasks such as image segmentation.
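The core idea can be sketched in a few lines; the fixed-step Euler solver, the tanh dynamics, and the toy dimensions below are illustrative assumptions (a real implementation, e.g. with torchdiffeq, would use adaptive solvers and the adjoint method for gradients):

```python
import numpy as np

def neural_ode_forward(z0, params, t1=1.0, steps=16):
    # Integrate dz/dt = tanh(W z + b) from t = 0 to t = t1 with fixed-step
    # Euler. Every step reuses the same weights (W, b), which is where the
    # parameter saving over a deep residual stack comes from.
    W, b = params
    z = np.asarray(z0, dtype=np.float64)
    dt = t1 / steps
    for _ in range(steps):
        z = z + dt * np.tanh(W @ z + b)
    return z

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, size=(4, 4))
b = rng.normal(0.0, 0.5, size=4)
z1 = neural_ode_forward(np.ones(4), (W, b))
```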

In this thesis we aim to complete the following tasks:

  1. Experiment with different reconstruction kernels (e.g., B40, B70) to observe the effect of sharpness-dependent noise.
  2. Implement a neural ODE to speed up reinforcement-learning convergence and reduce the parameter count.
  3. Implement a data-consistency reward to ensure correct reconstruction and data-consistent denoising.
  4. Experiment with deep-learned quality metrics as additional reward functions for parameter tuning.

As a dataset, we will use the Mayo Clinic TCIA dataset to test the quality of our denoising algorithms. Quality can be compared against standard-dose images using PSNR and SSIM, and can be calculated reference-free using the IRQM. If time permits, we can use deep model observers to assess low-contrast preservation.
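The reference-based part of the evaluation is straightforward to sketch; the PSNR implementation and synthetic images below are illustrative (in practice one would use a library such as scikit-image, which also provides SSIM):

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio between a standard-dose reference and a
    denoised low-dose image, in dB (higher is better)."""
    ref = np.asarray(reference, dtype=np.float64)
    tst = np.asarray(test, dtype=np.float64)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - tst) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# Synthetic check: a better denoising result should score a higher PSNR.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, size=(64, 64))
noisy = ref + rng.normal(0.0, 0.05, size=ref.shape)
denoised = ref + rng.normal(0.0, 0.01, size=ref.shape)
```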

Requirements:

  • Knowledge of CT reconstruction techniques. Knowledge of the ASTRA toolbox is a plus.
  • Understanding of reinforcement learning
  • Experience with PyTorch for developing neural networks
  • Experience with image processing

Interpolation of deformation field for brain-shift compensation using Gaussian Process

Brain shift is the change in the position and shape of the brain during a neurosurgical procedure, caused by the additional space available after opening the skull. This intraoperative soft-tissue deformation limits the use of neuroanatomical overlays produced prior to surgery. Consequently, intraoperative image updates are necessary to compensate for brain shift.

Comprehensive reviews of different aspects of intraoperative brain-shift compensation can be found in [1][2]. Recently, feature-based registration frameworks using SIFT features [3] or vessel centerlines [4] have been proposed to update the preoperative image in a deformable fashion, whereas point-matching algorithms such as coherent point drift [5] or a hybrid mixture model [4] are used to establish point correspondences between the source and target feature point sets. To estimate a dense deformation field from these point correspondences, B-spline [6] and thin-plate spline [7] interpolation techniques are commonly used.

The Gaussian process (GP) [8] is a powerful machine-learning tool that has been applied to image denoising, interpolation, and segmentation. In this work, we aim to apply different GP kernels to brain-shift compensation. Furthermore, the GP-based interpolation of the deformation field is compared with state-of-the-art methods.

In detail, this thesis includes the following aspects:

  • Literature review of state-of-the-art methods for brain-shift compensation using feature-based algorithms
  • Literature review of state-of-the-art methods for the interpolation of deformation/vector fields
  • Introduction of the Gaussian process (GP)
  • Integration of a GP-based interpolation technique into a feature-based brain-shift compensation framework
    • Estimation of a dense deformation field from a sparse deformation field using a GP
    • Implementation of at least three different GP kernels
    • Comparison of the performance of GP-based and state-of-the-art interpolation techniques on various datasets, including synthetic, phantom, and clinical data, with respect to accuracy, usability, and run time
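The central step, estimating a dense field from sparse correspondences with a GP, can be sketched in plain NumPy; the RBF kernel, its length scale, and the toy landmark set below are illustrative assumptions, and each displacement component is regressed independently:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=10.0):
    # Squared-exponential (RBF) kernel between two point sets (n x d, m x d).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_interpolate(points, displacements, query, noise=1e-6):
    """GP regression with zero mean: predict displacements at `query`
    locations from sparse correspondences (`points`, `displacements`)."""
    K = rbf_kernel(points, points) + noise * np.eye(len(points))
    K_star = rbf_kernel(query, points)
    # Posterior mean: K_* K^{-1} y, applied column-wise to each component.
    return K_star @ np.linalg.solve(K, displacements)

# Toy 2-D case: a uniform 1 mm shift in x known at four landmarks.
pts = np.array([[0.0, 0.0], [0.0, 20.0], [20.0, 0.0], [20.0, 20.0]])
disp = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
dense = gp_interpolate(pts, disp, np.array([[10.0, 10.0]]))
```

Swapping `rbf_kernel` for Matérn or other kernels gives the "at least three different GP kernels" comparison above without changing the regression code.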

[1] Bayer, S., Maier, A., Ostermeier, M., & Fahrig, R. (2017). Intraoperative Imaging Modalities and Compensation for Brain Shift in Tumor Resection Surgery. International Journal of Biomedical Imaging, 2017 .

[2] I. J. Gerard, M. Kersten-Oertel, K. Petrecca, D. Sirhan, J. A. Hall, and D. L. Collins, “Brain shift in neuronavigation of brain tumors: a review,” Medical Image Analysis, vol. 35, pp. 403–420, 2017.

[3] Luo J. et al. (2018) A Feature-Driven Active Framework for Ultrasound-Based Brain Shift Compensation. In: Frangi A., Schnabel J., Davatzikos C., Alberola-López C., Fichtinger G. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. MICCAI 2018. Lecture Notes in Computer Science, vol 11073. Springer, Cham

[4] Bayer, S., Zhai, Z., Strumia, M., Tong, X. G., Gao, Y., Staring, M., Stoe, B., Fahrig, R., Arya, N., Maier, A., Ravikumar, N.: Registration of vascular structures using a hybrid mixture model. International Journal of Computer Assisted Radiology and Surgery, June 2019.

[5] Myronenko, A., Song, X.: Point set registration: Coherent point drift. IEEE Trans. Pattern Anal. Mach. Intell. 32(12), 2262–2275 (2010).

[6] D. Rueckert, L. I. Sonoda, C. Hayes, D. L. G. Hill, M. O. Leach and D. J. Hawkes, “Nonrigid registration using free-form deformations: application to breast MR images,” in IEEE Transactions on Medical Imaging, vol. 18, no. 8, pp. 712-721, Aug. 1999.

[7] F. L. Bookstein, “Principal warps: thin-plate splines and the decomposition of deformations,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 6, pp. 567-585, June 1989.

[8] C. E. Rasmussen & C. K. I. Williams, Gaussian Processes for Machine Learning, the MIT Press, 2006

Covert Channel Vulnerabilities of Online Marketplaces – Impact on Antitrust Laws

Antitrust laws (also referred to as competition laws) are developed to promote vigorous competition and to protect consumers from predatory business practices. The paramount objectives of antitrust law are to guarantee the working mechanisms of markets and to ensure fair competition. A prominent example of an infringement of antitrust law is illegal price fixing. By definition, this is an agreement among competitors that stabilizes prices or other competitive terms, thereby violating the principle that prices are established through free-market forces. A typical attribute of illegal price-fixing practice is provable communication (written or oral) between human market participants.

However, in the era of digitalization and e-commerce, the detection of this illegal practice faces new challenges, since the price-establishing mechanism is partially or fully automated (i.e., automated dynamic pricing) and the market participants are not necessarily human beings. Consequently, new technological opportunities are available to hide illegal pricing policies. One possible scenario/risk is the use of so-called covert channels to transfer information that facilitates illegal price fixing.

A communication channel is called covert if it was not originally designed for communication [1]. Generally, covert channels can be categorized into two groups, namely resource (storage) channels and timing channels. To date, they are known as one of the most challenging phenomena in cyber security, and several publications have demonstrated applications that use covert channels to transfer critical information [2][3]. The goal of this thesis is therefore to investigate the vulnerability of online marketplaces to illegal price-fixing practices under covert channel attacks. The following aspects have to be included in this work:

  • Literature review of the state of the art with regard to covert channels,
  • Simulation of a price-fixing scenario on an e-commerce marketplace that utilizes a covert channel to transfer information,
  • Comparison of covert channels with conventional communication channels,
  • Derivation of implications and consequences for antitrust law.
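A toy model of a timing channel, the second category above, illustrates the simulation task; the delay values, threshold, and the framing as "price updates" are illustrative assumptions (a real channel would ride on actual network messages and contend with timing noise):

```python
def encode(bits, t0=0.0, short=0.05, long_=0.2):
    """Timing covert channel (sender side, simulated): each bit is hidden
    in the gap between two innocuous messages, e.g. price updates.
    Returns the send timestamps an observer of the channel would see."""
    t, stamps = t0, [t0]
    for b in bits:
        t += long_ if b else short
        stamps.append(t)
    return stamps

def decode(stamps, threshold=0.12):
    """Receiver side: recover the bits from inter-message gaps."""
    gaps = [b - a for a, b in zip(stamps, stamps[1:])]
    return [int(g > threshold) for g in gaps]

message = [1, 0, 1, 1, 0]
recovered = decode(encode(message))
```

Note that the visible traffic content is entirely innocuous; only the timing carries the colluding information, which is what makes such channels hard to detect and legally attribute.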

[1] Hans-Georg Eßer, Felix C. Freiling. Kapazitätsmessung eines verdeckten Zeitkanals über HTTP, Univ. Mannheim, Technischer Bericht TR-2005-10, November 2005

[2] Freiling F.C., Schinzel S. (2011) Detecting Hidden Storage Side Channel Vulnerabilities in Networked Applications. In: Camenisch J., Fischer-Hübner S., Murayama Y., Portmann A., Rieder C. (eds) Future Challenges in Security and Privacy for Academia and Industry. SEC 2011. IFIP Advances in Information and Communication Technology, vol 354. Springer, Berlin, Heidelberg.

[3] Davide B. Bartolini, Philipp Miedl, and Lothar Thiele. 2016. On the capacity of thermal covert channels in multicores. In Proceedings of the Eleventh EuroSys ’16. Association for Computing Machinery, New York, NY, USA, Article 24, 1–16.

Restoring Lung CT Images from Photographs for AI Applications

Motivation: Interstitial lung diseases (ILD) describe a group of acute or chronic diseases of the interstitium or the alveoli [1]. The diagnosis of ILD is very challenging since there are more than 200 different diseases, each of them occurring only rarely. The modality of choice for diagnosing ILD is computed tomography (CT), even though the different diseases cause similar or sometimes even identical imaging signs in the lung. Therefore, the results of the CT scan have to be combined with additional information such as the patient history, the symptoms, and the laboratory values [2]. Approaches to assist doctors by including machine-learning algorithms such as a similar patient search (SPS) already exist [3]. The idea is to develop an app that takes a photograph of the CT scan and processes the image in order to start an SPS. The main focus of this work will be on the processing of the photograph in order to restore the CT properties of the original scan.
Methods: Taking photographs of a CT scan on a screen leads to a loss of the Hounsfield units and introduces artifacts such as moiré patterns, light and mirroring artifacts, and imbalanced illumination. To restore the lung CT image from a photograph, a traditional approach using filters will be investigated in contrast to a deep-learning approach. The new approach subtracts the screen pixel array in order to avoid moiré patterns, removes the other most critical artifacts from the photograph, and restores the lung CT window by converting the pixel values of the photograph back into Hounsfield units. The processed photograph can then be sent to the SPS tool in order to help doctors find the right diagnosis.
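The final conversion step amounts to inverting the display window/level mapping; the 8-bit grey-value assumption and the lung window setting (C = -600 HU, W = 1500 HU) below are illustrative, and real photographs would first need the artifact corrections described above:

```python
import numpy as np

def pixels_to_hu(pixels, window_center=-600.0, window_width=1500.0):
    """Map 8-bit grey values from a photographed screen back to Hounsfield
    units by inverting a linear window/level display mapping. The lung
    window (C=-600, W=1500) is an assumed, typical setting."""
    frac = np.asarray(pixels, dtype=np.float64) / 255.0
    return (frac - 0.5) * window_width + window_center

# Black, mid-grey, and white pixels mapped back into the lung window.
hu = pixels_to_hu([0, 128, 255])
```

Values outside the displayed window are clipped on screen and therefore cannot be recovered, which is one inherent limitation of the photograph-based workflow.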
The Master’s thesis covers the following aspects:
1. Identification of the most critical artifacts appearing in photographs
2. Investigation of traditional and deep-learning-based approaches for artifact reduction
3. Determination of reading-room conditions
4. Determination of an adequate framework and test criteria
5. Implementation of an image processing algorithm based on a literature research and the identified artifacts
6. Evaluation of the proposed method
Supervisors: Dr. Daniel Stromer, Dr. Christian Tietjen, Dr. Christoph Speier,
Dr. med. Johannes Haubold, Prof. Dr.-Ing. habil. Andreas Maier

References
[1] B. Schönhofer and M. Kreuter, “Interstitielle Lungenerkrankungen,” in Referenz Intensivmedizin (G. Marx, K. Zacharowski, and S. Kluge, eds.), pp. 287–293, Stuttgart: Georg Thieme Verlag, 2020.
[2] M. Kreuter, U. Costabel, F. Herth, and D. Kirsten, eds., Seltene Lungenerkrankungen. Berlin and Heidelberg: Springer, 2015.
[3] Siemens Healthcare GmbH, “Similar patient search: syngo.via: VA20A,” 2021.

Automation of flow cytometry diagnostics workflow for leukemia diagnostics by leveraging machine learning

Background: Flow cytometry (FCM) is a technique for measuring the physical and chemical properties of individual cells suspended in a fluid stream. FCM is widely used in immunology and in many clinical and biomedical laboratories for the diagnosis, subclassification, and post-treatment monitoring of blood cancers (leukemias). Generally, a single FCM session produces multidimensional readouts of 10,000 to 1,000,000 cells with 4 to 12 parameters each.
The conventional diagnostic workflow involves visualizing the FCM dataset in a series of 2-D scatter plots in which experts evaluate the characteristics of the cell populations. Based on this inspection, the pathologists identify a sub-population of cells (gating) and quantify it for further analysis/diagnosis.
Motivation: The conventional analytic process is performed manually on a sequence of two-dimensional scatter plots. Repeating this process on multiple datasets is very time-consuming and labour-intensive. As a result, clinical decisions can differ depending on the individual who performs the analysis, which causes further challenges.
Approach: Our approach is to automate these conventional workflows by leveraging machine-learning techniques, thereby supporting pathologists/clinicians in their daily routine or research work. The main objective of this thesis is the automated identification of small amounts of residual atypical cells in patients with leukemia (minimal residual disease, MRD).
The following is an overview of the tasks involved in the project:
1. Data selection: finding an unsupervised algorithm to search for “islands” that contain mainly events from the same sample but only a few events from different samples.
2. Dimensionality reduction [1]: implementing further algorithms (e.g., UMAP) and validating their effect against the existing t-SNE algorithm.
3. Optimization: optimizing t-SNE based on the opt-SNE algorithm [2].
4. Evaluation and testing.
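The embedding step in task 2 can be illustrated on synthetic data; PCA is used below purely as a simple linear stand-in for t-SNE/UMAP, and the two-population "FCM" readout with 8 parameters is an illustrative assumption:

```python
import numpy as np

def pca_embed(x, n_components=2):
    """Project high-dimensional FCM events into 2-D for scatter-plot style
    inspection. PCA is a linear stand-in here; the thesis would use
    non-linear methods such as t-SNE or UMAP."""
    centered = x - x.mean(axis=0)
    # Eigen-decomposition of the covariance gives the principal axes
    # (np.linalg.eigh returns eigenvalues in ascending order).
    _, vecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    return centered @ vecs[:, ::-1][:, :n_components]

# Synthetic readout: 1000 events x 8 parameters, two cell populations.
rng = np.random.default_rng(0)
pop_a = rng.normal(0.0, 1.0, size=(500, 8))
pop_b = rng.normal(4.0, 1.0, size=(500, 8))
embedding = pca_embed(np.vstack([pop_a, pop_b]))
```

Well-separated populations appear as distinct clusters in the 2-D embedding, which is what the unsupervised "island" search in task 1 would operate on.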
References
[1] Y. Saeys, S. Van Gassen, and B. Lambrecht, “Computational flow cytometry: Helping to make sense of high-dimensional immunology data,” Nature Reviews Immunology, vol. 16, June 2016.
[2] A. C. Belkina, C. O. Ciccolella, R. Anno, R. Halpert, J. Spidlen, and J. E. Snyder-Cappione, “Automated optimized parameters for t-distributed stochastic neighbor embedding improve visualization and allow analysis of large datasets,” bioRxiv, 2019. [Online]. Available: https://www.biorxiv.org/content/early/2019/05/17/451690

Prostate Lesion Detection using Multi-Parametric Magnetic Resonance Imaging

Lung Nodule Classification in CT Images using Deep Learning

Development of a Fast Biomechanical Cardiac Model for the Treatment Planning of Dilated Cardiomyopathy

Automatic Deep Learning Lung Lesion Characterization with Combined Application of State-of-the-Art Transfer Learning and Image Augmentation Techniques