Index
Diffeomorphic MRI Image Registration using Deep Learning
State-of-the-art deformable image registration approaches achieve impressive results and are commonly used in diverse image processing applications. However, these approaches are computationally expensive even on GPUs [1], since they solve an optimization problem for each image pair during registration [2]. Most learning-based methods either require labeled data or do not guarantee a diffeomorphic registration, i.e., an invertible deformation field [1]. Dalca et al. presented an unsupervised deep-learning framework for diffeomorphic image registration named VoxelMorph in [1].
In this thesis, the network described in [1] will be implemented and trained on cardiac magnetic resonance images to build an application for fast diffeomorphic image registration. The results will be compared to state-of-the-art diffeomorphic image registration methods. Additionally, the method will be evaluated by comparing segmented areas as well as landmark locations of co-registered images. Furthermore, the method in [1] will be extended to a one-to-many registration method using the approach in [3], addressing the need for motion estimation of the anatomy of interest in increasingly available dynamic imaging data [3]. The data used in this thesis will be provided by Siemens Healthineers. The implementation will be done using an open-source framework such as PyTorch [4].
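To illustrate the diffeomorphic component, [1] predicts a stationary velocity field and integrates it with scaling and squaring to obtain an invertible deformation. The following minimal 2-D PyTorch sketch shows only this integration step; the function names, the displacement convention, and the number of integration steps are illustrative assumptions and not taken from the VoxelMorph code:

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp a batch of 2-D images (N, C, H, W) with displacement fields (N, 2, H, W)."""
    _, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=image.dtype, device=image.device),
        torch.arange(w, dtype=image.dtype, device=image.device),
        indexing="ij")
    # Absolute sampling positions in pixel coordinates, normalized to [-1, 1]
    x = 2.0 * (xs + flow[:, 0]) / (w - 1) - 1.0
    y = 2.0 * (ys + flow[:, 1]) / (h - 1) - 1.0
    grid = torch.stack((x, y), dim=-1)            # (N, H, W, 2), x coordinate first
    return F.grid_sample(image, grid, align_corners=True)

def exp_velocity(velocity, steps=7):
    """Integrate a stationary velocity field by scaling and squaring: phi = exp(v)."""
    flow = velocity / (2 ** steps)
    for _ in range(steps):
        flow = flow + warp(flow, flow)            # compose the field with itself
    return flow
```

During training, the moving image warped with the resulting field is compared to the fixed image by a similarity loss, combined with a smoothness penalty on the predicted velocity field.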
The thesis will include the following points:
• Literature research on state-of-the-art methods for diffeomorphic image registration and one-to-many registration
• Implementing a neural network for diffeomorphic image registration and extending it to one-to-many registration
• Comparison of the results with state-of-the-art image registration methods
[1] Balakrishnan, G., Zhao, A., Sabuncu, M. R., Guttag, J. V. & Dalca, A. V. VoxelMorph: A Learning Framework for
Deformable Medical Image Registration. CoRR abs/1809.05231. arXiv: 1809.05231. http://arxiv.org/abs/1809.05231 (2018).
[2] Ashburner, J. A fast diffeomorphic image registration algorithm. NeuroImage 38, 95–113. ISSN: 1053-8119. http://www.sciencedirect.com/science/article/pii/S1053811907005848 (2007).
[3] Metz, C., Klein, S., Schaap, M., van Walsum, T. & Niessen, W. Nonrigid registration of dynamic medical imaging data using nD+t B-splines and a groupwise optimization approach. Medical Image Analysis 15, 238–249. ISSN: 1361-8415. http://www.sciencedirect.com/science/article/pii/S1361841510001155 (2011).
[4] Paszke, A. et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. CoRR abs/1912.01703. arXiv: 1912.01703. http://arxiv.org/abs/1912.01703 (2019).
Content-based Image Retrieval based on compositional elements for art historical images
Absorption Image Correction in X-ray Talbot-Lau Interferometry for Reconstruction
X-ray Phase-Contrast Imaging (PCI) is an imaging technique that measures the refraction of X-rays caused by an object. There are several ways to realize PCI, such as interferometric and analyzer-based methods [3]. In contrast to X-ray absorption imaging, the phase image provides high soft-tissue contrast.
A grating-based interferometer enables measuring an X-ray absorption image, a differential phase image, and a dark-field image [2, p. 192-205]. Felsner et al. proposed the integration of a Talbot-Lau Interferometer (TLI) into an existing clinical CT system [1]. Three gratings are mounted between the X-ray tube and the detector: two in front of the object and one behind it (see Fig. 1). For various reasons, it is currently not possible to install gratings with a diameter of more than a few centimeters [1]. As a consequence, a phase-contrast image can only be acquired for a small area.
Nevertheless, the full detector area can be used to capture the absorption image. However, the absorption image is influenced by the gratings, as they cause an inhomogeneous exposure of the X-ray detector.
In addition, the intensity values change from projection to projection: the X-ray tube, detector, and gratings rotate around the object during the scan, so that, depending on their position, parts of the object are covered by grating G1 during some parts of the rotation but not during others.
It is expected that the part of the absorption image covered by the gratings differs from the rest of the image in its intensity values. Furthermore, a sudden change in the intensity values can be observed at the edge of the grating. This may lead to artifacts in the 3-D reconstruction.
In this work, we will investigate the anticipated artifacts in the reconstruction and implement (at least) one correction algorithm. Furthermore, the reconstruction results with and without a correction algorithm will be evaluated using simulated and/or real data.
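A very simple baseline for such a correction, mentioned here only for illustration and not the algorithm to be developed in this work, would be to shift the intensities of the grating-covered region so that they match the surrounding area at the region border. A sketch under this assumption, with hypothetical function and variable names:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def correct_grating_shadow(projection, covered_mask, margin=5):
    """Naive offset correction for the grating-covered part of an absorption projection.

    projection   : 2-D projection image (line integrals)
    covered_mask : boolean mask of the area covered by the gratings
    margin       : width in pixels of the border strips used as reference
    """
    # Thin strips just outside and just inside the covered region
    outside_strip = binary_dilation(covered_mask, iterations=margin) & ~covered_mask
    inside_strip = covered_mask & binary_dilation(~covered_mask, iterations=margin)

    # Shift the covered region so its border matches the surrounding intensities
    offset = projection[outside_strip].mean() - projection[inside_strip].mean()
    corrected = projection.copy()
    corrected[covered_mask] = corrected[covered_mask] + offset
    return corrected
```

Such a global offset ignores the spatially varying exposure caused by the gratings, which is precisely what the correction algorithm in this work will have to address.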
References:
[1] L. Felsner, M. Berger, S. Kaeppler, J. Bopp, V. Ludwig, T. Weber, G. Pelzer, T. Michel, A. Maier, G. Anton, and C. Riess. Phase-sensitive region-of-interest computed tomography. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 137–144, Cham, 2018. Springer.
[2] A. Maier, S. Steidl, V. Christlein, and J. Hornegger. Medical Imaging Systems: An Introductory Guide, volume 11111. Springer, Cham, 2018.
[3] F. Pfeiffer, T. Weitkamp, O. Bunk, and C. David. Phase retrieval and differential phase-contrast imaging with low-brilliance X-ray sources. Nature Physics, 2(4):258–261, 2006.
Truncation-correction Method for X-ray Dark-field Computed Tomography
Grating-based imaging provides three types of images: an absorption, a differential phase, and a dark-field image. The dark-field image provides structural information about the specimen at the micrometer and sub-micrometer scale. A dark-field image can be measured with an X-ray grating interferometer, for example the Talbot-Lau interferometer, which consists of three gratings. Due to the small size of the gratings, truncation arises in the projection images. This is an issue, since it leads to artifacts in the reconstruction.
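For context, the three signals are typically extracted per pixel from phase-stepping scans acquired with and without the specimen: the mean of the stepping curve gives the absorption, the phase of its first harmonic the differential phase, and the reduction in visibility the dark-field signal. A minimal NumPy sketch of this standard FFT-based analysis (array names are illustrative):

```python
import numpy as np

def phase_stepping_signals(sample_steps, reference_steps):
    """Extract the three grating-interferometer signals from phase-stepping scans.

    sample_steps, reference_steps : arrays of shape (n_steps, H, W)
    Returns absorption, differential phase, and dark-field images.
    """
    def first_harmonic(steps):
        spec = np.fft.fft(steps, axis=0)
        mean = np.abs(spec[0]) / steps.shape[0]          # offset of the stepping curve
        amp = 2.0 * np.abs(spec[1]) / steps.shape[0]     # amplitude of the first harmonic
        phase = np.angle(spec[1])                        # phase of the first harmonic
        return mean, amp, phase

    m_s, a_s, p_s = first_harmonic(sample_steps)
    m_r, a_r, p_r = first_harmonic(reference_steps)

    absorption = -np.log(m_s / m_r)                      # transmission -> line integral
    diff_phase = np.angle(np.exp(1j * (p_s - p_r)))      # wrapped to (-pi, pi]
    dark_field = (a_s / m_s) / (a_r / m_r)               # visibility reduction
    return absorption, diff_phase, dark_field
```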
This Bachelor thesis aims to reduce truncation artifacts in dark-field reconstructions. Inspired by the method proposed by Felsner et al. [1], the truncated dark-field image will be corrected using the information of a complete absorption image. To describe the correlation between the absorption and the dark-field signal, the decomposition by Kaeppler et al. [2] will be used. The dark-field correction algorithm will be implemented in an iterative scheme, and a parameter search and evaluation of the method will be conducted.
References:
[1] Lina Felsner, Martin Berger, Sebastian Kaeppler, Johannes Bopp, Veronika Ludwig, Thomas Weber, Georg Pelzer, Thilo Michel, Andreas Maier, Gisela Anton, and Christian Riess. Phase-sensitive region-of-interest computed tomography. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, pages 137–144, Cham, 2018. Springer International Publishing.
[2] Sebastian Kaeppler, Florian Bayer, Thomas Weber, Andreas Maier, Gisela Anton, Joachim Hornegger, Matthias Beckmann, Peter A. Fasching, Arndt Hartmann, Felix Heindl, Thilo Michel, Gueluemser Oezguel, Georg Pelzer, Claudia Rauh, Jens Rieger, Ruediger Schulz-Wendtland, Michael Uder, David Wachter, Evelyn Wenkel, and Christian Riess. Signal decomposition for x-ray dark-field imaging. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2014, pages 170–177, Cham, 2014. Springer International Publishing.
Helical CT Reconstruction with Bilateral Sinogram/Volume Domain Denoisers
Helical CT is the most commonly used scan protocol in clinical CT today. It generally acquires cone-beam projections along a spiral trajectory around the object to be scanned. The collected sinograms, and consequently the reconstructed volumes, contain noise due to statistical fluctuations in the measured line integrals. Removing this noise is necessary to achieve diagnostic image quality.
In previous research, we developed a reinforcement-learning-based method to denoise cone-beam CT. This method uses denoisers in both the sinogram domain and the reconstructed image domain. The denoisers are bilateral filters whose sigma parameters are tuned by a convolutional agent. The reconstruction was carried out with the FDK algorithm from the ASTRA toolbox.
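For reference, a brute-force 2-D sketch of such a bilateral filter, with the two sigma parameters the agent would tune, could look as follows (this is an illustrative NumPy version, not the project implementation):

```python
import numpy as np

def bilateral_filter(img, sigma_spatial, sigma_intensity, radius=None):
    """Brute-force 2-D bilateral filter; sigma_spatial and sigma_intensity are
    the parameters that the convolutional agent would tune."""
    img = img.astype(float)
    if radius is None:
        radius = int(3 * sigma_spatial)
    h, w = img.shape
    padded = np.pad(img, radius, mode="reflect")
    out = np.zeros((h, w))
    weights = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy: radius + dy + h,
                             radius + dx: radius + dx + w]
            w_spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_spatial ** 2))
            w_range = np.exp(-((shifted - img) ** 2) / (2 * sigma_intensity ** 2))
            out += w_spatial * w_range * shifted
            weights += w_spatial * w_range
    return out / weights
```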
Due to time constraints, our previous research was limited to the simpler problem of circular cone-beam CT. In this research internship, we hope to extend our method to denoise helical CT as well. Since helical CT also uses cone-beam projections, we hope that our method will work out of the box without any retraining.
The following tasks are to be conducted as part of this research internship:
- Develop methods to reconstruct helical CT from the given sinograms, e.g., ADMM, WFBP
- Formulate and train a reinforcement learning task to train denoisers for helical CT in sinogram and volume domain
- Investigate ways to train without ground-truth volumes and obtain image quality better than currently existing methods
- Train current volume-based neural network solutions (GAN-3D, WGAN-VGG, CPCE3D, QAE, etc.) and compare the results
Requirements:
- Knowledge of CT reconstruction techniques
- Understanding of reinforcement learning
- Experience with PyTorch for developing neural networks
- Experience with image processing. Knowledge of the ASTRA toolbox is a plus.
Investigating augmented filtering approaches towards noise removal in low dose CT
Noise removal in clinical CT is necessary to make images clearer and enhance their diagnostic quality. Several deep-learning techniques have been designed to remove noise in CT; however, they have a very large number of parameters, which makes their behavior difficult to comprehend. We attempt to alleviate this problem by using known denoising models to remove the noise.
Due to the non-stationary nature of CT noise, the image requires different noise-filtering strengths at different points. One way to achieve this is to tune the filter parameters at each point in the image. Since a ground truth for pixelwise ideal parameter values cannot be established, this task can be formulated as a reinforcement learning task that maximizes image quality. Our previous research established such an approach for the joint bilateral filter.
In this thesis, we aim to complete the following tasks:
- Develop a general reinforcement learning framework for parameter tuning problems in medical imaging.
- Experiment with different denoising models, such as non-local means and block-matching 3D (BM3D).
- Experiment with a parameter selection strategy to choose which parameters to include in the learning process.
- Study the impact of parameter tuning on denoising, and of the denoising model on the parameter tuning and the overall image quality.
The AAPM Grand Challenge dataset and the Mayo Clinic TCIA dataset will be used. Quality will be measured using PSNR and SSIM, and possibly IRQM.
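A minimal sketch of how these metrics can be computed with scikit-image; the toy arrays and variable names are purely illustrative:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Toy example: 'reference' stands in for the full-dose image, 'denoised' for the filter output
rng = np.random.default_rng(0)
reference = rng.random((256, 256))
denoised = reference + 0.05 * rng.standard_normal((256, 256))

data_range = reference.max() - reference.min()
psnr = peak_signal_noise_ratio(reference, denoised, data_range=data_range)
ssim = structural_similarity(reference, denoised, data_range=data_range)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```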
Requirements:
- Some knowledge of image processing. Experience with image processing libraries is a plus
- Good knowledge of PyTorch and C++
- Understanding of CT reconstruction and CT noise
- Experience with deep Q learning
Deep Learning based Model Observers for Multi-modal Imaging
Task-based measurements are needed to assess the quality of medical images. Common task-based measures include model observers. Model observers measure the confidence that an object (e.g., a tumor or another structure) is present in a particular image. Common model observers in medical imaging include the Channelized Hotelling Observer for CT image quality and the Non-Prewhitening (NPW) matched filter for SPECT image quality.
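As background, a classical Channelized Hotelling Observer can be written compactly once a channel matrix (e.g., Gabor or Laguerre-Gauss channels) is given. The following NumPy sketch is a textbook formulation, not the deep-learned observer targeted in this project:

```python
import numpy as np

def cho_template(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling Observer estimated from training samples.

    signal_imgs, noise_imgs : arrays of shape (n_samples, n_pixels)
    channels                : channel matrix of shape (n_pixels, n_channels)
    """
    v_s = signal_imgs @ channels          # channel outputs, (n_samples, n_channels)
    v_n = noise_imgs @ channels
    dv = v_s.mean(axis=0) - v_n.mean(axis=0)
    s = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_n, rowvar=False))
    w = np.linalg.solve(s, dv)            # Hotelling template in channel space
    d_prime = np.sqrt(dv @ w)             # detectability index
    return w, d_prime

def cho_statistic(test_img, channels, w):
    """Decision variable for a single vectorized test image."""
    return (test_img @ channels) @ w
```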
Current implementations of model observers for task-based measurements are executed on phantoms. The use of phantoms makes task-based measurement an SKE/BKE (signal-known-exactly/background-known-exactly) task. However, this means that task-based measurements cannot be transferred directly to a clinical task without prior knowledge. Moreover, multiple noise realisations of a single phantom are needed to obtain meaningful results from a model observer.
Deep learning has been used to replicate the behaviour of model observers. However, there is no work on a general model observer that works across imaging modalities. In this project, we would be interested in investigating the following:
- The possibility of using deep learning to create a ‘general’ model observer
- Cross modality performance of a deep learned model observer
- Training this model observer with zero-, one-, or few-shot learning for better generalisation in the future.
We are looking for someone who can support us with the following skills:
- Knowledge in Python/C++ programming
- Some knowledge of image processing. Medical image processing and DICOM standards are a plus.
- Knowledge of relevant libraries like NumPy, OpenCV, PyTorch, TensorFlow
- Experience with model observers is a plus (not strictly necessary)
Augmentation of CT Images by Variation of Non-Rigid Deformation Vector Field Amplitudes
End-to-End Gaze Estimation Network for Driver Monitoring
Automated analysis of Parkinson’s Disease on the basis of evaluation of handwriting
In this thesis, current state-of-the-art methods for the automatic analysis of Parkinson's disease (PD) are tested along with new ideas for signal processing. Since there is currently no cure for PD, it is important to introduce methods for automatic monitoring and analysis. To this end, handwriting samples of 49 healthy subjects and 75 PD patients, acquired with a graphics tablet, are used. The subjects performed different drawing tasks. With a kinematic analysis, accuracies of up to 77% are achieved when using a single task, and accuracies of up to 86% are achieved when combining different tasks. A newly developed spectral analysis resulted in scores of up to 96% for an individual task. Combining the spectral features of a standalone task with features from different tasks or from a different analysis did not lead to better results. Making predictions about the severity of the disease based on the features acquired for the bi-class problem was not successful. An attempt was made to model the velocity profile of strokes with lognormal distributions and to use the obtained parameters for classification; because of difficulties in modeling strokes of different lengths, this classification also failed.
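For reference, the lognormal velocity profile mentioned above follows the standard model from the kinematic theory of rapid human movements; a minimal sketch with illustrative parameter names:

```python
import numpy as np

def lognormal_velocity(t, D, t0, mu, sigma):
    """Lognormal velocity profile of a single stroke.

    D is the stroke amplitude, t0 the onset time, and mu and sigma the
    log-time delay and response time of the lognormal profile.
    """
    v = np.zeros_like(t, dtype=float)
    valid = t > t0
    dt = t[valid] - t0
    v[valid] = D / (sigma * np.sqrt(2 * np.pi) * dt) * \
        np.exp(-(np.log(dt) - mu) ** 2 / (2 * sigma ** 2))
    return v
```

Fitting these parameters to the measured pen-tip speed of each stroke yields the features that were used in the classification attempt described above.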