Synthetic X-rays from CT volumes for deep learning

Type: MA thesis

Status: finished

Date: February 1, 2021 - August 1, 2021

Supervisors: Dr. Andreas Fieselmann (Siemens Healthineers), Patrick Krauß, Srikrishna Jaganathan, Karthik Shetty, Andreas Maier

X-rays are a standard imaging modality in clinical care, and various artificial intelligence (AI) applications have been proposed to support clinical work with X-ray images. AI applications based on deep learning require large amounts of training data that must be structured and annotated with respect to the anatomical regions of interest. However, acquiring such training data is challenging because annotating and labeling image data is time-intensive, error-prone, and expensive. As an alternative, computed tomography (CT) data, together with annotations generated by existing AI software, can be used to generate synthetic X-ray images with correspondingly transformed annotations [1][2].

In this master’s thesis, the use of synthetic X-rays generated from CT volumes for deep learning is investigated. Synthetic X-rays simulate radiographic images by perspectively projecting a three-dimensional CT volume onto a two-dimensional image plane. The application focuses on orthopedic imaging, in particular spine imaging. A deep neural network is trained to detect anatomical landmarks of the vertebrae (e.g., corners or centers) using only the generated synthetic X-ray data [3][4], and the trained network is then extensively tested on unseen datasets of real X-ray images. The hypothesis is that synthetic 2D data (images and annotations) derived from CT volumes can improve the training of a deep neural network for X-ray applications. The results should demonstrate whether generated images can effectively replace real data for training.
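To illustrate the core idea, the following sketch produces a crude synthetic X-ray from a CT volume and projects a 3D landmark into the image. It uses a simplified parallel-beam line integral rather than the perspective projection described above (which tools such as DeepDRR [2] implement properly); the Hounsfield-to-attenuation conversion, voxel spacing, and function names are illustrative assumptions, not the thesis pipeline.

```python
import numpy as np

def synthetic_xray(ct_volume, axis=0):
    """Crude DRR: parallel-beam line integral of attenuation along one
    volume axis, mapped to intensity via the Beer-Lambert law.
    NOTE: illustrative only; a real DRR uses perspective ray casting."""
    mu_water = 0.02  # assumed linear attenuation of water, mm^-1
    # Rough Hounsfield-unit to attenuation conversion (air -> 0).
    mu = np.clip(mu_water * (1.0 + ct_volume / 1000.0), 0.0, None)
    line_integrals = mu.sum(axis=axis)  # integrate along the ray direction
    return np.exp(-line_integrals)      # assumes 1 mm voxel spacing

def project_landmark(point_zyx, axis=0):
    """Transfer a 3D annotation to 2D: for a parallel projection, simply
    drop the coordinate along the projection axis."""
    return tuple(c for i, c in enumerate(point_zyx) if i != axis)

# Toy CT volume: air background with a bone-like block.
volume = np.full((64, 64, 64), -1000.0)
volume[20:40, 20:40, 20:40] = 400.0
drr = synthetic_xray(volume)             # shape (64, 64)
landmark_2d = project_landmark((30, 25, 32))  # (25, 32)
```

The bone-like block attenuates the rays passing through it, so the corresponding image region is darker than the air background, and the 3D landmark lands at the matching 2D position without any manual annotation.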


The thesis consists of the following milestones:

1: Create a landmark detector model (vertebral corners or centers) from real spine X-ray data

2: Generate synthetic X-ray images and corresponding annotations from available CT data

3: Train the landmark detector model using only the synthetic X-rays

4: Evaluate and compare the results of the two trained models on real X-ray test data
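For milestones 1 and 3, landmark detectors are commonly trained with Gaussian heatmap regression, and for milestone 4 a standard metric is the mean radial error between predicted and ground-truth landmarks. The sketch below shows both under assumed function names and parameters; it is a minimal illustration, not the network used in the thesis.

```python
import numpy as np

def landmark_heatmap(shape, center_yx, sigma=3.0):
    """Gaussian heatmap target for one landmark; the network regresses
    such maps and the landmark is decoded as the heatmap peak."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (ys - center_yx[0]) ** 2 + (xs - center_yx[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mean_radial_error(pred, gt):
    """Mean Euclidean distance (in pixels) between predicted and
    ground-truth landmarks, a common evaluation metric."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return np.linalg.norm(pred - gt, axis=1).mean()

hm = landmark_heatmap((64, 64), (30, 25))
peak = np.unravel_index(hm.argmax(), hm.shape)  # decoded landmark
err = mean_radial_error([peak], [(30, 25)])     # perfect prediction: 0.0
```

The same error metric can then be computed for both the real-data and synthetic-data models on the held-out real X-ray test set to judge whether synthetic training data is an adequate substitute.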


[1] B. Bier, F. Goldmann, J. Zaech, J. Fotouhi, R. Hageman, R. Grupp, M. Armand, G. Osgood, N. Navab, A. Maier & M. Unberath, “Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views”, International Journal of Computer Assisted Radiology and Surgery 14, 1463–1473 (2019)

[2] M. Unberath, J. Zaech, S.C. Lee, B. Bier, J. Fotouhi, M. Armand & N. Navab, “DeepDRR – A catalyst for machine learning in fluoroscopy-guided procedures” (2018) arXiv:1803.08606

[3] B. Khanal, L. Dahal, P. Adhikari & B. Khanal, “Automatic Cobb Angle Detection Using Vertebra Detector and Vertebra Corners Regression”, in: Computational Methods and Clinical Applications for Spine Imaging (CSI 2019), Lecture Notes in Computer Science, vol. 11963, Springer, Cham (2020)

[4] J. Yi, P. Wu, Q. Huang, H. Qu, D.N. Metaxas, “Vertebra-focused landmark detection for scoliosis assessment” (2020) arXiv:2001.03187 [eess.IV]