A core goal of a medical imaging pipeline is to optimize the workflow and thereby improve patient throughput at a radiography system. Reducing the number of manual tasks in the workflow is an effective way to achieve this. A common manual task in an X-ray radiography workflow is rotating digital X-ray images to the canonical orientation preferred by radiologists, which directly affects the number of patients that can be examined in a given period of time. A deep learning based system for detecting the in-plane rotation of body parts in X-ray images can solve this problem, but it is still an open research topic. We identified three major challenges that such automatic systems need to address. First, clinical routine covers up to 23 different examinations, combining 13 anatomies (e.g. hand, chest) with 4 projection types (posterior-anterior (PA), anterior-posterior (AP), lateral, and oblique), which makes the task very diverse. Second, the computation time must be as short as possible. Third, a high alignment accuracy with respect to the canonical orientation is required. A simulation estimates that technologists at a medium to large sized hospital spend nearly 20 hours, or about 3 working days, per year performing more than 70,000 manual clicks to rotate chest images on portable X-ray machines. With an Artificial Intelligence (AI) algorithm that is 99.4% accurate, the estimated 19.59 hours of manual "clicks" per year would be reduced to about 7 minutes, and the 70,512 clicks to 423 clicks. This shows that a deep learning based AI system has the potential to significantly improve the overall workflow in X-ray radiography.
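The savings quoted above can be reproduced with a back-of-the-envelope calculation. All input numbers come from the cited simulation; only the arithmetic below is ours.

```python
# Reproduce the workflow-savings estimate from the cited simulation.
annual_clicks = 70_512   # manual rotation clicks per year
annual_hours = 19.59     # technologist time spent on those clicks
accuracy = 0.994         # fraction of images the AI orients correctly

# Manual intervention remains only for the misoriented fraction.
remaining_clicks = round(annual_clicks * (1 - accuracy))
remaining_minutes = round(annual_hours * 60 * (1 - accuracy))

print(remaining_clicks)   # 423 clicks per year
print(remaining_minutes)  # 7 minutes per year
```

The residual effort scales linearly with the error rate (1 − accuracy), which is why even a small accuracy gain translates directly into saved technologist time.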
To the best of our knowledge, this is the first work that addresses the detection of the in-plane rotation of the extremities of the body in X-ray images. Several methods have been published on automatic orientation detection for a single anatomy, e.g. the chest [3, 4, 5]. However, most of these approaches only assign the X-ray image to one of four sectors (0°, 90°, 180°, 270°) rather than predicting a precise orientation over the full angular range of 0°–360°. Baltruschat et al. proposed a transfer learning approach with a ResNet architecture for precise orientation regression in hand radiographs, achieving state-of-the-art performance with a mean absolute angle error of 2.79° [2]. Luo et al. addressed orientation correction for radiographs in a PACS environment using well-defined low-level visual features of the anatomical region with an SVM classifier, achieving 96.1% accuracy [6]. Kondo et al. estimate the hand orientation in probability density form, which resolves the cyclicity problem of a direct angular representation and allows multiple predictions based on different features [7]. Kausch et al. proposed a Convolutional Neural Network (CNN) regression model that predicts 5 degree-of-freedom pose updates directly from an initial X-ray image [8]. They used a two-step approach (a coarse CNN regressor followed by a fine CNN regressor) to detect the orientation of the anatomy.
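The cyclicity problem mentioned above arises because 0° and 360° denote the same orientation, so a naive regression loss heavily penalizes a prediction of 359° for a 1° label even though the two are almost identical. A common workaround, shown here as a generic sketch rather than as any of the cited methods, is to regress a (sin, cos) pair instead of the raw angle:

```python
import math

def encode_angle(theta_deg):
    """Map an angle to a (sin, cos) target, removing the 0°/360° discontinuity."""
    t = math.radians(theta_deg)
    return math.sin(t), math.cos(t)

def decode_angle(s, c):
    """Recover the angle in [0, 360) from a predicted (sin, cos) pair."""
    return math.degrees(math.atan2(s, c)) % 360.0

# 359° and 1° are numerically far apart as raw regression targets, but
# their encodings are close, so a standard loss treats them as neighbours.
s, c = encode_angle(359.0)
print(decode_angle(s, c))  # 359.0
```

Kondo et al.'s probability-density formulation addresses the same discontinuity differently, by predicting a distribution over orientations instead of a single point estimate.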
This thesis aims to develop a framework for detecting the in-plane rotation of the extremities of the human body in a single 2D X-ray image using deep learning algorithms. Based on this information, the image shall subsequently be rotated to a predefined orientation determined by the anatomy rather than by the detector orientation (with respect to the X-ray source). This is especially important for portable Wireless Fidelity (WiFi) detectors, where the original orientation of the anatomy w.r.t. the detector plane can in theory take on any angular value. In this work, we initially focus on hands and fingers (including partially visible hands); other extremities can be taken into account at a later point in time. In detail, the thesis will comprise the following work items:
- Literature overview of the state-of-the-art regression models for the detection of the body part orientation
- Survey for the optimal canonical orientation of each projection of the X-ray image
- Implementation of a deep learning based method with direct learning of the orientation
- Comparison and evaluation of the performance of the deep learning models for specific vs. combined projections and specific vs. combined anatomies
- Visualizing the features learned by the model in each approach
- Quantitative evaluation on real-world data
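Once the model's angle prediction is available, the subsequent correction step reduces to an inverse rotation. The following is a minimal sketch, not the thesis implementation: the function name, the sign convention (counter-clockwise positive), and the use of `scipy.ndimage.rotate` are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

def to_canonical(image: np.ndarray, predicted_angle_deg: float) -> np.ndarray:
    """Rotate an image back to its canonical orientation.

    `predicted_angle_deg` is the in-plane rotation estimated by the model,
    measured counter-clockwise from the canonical pose (an assumed
    convention); rotating by the negative angle undoes it.
    """
    # reshape=True enlarges the output canvas so no anatomy is cropped away;
    # order=1 (bilinear) keeps interpolation fast with low artefacts.
    return rotate(image, angle=-predicted_angle_deg, reshape=True, order=1)
```

In a deployed system one would additionally decide how to fill the corner regions exposed by the rotation (e.g. with the detector background value) and whether to crop back to the original field of view.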
[1] Paolo Russo. Handbook of X-ray Imaging: Physics and Technology. CRC Press, 2017.
[2] Ivo M. Baltruschat, Axel Saalbach, Mattias P. Heinrich, Hannes Nickisch, and Sascha Jockel. Orientation regression in hand radiographs: a transfer learning approach. In Medical Imaging 2018: Image Processing, volume 10574, page 105741W. International Society for Optics and Photonics, 2018.
[3] Khaled Younis, Min Zhang, Najib Akram, German Vera, Katelyn Nye, Gireesha Rao, Gopal Avinash, and John M. Sabol. Leveraging deep learning artificial intelligence in detecting the orientation of chest x-ray images. September 2019.
[4] Ewa Pietka and H. K. Huang. Orientation correction for chest images. Journal of Digital Imaging, 5(3):185–189, 1992.
[5] Hideo Nose, Yasushi Unno, Masayuki Koike, and Junji Shiraishi. A simple method for identifying image orientation of chest radiographs by use of the center of gravity of the image. Radiological Physics and Technology, 5(2):207–212, 2012.
[6] Hui Luo and Jiebo Luo. Robust online orientation correction for radiographs in PACS environments. IEEE Transactions on Medical Imaging, 25(10):1370–1379, 2006.
[7] Kazuaki Kondo, Daisuke Deguchi, and Atsushi Shimada. Hand orientation estimation in probability density form. arXiv preprint arXiv:1906.04952, 2019.
[8] Lisa Kausch, Sarina Thomas, Holger Kunze, Maxim Privalov, Sven Vetter, Jochen Franke, Andreas H. Mahnken, Lena Maier-Hein, and Klaus Maier-Hein. Toward automatic C-arm positioning for standard projections in orthopedic surgery. International Journal of Computer Assisted Radiology and Surgery, 15(7):1095–1105, 2020.