Index
Content-based Image Retrieval based on compositional elements for art historical images
Absorption Image Correction in X-ray Talbot-Lau Interferometry for Reconstruction
X-ray Phase-Contrast Imaging (PCI) is an imaging technique that measures the refraction of X-rays created by an object. There are several ways to realize PCI, such as interferometric and analyzer-based methods [3]. In contrast to X-ray absorption imaging, the phase image provides high soft-tissue contrast.
The implementation by a grating-based interferometer enables measuring an X-ray absorption image, a differential phase image and a dark-field image [2, pp. 192-205]. Felsner et al. proposed the integration of a Talbot-Lau Interferometer (TLI) into an existing clinical CT system [1]. Three gratings are mounted between the X-ray tube and the detector: two in front of the object, one behind it (see Fig. 1). For various reasons, it is currently not possible to manufacture gratings with a diameter of more than a few centimeters [1]. As a consequence, a phase-contrast image can only be created for a small area.
Nevertheless, the entire detector area can be used to capture the absorption image. However, the absorption image is influenced by the gratings, as they cause inhomogeneous exposure of the X-ray detector.
Besides that, the intensity values change with each projection: the X-ray tube, detector and gratings rotate around the object during the scan, so, depending on their position, parts of the object are covered by grating G1 during some parts of the rotation but not others.
It is expected that the part of the absorption image covered by the gratings differs from the rest of the image in its intensity values. Moreover, a sudden change in intensity can be observed at the edge of the grating. This may lead to artifacts in the 3-D reconstruction.
In this work, we will investigate the anticipated artifacts in the reconstruction and implement (at least) one correction algorithm. Furthermore, the reconstruction results with and without a correction algorithm will be evaluated using simulated and/or real data.
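As an illustration of the kind of correction that could serve as a baseline (not necessarily the algorithm that will be implemented in the thesis), one can rescale the grating-covered detector region so that its mean intensity matches the uncovered part. All names and the toy data below are hypothetical:

```python
import numpy as np

def correct_grating_shadow(projection, mask):
    """Rescale the grating-covered region of a projection so its
    mean intensity matches the uncovered part of the detector.

    projection : 2-D array of detector intensities
    mask       : boolean array, True where the grating covers the detector
    """
    covered = projection[mask]
    uncovered = projection[~mask]
    corrected = projection.copy()
    # Match the mean intensity of the covered region to the rest.
    corrected[mask] = covered * (uncovered.mean() / covered.mean())
    return corrected

# Toy projection: homogeneous object with a darker grating stripe.
proj = np.full((4, 8), 100.0)
mask = np.zeros_like(proj, dtype=bool)
mask[:, 2:5] = True
proj[mask] *= 0.6          # the grating attenuates the beam
out = correct_grating_shadow(proj, mask)
```

A global mean match like this removes the intensity step at the grating edge only for homogeneous exposure; the thesis will evaluate how such corrections behave in the reconstruction.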
References:
[1] L. Felsner, M. Berger, S. Kaeppler, J. Bopp, V. Ludwig, T. Weber, G. Pelzer, T. Michel, A. Maier, G. Anton, and C. Riess. Phase-sensitive region-of-interest computed tomography. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 137–144, Cham, 2018. Springer.
[2] A. Maier, S. Steidl, V. Christlein, and J. Hornegger. Medical Imaging Systems: An Introductory Guide, volume 11111. Springer, Cham, 2018.
[3] F. Pfeiffer, T. Weitkamp, O. Bunk, and C. David. Phase retrieval and differential phase-contrast imaging with low-brilliance X-ray sources. Nature Physics, 2(4):258–261, 2006.
Truncation-correction Method for X-ray Dark-field Computed Tomography
Grating-based imaging provides three types of images: an absorption, a differential phase and a dark-field image. The dark-field image provides structural information about the specimen at the micrometer and sub-micrometer scale. It can be measured with an X-ray grating interferometer, for example the Talbot-Lau interferometer, which consists of three gratings. Due to the small size of the gratings, truncation arises in the projection images. This is an issue, since it leads to artifacts in the reconstruction.
This Bachelor's thesis aims to reduce truncation artifacts in dark-field reconstructions. Inspired by the method proposed by Felsner et al. [1], the truncated dark-field image will be corrected using the information of a complete absorption image. To describe the correlation between the absorption and dark-field signals, the decomposition by Kaeppler et al. [2] will be used. The dark-field correction algorithm will be implemented in an iterative scheme, and a parameter search and evaluation of the method will be conducted.
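As a strongly simplified illustration of the idea (the thesis itself will use the decomposition by Kaeppler et al. [2], not the naive proportionality assumed here), a truncated dark-field profile could be extended using the complete absorption profile. All names and the toy profiles are hypothetical:

```python
import numpy as np

def extend_truncated_darkfield(dark_truncated, absorption_full, fov):
    """Fill the truncated part of a dark-field line profile with an
    estimate derived from the complete absorption profile, assuming
    a simple proportionality dark ~ a * absorption inside the FOV.

    dark_truncated  : dark-field profile, NaN outside the grating FOV
    absorption_full : complete absorption profile (same length)
    fov             : boolean array, True where dark-field data exists
    """
    # Least-squares fit of the scale factor a inside the field of view.
    a = np.sum(dark_truncated[fov] * absorption_full[fov]) \
        / np.sum(absorption_full[fov] ** 2)
    extended = dark_truncated.copy()
    extended[~fov] = a * absorption_full[~fov]
    return extended

# Toy profiles: dark-field known only in the central field of view.
absorption = np.array([2.0, 4.0, 6.0, 4.0, 2.0])
fov = np.array([False, True, True, True, False])
dark = np.where(fov, 0.5 * absorption, np.nan)
ext = extend_truncated_darkfield(dark, absorption, fov)
```

In an iterative scheme, such an extension step would alternate with reconstruction and consistency updates rather than being applied once.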
References:
[1] Lina Felsner, Martin Berger, Sebastian Kaeppler, Johannes Bopp, Veronika Ludwig, Thomas Weber, Georg Pelzer, Thilo Michel, Andreas Maier, Gisela Anton, and Christian Riess. Phase-sensitive region-of-interest computed tomography. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, pages 137–144, Cham, 2018. Springer International Publishing.
[2] Sebastian Kaeppler, Florian Bayer, Thomas Weber, Andreas Maier, Gisela Anton, Joachim Hornegger, Matthias Beckmann, Peter A. Fasching, Arndt Hartmann, Felix Heindl, Thilo Michel, Gueluemser Oezguel, Georg Pelzer, Claudia Rauh, Jens Rieger, Ruediger Schulz-Wendtland, Michael Uder, David Wachter, Evelyn Wenkel, and Christian Riess. Signal decomposition for x-ray dark-field imaging. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2014, pages 170–177, Cham, 2014. Springer International Publishing.
Automated analysis of Parkinson’s Disease on the basis of evaluation of handwriting
In this thesis, current state-of-the-art methods for the automatic analysis of Parkinson's disease (PD) are tested along with new signal-processing ideas. Since there is currently no cure for PD, it is important to introduce methods for automatic monitoring and analysis. To this end, handwriting samples of 49 healthy subjects and 75 PD patients, acquired with a graphics tablet, are used. The subjects performed different drawing tasks. With a kinematic analysis,
accuracies of up to 77% are achieved using a single task, and accuracies of up to 86% when combining different tasks. A newly developed spectral analysis resulted in scores of up to 96% for an individual task. Combining the spectral features of a standalone task with features from other tasks or from a different analysis did not improve the results. Predicting the severity of the disease from the features acquired for the two-class problem failed. An attempt was also made to model the velocity profile of strokes with lognormal distributions and to use the resulting parameters for classification; due to difficulties in modeling strokes of different lengths, this classification failed as well.
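To illustrate the two feature families (the exact feature set of the thesis is not reproduced here), a minimal sketch of one kinematic feature (mean pen speed) and one spectral feature (dominant frequency of the speed signal) could look like this; the function name and toy stroke are hypothetical:

```python
import numpy as np

def kinematic_and_spectral_features(x, y, fs):
    """Basic features from a pen trajectory sampled at fs Hz:
    mean speed (kinematic) and the dominant frequency of the
    speed signal (spectral)."""
    vx = np.gradient(x) * fs            # finite-difference velocity
    vy = np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    # Magnitude spectrum of the zero-mean speed signal.
    spectrum = np.abs(np.fft.rfft(speed - speed.mean()))
    freqs = np.fft.rfftfreq(len(speed), d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum)]
    return speed.mean(), dominant

# Toy stroke: pen oscillating at 2 Hz, sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 2 * t)
y = np.zeros_like(t)
mean_speed, f_dom = kinematic_and_spectral_features(x, y, fs)
```

For a purely sinusoidal stroke, the rectified speed oscillates at twice the drawing frequency, so the dominant spectral peak appears at 4 Hz here.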
Characterizing ultrasound images through breast density related features using traditional and deep learning approaches
Breast cancer is the most common cancer in women worldwide, accounting for almost a quarter of all new female cancer cases [1]. In order to improve the chances of recovery and reduce mortality, it is crucial to detect and diagnose it as early as possible. Mammography is the standard procedure for breast cancer screening. While mammography images play an important role in cancer diagnosis, it has been shown that their sensitivity decreases with high mammographic density [2]. The mammographic density (MD) refers to the amount of fibroglandular tissue in the breast in proportion to the amount of fatty tissue. MD is an established risk factor for breast cancer, and the general risk increases with higher density: women with a density of 25% or higher are twice as likely to develop breast cancer, and women with a density of 75% or higher even five times as likely, compared to women with an MD of less than 5% [3]. In addition, in dense breasts a tumor may be masked on a mammogram [2]. It is therefore necessary to consider the breast's density when screening for breast cancer. Several studies have aimed at supporting and improving breast cancer diagnosis with computer-aided systems and feature evaluation, and such studies have taken the MD into consideration when evaluating mammography images [4][5].
In order to detect tumors that are masked on mammography, or to support inconclusive findings, an additional ultrasound (US) examination is often conducted on women with high MD [6]. However, US images are subject to high inter-observer variability. Computer-aided diagnosis aims to analyze US images and support diagnosis with the goal of reducing this variability. The approach of this thesis is to transfer and adjust the methods designed by Häberle et al. [4] for characterizing 2-D mammographic images so that they can be used on 3-D ultrasound images, focusing only on features correlating with the MD.
Additionally, more features will be generated using deep learning, as most recent computer-aided diagnosis tools no longer rely on traditional methods alone. Over the last years, deep learning has become the standard in medical imaging, and several studies have shown promising performance on breast ultrasound images [7][8].
Using both traditional and deep learning methods for extracting features aims to improve the classification of possibly cancerous tissue by building a reliable set of features that characterize the MD of the patient. Furthermore, the traditional features may help to interpret those generated by deep learning approaches; in turn, the latter may help to demonstrate the benefit of deep learning for analyzing medical images.
This thesis will cover the following points:
• Literature review of mammographic density as a risk factor for breast cancer and ultrasound as an additional screening method
• Extraction and evaluation of a variety of automated features in ultrasound images using traditional and deep learning approaches
• Analyzing the relationship of the extracted features with the mammographic density
References
[1] F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, “Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA: A Cancer Journal for Clinicians, vol. 68, no. 6, pp. 394-424, 2018.
[2] P. E. Freer, “Mammographic breast density: impact on breast cancer risk and implications for screening,” Radiographics: a review publication of the Radiological Society of North America, Inc, vol. 35, no. 2, pp. 302-315, 2015.
[3] V. A. McCormack, “Breast density and parenchymal patterns as markers of breast cancer risk: A meta-analysis,” Cancer Epidemiology and Prevention Biomarkers, vol. 15, no. 6, pp. 1159-1169, 2006.
[4] L. Häberle, F. Wagner, P. A. Fasching, S. M. Jud, K. Heusinger, C. R. Loehberg, A. Hein, C. M. Bayer, C. C. Hack, M. P. Lux, K. Binder, M. Elter, C. Münzenmayer, R. Schulz-Wendtland, M. Meier-Meitinger, B. R. Adamietz, M. Uder, M. W. Beckmann, and T. Wittenberg, “Characterizing mammographic images by using generic texture features,” Breast Cancer Research: BCR, vol. 14, no. 2, 2012.
[5] M. Tan, F. Aghaei, Y. Wang, and B. Zheng, “Developing a new case based computer-aided detection scheme and an adaptive cueing method to improve performance in detecting mammographic lesions,” Physics in Medicine and Biology, vol. 62, no. 2, pp. 358-376, 2017.
[6] L. Häberle, C. C. Hack, K. Heusinger, F. Wagner, S. M. Jud, M. Uder, M. W. Beckmann, R. Schulz-Wendtland, T. Wittenberg, and P. A. Fasching, “Using automated texture features to determine the probability for masking of a tumor on mammography, but not ultrasound,” European Journal of Medical Research, vol. 22, no. 1, 2017.
[7] H. Tanaka, S.-W. Chiu, T. Watanabe, S. Kaoku, and T. Yamaguchi, “Computer-aided diagnosis system for breast ultrasound images using deep learning,” Physics in Medicine and Biology, vol. 64, no. 23, 2019.
[8] M. H. Yap, G. Pons, J. Marti, S. Ganau, M. Sentis, R. Zwiggelaar, A. K. Davison, R. Marti, and H. Y. Moi, “Automated breast ultrasound lesions detection using convolutional neural networks,” IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 4, pp. 1218-1226, 2018.
CITA: An Android-based Application to Evaluate the Speech of Cochlear Implant Users
Cochlear implants (CI) are the most suitable devices for severe and profound deafness when hearing aids do not sufficiently improve speech perception. However, CI users often present altered speech production and limited understanding even after hearing rehabilitation. People suffering from severe to profound deafness may experience different speech disorders, such as decreased intelligibility, changes in articulation, and a slower speaking rate, among others. Although hearing outcome is regularly measured after cochlear implantation, speech production quality is seldom assessed in outcome evaluations. This project aims to develop an Android application suitable for analyzing the speech production and perception of CI users.
Multimodal Breast Cancer Detection using a Fusion of Ultrasound and Mammogram Features
In this thesis, we aim to investigate multi-modal fusion techniques for breast lesion malignancy detection. In clinical settings, a radiologist acquires different image sequences (mammograms, US, and MRI) to precisely identify the lesion type. Relying on a single modality carries the risk of missed tumors or false diagnoses, whereas combining information from different modalities can significantly improve the detection rate.
For example, the evaluation of mammograms of relatively dense breasts is known to be difficult, and ultrasound is then used to provide the information needed for a diagnosis. In other cases, ultrasound is inconclusive, while mammograms offer clarity. Many computer-aided detection (CAD) models have been proposed that use either mammograms or sonograms. However, relatively few studies consider both modalities simultaneously for breast cancer diagnosis. With this in mind, we assume that deep neural networks can likewise incorporate complementary features from the two domains to improve the breast cancer detection rate.
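One common fusion strategy, feature-level concatenation of per-modality embeddings, can be sketched as follows; the feature dimensions and names are hypothetical, and the thesis will investigate several fusion techniques rather than commit to this one:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(mammo_feat, us_feat):
    """Feature-level fusion: concatenate per-modality embeddings
    into one vector that a joint classifier head can consume."""
    return np.concatenate([mammo_feat, us_feat], axis=-1)

# Stand-ins for embeddings from two modality-specific networks.
mammo = rng.normal(size=(16, 128))   # 16 lesions, 128-D mammogram features
us = rng.normal(size=(16, 64))       # matching 64-D ultrasound features
fused = fuse_features(mammo, us)     # 16 lesions, 192-D joint features
```

Alternatives such as late fusion of per-modality predictions or attention-based weighting operate on the same paired inputs but combine them at different stages.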
Age Estimation on Panoramic Dental X-ray Images Using Deep Learning
X-rays are widely used in diagnostic medical imaging; in this Bachelor's thesis, they will be used for automatic age determination. Radiographs of the jaw can provide important clues about a person's age, because dental growth is less influenced by diet and hormones than skeletal growth. Compared with histological and biochemical methods, X-ray imaging is also significantly faster and simpler.
As dental tissue is usually very well preserved after death and remains fairly unchanged for thousands of years, its analysis is widely used in forensics. Age determination on living persons is carried out, for example, to decide whether a child has reached the age of criminal responsibility or the age of majority when no birth certificate is available.
However, the accuracy of age determination by physicians is repeatedly questioned. On average, age estimates for children and adolescents deviate by about half a year, and by about two years in particularly inaccurate cases. For adults, the results are usually even less accurate. Therefore, in this Bachelor's thesis, an attempt will be made to develop a deep learning algorithm for age estimation. Since promising results have already been achieved with deep learning in other areas of medical image analysis, automated solutions could support physicians in estimating the age of a patient and thereby achieve more reliable results. The neural networks will be trained with a data set of 12,000 panoramic dental X-rays, labeled with the age of the patients in days and provided by the University Hospital Erlangen; the aim is thus a supervised approach. Since convolutional neural networks (CNNs) have already achieved good results in other areas of medical image analysis [4], they will also be used for this task.
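Since the physicians' errors quoted above are given in years while the labels are given in days, a natural evaluation metric is the mean absolute error converted to years. A minimal sketch with hypothetical predictions (the helper name and numbers are illustrative, not results):

```python
import numpy as np

def mae_in_years(pred_days, true_days):
    """Evaluation metric for the age regressor: mean absolute
    error, converted from days (the label unit) to years."""
    err_days = np.abs(np.asarray(pred_days, dtype=float)
                      - np.asarray(true_days, dtype=float))
    return err_days.mean() / 365.25   # average Julian year length

# Hypothetical predictions for three test patients.
err = mae_in_years([5000, 7300, 9000], [5200, 7000, 9100])
```

Such a metric makes the model's performance directly comparable to the roughly half-year deviation reported for physicians.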
Synthetic Image Rendering for Deep Learning License Plate Recognition
The recognition of license plates is usually considered a rather simple task that a human is perfectly capable of. However, there are many factors (e.g. fog or rain) that can significantly worsen the image quality and therefore increase the difficulty of recognizing a license plate. In addition, further factors, e.g. low resolution or a small size of the license plate section, may increase the difficulty up to a point where even humans are unable to identify it.
A possible approach to this problem is to build and train a neural network using collected image data. In theory, this should yield a high success rate and outperform a human. However, a huge number of images that also fulfill certain criteria is needed in order to reliably recognize plates in different situations.
That is why this thesis aims at building and training a neural network, based on an existing CNN [1], for recognizing license plates using artificially created training data. This ensures that enough images are available, while also making it possible to add image effects that simulate many possible situations. The required images can be created using Blender: it offers the option to create a 3D model of a license plate, as well as options to simulate certain weather conditions like fog or rain, while also providing an API to automate the creation process. This way, a wide range of cases can be covered, which should improve the success rate of the license plate recognition.
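The weather effects mentioned above can also be approximated in image post-processing. The following sketch (a hypothetical helper, independent of the Blender pipeline) blends a rendered plate crop towards white to mimic fog and adds sensor noise:

```python
import numpy as np

def degrade(image, fog=0.0, noise_std=0.0, seed=0):
    """Simulate two degradation factors on a grayscale plate crop
    with values in [0, 1]: fog (blend towards white) and sensor
    noise (additive Gaussian)."""
    rng = np.random.default_rng(seed)
    foggy = (1.0 - fog) * image + fog * 1.0      # blend towards white
    noisy = foggy + rng.normal(0.0, noise_std, image.shape)
    return np.clip(noisy, 0.0, 1.0)

plate = np.zeros((8, 24))        # stand-in for a rendered plate crop
augmented = degrade(plate, fog=0.5, noise_std=0.05)
```

Applying such degradations with randomized parameters is one way to cover the low-quality conditions the recognizer must handle.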
The thesis consists of the following steps:
• Creating a training data set consisting of generated license plate images (Blender Python API)
• Fitting the parameters of the Deep Learning model
• Evaluation of the model fit on datasets with real license plate images
References:
[1] Benedikt Lorch, Shruti Agarwal, and Hany Farid. Forensic Reconstruction of Severely Degraded License Plates. In Electronic Imaging, Society for Imaging Science & Technology, Jan 2019.
Learning projection matrices for marker free motion compensation in weight-bearing CT scans
The integration of known operators into neural networks has recently received more and more attention. The theoretical proof of its benefits has been described by Maier et al. in [1, 2]. Reducing the number of trainable weights by replacing trainable layers with known operators reduces the overall approximation error and makes it easier to interpret the layers' function. This is of special interest in the context of medical imaging, where it is crucial to understand the effects of layers or operators on the resulting image. Several use cases of known operators in medical imaging have been explored in the past few years [3][4][5]. An API that makes such experiments easier is the PYRO-NN API by Syben et al., which comes with several forward and backward projectors for different geometries as well as with helpers such as filters [6].
Cone-beam CT (CBCT) is a widely used X-ray imaging technology that uses a point source of X-rays and a 2D flat-panel detector. Using a reconstruction algorithm such as the FDK algorithm, a complete 3D reconstruction can be estimated from just one rotation around the patient [7]. This modality is of great use in orthopedics, where so-called weight-bearing CT scans image primarily knee joints under weight-bearing conditions to picture the cartilage tissue under stress. The main drawback of this modality are motion artifacts, caused by involuntary movement of the patient's knee and by inaccuracies in the trajectory of the scanner. In order to correct these artifacts, the extrinsic camera parameters, which describe the position and orientation of the object relative to the detector, have to be adjusted [8].
To get one step closer to reducing motion artifacts without additional cameras or markers, it is of special interest to study the feasibility of training extrinsic camera parameters as part of a reconstruction pipeline. Before an algorithm to estimate those parameters can be assessed, the general feasibility of training the extrinsic camera parameters of a projection matrix will be studied. The patient's motion will be estimated iteratively using adapted gradient descent algorithms known from the training of neural networks.
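The idea of treating extrinsic parameters as trainable weights can be illustrated in a toy 2-D analogue (illustrative only, not using PYRO-NN or real projection matrices): a rotation angle and a detector shift are recovered by gradient descent on the projection error, with finite-difference gradients standing in for backpropagation. All names below are hypothetical:

```python
import numpy as np

def forward(points, angle, shift):
    """Toy 'projection': rotate 2-D points by an extrinsic angle,
    shift the detector, and keep the x-coordinate."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return (points @ R.T)[:, 0] + shift

def estimate_extrinsics(points, measured, steps=2000, lr=1e-3, eps=1e-5):
    """Recover (angle, shift) by finite-difference gradient descent
    on the squared projection error, mimicking the training of a
    layer that holds the extrinsic parameters."""
    params = np.zeros(2)                 # [angle, shift]
    for _ in range(steps):
        grad = np.zeros(2)
        for i in range(2):               # central finite differences
            up, dn = params.copy(), params.copy()
            up[i] += eps
            dn[i] -= eps
            e_up = np.sum((forward(points, *up) - measured) ** 2)
            e_dn = np.sum((forward(points, *dn) - measured) ** 2)
            grad[i] = (e_up - e_dn) / (2 * eps)
        params -= lr * grad
    return params

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 2))           # toy object points
true_angle, true_shift = 0.1, 0.3        # unknown extrinsics
meas = forward(pts, true_angle, true_shift)
est = estimate_extrinsics(pts, meas)     # recovers (0.1, 0.3)
```

In the thesis, the same principle applies to full 3x4 projection matrices inside a differentiable reconstruction pipeline, where automatic differentiation replaces the finite differences used here.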
The Bachelor's thesis covers the following aspects:
1. Discussion of the general idea of motion compensation in CBCT, as well as a quick overview of the PYRO-NN API and thus of known operators in general.
2. Studying the feasibility of learning a projection matrix of a single forward projection:
• Assessing the ability to train single parameters
• Training of translations and rotations
• Attempting to estimate the complete rigid motion parameters
3. Training of a simple trajectory:
• Assessing the motion estimation of the back projection using the volume as ground truth
• Assessing the motion estimation using an undistorted sinogram
• Estimating the trajectory based only on the distorted sinogram
4. Evaluation of the training results of the experiments and description of potential applications of the results.
All implementations will be integrated into the PYRO-NN API [6].
References
[1] A. Maier, F. Schebesch, C. Syben, T. Würfl, S. Steidl, J. Choi, and R. Fahrig, “Precision learning: Towards use of known operators in neural networks,” in 2018 24th International Conference on Pattern Recognition (ICPR), pp. 183-188, 2018.
[2] A. K. Maier, C. Syben, B. Stimpel, T. Würfl, M. Hoffmann, F. Schebesch, W. Fu, L. Mill, L. Kling, and S. Christiansen, “Learning with known operators reduces maximum error bounds,” Nature Machine Intelligence, vol. 1, no. 8, pp. 373-380, 2019.
[3] W. Fu, K. Breininger, R. Schaffert, N. Ravikumar, T. Würfl, J. G. Fujimoto, E. M. Moult, and A. Maier, “Frangi-Net: A Neural Network Approach to Vessel Segmentation,” in Bildverarbeitung für die Medizin (BVM) 2018 (A. Maier, T. M. Deserno, H. Handels, K. H. Maier-Hein, C. Palm, and T. Tolxdorff, eds.), (Berlin, Heidelberg), pp. 341-346, Springer Vieweg, 2018.
[4] C. Syben, B. Stimpel, K. Breininger, T. Würfl, R. Fahrig, A. Dörfler, and A. Maier, “Precision Learning: Reconstruction Filter Kernel Discretization,” in Proceedings of the 5th International Conference on Image Formation in X-ray Computed Tomography, pp. 386-390, 2018.
[5] T. Würfl, F. C. Ghesu, V. Christlein, and A. Maier, “Deep learning computed tomography,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016 (S. Ourselin, L. Joskowicz, M. R. Sabuncu, G. Unal, and W. Wells, eds.), (Cham), pp. 432-440, Springer International Publishing, 2016.
[6] C. Syben, M. Michen, B. Stimpel, S. Seitz, S. Ploner, and A. K. Maier, “Technical note: PYRO-NN: Python reconstruction operators in neural networks,” Medical Physics, 2019.
[7] L. Feldkamp, L. C. Davis, and J. Kress, “Practical cone-beam algorithm,” Journal of the Optical Society of America A, vol. 1, pp. 612-619, 1984.
[8] J. Maier, M. Nitschke, J.-H. Choi, G. Gold, R. Fahrig, B. M. Eskofier, and A. Maier, “Inertial measurements for motion compensation in weight-bearing cone-beam CT of the knee,” 2020.