Index

Deep Feature Learning and Clustering – A Fully Unsupervised Approach for Identifying Orca Communication Patterns

Generative Adversarial Networks for Speech Vocoding

Semi-supervised Feature Learning for Orca Audio Signals using a Convolutional Autoencoder

Classification of Rotator Cuff Tears in MRI using Neural Networks

Automatic solar panel recognition, fault detection and localization in thermal images

Improved image quality of Limited Angle and Sparse View SPECT using Deep Learning

During a Single Photon Emission Computed Tomography (SPECT) scan, the distribution of gamma-ray-emitting tracers is measured using detectors that rotate stepwise around the longitudinal axis of the patient. Limited-angle acquisition occurs if less than a full 360° rotation is completed, and sparse-view acquisition if the step size between detector positions is increased. Both types degrade the image quality. In this thesis, a U-Net architecture that had already been successfully applied to improve the image quality of limited-angle Computed Tomography was tested on sparse-view SPECT with an angular sampling of 9° and 18° as well as limited-angle SPECT of 240° and 180°. The data were artificially created by simulating different geometric shapes and letters. After the best hyperparameters and pre- and postprocessing steps had been determined (namely the Adam optimizer with a learning rate of 0.001, a perceptual loss function, and normalization during preprocessing), the U-Net was trained and tested on the aforementioned sparse-view and limited-angle problems separately and in selected combinations. Besides a subjective visual evaluation of the image quality, the structural similarity index (SSIM) was used as a metric. The U-Net was able to improve the image quality of most sparse-view and limited-angle SPECT images, the exception being limited-angle SPECT with 240°. When trained in combination with the best-performing sparse-view SPECT data (18° angular sampling), the predictions for the sparse-view dataset with 9° angular sampling and the limited-angle dataset with 180° also showed improved results. The results suggest that the U-Net achieves the biggest improvements in the predicted images on the datasets with the biggest underlying artefacts. During the experiments it became apparent that the U-Net tends to predict additional artefacts in images that mainly depict the background.
The susceptibility of the U-Net to certain image structures was also explored by the original authors of the network [Hua18], who proposed an additional iterative method to tackle this problem [Hua19]. Furthermore, investigations with real patient data have to be carried out to evaluate the possible benefits of deep learning methods for sparse-view and limited-angle SPECT in a clinical setting.
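The structural similarity index used as the evaluation metric above can be illustrated with a minimal sketch. This is a simplified, single-window variant of SSIM computed over the whole image (library implementations such as scikit-image use a sliding window and average local scores); the constants follow the common choice of K1 = 0.01 and K2 = 0.03.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified SSIM computed globally over two images of equal shape.

    Illustrative only: standard implementations evaluate the same
    formula in a sliding local window and average the results.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

For identical images the covariance equals the variance, so the score is exactly 1; it decreases as structural agreement between prediction and ground truth degrades.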

Synthetic generation of CT images from non-attenuation-corrected FDG-PET images using GANs and its application to whole-body PET/CT registration

The primary aim of this research is to implement a Generative Adversarial Network (GAN) to synthesize CT images from non-attenuation-corrected (NAC) FDG-PET images. Registration of multi-modality images (NAC-PET to CT) is a challenging problem due to the variability of tissue and organ appearance. Hence, in order to reduce this variability, this work will investigate the use of GAN-generated synthetic CT images to perform PET/CT registration.

Convolutional Neural Networks for multi-organ segmentation of SPECT projections

In this work we investigate the use of deep learning techniques on SPECT data to solve a multi-organ segmentation problem. We extract projections from 21 Lu-177 MELP SPECT scans and obtain the corresponding ground-truth labels from the accompanying CT scans by forward-projecting 3D CT organ segmentations. We train a U-Net to predict the areas of the kidney, spleen, liver, and background seen in the projection data, using a weighted dice loss between prediction and target labels to account for class imbalance.
With our method we achieved a mean dice coefficient of 72 % on the test set, encouraging us to perform further experiments using the U-Net.
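The weighted dice loss mentioned above can be sketched as follows. This is a minimal NumPy version assuming channel-first arrays of per-class probabilities and one-hot targets; the actual thesis implementation (framework, weighting scheme, smoothing) may differ.

```python
import numpy as np

def weighted_dice_loss(pred, target, weights, eps=1e-7):
    """Soft dice loss with per-class weights.

    pred, target: arrays of shape (num_classes, H, W) holding class
    probabilities and one-hot labels, respectively.
    weights: per-class weights used to counter class imbalance
    (e.g. a small weight for the dominant background class).
    """
    intersection = (pred * target).sum(axis=(1, 2))           # per class
    denom = pred.sum(axis=(1, 2)) + target.sum(axis=(1, 2))   # per class
    dice = (2.0 * intersection + eps) / (denom + eps)         # soft dice
    w = np.asarray(weights, dtype=float)
    return 1.0 - float((w * dice).sum() / w.sum())
```

A perfect prediction yields a loss of (nearly) 0, while a completely wrong one approaches 1; up-weighting the organ classes relative to the background keeps the large background region from dominating the gradient.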