
Tumor Detection & Classification in Breast Cancer Histology Images using Deep Neural Networks

Among females, breast cancer is one of the most frequently diagnosed cancers and one of the leading causes of cancer-related death, both worldwide and in more economically developed countries. Early diagnosis significantly increases treatment success, since treatment becomes more difficult and uncertain when the disease is detected at an advanced stage. For this purpose, proper analysis of histology images is essential. Histology is the study of the microanatomy of cells, tissues, and organs as seen through a microscope.

One of the most common types of histology images, used as the basis of contemporary cancer diagnosis for at least a century, are hematoxylin and eosin (H&E) stained breast histology microscopy images [4]. During this diagnostic procedure, trained specialists evaluate both the overall and the local tissue organization in the images. However, due to the large amount of data and the complexity of the images, this task is very time-consuming and hardly viable in routine practice. Therefore, the development of software tools for automatic detection and diagnosis is a promising prospect in this field. This subject has been a rather active field of research, and the automatic detection of breast cancer from histology images is part of the ICIAR 2018 BreAst Cancer Histology (BACH) challenge. This challenge consists of two parts: classification and segmentation.

The aim of this thesis is first to design a classifier network that can recognize types of breast cancer. Then, using another network, we will classify the whole-slide images (WSIs) and perform segmentation on them. Afterwards, we want to investigate how weakly-supervised training affects our results on both image-wise labeled images (first part) and pixel-wise labeled images (second part). For this purpose, we will start by reproducing the results of the winning paper, which represents the state of the art, and then build the remaining work on top of that baseline.
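As a first building block, a patch-wise classifier can serve as the baseline. The sketch below is our own illustration of such a four-class classifier for the BACH classes (normal, benign, in situ carcinoma, invasive carcinoma) built on a standard torchvision backbone; the backbone choice and input size are assumptions, not the winning paper's setup.

```python
# Minimal sketch of a patch-wise classifier for the four BACH classes.
# Backbone and patch size are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # normal / benign / in situ carcinoma / invasive carcinoma

def build_patch_classifier():
    backbone = models.resnet50()  # pretrained ImageNet weights could be loaded here
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)
    return backbone

model = build_patch_classifier()
patches = torch.randn(8, 3, 224, 224)                       # a batch of H&E patches
logits = model(patches)                                      # (8, 4) class scores
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, NUM_CLASSES, (8,)))
```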

Deep Learning-based Denoising of Mammographic Images using Physics-driven Data Augmentation

Mammography uses low-energy X-rays to screen the human breast and is used by radiologists to detect breast cancer. Due to the complexity of this task, radiologists need impeccable image quality. For this reason, the possibility of using deep learning to denoise mammograms, and thereby help radiologists detect breast cancer more easily, will be examined. In this thesis, we aim to investigate and develop different deep learning methods for mammogram denoising.
A physically motivated noise model will be simulated on the ground truth images to generate training data. Thereafter, the variance-stabilizing Anscombe transformation is applied to obtain approximately white Gaussian noise. Using these data, different network architectures are trained and examined. For training, a novel loss function will be designed which helps to preserve fine image details crucial for breast cancer detection.
The effectiveness of this loss function is investigated, and its performance is compared against other state-of-the-art loss functions. It can be shown that the proposed method outperforms state-of-the-art algorithms such as BM3D for mammography denoising. Finally, it will be shown that the network is able to remove not only simulated but also real noise.
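To illustrate the noise simulation and variance stabilization described above, the following minimal sketch (with assumed photon-count parameters, not the thesis' physical noise model) applies Poisson noise to a clean image and maps it through the Anscombe transformation so that the noise becomes approximately Gaussian with unit variance:

```python
# Minimal sketch: simulate Poisson-like quantum noise on a clean image and
# apply the Anscombe transformation. photon_scale is an assumed parameter.
import numpy as np

def anscombe(x):
    """Variance-stabilizing Anscombe transformation for Poisson data."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (an unbiased inverse could be used instead)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def simulate_noisy_pair(clean, photon_scale=200.0, seed=None):
    """Scale a clean image (values in [0, 1]) to photon counts and add Poisson noise."""
    rng = np.random.default_rng(seed)
    counts = clean * photon_scale
    noisy = rng.poisson(counts).astype(np.float32)
    # network input and target are formed in the variance-stabilized domain
    return anscombe(noisy), anscombe(counts)

# usage with a random stand-in for a ground-truth image in [0, 1]
clean = np.random.rand(64, 64).astype(np.float32)
noisy_vst, target_vst = simulate_noisy_pair(clean)
```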

Solution to extend the Field of View of Computed Tomography using Deep Learning approaches

Deep learning has been successfully applied in various applications of computed tomography (CT). Due to the limited detector size and low-dose requirements, the problem of data truncation is inherently present in CT. The reconstructed images from such limited field-of-view (FoV) projections suffer from cupping artifacts inside the FoV and from distorted or missing anatomical structures outside the FoV [1]. One practical approach to solve the data truncation problem is to apply an extrapolation technique that increases the FoV, followed by an artifact removal technique. The water cylinder extrapolation based reconstruction [2] is a promising method that estimates the projections outside the scan field-of-view (SFoV) using knowledge from the projections inside the SFoV. Alternatively, linear extrapolation is the simplest extrapolation technique; it always increases the FoV without using any prior information, but artifacts remain visible in the reconstructed image. Recently, Fournié et al. [3] proposed a deep learning based method, “Deep EFoV”, to extend the FoV of CT images. First, the FoV is increased by linearly extrapolating the outer channels in the sinogram space. The image reconstructed from this extended FoV sinogram contains artifacts. Finally, a U-Net model is used to remove the artifacts in the reconstructed image. The output of a neural network model might, however, alter anatomical structures that lie inside the SFoV. To compensate for this effect, a standard algorithm, “HDFoV”, is used in which the projections inside the SFoV and the projections from the neural network model outside the FoV are merged.
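As an illustration of the first step of “Deep EFoV” described above, the following sketch (our own simplified assumption, not the published implementation) linearly extrapolates the outermost detector channels of a (channels × views) sinogram to enlarge the FoV before reconstruction:

```python
# Minimal sketch: extend the detector axis of a sinogram by linear
# extrapolation of the outermost channels. Parameter values are assumptions.
import numpy as np

def extend_sinogram(sinogram, pad, fit_channels=10):
    """Extend a (channels x views) sinogram by `pad` channels on each side.

    For each view, a straight line is fitted to the outermost `fit_channels`
    samples on each side and continued into the padded region.
    """
    n_ch, n_views = sinogram.shape
    extended = np.zeros((n_ch + 2 * pad, n_views), dtype=sinogram.dtype)
    extended[pad:pad + n_ch] = sinogram
    x_fit = np.arange(fit_channels)
    for v in range(n_views):
        # left edge: fit a line to the first channels and extrapolate outwards
        slope, intercept = np.polyfit(x_fit, sinogram[:fit_channels, v], 1)
        x_left = np.arange(-pad, 0)
        extended[:pad, v] = np.clip(slope * x_left + intercept, 0.0, None)
        # right edge: same for the last channels
        slope, intercept = np.polyfit(x_fit, sinogram[-fit_channels:, v], 1)
        x_right = np.arange(fit_channels, fit_channels + pad)
        extended[-pad:, v] = np.clip(slope * x_right + intercept, 0.0, None)
    return extended
```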

The aim of the master’s thesis will be to integrate the “Deep EFoV” and “HDFoV” algorithms into the C#-based proprietary reconstruction tool “ReconCT” developed by Siemens Healthineers. The result of the integrated algorithms needs to be compared with the result of the “Deep EFoV” algorithm alone. Another goal is to evaluate and improve the deep learning model proposed in “Deep EFoV” for the CT FoV extension. The model needs to be improved w.r.t. tweaking the architecture, adapting parameters, or even using a different architecture. The dataset and software provided by Siemens Healthineers will be used in the thesis. The final software needs to be integrated into “ReconCT” and has to be presented to the supervisors.

The thesis will include the following points:

• Review of the state-of-the-art method and deep learning approaches to extend the FoV
• Comparison of the proposed method “Deep EFoV” with the integrated “Deep EFoV” and “HDFoV” method
• Improvement and simplification of the proposed deep learning model in “Deep EFoV”
• Integration of the proposed model into the reconstruction tool.

 

References
[1] Y. Huang, L. Gao, A. Preuhs, and A. Maier, “Field of View Extension in Computed Tomography Using Deep
Learning Prior,” in Bildverarbeitung für die Medizin: Algorithmen – Systeme – Anwendungen, pp. 186–191,
Springer, 2020.
[2] J. Hsieh, E. Chao, J. Thibault, B. Grekowicz, A. Horst, S. McOlash, and T. J. Myers, “A novel reconstruction
algorithm to extend the CT scan field-of-view,” Medical Physics, vol. 31, no. 9, pp. 2385–2391, 2004.
[3] É. Fournié, M. Baer-Beck, and K. Stierstorfer, “CT field of view extension using combined channels extension
and deep learning methods,” in International Conference on Medical Imaging with Deep Learning – Extended
Abstract Track, (London, United Kingdom), 08–10 Jul 2019.

Geometric Deep Learning for Multifocal Diseases

Diseases are classified as multifocal if they relate to or arise from many foci. They are present in various medical disciplines, e.g. multifocal atrial tachycardia [1], breast cancer [2] or multifocal motor neuropathy [3]. However, analyzing diseases with multiple centers brings several challenges for conventional deep learning architectures. On the technical side, it is complex to handle a varying number of centers which have no unique sequence. From a medical view, it is important to model structures and relationships between the foci. The grid structure used in convolutional neural networks cannot handle non-regular neighborhoods. A suitable approach for this task is to convert the data into graph structures, where the nodes describe the properties of the foci and the edges model their mutual relationships. With geometric deep learning, it is possible to learn from such graph structures. It is an emerging field of research with many possible applications, e.g. classifying documents in citation graphs or analyzing molecular structures [4]. There also exist several medical applications, e.g. for the analysis of Parkinson's disease [5] or artery segmentation [6]. This thesis aims to investigate the applicability of this method to relatively small graphs coming from multifocal diseases. The networks are trained to predict time to events of failure as a metric for the severity of the disease. Different geometric layer architectures, such as Graph Attention Networks [7] and Differentiable Pooling [8], are investigated and compared to the performance of a conventional neural network. As we aim to create explainable models, it is intended to provide visualizations of salient sub-graphs and features of the results. In addition to that, methods to incorporate prior knowledge from the medical domain into the training process are tested to improve the speed of convergence and strengthen the medical validity of the predictions. In the end, the networks are tested on liver data.
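To make the intended graph formulation concrete, the sketch below (with assumed feature sizes, not the final thesis model) builds a small graph-level regression network with PyTorch Geometric, where each node is a focus and the pooled graph representation predicts a single time-to-event value:

```python
# Minimal sketch of graph-level time-to-event regression with graph attention
# layers. Feature dimensions and the toy graph are illustrative assumptions.
import torch
import torch.nn as nn
from torch_geometric.data import Data
from torch_geometric.nn import GATConv, global_mean_pool

class FociGAT(nn.Module):
    def __init__(self, in_dim=8, hidden=32, heads=4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden, heads=heads)
        self.gat2 = GATConv(hidden * heads, hidden, heads=1)
        self.head = nn.Linear(hidden, 1)      # time-to-event regression output

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.gat1(x, edge_index))
        x = torch.relu(self.gat2(x, edge_index))
        x = global_mean_pool(x, batch)         # one vector per patient graph
        return self.head(x).squeeze(-1)

# toy graph: three foci, fully connected
x = torch.randn(3, 8)                                      # per-focus features
edge_index = torch.tensor([[0, 0, 1, 1, 2, 2],
                           [1, 2, 0, 2, 0, 1]])            # directed edges
data = Data(x=x, edge_index=edge_index)
model = FociGAT()
pred = model(data.x, data.edge_index, torch.zeros(3, dtype=torch.long))
```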

Summary:

1. Transfer multifocal diseases to meaningful graph structures
2. Provide a conventional neural network for time-to-event regression as a baseline
3. Investigate and tune different geometric deep learning architectures
4. Visualize salient graph structures

References
[1] Jane F. Desforges and John A. Kastor. Multifocal Atrial Tachycardia. New England Journal of Medicine,
322(24):1713–1717, Jun 1990.
[2] John Boyages and Nathan J. Coombs. Multifocal and Multicentric Breast Cancer: Does Each Focus Matter?
Journal of Clinical Oncology, 23:7497–7502, 2005.
[3] Eduardo Nobile-Orazio. Multifocal motor neuropathy. Journal of Neuroimmunology, 115(1-2):4–18, Apr
2001.
[4] Michael Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric Deep
Learning: Going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
[5] Xi Zhang, Lifang He, Kun Chen, Yuan Luo, Jiayu Zhou, and Fei Wang. Multi-View Graph Convolutional
Network and Its Applications on Neuroimage Analysis for Parkinson’s Disease. AMIA Annual Symposium
Proceedings, 2018:1147–1156, 2018.
[6] Jelmer M. Wolterink, Tim Leiner, and Ivana Isgum. Graph Convolutional Networks for Coronary Artery
Segmentation in Cardiac CT Angiography. In Lecture Notes in Computer Science (including subseries
Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), volume 11849 LNCS, pages
62–69. Springer, Oct 2019.
[7] Petar Velickovic, Arantxa Casanova, Pietro Lio, Guillem Cucurull, Adriana Romero, and Yoshua Bengio.
Graph attention networks. 6th International Conference on Learning Representations, ICLR 2018 – Con-
ference Track Proceedings, pages 1–12, 2018.
[8] Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, and Jure Leskovec. Hierarchi-
cal Graph Representation Learning with Differentiable Pooling. Advances in Neural Information Processing
Systems, pages 4800–4810, Dec 2018.

Semi-Supervised Tooth Segmentation in Dental Panoramic Radiographs Using Deep Learning

In dentistry, dental panoramic radiographs are used by specialists to complement the clinical examination in the diagnosis of dental diseases, as well as in planning the treatment. They allow the visualization of dental irregularities such as missing teeth, bone abnormalities, tumors, fractures and others. Dental panoramic radiographs are a form of extra-oral radiographic examination, meaning the patient is positioned between the radiographic film and the X-ray source. The scan describes a half-circle from ear to ear, showing a two-dimensional view of the upper and lower jaw. In contrast to intra-oral radiographs, like bitewing and periapical radiographs, dental panoramic radiographs are not restricted to an isolated part of the teeth and also show the skull, chin, spine and other details originating from the bones of the nasal and facial areas, making these images much more difficult to analyze.

An automatic segmentation method that isolates parts of dental panoramic radiographs could be a first step towards helping dentists in their diagnoses, and tooth segmentation could be the first step towards an automated analysis of dental radiographs. In this thesis, the labeled data by Jader et al. will be used, supplemented by a dataset of 120,000 unlabeled images provided by the University Hospital Erlangen. It will be investigated how reasonable segmentation results can be achieved on a large unlabeled dataset by utilizing a smaller annotated dataset from a different source. For this purpose, different bootstrapping methods will be analyzed to improve the segmentation results using semi-supervised learning.
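A simple bootstrapping variant that could be analyzed is self-training with pseudo-labels. The sketch below (our own simplified assumption, not the final thesis method) shows one such round for a binary tooth-segmentation model; data loader formats, thresholds and the acceptance criterion are illustrative only:

```python
# Minimal sketch of one self-training (bootstrapping) round for binary tooth
# segmentation. Loader formats and thresholds are illustrative assumptions.
import torch

@torch.no_grad()
def generate_pseudo_labels(model, unlabeled_loader, device, threshold=0.9):
    """Keep unlabeled images on which the model is confident almost everywhere."""
    model.eval()
    pseudo_set = []
    for images in unlabeled_loader:                      # loader yields image tensors
        images = images.to(device)
        probs = torch.sigmoid(model(images))             # per-pixel tooth probability
        confident = (probs > threshold) | (probs < 1.0 - threshold)
        if confident.float().mean() > 0.95:              # accept mostly-confident images
            pseudo_set.append((images.cpu(), (probs > 0.5).float().cpu()))
    return pseudo_set

def bootstrap_round(model, labeled_loader, unlabeled_loader, optimizer, loss_fn, device):
    """Retrain on the labeled data plus the newly pseudo-labeled images."""
    pseudo_set = generate_pseudo_labels(model, unlabeled_loader, device)
    model.train()
    for images, masks in list(labeled_loader) + pseudo_set:
        optimizer.zero_grad()
        loss = loss_fn(model(images.to(device)), masks.to(device))
        loss.backward()
        optimizer.step()
```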

Age Estimation on Panoramic Dental X-ray Images Using Deep Learning

X-rays are widely used in diagnostic medical imaging; in this bachelor's thesis, they will be used for automatic age determination. Radiographs of the jaw can provide important clues about a person's age, because dental growth is less influenced by diet and hormones than skeletal growth. Compared with histological and biochemical methods, X-ray imaging is significantly faster and easier to perform.

As dental tissue is usually very well preserved after death and remains fairly unchanged for thousands of years, its analysis is widely used in forensics. Age determination on living persons is carried out, for instance, to determine whether a child has reached the age of criminal responsibility or the age of majority if no birth certificate is available.

However, the accuracy of age determination by physicians is frequently doubted. On average, the age estimate for children and adolescents is off by about half a year, and by about two years in the case of particularly serious misestimates. For adults, the result is usually even less accurate. Therefore, in the context of this bachelor's thesis, an attempt will be made to develop a deep learning algorithm for age estimation. Since promising results have already been achieved with deep learning in other areas of medical image analysis, automated solutions could support physicians in estimating the age of a patient in order to achieve more reliable results. The neural networks will be trained with a data set of 12,000 panoramic dental X-rays labeled with the age of the patients in days, provided by the University Hospital Erlangen; the aim is thus to develop a supervised approach. Since convolutional neural networks (CNNs) have already achieved good results in other areas of medical image analysis [4], they will also be used for this task.
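A straightforward supervised formulation is to attach a single regression output to a standard CNN and train it with an absolute-error loss on the age in days. The sketch below is only an illustration of this idea with an assumed backbone and input size, not the final network:

```python
# Minimal sketch of a CNN regressing the patient age in days from a panoramic
# radiograph. Backbone, input size and example labels are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def build_age_regressor():
    backbone = models.resnet18()  # pretrained ImageNet weights could be loaded here
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # single output: age in days
    return backbone

model = build_age_regressor()
images = torch.randn(4, 3, 224, 224)        # grayscale X-rays replicated to 3 channels
ages_days = torch.tensor([[3650.0], [5110.0], [7300.0], [10950.0]])
loss = nn.L1Loss()(model(images), ages_days)  # mean absolute error in days
```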

Weakly Supervised Learning for Multi-modal Breast Lesion Classification in Ultrasound and Mammogram Images

Breast cancer has become one of the most common and leading types of cancer in women, accounting for 11.6 percent of all cancer deaths worldwide, and the mortality rate has been increasing in recent years. It must be emphasized that early detection of a breast tumor widens the options for early treatment and thus helps to control the mortality rate among women. There are different diagnostic imaging modalities that help doctors diagnose whether a patient is at risk of having a cancerous tumor.

Imaging modalities like ultrasound and mammography are both used for screening of breast lesions. Mammography, on the one hand, uses a low radiation dose and captures the breast as a 2-D image. Ultrasound, on the other hand, uses high-frequency sound waves and captures the breast as a 3-D image. Both modalities capture different useful information with their acquisition methods. Patients usually undergo mammography for initial lesion detection, but due to its low sensitivity small tumors in heavy and dense breasts may be missed. Patients with highly suspected abnormalities are further examined with ultrasound. Ultrasound images give more detailed information about the surrounding area of concern and hence also help radiologists further assess the lesion.

The main aim of this thesis is to investigate the performance of deep learning models for the classification of breast lesions using datasets of ultrasound and mammogram images individually. Further, based on the evaluation of the performance of these models, we will build a single deep learning model which combines the information from both the ultrasound and the mammogram imaging modalities. An analysis of the performance of the fused and the individual models will also be performed.
The dataset that will be used to train the models consists of volumetric ultrasound images and 2-D mammogram images and is provided by the University Clinics Erlangen. Weakly supervised approaches will be used, with the classification labels defined at image level without further localisation. There are 468 patient files consisting of ultrasound and mammogram images of healthy and non-healthy patients; the latter can have either benign or malignant lesions.
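One possible way to combine both modalities is late fusion: a 3-D branch encodes the volumetric ultrasound, a 2-D branch encodes the mammogram, and the concatenated features predict the image-level label. The sketch below uses assumed shapes and toy-sized branches and is not the final thesis architecture:

```python
# Minimal sketch of a late-fusion classifier for 3-D ultrasound and 2-D
# mammogram inputs. Shapes, branch sizes and class count are assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, num_classes=3):                     # healthy / benign / malignant
        super().__init__()
        self.us_branch = nn.Sequential(                    # volumetric ultrasound branch
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.mg_branch = nn.Sequential(                    # 2-D mammogram branch
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(8 + 8, num_classes)

    def forward(self, ultrasound, mammogram):
        feats = torch.cat([self.us_branch(ultrasound),
                           self.mg_branch(mammogram)], dim=1)
        return self.head(feats)

model = FusionClassifier()
logits = model(torch.randn(2, 1, 32, 64, 64),   # ultrasound volumes
               torch.randn(2, 1, 256, 256))     # mammograms
```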

Unsupervised Domain Adaptation using Adversarial Learning for Multi-modal Cardiac MR Segmentation

Recently, numerous adversarial learning based domain adaptation methods for semantic segmentation have been proposed. For example, Vu et al. minimized the entropy of the prediction and also introduced an entropy discriminator to distinguish source entropy maps from target entropy maps. In 2018, Tsai et al. found that the output space contains rich information and therefore proposed an output-space discriminator. Both methods have achieved promising results in street-scene segmentation, while for medical image segmentation we can additionally take advantage of the information contained in the shape of the organs. For instance, point clouds can be used to create 3D models that incorporate the shape representation as prior information. Cai et al. introduced the organ point network. It takes deep learning features as input and generates the shape representation as a set of points located on the organ surface. They optimized the segmentation task with the point network as an auxiliary task so that the shared parameters could benefit from both tasks. They also proposed a point cloud discriminator to guide the model to capture the shape information better.

We aim to combine the ideas from the previous works and investigate the impact of output space and entropy discriminators for multi-modality cardiac image segmentation. We want to employ point cloud classification as an auxiliary task, and introduce a point cloud discriminator to discriminate the source point cloud from the target point cloud.
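To make the output-space adversarial idea concrete, the sketch below (our own simplified formulation in the spirit of Tsai et al., not the final thesis model) shows a small fully convolutional discriminator on softmax segmentation maps and the corresponding segmentation and discriminator losses; the entropy and point cloud discriminators would be added analogously:

```python
# Minimal sketch of output-space adversarial domain adaptation for
# segmentation. Network sizes and the loss weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutputSpaceDiscriminator(nn.Module):
    """Fully convolutional discriminator operating on softmax segmentation maps."""
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=2, padding=1))     # per-patch source/target logit

    def forward(self, softmax_map):
        return self.net(softmax_map)

def adaptation_losses(seg_net, disc, src_img, src_mask, tgt_img, lam=0.001):
    """Supervised source loss plus adversarial terms for one training step."""
    bce = nn.BCEWithLogitsLoss()
    src_logits, tgt_logits = seg_net(src_img), seg_net(tgt_img)
    src_prob, tgt_prob = F.softmax(src_logits, dim=1), F.softmax(tgt_logits, dim=1)

    # segmentation network: supervised on source + fool the discriminator on target
    d_tgt = disc(tgt_prob)
    seg_loss = F.cross_entropy(src_logits, src_mask) \
        + lam * bce(d_tgt, torch.ones_like(d_tgt))

    # discriminator: source outputs labeled 1, target outputs labeled 0
    d_src, d_tgt = disc(src_prob.detach()), disc(tgt_prob.detach())
    disc_loss = 0.5 * (bce(d_src, torch.ones_like(d_src))
                       + bce(d_tgt, torch.zeros_like(d_tgt)))
    return seg_loss, disc_loss
```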

Marker Detection Using Deep Learning for Universal Navigation Interface

In the contemporary practice of medicine, minimally invasive spine surgery (MISS) is widely performed to avoid
the damage to the muscles surrounding the spine. Compared with traditional open surgeries, patients with MISS
suffer from less pain and can recover faster. For MISS, computer assisted navigation systems play a very important
role. Image guided navigation can deliver more accurate pedicle screw placement compared to conventional surgical
techniques. It also reduces the amount of X-ray exposure to surgeons and patients. In computer assisted navigation
for MISS, registration between preoperative images (typically 3D CT volumes) and intraoperative images (typically
2D fluoroscopic X-ray images) is usually a step of critical importance. To perform such registration, various markers
[1] are used. Such markers need to be identified in the preoperative CT volumes. In practice, due to the limited
detector size, the markers might be located outside the field-of-view of the imaging systems (typically C-arm or O-arm
systems) for large patients. Therefore, the markers are only acquired in projections at certain view angles. As a
consequence, the reconstructed markers in the 3D CT volumes suffer from artifacts and have distorted shapes, which
cause difficulty for marker detection. In the scope of this master’s thesis, we aim to improve the image quality of CT
reconstructions from such truncated projections using deep learning [2, 3] so that a universal navigation interface is
able to detect markers without any vendor specific information. Alternatively, general marker detection directly in
X-ray projection images before 3D reconstruction using deep learning will also be investigated.
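For the simulation part, truncated projection data containing markers can be generated in a simplified 2-D setting. The following sketch (a toy example using skimage's radon transform, not the Siemens simulation pipeline) inserts disk-shaped markers into a phantom, forward projects it, and zeroes the outer detector channels to mimic a limited field of view:

```python
# Toy 2-D simulation: disk-shaped markers inserted into a phantom, projections
# computed with skimage's radon transform, outer channels zeroed (truncation).
import numpy as np
from skimage.transform import radon, iradon

def add_markers(image, centers, radius=4, value=2.0):
    """Insert disk-shaped markers at the given (row, col) centers."""
    rows, cols = np.ogrid[:image.shape[0], :image.shape[1]]
    for r, c in centers:
        image[(rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2] = value
    return image

# elliptical phantom with two markers close to the border
phantom = np.zeros((256, 256), dtype=np.float32)
yy, xx = np.ogrid[:256, :256]
phantom[((yy - 128) / 100.0) ** 2 + ((xx - 128) / 80.0) ** 2 <= 1.0] = 1.0
phantom = add_markers(phantom, centers=[(128, 40), (40, 128)])

# full sinogram (detector channels x view angles), then truncation
theta = np.linspace(0.0, 180.0, 360, endpoint=False)
sinogram = radon(phantom, theta=theta, circle=False)
n_det = sinogram.shape[0]
truncated = np.zeros_like(sinogram)
keep = slice(n_det // 4, 3 * n_det // 4)          # keep the central 50 % of channels
truncated[keep] = sinogram[keep]

# naive filtered back-projection of the truncated data shows truncation artifacts
recon_truncated = iradon(truncated, theta=theta, circle=False)
```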

The thesis will include the following points:

• Literature review on deep learning CT truncation correction and deep learning marker detection;
• Simulation of CT data with various marker sizes and shapes;
• Implementation of our U-Net based deep learning method [3] with extension to high resolution reconstruction;
• Performance evaluation of our U-Net based deep learning method on the application of marker reconstruction;
• Investigation of deep learning methods on marker segmentation directly in 2D projections;
• Reconstruction of 3D markers based on segmented marker projections.

References
[1] S. Virk and S. Qureshi, “Navigation in minimally invasive spine surgery,” Journal of Spine Surgery, vol. 5,
no. Suppl 1, p. S25, 2019.
[2] É. Fournié, M. Baer-Beck, and K. Stierstorfer, “CT field of view extension using combined channels extension
and deep learning methods,” in Proceedings of Medical Imaging with Deep Learning, 2019.
[3] Y. Huang, L. Gao, A. Preuhs, and A. Maier, “Field of view extension in computed tomography using deep learning
prior,” in Bildverarbeitung für die Medizin 2020, pp. 186–191, Springer, 2020.

Synthetic Image Rendering for Deep Learning License Plate Recognition

The recognition of license plates is usually considered a rather simple task that a human is perfectly capable of. However, there exist many factors (e.g. fog or rain) that can significantly worsen the image quality and therefore increase the difficulty of recognizing a license plate. In addition, further factors, e.g. low resolution or a small size of the license plate section, may increase the difficulty up to a point where even humans are unable to identify it.
A possible approach to solve this problem is to build and train a neural network using collected image data. In theory, this should yield a high success rate and outperform a human. However, a huge number of images that also fulfill certain criteria is needed in order to reliably recognize plates in different situations.
That is the reason why this thesis aims at building and training a neural network, based on an existing CNN [1], for recognizing license plates using training data that is artificially created. This ensures that enough images are provided, while facilitating the possibility of adding image effects to simulate many possible situations. The needed images can be created using Blender: it offers the option to create a 3D model of a license plate, as well as options to simulate certain weather conditions like fog or rain, while also providing an API to automate the creation process. This way, nearly all relevant cases can be covered, and the described procedure aims to maximize the success rate of the license plate recognition.
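A minimal rendering script along these lines could look as follows (a sketch for Blender 2.8+ with hypothetical file paths; weather effects such as fog or rain are only indicated by a comment and would be added via world volume or compositor nodes):

```python
# Minimal Blender (2.8+) Python sketch: render a textured plate-like plane to
# an image file. The texture and output paths are hypothetical placeholders.
import math
import bpy

PLATE_TEXTURE = "/path/to/plate_texture.png"   # hypothetical input texture
OUTPUT_PATH = "/tmp/plate_render.png"          # hypothetical output file

bpy.ops.wm.read_factory_settings(use_empty=True)   # start from an empty scene
scene = bpy.context.scene

# plate geometry: a plane scaled roughly to an EU license plate aspect ratio
bpy.ops.mesh.primitive_plane_add(size=1.0, location=(0.0, 0.0, 0.0))
plate = bpy.context.active_object
plate.scale = (0.52, 0.11, 1.0)

# material with the plate texture as base color
mat = bpy.data.materials.new(name="PlateMaterial")
mat.use_nodes = True
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load(PLATE_TEXTURE)
bsdf = mat.node_tree.nodes["Principled BSDF"]
mat.node_tree.links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])
plate.data.materials.append(mat)

# camera and light
bpy.ops.object.camera_add(location=(0.0, -1.5, 0.3),
                          rotation=(math.radians(78.0), 0.0, 0.0))
scene.camera = bpy.context.active_object
bpy.ops.object.light_add(type='SUN', location=(0.0, -1.0, 2.0))

# weather effects (fog, rain, dirt) would be added here, e.g. via world volume
# shaders or compositor nodes; omitted in this sketch.

# render to file
scene.render.filepath = OUTPUT_PATH
scene.render.resolution_x = 256
scene.render.resolution_y = 64
bpy.ops.render.render(write_still=True)
```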

The thesis consists of the following steps:

• Creating a training data set consisting of generated license plate images (Blender Python API)
• Fitting the parameters of the deep learning model
• Evaluation of the model fit on datasets with real license plate images

References
[1] Benedikt Lorch, Shruti Agarwal, and Hany Farid. Forensic Reconstruction of Severely
Degraded License Plates. In Society for Imaging Science & Technology, editor,
Electronic Imaging, Jan 2019.