Despite many medical breakthroughs, breast cancer remains one of the leading causes of death from malignant disease among women, accounting for one in ten cancer diagnoses each year [1]. Because of its silent progression, breast cancer diagnosis requires regular screening. The gold standard for this procedure is X-ray mammography, which is used particularly in older women. As with most cancers, the survival rate increases significantly with early diagnosis.
Commonly, diagnosing breast cancer requires a trained radiologist, who examines the individual mammography images of a patient while adjusting image properties, such as brightness or contrast, to better visualize anatomical or pathological structures. This manual form of diagnosis is time- and resource-consuming. Furthermore, it carries a considerable risk of false positives and false negatives [2], as the diagnosis is, to a certain extent, subject to the radiologist’s interpretation. The demand for accelerating and supporting the diagnostic process has therefore increased in recent years. At the same time, the rapid advancement of machine learning has given rise to new research on classifying malignant structures in medical imaging, especially in mammography, facilitated by deep learning.
Early detection of malignant structures in mammography images with the help of deep learning is challenging for several reasons. Most publicly available databases lack annotations, preventing deep learning models from unfolding their full potential of discovering the malignant region of interest. Furthermore, image properties such as overall brightness and contrast may differ between images because of varying acquisition protocols or scanner models. Large variations in these properties introduce noise that cannot be addressed by simply adjusting the window width and window level of the displayed image. This inhomogeneity can degrade the performance of a machine learning model.
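To illustrate the window-width/window-level adjustment mentioned above, the following sketch (a hypothetical helper for illustration, not part of this work's pipeline) maps raw pixel intensities linearly to a normalized display range, clipping values outside the window:

```python
def apply_window(pixels, level, width):
    """Map raw intensities to [0, 1] using a display window.

    `level` is the window center and `width` the window size:
    intensities below (level - width/2) clip to 0.0, intensities
    above (level + width/2) clip to 1.0, and values in between
    are scaled linearly.
    """
    lo = level - width / 2.0
    return [min(max((p - lo) / width, 0.0), 1.0) for p in pixels]

# A window centered at 50 with width 100 maps 0 -> 0.0,
# 50 -> 0.5, and 100 -> 1.0.
windowed = apply_window([0, 50, 100], level=50, width=100)
```

Because such a remapping only shifts and rescales intensities globally, it cannot compensate for the protocol-dependent inhomogeneity described above.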
For this work, Full-Field Digital Mammography (FFDM) images from 283 patients, provided by the Women’s Hospital of the University Hospital in Erlangen, are inspected and processed. About 15% of the acquired data shows large variations in brightness and contrast, introducing inhomogeneity into the data and preventing a deep learning model from accurately detecting breast lesions. This problem could be addressed by removing inhomogeneous training samples. While such an approach would improve classification performance compared to training on the full dataset, it also prevents the model from leveraging all available information, as the number of training samples is reduced.
This thesis aims at analyzing and solving this challenge by transforming inhomogeneous images into homogeneous ones, thereby increasing the number of available training samples while simultaneously reducing the influence of inhomogeneous data. The transformation is performed with generative learning on the described dataset: the proposed method approximates the joint probability P(x, y) of an original (inhomogeneous) image x and a generated (homogeneous) image y using a generator. The method builds on the work of Armanious et al., in which medical images were translated into various domains with the help of generative learning. Their frameworks, MedGAN [3] and Cycle-MedGAN [4], utilize conditional Generative Adversarial Networks (cGANs) to learn a mapping between the original source domain and the synthetic target domain in an unsupervised manner.
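As a reminder of the underlying principle (this is the standard cGAN formulation, not a contribution of this thesis), a conditional GAN trains a generator G against a discriminator D on the two-player objective

$$
\min_G \max_D \; \mathbb{E}_{x,y}\left[\log D(x, y)\right] + \mathbb{E}_{x}\left[\log\left(1 - D\big(x, G(x)\big)\right)\right],
$$

where, in the setting above, x is an inhomogeneous source image and G(x) its homogenized translation. Cycle-MedGAN [4] additionally imposes cycle-consistency losses, which removes the need for paired training samples.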
The thesis consists of the following milestones:
- First, analyzing the performance of a baseline model for the detection of breast lesions using a reduced (homogeneous) portion of the dataset.
- Second, building and optimizing the GAN-models for the homogenization of mammograms.
- Finally, evaluating the lesion detection performance when including the homogenized mammograms in the training process and comparing it with the baseline model.
- Additionally, if time allows: retraining and evaluating the models on a publicly available dataset.
[1] Fadi M. Alkabban and Troy Ferguson. Breast cancer. In StatPearls [Internet]. StatPearls Publishing, 2019.
[2] Li Shen, Laurie R. Margolies, Joseph H. Rothstein, Eugene Fluder, Russell B. McBride, and Weiva Sieh. Deep learning to improve breast cancer early detection on screening mammography. arXiv preprint arXiv:1708.09427, 2017.
[3] Karim Armanious, Chenming Yang, Marc Fischer, Thomas Küstner, Konstantin Nikolaou, Sergios Gatidis, and Bin Yang. MedGAN: Medical image translation using GANs. CoRR, abs/1806.06397, 2018.
[4] Karim Armanious, Chenming Jiang, Sherif Abdulatif, Thomas Küstner, Sergios Gatidis, and Bin Yang. Unsupervised medical image translation using Cycle-MedGAN. CoRR, abs/1903.03374, 2019.