Index

The UKER BrainMet Dataset: A brain metastasis dataset from University Hospital Erlangen

Brain Metastasis Synthesis Using Deep Learning in MRI Images

GAN-based Synthetic Chest X-ray Generation for Training Lung Disease Classification Systems

Project description

With the rise and ever-growing potential of Deep Learning (DL) techniques in recent years, new opportunities have emerged in the field of medical image processing, in particular in fundamental application areas such as image detection and recognition, image segmentation, image registration, and computer-aided diagnosis. However, DL techniques are known to require very large amounts of data to train the underlying neural networks, which can be a problem due to limited data availability. In recent years, the public release of medical image data has increased and has led to significant advances in the scientific community. For instance, publicly available large-scale chest X-ray datasets have enabled the development of novel systems for automated lung abnormality classification [1, 2]. Recent work, however, has shown that DL techniques can also be used maliciously, e.g., for linkage attacks on public chest X-ray datasets [3]. This constitutes a serious issue in terms of data security and patient privacy, as a potential attacker may leak information (e.g., age, gender, diseases, and more) about a specific patient present in a public dataset. To alleviate such privacy concerns, the question arises whether the exclusive use of synthetically generated images can represent a viable alternative for the development of diagnostic algorithms in the medical field.

In this work, we investigate whether synthetically generated chest X-ray images can be used to train a reliable classification system for lung diseases. To this end, we will use different approaches, e.g., [4–6], to synthesize realistic-looking chest X-ray scans from a real data distribution, focusing on ensuring that characteristic disease patterns are preserved in the generated images. For our experiments, we will employ the NIH ChestX-ray14 dataset [7], a collection of 112,120 frontal-view chest X-ray images from 30,805 unique patients with fourteen text-mined disease labels.
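
As a rough illustration of the label-conditional synthesis idea in [5, 6], the following PyTorch sketch shows a minimal conditional generator that maps a noise vector and a multi-hot disease label vector to a single-channel image. All layer sizes and the 128 x 128 output resolution are illustrative assumptions and not the final thesis architecture.

import torch
import torch.nn as nn

NUM_LABELS = 14   # the fourteen ChestX-ray14 disease labels
LATENT_DIM = 100

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # project noise + multi-hot label vector to a 4x4 feature map
            nn.ConvTranspose2d(LATENT_DIM + NUM_LABELS, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),  # 8x8
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),  # 16x16
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),   # 32x32
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1, bias=False),    # 64x64
            nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1, bias=False),     # 128x128, single-channel X-ray
            nn.Tanh(),
        )

    def forward(self, z, labels):
        # condition by concatenating the multi-hot label vector to the latent code
        zy = torch.cat([z, labels], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(zy)

# usage: sample a batch of synthetic scans for a requested label combination
g = ConditionalGenerator()
z = torch.randn(8, LATENT_DIM)
y = torch.zeros(8, NUM_LABELS)
y[:, 3] = 1.0                 # e.g. request images showing the fourth disease label
fake_xrays = g(z, y)          # shape (8, 1, 128, 128)

An auxiliary-classifier variant in the spirit of [6] would additionally let the discriminator predict the label vector, which is one way to encourage the preservation of characteristic disease patterns.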

The Master’s thesis covers the following aspects:

  1. Overview of the current state-of-the-art in DL for the generation of synthetic medical image data.
  2. Building one or multiple GAN-based image generation networks, which includes:
    • Hyper-parameter tuning
    • Analyzing the performance of the networks
    • Analyzing the realism of the generated images
  3. Evaluating the feasibility of using synthetically generated chest X-ray images for training a lung disease classification system.
  4. Outlining strategies and research directions to enhance the preservation of patient privacy in public datasets (optional).

All DL methods will be implemented using PyTorch [8].
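
As an illustration of aspect 3 above, the following hedged PyTorch sketch trains a standard multi-label classifier on synthetic images only and evaluates it on held-out real ChestX-ray14 images. The data loaders synthetic_loader and real_test_loader are assumed to exist and to yield (image, multi-hot label) batches; the DenseNet-121 backbone follows common practice for this dataset (cf. [1]) but is not prescribed by the project.

import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 14  # ChestX-ray14 disease labels

# DenseNet-121 with a 14-way multi-label head; images are assumed to be
# replicated to three channels for the ImageNet-style backbone.
model = models.densenet121(num_classes=NUM_LABELS)
criterion = nn.BCEWithLogitsLoss()            # independent sigmoid per label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_on_synthetic(synthetic_loader, epochs=10):
    # train exclusively on generator output (no real images)
    model.train()
    for _ in range(epochs):
        for images, labels in synthetic_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

@torch.no_grad()
def evaluate_on_real(real_test_loader):
    # collect predictions on the real test split; per-label AUROC can then be
    # computed, e.g. with sklearn.metrics.roc_auc_score
    model.eval()
    probs, targets = [], []
    for images, labels in real_test_loader:
        probs.append(torch.sigmoid(model(images)))
        targets.append(labels)
    return torch.cat(probs), torch.cat(targets)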

 

References

[1] S. Gündel, S. Grbic, B. Georgescu, S. Liu, A. Maier, and D. Comaniciu, “Learning to Recognize Abnormalities in Chest X-Rays with Location-Aware Dense Networks,” in Iberoamerican Congress on Pattern Recognition, pp. 757–765, Springer, 2018.

[2] S. Gündel, A. A. Setio, F. C. Ghesu, S. Grbic, B. Georgescu, A. Maier, and D. Comaniciu, “Robust classification from noisy labels: Integrating additional knowledge for chest radiography abnormality assessment,” Medical Image Analysis, vol. 72, p. 102087, 2021.

[3] K. Packhäuser, S. Gündel, N. Münster, C. Syben, V. Christlein, and A. Maier, “Is Medical Chest X-ray Data Anonymous?,” arXiv preprint arXiv:2103.08562, 2021.

[4] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” Advances in Neural Information Processing Systems, vol. 27, 2014.

[5] M. Mirza and S. Osindero, “Conditional Generative Adversarial Nets,” arXiv preprint arXiv:1411.1784, 2014.

[6] A. Odena, C. Olah, and J. Shlens, “Conditional Image Synthesis with Auxiliary Classifier GANs,” in International Conference on Machine Learning, pp. 2642–2651, PMLR, 2017.

[7] X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, “ChestX-ray8: Hospital-Scale Chest X-Ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

[8] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al., “PyTorch: An Imperative Style, High-Performance Deep Learning Library,” Advances in Neural Information Processing Systems, vol. 32, pp. 8026–8037, 2019.

 

Exploring Style-transfer techniques on Greek vase paintings for enhancing pose-estimation

Multi-stage Patch based U-Net for Text Line Segmentation of Historical Documents

Deep Learning-based Bleed-through Removal in Historical Documents

Disentangling Visual Attributes for Inherently Interpretable Medical Image Classification


Project description:

Interpretability is essential for deep neural network approaches applied to critical scenarios such as medical image processing. Current gradient-based [1] and counterfactual image-based [2] interpretability approaches can only provide information about where the evidence is; we also want to know what the evidence is. In this Master's thesis project, we will build an inherently interpretable classification method. This classifier will learn disentangled features that are semantically meaningful and that, in future work, can be related to corresponding clinical concepts.
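
For reference, gradient-based attribution in the style of Grad-CAM [1] can be sketched in a few lines of PyTorch using forward and backward hooks. The ResNet-18 backbone and the random input below are placeholders to keep the sketch self-contained; they are not part of this project.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18()           # placeholder CNN classifier (untrained)
model.eval()
activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)     # placeholder input image
logits = model(x)
cls = logits.argmax(dim=1).item()   # explain the top-scoring class
model.zero_grad()
logits[0, cls].backward()

# weight each feature map by its average gradient, then ReLU and upsample
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heatmap in [0, 1]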

 

This project is based on a previously proposed visual feature attribution method [3], which can generate class-relevant attribution maps for a given input disease image. We will extend this method to generate class-relevant shape variations and design an inherently interpretable classifier that uses only the disentangled features (class-relevant intensity and shape variations). The method can be further extended by disentangling additional semantically meaningful and causally independent features such as texture, shape, and background, as in [4].
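
A minimal sketch of the additive attribution idea behind [3] is given below: a map generator M produces a per-pixel change M(x) such that a diseased image x + M(x) becomes indistinguishable from healthy images under a Wasserstein critic, while an L1 penalty keeps the map sparse. The tiny architectures, the L1 weight, and the omission of a Lipschitz constraint (gradient penalty or weight clipping) are simplifying assumptions; the planned shape-variation extension would add a separately disentangled deformation component on top of this.

import torch
import torch.nn as nn

class MapGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)          # additive attribution map M(x)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

M, D = MapGenerator(), Critic()
opt_m = torch.optim.Adam(M.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.0, 0.9))
l1_weight = 10.0                    # sparsity weight on the map, an assumed hyper-parameter

def training_step(x_diseased, x_healthy):
    # critic step: healthy images should score higher than "corrected" diseased ones
    d_loss = D(x_diseased + M(x_diseased).detach()).mean() - D(x_healthy).mean()
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # generator step: make x + M(x) look healthy while keeping M(x) small
    m = M(x_diseased)
    g_loss = -D(x_diseased + m).mean() + l1_weight * m.abs().mean()
    opt_m.zero_grad()
    g_loss.backward()
    opt_m.step()
    return d_loss.item(), g_loss.item()

# one illustrative step on random placeholder batches
training_step(torch.rand(4, 1, 128, 128), torch.rand(4, 1, 128, 128))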

 

References

[1] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618–626, 2017.

[2] Cher Bass, Mariana da Silva, Carole Sudre, Petru-Daniel Tudosiu, Stephen M. Smith, and Emma C. Robinson. ICAM: Interpretable classification via disentangled representations and feature attribution mapping. arXiv preprint arXiv:2006.08287, 2020.

[3] Christian F. Baumgartner, Lisa M. Koch, Kerem Can Tezcan, Jia Xi Ang, and Ender Konukoglu. Visual feature attribution using Wasserstein GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8309–8319, 2018.

[4] Axel Sauer and Andreas Geiger. Counterfactual generative networks. arXiv preprint arXiv:2101.06046, 2021.

MR automated image quality assessment

Virtual contrast enhancement of breast MRI using Deep learning

Temporal Information in Glacier Front Segmentation Using a 3D Conditional Random Field