Computed Tomography (CT) is a diagnostic tool that allows radiologists to visualize the internal morphology of the body. Radiologists compare CT studies to identify tumors, infections, and blood clots, and to assess the response to treatment. To identify changed features, they visually compare the current study with a prior one: they align both studies while scrolling through the images and switch between the acquisitions to spot relevant changes.
Overlaying the current study with a color-coded confidence mask that highlights potential changes is a helpful tool. Computing such a mask requires a registration of both datasets. Inaccurate registration introduces misalignments that are marked as tissue changes although they are not clinically relevant. Such misalignments can cause shadow-like effects at tissue boundaries, which may obscure pathologically relevant features. Another source of non-significant changes is differing acquisition parameters between the two studies, which result in salt-and-pepper noise in the difference image.
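The mask computation described above can be sketched as follows. This is a minimal NumPy illustration, assuming both studies are already registered and resampled to the same voxel grid; the function name and the threshold value (in Hounsfield units) are illustrative choices, not a clinically validated method.

```python
import numpy as np

def change_mask(prior, current, threshold=100.0):
    """Threshold the voxel-wise difference of two registered CT volumes.

    Assumes both arrays share the same grid (already registered and
    resampled). The threshold is an illustrative value; in practice,
    small differences below it (e.g. acquisition noise) are suppressed,
    while misalignment artifacts at tissue boundaries may still exceed it.
    """
    diff = current.astype(np.float32) - prior.astype(np.float32)
    return np.abs(diff) > threshold

# Toy example: a 4x4 "slice" where one voxel changed substantially.
prior = np.zeros((4, 4), dtype=np.float32)
current = prior.copy()
current[1, 2] = 300.0   # genuine change
current[3, 3] = 20.0    # small difference, e.g. acquisition noise
mask = change_mask(prior, current)
print(mask.sum())  # only the large change survives the threshold: 1
```

A simple threshold like this cannot distinguish clinically relevant changes from registration artifacts, which is precisely the gap the learned model is meant to close.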
The goal of this master's thesis is to train a deep learning model to detect and remove non-significant changes. Generative Adversarial Networks (GANs) have shown promising results on image processing tasks with little or no ground truth data available. A GAN consists of two models, a generator and a discriminator, that by design learn the distribution of the training data: the generator produces fake samples, and the discriminator tries to distinguish them from real ones. With this adversarial training scheme, we aim to improve the quality of the difference images and thereby enable easier identification of non-significant changes by the physician.
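The adversarial interplay between the two models can be made concrete through the standard GAN value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))], which the discriminator maximizes and the generator minimizes. The sketch below evaluates this objective for hand-picked toy discriminator scores; the function name and score values are illustrative, not part of the thesis implementation.

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Standard GAN objective: E[log D(x)] + E[log(1 - D(G(z)))].

    d_real: discriminator scores on real samples, each in (0, 1)
    d_fake: discriminator scores on generated (fake) samples, each in (0, 1)
    The discriminator is trained to maximize this value, the generator
    to minimize it (i.e. to push d_fake toward 1).
    """
    d_real = np.asarray(d_real, dtype=np.float64)
    d_fake = np.asarray(d_fake, dtype=np.float64)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A confident discriminator (real -> ~1, fake -> ~0) attains a high value;
# a fully fooled discriminator (all scores 0.5) sits at the theoretical
# equilibrium value -2*log(2).
confident = gan_value([0.9, 0.95], [0.05, 0.1])
fooled = gan_value([0.5, 0.5], [0.5, 0.5])
print(confident > fooled)  # True
```

At the equilibrium, the generator's output distribution matches the training distribution, which is the property the thesis relies on: a generator trained on clean difference images should reproduce them without the non-significant changes.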