Deep Learning based Model Observers for Multi-modal Imaging

Type: Project

Status: finished

Date: May 1, 2020 - November 24, 2020

Supervisors: Mayank Patwari, Maximilian Reymann

Task-based measures are needed to assess the quality of medical images. Common task-based measures include model observers, which estimate the confidence that an object (e.g. a tumor or another structure) is present in a particular image. Common model observers in medical imaging include the Channelised Hotelling Observer (CHO) for CT image quality and the Non-Prewhitening (NPW) filter for SPECT image quality.
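As an illustration, the NPW observer simply correlates each image with the known signal template and thresholds the resulting score. The following is a minimal sketch under assumed conditions (a synthetic Gaussian signal on white-noise backgrounds; all names and parameters are hypothetical, not from any specific implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SKE setup: a known Gaussian signal on a flat background.
size = 32
y, x = np.mgrid[:size, :size]
signal = np.exp(-((x - size / 2) ** 2 + (y - size / 2) ** 2) / (2 * 3.0 ** 2))

def npw_statistic(image, template):
    """NPW observer: correlate the image with the known signal template."""
    return float(np.sum(template * image))

# Score many noise realisations with and without the signal present.
n = 200
absent = [npw_statistic(rng.normal(0, 1, (size, size)), signal)
          for _ in range(n)]
present = [npw_statistic(signal + rng.normal(0, 1, (size, size)), signal)
           for _ in range(n)]

# Detectability index d' from the two score distributions.
d_prime = (np.mean(present) - np.mean(absent)) / np.sqrt(
    0.5 * (np.var(present) + np.var(absent)))
```

The separation of the two score distributions (summarised by d') is the kind of task-based quality measure a learned observer would need to reproduce.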

Current implementations of model observers for task-based measurements are executed on phantoms. The use of phantoms makes the measurement a signal-known-exactly/background-known-exactly (SKE/BKE) task. However, this means that task-based measurements cannot be transferred directly to a clinical setting, where such prior knowledge is unavailable. Moreover, multiple noise realisations of a single phantom are needed to obtain meaningful results from a model observer.
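The need for multiple noise realisations can be seen in how a CHO template is estimated: the channel-output covariance is computed across realisations before the Hotelling template is formed. A minimal sketch, assuming synthetic white-noise phantoms and a hypothetical difference-of-Gaussians channel bank (not any specific published channel set):

```python
import numpy as np

rng = np.random.default_rng(1)
size = 32
y, x = np.mgrid[:size, :size]
r = np.sqrt((x - size / 2) ** 2 + (y - size / 2) ** 2)
signal = np.exp(-r ** 2 / (2 * 3.0 ** 2))

def dog_channel(s1, s2):
    """One radial band-pass channel (difference of Gaussians), unit norm."""
    c = np.exp(-r ** 2 / (2 * s1 ** 2)) - np.exp(-r ** 2 / (2 * s2 ** 2))
    return (c / np.linalg.norm(c)).ravel()

# Channel matrix U: (pixels, channels).
U = np.stack([dog_channel(s, 2 * s) for s in (1.0, 2.0, 4.0)], axis=1)

# Multiple noise realisations of the same phantom, with and without signal.
n = 500
absent = rng.normal(0, 1, (n, size * size))
present = absent + signal.ravel()

va, vp = absent @ U, present @ U              # channel outputs, (n, channels)
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))       # mean channel covariance
w = np.linalg.solve(S, vp.mean(0) - va.mean(0))  # Hotelling template

ta, tp = va @ w, vp @ w                       # test statistics
d_prime = (tp.mean() - ta.mean()) / np.sqrt(0.5 * (tp.var() + ta.var()))
```

The covariance estimate `S` is only reliable when many realisations are available, which is exactly what a phantom study provides and a single clinical image does not.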

Deep learning has been used to replicate the behaviour of model observers. However, no work has been done on a general model observer that works across imaging modalities. In this project, we are interested in investigating the following:

  1. The possibility of using deep learning to create a ‘general’ model observer
  2. Cross modality performance of a deep learned model observer
  3. Training this model observer with zero-, one-, or few-shot learning for greater future generalisation.

We are looking for someone who can support us with the following skills:

  1. Knowledge in Python/C++ programming
  2. Some knowledge of image processing. Medical image processing and the DICOM standard are a plus.
  3. Knowledge of relevant libraries like NumPy, OpenCV, PyTorch, TensorFlow
  4. Experience with model observers is a plus (not strictly necessary)