Task-based measurements are needed to assess the quality of medical images. A common class of task-based measures is the model observer: an algorithm that produces a confidence score that an object (e.g. a tumor or another structure) is present in a particular image. Common model observers in medical imaging include the Channelised Hotelling Observer (CHO) for CT image quality and the Non-Pre-Whitening (NPW) matched filter for SPECT image quality.
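To make the idea concrete, the sketch below shows what these two observers compute, written in NumPy: the NPW filter correlates the image with the known signal, while the CHO projects images onto a small set of channels and applies a Hotelling template estimated from training images. All function and variable names are illustrative assumptions, not an existing implementation.

```python
import numpy as np

def npw_statistic(image, signal_template):
    """Non-pre-whitening matched filter: correlate the image with the
    expected (noise-free) signal and return a scalar detection score."""
    return float(np.vdot(signal_template.ravel(), image.ravel()))

def cho_template(signal_present, signal_absent, channels):
    """Estimate the Channelised Hotelling template from training images.
    signal_present / signal_absent: arrays of shape (n_images, n_pixels);
    channels: array of shape (n_pixels, n_channels), e.g. Gabor or
    Laguerre-Gauss channels."""
    vp = signal_present @ channels          # channel outputs, signal present
    va = signal_absent @ channels           # channel outputs, signal absent
    S = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
    return np.linalg.solve(S, vp.mean(axis=0) - va.mean(axis=0))

def cho_statistic(image, channels, w):
    """Score a single image: project it onto the channels, then apply the
    Hotelling template w."""
    return float(w @ (channels.T @ image.ravel()))
```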
Current implementations of model observers for task-based measurements are run on phantom images. Using phantoms turns the measurement into a signal-known-exactly/background-known-exactly (SKE/BKE) task, which means it cannot be transferred directly to a clinical task where this prior knowledge is not available. Moreover, multiple noise realisations of a single phantom are needed to obtain meaningful results from a model observer, as sketched below.
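A typical SKE/BKE evaluation on a phantom might look like the following sketch (the names, the Gaussian noise model, and the parameters are illustrative assumptions): the same known background and signal are reused, only the noise changes between realisations, and the spread of the resulting scores yields a detectability index.

```python
import numpy as np

rng = np.random.default_rng(0)

def detectability(scores_present, scores_absent):
    """SNR-style detectability index from the two score distributions."""
    diff = scores_present.mean() - scores_absent.mean()
    return diff / np.sqrt(0.5 * (scores_present.var() + scores_absent.var()))

def evaluate_ske_bke(observer, background, signal,
                     n_realisations=200, noise_std=1.0):
    """Score many noise realisations of the same phantom, with and without
    the known signal, using any scalar-valued model observer."""
    present, absent = [], []
    for _ in range(n_realisations):
        noise = rng.normal(0.0, noise_std, size=background.shape)
        present.append(observer(background + signal + noise))
        absent.append(observer(background + noise))
    return detectability(np.asarray(present), np.asarray(absent))
```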
Deep learning has been used to replicate the behaviour of model observers. However, no work has yet been done on a general model observer that can work across imaging modalities. In this project, we are interested in investigating the following:
- The possibility of using deep learning to create a ‘general’ model observer (a minimal sketch follows this list)
- Cross-modality performance of a deep-learned model observer
- Training this model observer with zero-, one-, or few-shot learning for greater future generalisation.
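As a starting point for the deep-learned observer mentioned above, a small CNN that maps an image patch to a scalar detection score could look like the following PyTorch sketch. The architecture and all hyperparameters are illustrative assumptions only; the actual design would be part of the project.

```python
import torch
import torch.nn as nn

class LearnedObserver(nn.Module):
    """Toy CNN model observer: image patch in, scalar detection logit out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                      # x: (batch, 1, H, W)
        z = self.features(x).flatten(1)        # (batch, 32)
        return self.head(z).squeeze(-1)        # (batch,) detection logits

# Example usage: training would minimise e.g. BCEWithLogitsLoss against
# signal-present / signal-absent labels.
model = LearnedObserver()
scores = model(torch.randn(8, 1, 64, 64))
```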
We are looking for someone who can support us with the following skills:
- Knowledge of Python/C++ programming
- Some knowledge of image processing; medical image processing and the DICOM standard are a plus.
- Knowledge of relevant libraries such as NumPy, OpenCV, PyTorch, and TensorFlow
- Experience with model observers is a plus (not strictly necessary)