Registration-Error Invariant Supervised Optimization of Neural Networks

Type: MA thesis

Status: open

Supervisors: Florian Thamm, Aleksandra Thamm, Andreas Maier

Pixel-to-pixel domain transfer remains a challenging task in many fields. Problems of this kind become even harder when the target image and the input image are slightly de-registered with respect to each other. Supervised training in this configuration yields unsatisfying, blurry results, because the pixel-to-pixel correspondence is violated. Discriminator-generator learning mechanisms, also known as GANs, can counteract this problem and can be used to enhance the supervision with a registration-error invariant extension of the learning procedure. However, especially in medical imaging, GANs are used with caution, as new and unwanted structures might be introduced while the domain transfer is being performed. The Pattern Recognition Lab therefore offers the opportunity to research supervised, slightly de-registered pixel-to-pixel image transfer based on Computed Tomography head scans. The aim of this thesis is to develop a learning mechanism that is robust against registration errors between a target and an input, and ideally registration-error invariant. Possible approaches include the use of spatial transformers combined with suitable losses, and/or an adaptation of the CTC loss, which is well known in speech processing.
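To make the core idea concrete, here is a minimal, purely illustrative sketch of a registration-error tolerant objective: instead of comparing the prediction against the (possibly de-registered) target directly, the loss is minimized over small integer translations of the target. This brute-force search is only a toy stand-in for the thesis topic; the function name, the translation range, and the cropping strategy are all assumptions, and in practice a learnable component such as a spatial transformer would replace the exhaustive search.

```python
import numpy as np

def shift_tolerant_l1(pred, target, max_shift=2):
    """Toy registration-error tolerant L1 loss (hypothetical helper).

    Takes the minimum mean absolute error over all integer translations
    of the target within +/- max_shift pixels. Border pixels are cropped
    so that wrapped-around values from np.roll do not contribute.
    """
    h, w = target.shape
    m = max_shift
    best = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # translate the target by (dy, dx) pixels
            shifted = np.roll(np.roll(target, dy, axis=0), dx, axis=1)
            # compare only the interior region, unaffected by wrap-around
            err = np.abs(pred[m:h - m, m:w - m] - shifted[m:h - m, m:w - m]).mean()
            best = min(best, err)
    return best
```

If the prediction is a perfect copy of the target that is merely shifted by one pixel, this loss is (near) zero while a plain per-pixel L1 loss is not, which is exactly the failure mode described above for supervised training on de-registered pairs.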

Here is an example of a domain transfer with slightly de-registered targets and inputs. On the left is the target image, which is slightly de-registered with respect to the input that the network must transfer into the target's domain. On the right is a supervised variant (Sup), which is blurry and contains rather low-frequency information. In the center is the same architecture as on the right, but enhanced with a discriminator learning setup (Sup + GAN), which returns crisper results with higher frequencies.


This thesis can be either a Bachelor's or a Master's thesis. Knowledge in Deep Learning is required. If you are interested in this topic, please forward your CV to the supervisors listed above. We are looking forward to hearing from you.