Model-Constrained Non-Rigid Registration in Medicine
The aim of image registration is to compute a mapping from one image’s frame of reference to another’s such that both images are well aligned. Even when the mapping is assumed to be rigid (rotation and translation only), this can be quite a challenging task between different imaging modalities. Noise and other imaging artifacts, like bias fields in magnetic resonance (MR) imaging or streak artifacts in computed tomography (CT), can pose additional problems. In non-rigid image registration, these problems are further compounded by the additional degrees of freedom of the transform.
Another problem is that non-rigid registration is usually ambiguous: different deformation fields can lead to equally well-aligned images. Nevertheless, one would prefer deformations that coincide with medical or physiological expectations. For instance, in MR images low intensity values can indicate bone as well as air. We would prefer a registration result that maps only bone to bone and air to air, even though matching air to bone might lead to a visually similar result.
This work strives to address some of these problems. In a first step, we provide a solid non-rigid registration algorithm. We compare several optimization algorithms to ensure that the registration result is at least numerically as good as possible. We also explore how the parameter determining the global stiffness of the computed transform can be specified in a way that yields predictable results. In a second step, we integrate prior information about the desired deformation into this registration algorithm. Two types of prior information are considered in this work:
The first are known point correspondences that explicitly specify the desired deformation for some parts of the images. This provides a very straightforward way for a user to interact with the registration algorithm. The known correspondences are efficiently integrated into the registration algorithm, which allows an arbitrary number of correspondences to be specified and the approach to be applied in 2D and 3D. As the landmarks are treated as hard constraints, it is guaranteed that they are matched exactly. We show that this additional information can immensely benefit the registration result, especially in difficult cases such as the registration of relatively unrelated imaging modalities like positron emission tomography (PET) and CT.
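As a minimal sketch of how point correspondences can act as hard constraints, the following 2D example (a thin-plate-spline interpolant in NumPy; not the algorithm developed in this work, and all names are illustrative) fits a deformation that maps each source landmark exactly onto its counterpart:

```python
import numpy as np

def tps_deformation(src, dst):
    """Fit a 2D thin-plate spline mapping each source landmark
    exactly onto its destination (hard constraints).
    src, dst: (n, 2) arrays of corresponding landmark positions."""
    n = src.shape[0]
    # TPS radial basis U(r) = r^2 log r, with U(0) = 0
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    K = d**2 * np.log(d + 1e-300)                  # 0 on the diagonal
    P = np.hstack([np.ones((n, 1)), src])          # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    coef = np.linalg.solve(A, rhs)                 # kernel weights + affine

    def warp(pts):
        d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
        U = d**2 * np.log(d + 1e-300)
        Q = np.hstack([np.ones((len(pts), 1)), pts])
        return U @ coef[:n] + Q @ coef[n:]
    return warp
```

Because the spline interpolates the landmarks, the constraints are satisfied exactly; a full registration algorithm would additionally optimize an intensity-based similarity measure between the landmarks.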
The second type of information is provided in the form of training deformations reflecting the kinds of deformation usually encountered in an application. These are used to build a model that guides the registration towards a result similar to the training data. We consider two variants of statistical deformation models: the model is built and applied either on the deformations themselves or on their Laplacians. The latter has the advantage of being inherently invariant to residual rigid misalignments in the training data. The models are applied in the context of atlas registration for MR/PET attenuation correction. A template CT image is registered with the patient MR to generate a pseudo-CT of the patient, which can then be used for PET attenuation correction. However, the different intensity distributions of CT and MR, effects like bias fields, and the low inter-slice resolution common in MR imaging make this multi-modal registration prone to errors. The deformation model, learned from a set of mono-modal registrations, is used to constrain and thus improve the multi-modal registration. The algorithm is evaluated on a set of patient data for which a ground-truth CT scan is available, allowing the atlas registration results to be evaluated through direct comparison with the ground-truth CT. Our experiments show that registration employing the statistical deformation models generally yields improved results.
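The core idea of a statistical deformation model can be sketched as follows (NumPy, with hypothetical names; the models in this work operate on dense 3D deformation fields or their Laplacians, not on the toy vectors used here): a principal component analysis over the flattened training deformations yields a mean field and a set of principal modes, and a new deformation is constrained by projecting it onto the model subspace:

```python
import numpy as np

def build_sdm(train, n_modes):
    """Statistical deformation model: PCA over training deformation
    fields, one flattened field per row of `train`."""
    mean = train.mean(axis=0)
    X = train - mean
    # SVD of the centered data; rows of Vt are the principal modes
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return mean, Vt[:n_modes]

def project(field, mean, modes):
    """Constrain a deformation field to the model subspace."""
    b = modes @ (field - mean)      # model coefficients
    return mean + modes.T @ b       # closest deformation in the model
```

When the model is built on Laplacians instead, the same machinery applies after applying the Laplace operator to each training field; since the Laplacian of any affine field vanishes, residual rigid misalignments in the training data drop out.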