Statistical Intensity Prior Models with Applications in Multimodal Image Registration
Deriving algorithms that automatically align images acquired from different sources (multimodal image registration) is a fundamental problem of importance to several active research areas in image analysis, computer vision, and medical imaging. In particular, the accurate estimation of deformations in multimodal image data remains an open research problem while playing an essential role in several clinical applications designed to improve available healthcare. Since the field of medical image analysis has grown rapidly over the past two decades, the abundance of clinical information available to medical experts motivates more automatic processing of medical images.
Registering multimodal image data is a difficult task due to the tremendous variability of possible image content and object deformations. Motion patterns in medical imaging originate mostly from cardiac, respiratory, or patient motion (i.e., highly complex motion patterns), and the involved image data may be noisy, corrupted by reconstruction artifacts, or partially occluded by imaged pathologies. A key limitation of methods reported in the literature is that they rely purely on the quality of the available images and therefore have difficulty finding an accurate alignment when the underlying multimodal image information is noisy or corrupted.
In this research, we leverage prior knowledge about the intensity distributions of accurately aligned images for robust and accurate registration of medical image data. We make the following contributions to the field of multimodal image registration. First, we develop a prior model, the integrated statistical intensity prior model, that incorporates both current image information and prior knowledge; compared to traditional methods, it shows an increased capture range and greater robustness on degraded clinical image data. Second, we develop a generalization of the first model that allows all available prior information to be modeled and yields greater accuracy in aligning clinical multimodal image data. Both models are formulated in a unifying Bayesian framework that is embedded in the statistical foundations of information-theoretic similarity measures. Third, we apply the proposed models to two clinical applications and validate their performance on a database of approximately 100 patient data sets. The validation follows a systematic framework, and we further develop criteria for assessing the quality of non-rigid (deformable) registrations.
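To make the idea of combining current image evidence with an intensity prior concrete, the following sketch illustrates one simple way such a model could be realized. It is an illustrative assumption, not the exact formulation used in this work: it estimates a joint intensity histogram from the two images, blends it linearly (weight `alpha`) with a prior joint distribution `p_prior` learned from correctly aligned training pairs, and scores the blend with mutual information, the classic information-theoretic similarity measure. All function names and the linear blending scheme are hypothetical.

```python
import numpy as np

def joint_histogram(img_a, img_b, bins=32):
    """Joint intensity histogram of two images (assumed aligned and
    intensity-normalized to [0, 1]), returned as a joint PMF."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                             bins=bins, range=[[0, 1], [0, 1]])
    return h / h.sum()

def mutual_information(p_joint):
    """Mutual information of a joint PMF over intensity pairs."""
    p_a = p_joint.sum(axis=1, keepdims=True)   # marginal of image A
    p_b = p_joint.sum(axis=0, keepdims=True)   # marginal of image B
    nz = p_joint > 0                           # avoid log(0) terms
    return float(np.sum(p_joint[nz] * np.log(p_joint[nz] / (p_a @ p_b)[nz])))

def prior_weighted_similarity(img_a, img_b, p_prior, alpha=0.5, bins=32):
    """Hypothetical similarity measure: blend the observed joint PMF
    with a prior PMF estimated from correctly aligned training pairs,
    then score the mixture with mutual information."""
    p_obs = joint_histogram(img_a, img_b, bins=bins)
    p_mix = (1 - alpha) * p_obs + alpha * p_prior
    return mutual_information(p_mix)
```

In a registration loop, this score would be maximized over transformation parameters; the prior term keeps the objective well behaved when the observed images alone are noisy or corrupted.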
Experiments on synthetic and real clinical images demonstrate the superior robustness and accuracy of statistical intensity prior models compared to traditional registration methods, suggesting that fully automatic multimodal registration (both rigid and non-rigid) is achievable in clinical applications. Statistical intensity prior models achieve this accuracy from a relatively small amount of prior knowledge compared to traditional machine learning approaches, which is appealing both in theory and in practice.