Multi-Frame Super-Resolution Reconstruction with Applications to Medical Imaging
The optical resolution of a digital camera is one of its most crucial parameters, with broad relevance for consumer electronics, surveillance systems, remote sensing, and medical imaging. However, resolution is physically limited by the optics and sensor characteristics. In addition, practical and economic reasons often dictate the use of outdated or low-cost hardware. Super-resolution is a class of retrospective techniques that aims at high-resolution imagery by means of software. Multi-frame algorithms approach this task by fusing multiple low-resolution frames to reconstruct high-resolution images. This work covers novel super-resolution methods along with new applications in medical imaging.
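To make the fusion idea concrete, the following is a minimal sketch (not the algorithm developed in this thesis) of the classical shift-and-add scheme: low-resolution frames with known integer subpixel shifts, expressed in high-resolution pixel units, are placed on a common high-resolution grid and averaged. The function name and the toy data are illustrative assumptions.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Fuse low-resolution frames with known integer subpixel shifts
    (in high-resolution pixel units) onto a high-resolution grid."""
    h, w = frames[0].shape
    hr = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(hr)
    for frame, (dy, dx) in zip(frames, shifts):
        hr[dy::factor, dx::factor] += frame
        weight[dy::factor, dx::factor] += 1.0
    weight[weight == 0] = 1.0  # avoid division by zero on unobserved pixels
    return hr / weight

# Toy ground truth and four simulated, subpixel-shifted low-resolution frames.
rng = np.random.default_rng(0)
x = rng.random((8, 8))
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [x[dy::2, dx::2] for dy, dx in shifts]
x_hat = shift_and_add(frames, shifts, factor=2)
print(np.allclose(x_hat, x))  # -> True: exact recovery in this noise-free toy case
```

In this idealized noise-free setting the four shifted frames tile the high-resolution grid exactly, so the fusion is perfect; real imagery involves blur, noise, and non-integer motion, which is what model-based reconstruction methods address.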
The first contribution of this thesis concerns computational methods to super-resolve image data of a single modality. The emphasis lies on motion-based algorithms derived from a Bayesian statistics perspective, where subpixel motion of low-resolution frames is exploited to reconstruct a high-resolution image. More specifically, we introduce a confidence-aware Bayesian observation model to account for outliers in the image formation, e.g., invalid pixels. In addition, we propose an adaptive prior for sparse regularization to model natural images appropriately. We then develop a robust optimization algorithm for super-resolution using this model that features a fully automatic selection of latent hyperparameters. The proposed approach meets the robustness requirements of super-resolution in real-world systems under challenging conditions ranging from inaccurate motion estimation to space-variant noise. For instance, in the case of inaccurate motion estimation, the proposed method improves the peak signal-to-noise ratio (PSNR) by 0.7 decibels (dB) over the state of the art.
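The reported gains are stated in PSNR, the standard fidelity measure for reconstruction quality. A minimal sketch of how PSNR is computed (the function name and toy images are illustrative assumptions):

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in decibels for images with intensities in [0, peak]."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((16, 16))
noisy = ref + 0.1                  # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(ref, noisy), 1))  # -> 20.0
```

Because PSNR is logarithmic in the mean squared error, a gain of 0.7 dB corresponds to reducing the MSE by a factor of about 10^0.07 ≈ 1.17.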
The second contribution concerns super-resolution of multiple modalities in the area of hybrid imaging. We introduce novel multi-sensor super-resolution techniques and investigate two complementary problem statements. For super-resolution in the presence of a guidance modality, we introduce a reconstruction algorithm that exploits guidance data for motion estimation, feature-driven adaptive regularization, and outlier detection to reliably super-resolve a second modality. For super-resolution in the absence of guidance data, we generalize this approach to a reconstruction algorithm that jointly super-resolves multiple modalities. These multi-sensor methodologies boost accuracy and robustness compared to their single-sensor counterparts. The proposed techniques are widely applicable for resolution enhancement in a variety of multi-sensor vision applications, including color, multispectral, and range imaging. For instance, in color imaging as a classical application, joint super-resolution of color channels improves the PSNR by 1.5 dB compared to conventional channel-wise processing.
The third contribution transfers super-resolution to workflows in healthcare. As one use case in ophthalmology, we address retinal video imaging to gain spatio-temporal measurements on the human eye background non-invasively. In order to enhance the diagnostic usability of current digital cameras, we introduce a framework to gain high-resolution retinal images from low-resolution video data by exploiting natural eye movements. This framework enhances the mean sensitivity of automatic blood vessel segmentation by 10 % when super-resolution is used for image preprocessing. As a second application in image-guided surgery, we investigate hybrid range imaging. To overcome resolution limitations of current range sensor technologies, we propose multi-sensor super-resolution based on domain-specific system calibrations and employ high-resolution color images to steer range super-resolution. In ex-vivo experiments for minimally invasive and open surgery procedures using Time-of-Flight (ToF) sensors, this technique improves the reliability of surface and depth discontinuity measurements compared to raw range data by more than 24 % and 68 %, respectively.
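The vessel segmentation result is quantified by sensitivity (recall), i.e., the fraction of true vessel pixels that the segmentation recovers. A minimal sketch on a toy binary mask (the function name and data are illustrative assumptions):

```python
import numpy as np

def sensitivity(ground_truth, prediction):
    """Sensitivity (recall) of a binary segmentation: TP / (TP + FN)."""
    tp = np.sum(ground_truth & prediction)   # vessel pixels correctly detected
    fn = np.sum(ground_truth & ~prediction)  # vessel pixels missed
    return tp / (tp + fn)

gt   = np.array([1, 1, 1, 1, 0, 0], dtype=bool)  # 4 true vessel pixels
pred = np.array([1, 1, 1, 0, 1, 0], dtype=bool)  # 3 of them detected
print(sensitivity(gt, pred))  # -> 0.75
```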