Hybrid RGB/Time-of-Flight Sensors in Minimally Invasive Surgery
Minimally invasive surgery is nowadays an essential part of medical interventions. In a typical clinical workflow, procedures are planned preoperatively with 3-dimensional (3-D) computed tomography (CT) data and guided intraoperatively by 2-dimensional (2-D) video data. However, accurate preoperative data acquired for diagnosis and operation planning often fails to deliver valid information for orientation and decision-making during the intervention, due to issues such as organ movement and deformation. Therefore, innovative interventional tools are required to aid the surgeon and improve the safety and speed of minimally invasive procedures. Augmenting 2-D color information with 3-D range data provides an additional dimension for developing novel surgical assistance systems. Here, Time-of-Flight (ToF) is a promising low-cost and real-time capable technique that exploits reflected near-infrared light to densely estimate the radial distances of scene points. This thesis covers the entire pipeline for integrating this new technology into a clinical setup, from calibration through data preprocessing up to medical applications.
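The ToF principle mentioned above can be illustrated with the standard phase-shift relation for continuous-wave sensors: the radial distance follows from the measured phase shift of the reflected modulated near-infrared signal. The sketch below uses the textbook formulas d = c·φ/(4πf) and the unambiguous range c/(2f); the function names and the 20 MHz example frequency are illustrative assumptions, not values from the thesis.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Radial distance from the phase shift of the reflected
    modulated near-infrared signal: d = c * phi / (4 * pi * f)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz: float) -> float:
    """Maximum measurable distance before the phase wraps (phi = 2*pi)."""
    return C / (2.0 * mod_freq_hz)

# Example: a 20 MHz modulation frequency yields an unambiguous
# range of roughly 7.5 m, comfortably covering endoscopic scenes.
```

A phase shift of π corresponds to exactly half the unambiguous range, which makes the relation easy to sanity-check.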
The first part of this work covers a novel automatic calibration scheme for hybrid data acquisition based on barcodes as recognizable feature points. The common checkerboard pattern is overlaid with a marker that includes unique 2-D barcodes. Prior knowledge of the barcode locations allows only valid feature points to be selected for the calibration process. Based on feature points detected from different points of view, a sensor data fusion for the complementary modalities is estimated. The proposed framework achieved subpixel reprojection errors and barcode identification rates above 90% for both the ToF and the RGB sensor.
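The reprojection error reported above can be made concrete with a minimal sketch: given calibrated pinhole parameters, project the known 3-D feature locations into the image and compare against the detected 2-D feature points. This is a generic formulation, not the thesis' actual calibration code; the function name and argument layout are assumptions.

```python
import numpy as np

def reprojection_error(points_3d, points_2d, K, R, t):
    """Mean reprojection error (in pixels) under a pinhole camera model.
    points_3d: (N, 3) world coordinates of detected calibration features,
    points_2d: (N, 2) their measured image locations,
    K: 3x3 intrinsic matrix, R: 3x3 rotation, t: (3,) translation."""
    cam = points_3d @ R.T + t            # world -> camera coordinates
    proj = cam @ K.T                     # apply intrinsics
    proj = proj[:, :2] / proj[:, 2:3]    # perspective division
    return np.linalg.norm(proj - points_2d, axis=1).mean()
```

With an identity pose, a point on the optical axis projects onto the principal point, so a synthetic test with exact correspondences should yield an error of zero.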
As the range data of low-cost ToF sensors is typically error-prone due to issues such as specular reflections and a low signal-to-noise ratio (SNR), preprocessing is a mandatory step after acquiring photometric and geometric information in a common setup. This work proposes the novel concept of hybrid preprocessing, which exploits the strengths of one sensor to compensate for the weaknesses of the other. Here, we extend established preprocessing concepts to handle hybrid image data. In particular, we propose a nonlocal means filter that takes an entire sequence of hybrid image data into account and improves the mean absolute error of the range data by 20%. A different concept estimates a high-resolution range image by means of super-resolution techniques that take advantage of geometric displacements introduced by the optical system. This technique improved the mean absolute error by only 12%, but simultaneously increased the spatial resolution. To tackle the issue of specular highlights that cause invalid range data, we propose a multi-view scheme for highlight correction: invalid range data at a specific viewpoint is replaced with valid data from another viewpoint. This reduced the mean absolute error by 33% compared to a basic interpolation.
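The core idea of the sequence-based hybrid filter can be sketched as follows: noisy range values are averaged across frames, with each frame's contribution weighted by its RGB similarity to the reference frame, so that photometric data guides the denoising of geometric data. This is a deliberately reduced sketch of the nonlocal means idea (only co-located pixels across time, no spatial search window); the function name and the filtering parameter h are assumptions.

```python
import numpy as np

def hybrid_nlm(range_seq, rgb_seq, h=10.0):
    """Denoise the last range frame of a sequence by averaging
    co-located pixels across frames, weighted by RGB similarity
    (minimal sketch; full nonlocal means would also compare
    patches over a spatial neighborhood).
    range_seq: (T, H, W) range frames, rgb_seq: (T, H, W, 3) color frames."""
    ref = rgb_seq[-1].astype(np.float64)
    weights = []
    for t in range(rgb_seq.shape[0]):
        d2 = ((rgb_seq[t].astype(np.float64) - ref) ** 2).sum(axis=-1)
        weights.append(np.exp(-d2 / (h * h)))  # photometric similarity
    w = np.stack(weights)                      # (T, H, W)
    return (w * range_seq).sum(axis=0) / w.sum(axis=0)
```

When the color signal is static, all weights are equal and the filter reduces to a plain temporal mean; where the RGB content changes (e.g. a moving instrument), older frames are suppressed.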
Finally, this thesis introduces three novel medical applications that benefit from hybrid 3-D data. First, a collision avoidance module is introduced that exploits range data to ensure a safety margin for endoscopes within a narrow operation site. Second, an endoscopic tool localization framework is described that exploits hybrid range data to improve tool localization and segmentation. Third, a data fusion framework is proposed to extend the narrow field of view and reconstruct the entire situs.
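The collision avoidance idea reduces, at its simplest, to monitoring whether any valid range measurement falls below a configured safety margin. The sketch below illustrates only that elementary check; the interface, the dropout convention, and the 5 mm margin are hypothetical and do not reflect the thesis module's actual API.

```python
import numpy as np

def below_safety_margin(range_img, margin_m=0.005, invalid=0.0):
    """Return True if any valid range measurement is closer than the
    safety margin (hypothetical sketch of a collision check).
    range_img: (H, W) radial distances in meters; pixels equal to
    `invalid` are treated as dropouts and ignored."""
    valid = range_img != invalid       # mask out invalid measurements
    if not valid.any():
        return False                   # no valid data, no alarm
    return bool(range_img[valid].min() < margin_m)
```

A real module would additionally smooth over time and reason about the endoscope tip's pose, but the per-frame test already shows how dense range data enables proximity warnings that 2-D video alone cannot provide.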
This work shows that hybrid image data from ToF and RGB sensors enables image-based assistance systems to provide more reliable and intuitive data for better guidance within a minimally invasive intervention.