Fabian Wagner

Hybrid Machine Learning Approaches for Image Reconstruction and Processing in Low-Dose Computed Tomography

Diagnostic decisions in clinical workflows often hinge on the interpretation of medical images. The inherent quality of these images plays a critical role in discerning relevant anatomical and pathological features. Factors such as radiation exposure, acquisition duration, and patient motion directly influence the resolution, noise levels, and artifacts present in medical image data. In particular, the patient dose caused by imaging with high-energy X-rays is a decisive factor, as X-ray exposure causes stochastic damage and potentially cancerous changes in living tissue. Trade-offs between dose level and image quality are therefore required, keeping noise levels sufficiently low to resolve relevant structures while minimizing patient dose.

Computational methods can improve image quality at various stages of image reconstruction and post-processing. Conventional techniques introduce image filters, designed based on structural image properties, to remove noise and restore relevant image features. While these filters rely on well-established algorithms, they often require manual adaptation to the specific imaging problem. In contrast, machine learning-driven approaches, leveraging artificial neural networks, enable data-centric optimization by automatically modeling an underlying training data distribution. However, the complex feature extraction mechanisms of these methods limit their interpretability and their ability to handle previously unseen structures. This work develops denoising techniques that combine well-understood image processing algorithms with learning-based methods to improve image quality while maintaining the interpretability and robustness of the algorithms.

Within the context of this thesis, two trainable denoising operators based on the bilateral filtering principle are presented. Combining the smoothing of homogeneous image areas with high-frequency edge preservation yields a reliable denoising filter by design. The proposed computation of analytical filter derivatives enables gradient-based optimization of all filter parameters, so the filter adapts automatically to the training data distribution. Consequently, this method delivers results competitive with deep neural networks that contain five to six orders of magnitude more trainable parameters. The prediction reliability of the proposed bilateral filtering-based techniques is validated in an additional study, confirming the filter's robustness on samples beyond the training data domain.

Given the scarcity of suitable ground-truth medical image data for training, different self-supervised methods were developed that eliminate the need for noise-free reference data. In this context, learned denoising operators intervening at different stages of the Computed Tomography (CT) reconstruction pipeline are explored. Experiments demonstrate that simultaneous denoising in the projection data domain and the reconstructed image domain improves denoising effectiveness compared to existing post-processing approaches. Lastly, a self-supervised training scheme for denoising operators is proposed that uses routinely acquired complementary images of different contrasts as training targets. On clinical image data, this approach improves image quality over existing self-supervised methodologies.
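The central concept of the trainable bilateral filter can be illustrated with a short sketch. The following PyTorch code is a minimal, hypothetical illustration rather than the thesis's implementation: it uses automatic differentiation in place of the analytical filter derivatives described above, and the function names, window size, toy data, and optimization settings are assumptions made for this example.

import torch
import torch.nn.functional as F

def bilateral_filter(x, sigma_spatial, sigma_range, kernel_size=5):
    """Differentiable bilateral filter for single-channel images (B, 1, H, W)."""
    pad = kernel_size // 2
    padded = F.pad(x, (pad, pad, pad, pad), mode="reflect")
    # Gather each pixel's local neighborhood: (B, k*k, H, W).
    b, _, h, w = x.shape
    patches = F.unfold(padded, kernel_size).view(b, kernel_size**2, h, w)

    # Spatial Gaussian weights from the pixel offsets inside the window.
    coords = torch.arange(kernel_size, dtype=x.dtype, device=x.device) - pad
    yy, xx = torch.meshgrid(coords, coords, indexing="ij")
    dist_sq = (yy**2 + xx**2).reshape(-1, 1, 1)  # (k*k, 1, 1)
    w_spatial = torch.exp(-dist_sq / (2 * sigma_spatial**2))

    # Range Gaussian weights from intensity differences to the center pixel:
    # they suppress averaging across edges while homogeneous areas are smoothed.
    w_range = torch.exp(-((patches - x) ** 2) / (2 * sigma_range**2))

    weights = w_spatial * w_range
    return (weights * patches).sum(1, keepdim=True) / weights.sum(1, keepdim=True)

# Piecewise-constant phantom with additive Gaussian noise as toy training data.
torch.manual_seed(0)
clean = torch.zeros(1, 1, 64, 64)
clean[..., 16:48, 16:48] = 1.0
noisy = clean + 0.2 * torch.randn_like(clean)

# Both filter parameters are trainable; log-parametrization keeps them positive.
log_sigma_spatial = torch.nn.Parameter(torch.tensor(0.5))
log_sigma_range = torch.nn.Parameter(torch.tensor(-1.0))
optimizer = torch.optim.Adam([log_sigma_spatial, log_sigma_range], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    denoised = bilateral_filter(noisy, log_sigma_spatial.exp(), log_sigma_range.exp())
    loss = F.mse_loss(denoised, clean)
    loss.backward()  # autograd here stands in for the analytical derivatives
    optimizer.step()

Because the filter smooths homogeneous regions and preserves edges by construction, only a handful of scalar parameters need to be learned, which reflects the point above that such operators can compete with networks carrying orders of magnitude more trainable parameters.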
Open-source code repositories of the developed universal filter operators and of the CT geometry rebinning code enhance the usability of the developed algorithms for fellow researchers and support their potential integration into clinical workflows.
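To make the self-supervised training principle concrete, the following Noise2Noise-style sketch trains a denoiser without any noise-free reference: one noisy observation supervises another. This is a simplified stand-in for the scheme summarized above, which instead uses routinely acquired complementary images of different contrasts as training targets; the network architecture, toy data, and hyperparameters are assumptions for illustration.

import torch
import torch.nn.functional as F

# Small illustrative denoising network; any image-to-image model fits here.
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(32, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Two independently noisy observations of the same scene stand in for the
# complementary clinical images of different contrasts mentioned above.
torch.manual_seed(0)
clean = torch.zeros(8, 1, 64, 64)
clean[..., 16:48, 16:48] = 1.0
obs_a = clean + 0.2 * torch.randn_like(clean)
obs_b = clean + 0.2 * torch.randn_like(clean)

for step in range(100):
    optimizer.zero_grad()
    # No noise-free target enters the loss: one noisy image supervises the other,
    # so the expected optimum is the underlying clean signal.
    loss = F.mse_loss(model(obs_a), obs_b)
    loss.backward()
    optimizer.step()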