Computed Tomography (CT) is one of the most important modalities in modern medical imaging, providing cross-sectional anatomical information crucial for diagnosis, treatment planning, and disease monitoring. Despite its widespread utility, CT image quality can be significantly degraded by artifacts arising from physical limitations, patient-related factors, or system imperfections. These artifacts, manifesting as streaks, blurring, or distortions, can obscure critical diagnostic detail, potentially leading to misinterpretation and compromising patient care. While traditional iterative reconstruction and early deep learning methods offer partial solutions, they often struggle with complex artifact patterns or may introduce new inconsistencies. Recently, diffusion models have emerged as a powerful generative paradigm, achieving remarkable success in image synthesis and restoration by progressively denoising an image starting from pure noise. Concurrently, Transformer architectures, with their ability to capture long-range dependencies via self-attention, have shown promise across a range of vision tasks. This thesis investigates the Diffusion Transformer for comprehensive CT artifact compensation. By combining the iterative refinement of diffusion models with the global contextual understanding of Transformers, this work aims to develop a robust framework that effectively mitigates a wide range of CT artifacts, thereby enhancing image quality and diagnostic reliability. The research covers the design, implementation, and rigorous evaluation of such a model, comparing its performance against existing state-of-the-art techniques.
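To make the "progressive denoising from pure noise" idea concrete, the following is a minimal, illustrative sketch of a DDPM-style reverse diffusion loop. It is not the thesis implementation: the noise schedule, the step count `T`, and the `denoiser` placeholder (which in this work would be a Diffusion Transformer predicting the noise) are all assumptions chosen for brevity.

```python
import numpy as np

T = 50
betas = np.linspace(1e-4, 0.02, T)   # illustrative linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t):
    """Placeholder noise predictor; the thesis would use a trained
    Diffusion Transformer here. This stub predicts zero noise."""
    return np.zeros_like(x_t)

def reverse_diffusion(shape=(8, 8), seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)    # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = denoiser(x, t)          # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])   # posterior mean
        if t > 0:                     # add noise at all but the last step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

restored = reverse_diffusion()
print(restored.shape)  # (8, 8)
```

In a real artifact-compensation setting, the loop would additionally be conditioned on the artifact-corrupted CT image, so the model iteratively refines toward a clean reconstruction rather than an unconditional sample.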
Diffusion Transformer for CT Artifact Compensation
Type: MA thesis
Status: running
Date: May 15, 2025 - November 15, 2025
Supervisors: Yipeng Sun, Andreas Maier