SAM-based uncertainty aware pseudo-labeling in semi-supervised medical X-ray image segmentation

Type: MA thesis

Status: running

Date: March 1, 2026 - September 1, 2026

Supervisors: Nora Gourmelon, Vincent Christlein, Andreas Maier

Medical image segmentation, i.e., the delineation of anatomical structures such as organs, tissues, or pathological regions, constitutes a fundamental task in medical image analysis. Over the last decade, the rapid advance of AI-based methods has led to an increasing dominance of deep learning (DL) models within the field. A decisive limiting factor, however, remains their demand for large-scale, finely annotated segmentation masks. As an annotation-efficient DL design, semi-supervised frameworks combine a scarce amount of well-annotated data with plentiful unlabeled samples during training. Two common approaches to generating unsupervised learning signals, in addition to classical supervised segmentation losses, are consistency regularization and pseudo-label-based self- or co-training [1, 2].

While the first method enforces invariance of predictions under different model or data perturbations, the second approach is typically implemented as an iterative process: annotations for unlabeled images are acquired, and the segmentation model is retrained on the enlarged labeled training set. Since such pseudo-annotations inherently contain errors, additional pseudo-label refinement steps are often necessary, as misguidance by unreliable, noisy predictions can lead to training instability and ultimately performance degradation. Strategies such as uncertainty estimation are therefore often utilized to select more meaningful learning targets [3-7].
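The pixel-level filtering idea can be sketched as follows. The function below is an illustrative minimal example, not the method of any cited work: foreground probabilities from several stochastic forward passes (e.g., Monte Carlo dropout) are averaged, predictive entropy is computed per pixel, and high-entropy pixels are excluded from the pseudo-label via an ignore index. The function name and the threshold value are assumptions for illustration.

```python
import numpy as np

def filter_pseudo_labels(prob_maps, entropy_thresh=0.5, ignore_index=255):
    """Pixel-level pseudo-label filtering via predictive entropy.

    prob_maps: (T, H, W) foreground probabilities from T stochastic
    forward passes. Pixels whose binary predictive entropy exceeds
    `entropy_thresh` are marked with `ignore_index` so they can be
    masked out of the segmentation loss during retraining.
    """
    mean_p = prob_maps.mean(axis=0)  # (H, W) averaged probability
    eps = 1e-7                       # numerical stability in the log
    entropy = -(mean_p * np.log(mean_p + eps)
                + (1.0 - mean_p) * np.log(1.0 - mean_p + eps))
    labels = (mean_p > 0.5).astype(np.int64)
    labels[entropy > entropy_thresh] = ignore_index  # drop uncertain pixels
    return labels, entropy
```

Pixels on which the stochastic passes agree (probabilities near 0 or 1) are kept as learning targets, while ambiguous pixels (probabilities near 0.5, entropy near ln 2) are ignored.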

Large-scale, general-purpose foundation models (FMs) such as the Segment Anything Model (SAM) [8], trained on huge databases, have recently emerged as versatile tools for promptable image segmentation. SAM’s remarkable zero-shot generalization ability has been demonstrated for natural images in numerous studies [9, 10]. Based on these observations, prior works [11-18] hypothesized the potential of such a generalist model to provide additional supervision in semi-supervised learning (SSL) frameworks, e.g., in the pseudo-label generation process [15-17]. As SAM depends on accurate prompt engineering, initial coarse predictions synthesized by the baseline model can serve as region proposals for prompt sampling [18, 19]. The acquired output masks can then be deployed either directly, for a consistency loss between both models’ outputs [18, 20], or for pseudo-target acquisition [16, 17].
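The prompt-sampling step can be sketched as below. This is a hypothetical helper, not the exact pipeline of [18, 19]: a coarse binary prediction from the baseline model is reduced to a bounding-box prompt and a positive point prompt (here simply the centroid), in the (x, y) convention that SAM's prompt interface expects. Both the function name and the centroid heuristic are assumptions for illustration.

```python
import numpy as np

def prompts_from_coarse_mask(mask):
    """Derive SAM-style prompts from a coarse binary prediction.

    mask: (H, W) binary array from the baseline segmentation model.
    Returns a bounding box (x_min, y_min, x_max, y_max) around the
    predicted region and a positive point prompt at its centroid.
    """
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None, None  # empty prediction: nothing to prompt with
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    point = (int(round(xs.mean())), int(round(ys.mean())))
    return box, point
```

In practice, the sampled box and point would be passed to SAM's predictor; perturbing them (jittered boxes, multiple sampled points) yields the multiple prompts used for uncertainty estimation later on.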

Building on these versatile approaches to incorporating SAM into existing SSL methods in the medical field [11-18, 20, 21], this thesis will investigate its potential in conjunction with uncertainty estimation techniques [11, 19, 21] for the specific case of X-ray image segmentation.

The focus of this thesis lies on exploring how to skillfully integrate a large-scale FM such as SAM into a semi-supervised learning paradigm as a supervisory signal. Its deployability as a credible pseudo-label generator for boosting the learning efficiency of the baseline segmentation model will be studied. To avoid misguidance by unreliable pseudo-labels, strategies such as image-level and pixel-level pseudo-label filtering, based on uncertainty estimation from multiple input prompts to SAM, will be incorporated.
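One way the combined image-level and pixel-level filtering could look is sketched below, as a minimal illustration rather than the thesis method: SAM is queried with several perturbed prompts for the same structure, pixels where the resulting masks disagree are excluded from the pseudo-label, and images with too few trusted pixels are rejected entirely. The function name, thresholds, and the agreement-based uncertainty proxy are assumptions.

```python
import numpy as np

def aggregate_prompt_masks(masks, pixel_agree=0.9, image_score=0.8,
                           ignore_index=255):
    """Fuse SAM outputs obtained from several perturbed prompts.

    masks: (K, H, W) binary masks predicted for the same structure.
    Pixel level: a pixel is trusted only if at least `pixel_agree` of
    the K masks agree on its label; untrusted pixels are set to
    `ignore_index` and excluded from the loss.
    Image level: the whole pseudo-label is rejected when the fraction
    of trusted pixels falls below `image_score`.
    """
    vote = masks.astype(np.float64).mean(axis=0)  # (H, W) foreground vote
    trusted = (vote >= pixel_agree) | (vote <= 1.0 - pixel_agree)
    pseudo = (vote > 0.5).astype(np.int64)
    pseudo[~trusted] = ignore_index               # pixel-level filtering
    accepted = bool(trusted.mean() >= image_score)  # image-level filtering
    return pseudo, accepted
```

Rejected images would simply remain in the unlabeled pool for a later self-training round, while accepted pseudo-labels enter the enlarged training set with their untrusted pixels masked out.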


References:   

  1. Jiao, R., Zhang, Y., Ding, L., Xue, B., Zhang, J., Cai, R., & Jin, C. (2024). Learning with limited annotations: A survey on deep semi-supervised learning for medical image segmentation. Computers in Biology and Medicine, 169, 107840. https://doi.org/10.1016/j.compbiomed.2023.107840
  2. Lu, L., Yin, M., Fu, L., & Yang, F. (2023). Uncertainty-aware pseudo-label and consistency for semi-supervised medical image segmentation. Biomedical Signal Processing and Control, 79, 104203. https://doi.org/10.1016/j.bspc.2022.104203
  3. Yu, L., Wang, S., Li, X., Fu, C.-W., & Heng, P.-A. (2019). Uncertainty-Aware Self-ensembling Model for Semi-supervised 3D Left Atrium Segmentation. In Lecture Notes in Computer Science (pp. 605–613). Springer International Publishing. https://doi.org/10.1007/978-3-030-32245-8_67
  4. Dong, M., Yang, A., Wang, Z., Li, D., Yang, J., & Zhao, R. (2025). Uncertainty-aware consistency learning for semi-supervised medical image segmentation. Knowledge-Based Systems, 309, 112890. https://doi.org/10.1016/j.knosys.2024.112890
  5. Rahmati, B., Shirani, S., & Keshavarz-Motamed, Z. (2024). Semi-supervised segmentation of medical images focused on the pixels with unreliable predictions. Neurocomputing, 610, 128532. https://doi.org/10.1016/j.neucom.2024.128532
  6. Rahmati, B., Shirani, S., & Keshavarz-Motamed, Z. (2025). A hybrid approach for enhancing pseudo-labeling in medical images through pseudo-label refinement. Scientific Reports, 15(1). https://doi.org/10.1038/s41598-025-19121-4
  7. Assefa, M., Naseer, M., Ganapathi, I. I., Ali, S. S., Seghier, M. L., & Werghi, N. (2025). DyCON: Dynamic Uncertainty-aware Consistency and Contrastive Learning for Semi-supervised Medical Image Segmentation (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2504.04566
  8. Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A. C., Lo, W.-Y., Dollár, P., & Girshick, R. (2023). Segment Anything. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV) (pp. 3992–4003). IEEE. 2023 IEEE/CVF International Conference on Computer Vision (ICCV). https://doi.org/10.1109/iccv51070.2023.00371
  9. Fan, K., Liang, L., Li, H., Situ, W., Zhao, W., & Li, G. (2025). Research on Medical Image Segmentation Based on SAM and Its Future Prospects. Bioengineering, 12(6), 608. https://doi.org/10.3390/bioengineering12060608
  10. Ali, M., Wu, T., Hu, H., Luo, Q., Xu, D., Zheng, W., Jin, N., Yang, C., & Yao, J. (2025). A review of the Segment Anything Model (SAM) for medical image analysis: Accomplishments and perspectives. Computerized Medical Imaging and Graphics, 119, 102473. https://doi.org/10.1016/j.compmedimag.2024.102473
  11. Lu, W., Hong, Y., & Yang, Y. (2024). UP-SAM: Uncertainty-Informed Adaptation of Segment Anything Model for Semi-Supervised Medical Image Segmentation. In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (pp. 2256–2261). IEEE. 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). https://doi.org/10.1109/bibm62325.2024.10822398
  12. Huang, K., Zhou, T., Fu, H., Zhang, Y., Zhou, Y., Gong, C., & Liang, D. (2025). Learnable Prompting SAM-Induced Knowledge Distillation for Semi-Supervised Medical Image Segmentation. IEEE Transactions on Medical Imaging, 44(5), 2295–2306. https://doi.org/10.1109/tmi.2025.3530097
  13. Mao, Y., Li, H., Lai, Y., Papanastasiou, G., Qi, P., Yang, Y., & Wang, C. (2025). Semi-Supervised Medical Image Segmentation via Knowledge Mining from Large Models (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2503.06816
  14. Zhang, Y., Zhou, T., Wu, Y., Gu, P., & Wang, S. (2024). Combining Segment Anything Model with Domain-Specific Knowledge for Semi-Supervised Learning in Medical Image Segmentation. In Lecture Notes in Computer Science (pp. 343–357). Springer Nature Singapore. https://doi.org/10.1007/978-981-97-8496-7_24
  15. Li, N., Xiong, L., Qiu, W., Pan, Y., Luo, Y., & Zhang, Y. (2023). Segment Anything Model for Semi-supervised Medical Image Segmentation via Selecting Reliable Pseudo-labels. In Communications in Computer and Information Science (pp. 138–149). Springer Nature Singapore. https://doi.org/10.1007/978-981-99-8141-0_11
  16. Häkkinen, I., Melekhov, I., Englesson, E., Azizpour, H., & Kannala, J. (2025). Medical Image Segmentation with SAM-Generated Annotations. In Lecture Notes in Computer Science (pp. 51–62). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-92089-9_4
  17. Liu, X., Wu, J., Lu, T., Zhang, S., & Wang, G. (2025). SRPL-SFDA: Sam-Guided Reliable Pseudo-Labels For Source-Free Domain Adaptation in medical image segmentation. Neurocomputing, 649, 130749. https://doi.org/10.1016/j.neucom.2025.130749
  18. Zhang, Y., Lv, B., Xue, L., Zhang, W., Liu, Y., Fu, Y., Cheng, Y., & Qi, Y. (2025). SemiSAM+: Rethinking semi-supervised medical image segmentation in the era of foundation models. Medical Image Analysis, 106, 103733. https://doi.org/10.1016/j.media.2025.103733
  19. Zhang, Y., Hu, S., Ren, S., Jiang, C., Cheng, Y., & Qi, Y. (2023). Enhancing the Reliability of Segment Anything Model for Auto-Prompting Medical Image Segmentation with Uncertainty Rectification (Version 3). arXiv. https://doi.org/10.48550/ARXIV.2311.10529
  20. Zhang, Y., Yang, J., Liu, Y., Cheng, Y., & Qi, Y. (2024). SemiSAM: Enhancing Semi-Supervised Medical Image Segmentation via SAM-Assisted Consistency Regularization. In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (pp. 3982–3986). IEEE. 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). https://doi.org/10.1109/bibm62325.2024.10821951
  21. Deng, G., Zou, K., Ren, K., Wang, M., Yuan, X., Ying, S., & Fu, H. (2023). SAM-U: Multi-box Prompts Triggered Uncertainty Estimation for Reliable SAM in Medical Image. In Lecture Notes in Computer Science (pp. 368–377). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-47425-5_33