Towards Adaptation of Foundational Models for Prompt-guided X-Ray Image Segmentation – MT Final Talk by Maeen Abdelbadea Nasralla Alikarrar
Join us for the final presentation of a Master's thesis on adapting foundation models to radiographic image segmentation. Medically adapted SAM variants such as MedSAM generalize well across medical imaging modalities, but were trained on relatively few X-ray images compared to modalities such as CT and MRI. This thesis integrates different text encoders into MedSAM to support text prompting alongside box-guided segmentation, and evaluates the adapted model on chest and lower-limb radiographs.
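To make the idea of adding text prompts to a SAM-style model concrete, here is a minimal sketch of one possible wiring: a frozen text encoder's embedding (e.g. from CLIP) is projected into the prompt-token space and concatenated with the box-prompt tokens before the mask decoder. All module names, dimensions, and the projection design are illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch: projecting text-encoder features into a SAM-style
# prompt-token space so they can be used alongside box prompts.
# Dimensions and module names are assumptions for illustration only.
import torch
import torch.nn as nn


class TextPromptAdapter(nn.Module):
    """Maps a frozen text encoder's output into the prompt-token space."""

    def __init__(self, text_dim: int = 512, prompt_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(text_dim, prompt_dim)

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # text_emb: (batch, text_dim) -> (batch, 1, prompt_dim),
        # i.e. one extra sparse prompt token per image.
        return self.proj(text_emb).unsqueeze(1)


# Usage: concatenate the projected text token with the sparse box-prompt
# tokens produced by the prompt encoder before calling the mask decoder.
adapter = TextPromptAdapter()
text_emb = torch.randn(2, 512)        # e.g. CLIP text features (assumed dim)
box_tokens = torch.randn(2, 2, 256)   # sparse embeddings from box prompts
sparse_tokens = torch.cat([box_tokens, adapter(text_emb)], dim=1)
print(sparse_tokens.shape)            # torch.Size([2, 3, 256])
```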
Our work investigates full and parameter-efficient fine-tuning strategies to understand trade-offs between computational efficiency and performance. Through systematic experiments, we analyze how unfreezing different model components affects segmentation quality and compare bounding-box and text prompts in terms of accuracy and cross-domain robustness. The findings offer empirical guidance on effective parameter allocation and prompt modality selection for leveraging foundation models in X-ray imaging.
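As a rough illustration of the fine-tuning strategies being compared, the sketch below freezes all parameters and then unfreezes only selected components by name. The component prefixes follow the public SAM code base (image_encoder, prompt_encoder, mask_decoder); which subsets to unfreeze, and how that affects segmentation quality, is exactly the trade-off the thesis examines. This is a generic sketch, not the thesis code.

```python
# Illustrative sketch: selective unfreezing for parameter-efficient
# fine-tuning of a SAM-style model. Prefixes are assumed to match the
# model's top-level submodules (e.g. image_encoder, mask_decoder).
import torch


def configure_trainable(model: torch.nn.Module,
                        trainable_prefixes=("mask_decoder",)) -> int:
    """Freeze everything, then unfreeze parameters whose names start with
    one of the given prefixes. Returns the number of trainable parameters."""
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in trainable_prefixes)
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


# Example comparison (model construction omitted):
# n_full = configure_trainable(model, ("image_encoder", "prompt_encoder", "mask_decoder"))
# n_peft = configure_trainable(model, ("mask_decoder",))
```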