Invited Talk: Yi Gu (NAIST, Japan) – Fine-Grained Musculoskeletal Analysis from a Plain X-ray Image, Fri March 14th 2025, 10 AM CET
It’s a great pleasure to welcome Yi Gu from the Nara Institute of Science and Technology, Japan, to our lab!
Title: Fine-Grained Musculoskeletal Analysis from a Plain X-ray Image
Date: Fri March 14th 2025, 10 AM CET
Location: https://fau.zoom-x.de/j/69017118587?pwd=bjCPBxrmnmL2kbKIaTaqTstda8qI0R.1
Abstract: Musculoskeletal disorders pose a growing challenge worldwide, underscoring the need for accessible and cost-effective diagnostic solutions. While quantitative computed tomography (QCT) and dual-energy X-ray absorptiometry (DXA) provide accurate assessments of bone mineral density (BMD) and muscle metrics, these modalities remain limited in availability and affordability. In this talk, I present our work on musculoskeletal analysis from plain X-ray images, a far more ubiquitous and economical modality. By learning to synthesize 2D metric distribution maps (subsets of CT information), the proposed methods efficiently enable fine-grained musculoskeletal analysis, including BMD estimation, muscle mass and volume estimation, and 3D bone reconstruction. Experimental results on clinical data demonstrate that our methods closely approximate the quantitative information conventionally obtained from QCT or DXA. This outcome paves the way for scalable, opportunistic screening and continued monitoring of musculoskeletal health, significantly reducing both clinical burden and cost while retaining substantial diagnostic value.
Paper: https://link.springer.com/chapter/10.1007/978-3-031-72104-5_1
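For readers curious about the core idea before the talk, here is a minimal, hypothetical sketch of the kind of image-to-image regression the abstract describes: a small convolutional encoder-decoder that maps a plain X-ray to a 2D metric distribution map (e.g., an areal BMD map), trained against QCT/DXA-derived targets. The architecture, loss, and all names below are illustrative assumptions, not the speaker's actual method.

```python
# Hypothetical sketch: plain X-ray -> 2D metric distribution map regression.
# Architecture, loss, and data here are illustrative assumptions only;
# they are NOT the method presented in the talk or the linked paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class XrayToMetricMap(nn.Module):
    """Tiny encoder-decoder mapping a 1-channel X-ray to a 1-channel metric map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # H -> H/2
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # H/2 -> H/4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # H/4 -> H/2
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),              # H/2 -> H
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = XrayToMetricMap()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch: 2 X-rays and 2 target maps (standing in for QCT/DXA-derived BMD maps).
xray = torch.randn(2, 1, 256, 256)
target_map = torch.randn(2, 1, 256, 256)

pred_map = model(xray)
loss = F.l1_loss(pred_map, target_map)  # pixel-wise regression loss
loss.backward()
optimizer.step()

# Scalar metrics (e.g., mean BMD over a region) can then be read off the map.
mean_metric = pred_map.mean(dim=(2, 3))
```

The point of such a formulation is that the network predicts a spatial map rather than a single number, so the same synthesized output can support several downstream quantities (regional BMD, muscle metrics) at once.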
Short Bio: Yi Gu is a third-year PhD candidate in the joint labs of Imaging-based Computational Biomedicine (ICB, advised by Prof. Yoshinobu Sato) and Biomedical Imaging Intelligence (BII, advised by Prof. Yoshito Otake) at the Nara Institute of Science and Technology. His research focuses on medical data understanding, including anatomy recognition and quantification from CT and X-ray images using generative models and multimodal learning.