Fully Automated Segmentation of Subcutaneous Fat in CT Images

Type: MA thesis

Status: finished

Date: June 1, 2022 - December 1, 2022

Supervisors: Felix Denzinger, Leonhard Rist, Felix Durlak, Andreas Maier

Thesis description

Given that obesity is a major global health issue and that body fat is an important risk factor
for cancer as well as many cardiovascular and metabolic diseases [1, 2], a precise tool for measuring
the distribution of adipose tissue is of high interest. Adipose tissue is also involved in many
physiological functions, both as the principal energy storage organ and through its endocrine activity [3].
Computed tomography (CT) and magnetic resonance imaging (MRI) are both used to localize and
quantify body fat. In contrast to CT, however, MRI is more challenging for the segmentation
problem at hand because of image intensity inhomogeneities [4]. In addition, MRI is slower, more expensive,
and thus less clinically available [5].
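Because CT intensities are calibrated in Hounsfield units (HU), adipose tissue can already be roughly localized by simple intensity thresholding; a commonly used fat range is about -190 to -30 HU. A minimal sketch, where the random volume, voxel spacing, and exact thresholds are illustrative assumptions rather than values from this thesis:

```python
import numpy as np

# Hypothetical CT volume in Hounsfield units (HU); in practice this would be
# loaded from DICOM or NIfTI files, e.g. with pydicom or SimpleITK.
ct_volume = np.random.randint(-1024, 1500, size=(40, 128, 128)).astype(np.int16)

# Adipose tissue is commonly characterized by HU values in roughly [-190, -30].
FAT_HU_MIN, FAT_HU_MAX = -190, -30
fat_mask = (ct_volume >= FAT_HU_MIN) & (ct_volume <= FAT_HU_MAX)

# Fat volume in ml, assuming a hypothetical voxel spacing of 0.8 x 0.8 x 1.5 mm.
voxel_volume_ml = (0.8 * 0.8 * 1.5) / 1000.0
fat_volume_ml = fat_mask.sum() * voxel_volume_ml
```

Such a threshold mask alone cannot separate subcutaneous from visceral fat, which is precisely why the contour-based and learning-based methods discussed below are needed.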

To the best of our knowledge, all approaches in the most recent publications addressing the problem
of subcutaneous adipose tissue (SAT) segmentation still lack one or more important aspects. The vast
majority of related approaches are only semi-automatic, requiring carefully chosen user input [6],
or remain largely manual [1, 7, 8]. Several convolutional neural network-based
methods [9, 10], together with an active-contour-based method [11] for fully automatic subcutaneous
fat segmentation, have already been proposed, but they are either limited to the abdominal region or
operate only on 2D image slices, which is sub-optimal for 3D image data. A novel neural network
architecture has achieved accurate 3D segmentation results on volumetric CT data for
both thorax and abdomen [5]. However, its remaining drawbacks are mislabeled annotations for
certain thoracic slices and a relatively small training dataset (only 18 images), which restricts
model generalizability. Therefore, this thesis is intended to fill the gaps in
previous publications by introducing a fully automatic, more reliable and reproducible framework for
3D segmentation of abdominal and thoracic SAT in CT images.

Semantic segmentation networks have become a powerful tool for segmenting spatially structured
images and thus play an essential role for biomedical image data. However, since no dataset with
the required ground truth annotations of SAT is available, these annotation masks have to be
generated first. Manually delineating the inner and outer contours that define SAT on axial
images is an inefficient and time-consuming process, so semi-automatic algorithms are used to
accelerate the generation of initial segmentation masks for our dataset. The dataset consists
of selected CT images from a Siemens-internal database. The work of this thesis is
twofold. In the first phase, active contours (AC) [12, 13] will be used as a baseline algorithm for SAT
segmentation. However, several improvement steps, such as tuning the AC parameters,
applying suitable preprocessing, and enforcing consistent 3D masks [14], need to be implemented to obtain satisfactory
segmentations. The final annotations can then be used for training after some manual corrections.
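To illustrate the region-based active contour family relevant to this first phase, the following sketch implements a minimal morphological active-contours-without-edges (ACWE) iteration on a synthetic 2D slice. It is a simplified stand-in, not the geodesic AC of [13] or the actual thesis implementation; all data and parameters are made up:

```python
import numpy as np
from scipy import ndimage

def morphological_acwe(image, init_mask, num_iter=20, smoothing=1):
    """Minimal morphological active-contours-without-edges (ACWE) sketch.

    The mask evolves so that intensities inside and outside it become
    homogeneous, with binary opening/closing as a curvature-like regularizer.
    """
    u = init_mask.astype(bool)
    for _ in range(num_iter):
        inside, outside = image[u], image[~u]
        c1 = inside.mean() if inside.size else 0.0   # mean inside the contour
        c2 = outside.mean() if outside.size else 0.0  # mean outside the contour
        # Region force: assign each pixel to the closer region mean.
        u = (image - c1) ** 2 < (image - c2) ** 2
        # Morphological smoothing of the evolving mask.
        for _ in range(smoothing):
            u = ndimage.binary_opening(u)
            u = ndimage.binary_closing(u)
    return u

# Synthetic 2D "slice": a bright disk on a dark background, plus noise.
yy, xx = np.mgrid[:64, :64]
image = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)
image += 0.1 * np.random.default_rng(0).standard_normal(image.shape)

init = np.zeros_like(image, dtype=bool)
init[20:44, 20:44] = True  # coarse initialization around the object
mask = morphological_acwe(image, init)
```

The same idea extends slice-wise or volumetrically to CT, where the "bright disk" role is played by the thresholded fat compartment between the skin and the muscle fascia.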
In the second phase, a deep neural network will be trained on the annotated dataset to obtain a
fully automatic 3D SAT segmentation. For this task, nnU-Net [15], a state-of-the-art
deep learning-based segmentation framework, will be applied. Furthermore, different training schemes
that rely on anatomical prior knowledge (i.e., separate segmentation networks for thorax and
abdomen) and ground truth-driven patch sampling will be implemented and evaluated.
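Ground truth-driven patch sampling can be sketched as drawing training patches centered on randomly chosen foreground voxels, so that the thin SAT structures are guaranteed to appear in every sampled patch. A minimal sketch with a toy label volume; the function name, shapes, and patch size are illustrative assumptions:

```python
import numpy as np

def sample_foreground_patch(volume, label, patch_size=(32, 64, 64), rng=None):
    """Draw a training patch centered on a random foreground (SAT) voxel.

    Clipping keeps the patch inside the volume while still guaranteeing
    that the chosen foreground voxel lies within the returned patch.
    """
    if rng is None:
        rng = np.random.default_rng()
    fg = np.argwhere(label > 0)                 # coordinates of all SAT voxels
    center = fg[rng.integers(len(fg))]          # pick one at random
    starts = [
        int(np.clip(c - p // 2, 0, s - p))
        for c, p, s in zip(center, patch_size, volume.shape)
    ]
    slices = tuple(slice(st, st + p) for st, p in zip(starts, patch_size))
    return volume[slices], label[slices]

# Toy CT-like volume with a small foreground blob as the "SAT" label.
rng = np.random.default_rng(0)
vol = rng.standard_normal((64, 128, 128)).astype(np.float32)
lab = np.zeros_like(vol, dtype=np.uint8)
lab[30:40, 60:80, 60:80] = 1
patch, patch_lab = sample_foreground_patch(vol, lab, rng=rng)
```

Oversampling foreground in this way counteracts the strong class imbalance between SAT and the rest of the body volume.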

The thesis will comprise the following work items:
• Literature review of the most efficient active contour segmentation algorithms as well as fully-
and semi-automated segmentation methods for subcutaneous fat in CT images
• Utilization of the improved active contour algorithms, combined with some manual corrections,
to generate an annotated dataset
• Implementation and training of a deep neural network to fully automate SAT segmentation
• Quantitative assessment and evaluation of the developed method
• Encapsulation of the new pipeline into usable MeVisLab modules (www.mevislab.de) for later
development of the company’s current prototype software
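The quantitative assessment in the work items above is commonly reported with overlap metrics such as the Dice similarity coefficient (DSC); a minimal sketch with made-up example masks:

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |P intersect G| / (|P| + |G|), in [0, 1]."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection) / (pred.sum() + gt.sum() + eps)

# Two overlapping 4x4 squares: 16 voxels each, 9 voxels of overlap.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
# DSC = 2 * 9 / (16 + 16) = 0.5625
score = dice_coefficient(a, b)
```

In practice, the DSC would be complemented by surface-distance metrics and by comparing the derived fat volumes against the manually corrected annotations.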



[1] Amir A. Mahabadi, Joseph M. Massaro, Guido A. Rosito, Daniel Levy, Joanne M. Murabito,
Philip A. Wolf, Christopher J. O’Donnell, Caroline S. Fox, and Udo Hoffmann. Association of
pericardial fat, intrathoracic fat, and visceral abdominal fat with cardiovascular disease burden:
the Framingham Heart Study. European Heart Journal, 30(7):850–856, 2009.
[2] Giuliano Enzi, Mauro Gasparo, Pietro Raimondo Biondetti, Davide Fiore, Marcello Semisa, and
Francesco Zurlo. Subcutaneous and visceral fat distribution according to sex, age, and overweight,
evaluated by computed tomography. The American Journal of Clinical Nutrition, 44(6):739–746,
1986. https://academic.oup.com/ajcn/article/44/6/739/4692311.
[3] Jules Dichamp, Corinne Barreau, Christophe Guissard, Audrey Carrière, Yves Martinez, Xavier
Descombes, Luc Pénicaud, Jacques Rouquette, Louis Casteilla, Franck Plouraboué, and Anne
Lorsignol. 3D analysis of the whole subcutaneous adipose tissue reveals a complex spatial network
of interconnected lobules with heterogeneous browning ability. Scientific Reports, 9(1):6684, 2019.
[4] Hans-Peter Müller, Florian Raudies, Alexander Unrath, Heiko Neumann, Albert C. Ludolph,
and Jan Kassubek. Quantification of human body fat tissue percentage by MRI.
NMR in Biomedicine, 24(1):17–24, 2011.
[5] Tiange Liu, Junwen Pan, Drew A. Torigian, Pengfei Xu, Qiguang Miao, Yubing Tong, and
Jayaram K. Udupa. ABCNet: A new efficient 3D dense-structure network for segmentation and
analysis of body tissue composition on body-torso-wide CT images. Medical Physics,
47(7):2986–2999, 2020. https://doi.org/10.1002/mp.14141.
[6] Robin F. Gohmann, Sebastian Gottschling, Patrick Seitz, Batuhan Temiz, Christian Krieghoff,
Christian Lücke, Matthias Horn, and Matthias Gutberlet. 3D-segmentation and characterization
of visceral and abdominal subcutaneous adipose tissue on CT: influence of contrast medium and
contrast phase. Quantitative Imaging in Medicine and Surgery, 11(2):697–705, 2021.
[7] Won G. Kwack, Yun-Seong Kang, Yun J. Jeong, Jin Y. Oh, Yoon K. Cha, Jeung S. Kim, and
Young S. Yoon. Association between thoracic fat measured using computed tomography and lung
function in a population without respiratory diseases. Journal of Thoracic Disease, 11(12):5300–
5309, 2019. http://jtd.amegroups.com/article/view/34021/html.
[8] Yubing Tong, Jayaram K. Udupa, Drew A. Torigian, Dewey Odhner, Caiyun Wu, Gargi Pednekar,
Scott Palmer, Anna Rozenshtein, Melissa A. Shirk, John D. Newell, Mary Porteous, Joshua M.
Diamond, Jason D. Christie, and David J. Lederer. Chest fat quantification via CT based on
standardized anatomy space in adult lung transplant candidates. PLOS ONE, 12(1), 2017.
[9] Zheng Wang, Yu Meng, Futian Weng, Yinghao Chen, Fanggen Lu, Xiaowei Liu, Muzhou Hou,
and Jie Zhang. An effective CNN method for fully automated segmenting subcutaneous and
visceral adipose tissue on CT scans. Annals of Biomedical Engineering, 48(1):312–328, 2020.
[10] Sebastian Nowak, Anton Faron, Julian A. Luetkens, Helena L. Geißler, Michael Praktiknjo, Wolfgang
Block, Daniel Thomas, and Alois M. Sprinkart. Fully automated segmentation of connective
tissue compartments for CT-based body composition analysis. Investigative Radiology, 55(6):357–
366, 2020. http://journals.lww.com/10.1097/RLI.0000000000000647.
[11] Scott J. Lee, Jiamin Liu, Jianhua Yao, Andrew Kanarek, Ronald M. Summers, and Perry J.
Pickhardt. Fully automated segmentation and quantification of visceral and subcutaneous fat at
abdominal CT: application to a longitudinal adult screening cohort. The British Journal of Radiology,
91:20170968, 2018. http://www.birpublications.org/doi/10.1259/bjr.20170968.
[12] Fabien Pierre, Mathieu Amendola, Clémence Bigeard, Timothé Ruel, and Pierre-Frédéric Villard.
Segmentation with active contours. Image Processing On Line, 11:120–141, 2021.
[13] Vicent Caselles, Ron Kimmel, and Guillermo Sapiro. Geodesic active contours. IEEE Comput.
Soc. Press, pages 694–699, 1995. http://ieeexplore.ieee.org/document/466871/.
[14] Huiyan Jiang and Qingshui Cheng. Automatic 3D segmentation of CT images based on active
contour models. 11th IEEE International Conference on Computer-Aided Design and Computer
Graphics (CAD/Graphics), pages 540–543, 2009.
[15] Fabian Isensee, Paul F. Jaeger, Simon A. A. Kohl, Jens Petersen, and Klaus H. Maier-Hein.
nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature
Methods, 18(2):203–211, 2021. http://www.nature.com/articles/s41592-020-01008-z.