Index
Robust Image Registration Algorithms for Reference-Based X-Ray Defect Detection in Non-Destructive Testing
In our 2D X-ray non-destructive testing (NDT) pipeline, we use two inspection strategies: (a) reference-less inspection, which works well on simple parts; and (b) reference-based inspection, which is used for complex, large parts where reference-less methods result in false positives and missed defects. Reference-based detection fundamentally relies on a high-quality 'golden' image that must be precisely aligned with the test image for accurate defect detection. However, current moment-based registration algorithms perform poorly when confronted with practical imaging variations, including translation, rotation, and slight non-rigid deformations. Slight changes in perspective (common in X-ray setups due to varying source-detector distances) are not handled well, resulting in residual misalignment. These registration failures directly cause critical defects to be missed or false positives to be raised.
This thesis will identify and evaluate registration approaches that can handle rigid transformations and slight scale differences while preserving small defect artefacts.
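As an illustration of the kind of intensity-based baseline such an evaluation might include, the following minimal sketch aligns a test image to the golden image with OpenCV's ECC criterion under a rigid (Euclidean) motion model. This is not the thesis method; the file names and parameter values are assumptions.

```python
# Minimal sketch (OpenCV >= 4.1): intensity-based rigid alignment of a test
# X-ray to the golden reference via the ECC criterion. Illustrative baseline
# only; file names and parameter values are assumptions.
import cv2
import numpy as np

def register_rigid(golden, test, iterations=200, eps=1e-6):
    """Estimate a Euclidean (rotation + translation) warp that maps `test`
    onto `golden`; cv2.MOTION_AFFINE would additionally absorb slight scale."""
    warp = np.eye(2, 3, dtype=np.float32)  # identity initialization
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                iterations, eps)
    # gaussFiltSize=5 smooths both images before the ECC optimization.
    _, warp = cv2.findTransformECC(golden, test, warp, cv2.MOTION_EUCLIDEAN,
                                   criteria, None, 5)
    h, w = golden.shape
    return cv2.warpAffine(test, warp, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

golden = cv2.imread("golden.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
test = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
aligned = register_rigid(golden, test)
residual = cv2.absdiff(golden, aligned)  # defect candidates after alignment
```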
Few Shot Writer Identification and Retrieval using Handwritten Primitives
LLM-Based Similarity Search for Industrial Software Test Failures
An LLM Framework for Scalable Software Trace Analysis and Summarization
Mutual Information-Based Segmentation for Unseen Domain Generalization in Digital Pathology
The introduction of automated slide scanners has facilitated the digitization of histopathological samples, enhancing the capabilities of traditional light microscopy by allowing the use of automated image analysis algorithms. Machine learning algorithms have demonstrated great potential in this regard by extrapolating learned characteristics from annotated datasets to unseen data, thus providing valuable assistance to pathologists in their diagnostic work. The performance of these models, however, can be significantly degraded by variations in image characteristics, including differences in scanners used for image acquisition, staining methods, resolution, illumination, and artifacts [1, 2]. These challenges highlight the difficulty of applying trained models across environments, necessitating domain adaptation techniques.
Previous studies have already addressed color inconsistencies in histological samples, with calibration slides being one approach to resolving scanner-dependent variations [3]. Further notable pre-processing (–) and training (⋆) techniques include:
– Data augmentation to simulate variability in the input data (e.g. domain or spatial transformations) [4,5]
– Image-level domain adaptation to align visual features across domains, mitigating distributional discrepancies, e.g. stain normalization to reduce inter-sample and inter-scanner color variation [5,6]
– Multi-scale processing to capture features at different resolutions [2]
⋆ Heterogeneous dataset training to improve model generalization across multiple sources [7]
⋆ Transfer learning to leverage pre-trained models, which is ideal for sparsely annotated data [2]
⋆ Domain-invariant feature learning to ensure robustness to scanner and staining variability [8,9], and in particular adversarial training to reinforce robustness against domain shifts [2]
⋆ Disentangled feature learning to isolate distinct underlying factors of data variations, compelling the network to learn shared statistical components across different domains [5]
This thesis investigates the applicability of a mutual information-based feature disentanglement method [5] to cross-domain tumor segmentation in histopathology samples. By separating anatomical features from domain-specific variations, we aim for robust, scanner-invariant segmentation performance. The objective is to enhance the generalizability of the network and enable direct application to unseen domains without adaptation.
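For intuition on the disentanglement objective, the following toy PyTorch sketch estimates the mutual information between an anatomy embedding and a domain embedding with a MINE-style statistics network (Donsker-Varadhan bound), which can then be penalized during training. All dimensions and module names are hypothetical; this is not the MI-SegNet implementation from [5].

```python
# Toy sketch (PyTorch): estimating and penalizing the mutual information
# between an anatomy embedding and a domain embedding with a MINE-style
# statistics network (Donsker-Varadhan bound). Hypothetical dimensions;
# this is not the MI-SegNet implementation from [5].
import math
import torch
import torch.nn as nn

class MineEstimator(nn.Module):
    """Statistics network T(z_anatomy, z_domain) for the DV lower bound."""
    def __init__(self, dim_a, dim_d, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_a + dim_d, hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, z_anatomy, z_domain):
        n = z_domain.size(0)
        joint = self.net(torch.cat([z_anatomy, z_domain], dim=1))
        # Marginal samples: break the pairing by shuffling domain codes.
        shuffled = z_domain[torch.randperm(n)]
        marginal = self.net(torch.cat([z_anatomy, shuffled], dim=1))
        # MI(z_a; z_d) >= E_joint[T] - log E_marginal[exp(T)]
        return joint.mean() - (torch.logsumexp(marginal.squeeze(1), 0)
                               - math.log(n))

# Adversarial schedule: the estimator ascends this bound (accurate MI
# estimate), while the encoders descend it (disentangled features).
```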
The proposed work comprises the following work items:
– Literature review of device-induced variations in microscopy image data and state-of-the-art methods to address them
– Conceptualization and adaptation of mutual information-based segmentation [5] to address generalization for unseen domains in microscopy image data
– Exploration of targeted augmentation methods for addressing domain shifts in histopathology (e.g. stain augmentation [6]; see the sketch after this list)
– Exploration of suitable metrics for evaluating cross-domain generalization performance
– Documentation and presentation of the findings, documentation of code
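As an illustration of the augmentation work item above, the following minimal sketch performs HED-space stain augmentation in the spirit of [4]: the RGB image is deconvolved into haematoxylin, eosin, and DAB channels, each channel is randomly scaled and shifted, and the result is recomposed. The jitter ranges are illustrative assumptions, not tuned values.

```python
# Minimal sketch: HED-space stain augmentation in the spirit of [4]. The
# jitter ranges alpha/beta are illustrative assumptions, not tuned values.
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def stain_augment(rgb, alpha=0.05, beta=0.01, rng=None):
    """Randomly scale and shift the haematoxylin, eosin and DAB channels
    of an RGB image (float in [0, 1]) after color deconvolution."""
    rng = np.random.default_rng() if rng is None else rng
    hed = rgb2hed(rgb)
    scale = rng.uniform(1 - alpha, 1 + alpha, size=3)  # per-channel gain
    shift = rng.uniform(-beta, beta, size=3)           # per-channel offset
    return np.clip(hed2rgb(hed * scale + shift), 0.0, 1.0)
```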
[1] F. Wilm, M. Fragoso, C. A. Bertram, N. Stathonikos, M. Öttl, J. Qiu, R. Klopfleisch, A. Maier, K. Breininger, and M. Aubreville, “Multi-scanner canine cutaneous squamous cell carcinoma histopathology dataset,” in Bildverarbeitung für die Medizin 2023: Proceedings, German Workshop on Medical Image Computing, Braunschweig, July 2-4, 2023 (T. M. Deserno, H. Handels, A. Maier, K. Maier-Hein, C. Palm, and T. Tolxdorff, eds.), Informatik aktuell, Wiesbaden: Springer Fachmedien Wiesbaden, 2023.
[2] C. L. Srinidhi, O. Ciga, and A. L. Martel, “Deep neural network models for computational histopathology: A survey,” Medical Image Analysis, vol. 67, p. 101813, Jan. 2021.
[3] X. Ji, R. Salmon, N. Mulliqi, U. Khan, Y. Wang, A. Blilie, B. G. Pedersen, K. D. Sørensen, B. P. Ulhøi, R. Kjosavik, E. A. M. Janssen, M. Rantalainen, L. Egevad, P. Ruusuvuori, M. Eklund, and K. Kartasalo, “Physical Color Calibration of Digital Pathology Scanners for Robust Artificial Intelligence Assisted Cancer Diagnosis.”
[4] M. Balkenhol, N. Karssemeijer, G. J. S. Litjens, J. Van Der Laak, F. Ciompi, and D. Tellez, “H&E stain augmentation improves generalization of convolutional networks for histopathological mitosis detection,” in Medical Imaging 2018: Digital Pathology (M. N. Gurcan and J. E. Tomaszewski, eds.), (Houston, United States), p. 34, SPIE, Mar. 2018.
[5] Y. Bi, Z. Jiang, R. Clarenbach, R. Ghotbi, A. Karlas, and N. Navab, “MI-SegNet: Mutual Information-Based US Segmentation for Unseen Domain Generalization,” Feb. 2024. arXiv:2303.12649.
[6] M. Macenko, M. Niethammer, J. S. Marron, D. Borland, J. T. Woosley, X. Guan, C. Schmitt, and N. E. Thomas, “A method for normalizing histology slides for quantitative analysis,” in 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, (Boston, MA, USA), pp. 1107–1110, IEEE, June 2009.
[7] M. Aubreville, F. Wilm, N. Stathonikos, K. Breininger, T. A. Donovan, S. Jabari, M. Veta, J. Ganz, J. Ammeling, P. J. Van Diest, R. Klopfleisch, and C. A. Bertram, “A comprehensive multi-domain dataset for mitotic figure detection,” Scientific Data, vol. 10, p. 484, July 2023.
[8] A. Moyes, “A Novel Method For Unsupervised Scanner-Invariance With DCAE Model.”
[9] M. W. Lafarge, J. P. W. Pluim, K. A. J. Eppenhof, P. Moeskops, and M. Veta, “Domain-adversarial neural networks to address the appearance variability of histopathology images,” 2017. arXiv:1707.06183.
Modernizing and Extending miRNexpander: A Web-Based Interface for Network Expansion of Molecular Interactions in Biomedical Research
RPA Bots for Process Automation in Workflow Management at DATEV eG
Context-Aware Emotion Recognition from Pictures using Frozen CLIP
Evaluation of SHViT for Volumetric Semantic Segmentation in Industrial CT Scans
Industrial computed tomography (iCT) is a widely applied tool in non-destructive testing, material analysis, quality control, and metrology. Semantic segmentation of industrial CT data plays a central role in these applications by enabling quality inspection, material differentiation and part separation [1]. While convolutional neural networks (CNNs) have traditionally performed well in segmentation tasks by capturing local structures, their limited ability to model long-range dependencies poses challenges in complex 3D datasets.
Transformer-based models have recently emerged as promising alternatives. By dividing the input into patches and using self-attention mechanisms, transformers can model global dependencies. However, early vision transformers had difficulties capturing spatial structure and learning from limited data. The Swin Transformer was one of the first models to address these issues by introducing a hierarchical structure and shifted windows, combined with an inductive bias that improves generalization on small datasets [2].
Despite these advances, transformers remain resource-intensive. Newer models such as the Single-Head Vision Transformer (SHViT) aim to reduce computational cost while maintaining performance. Rather than extending the windowed multi-head scheme, SHViT applies single-head self-attention to a subset of the feature channels and combines it with a memory-efficient macro design, reducing redundancy in both attention heads and feature maps [3].
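To make the architectural idea concrete, the following simplified PyTorch sketch shows single-head self-attention applied to a subset of channels, the mechanism behind SHViT's efficiency gains [3]. The dimensions and the partial-channel ratio are illustrative assumptions, not the reference implementation.

```python
# Simplified sketch (PyTorch) of single-head self-attention over a subset
# of channels, the mechanism behind SHViT's efficiency gains [3]. The
# dimensions and partial-channel ratio are illustrative assumptions.
import torch
import torch.nn as nn

class SingleHeadAttention(nn.Module):
    def __init__(self, dim, qk_dim=16, partial_ratio=0.25):
        super().__init__()
        self.attn_dim = int(dim * partial_ratio)  # only these channels attend
        self.qk_dim = qk_dim
        self.scale = qk_dim ** -0.5
        self.qkv = nn.Conv1d(self.attn_dim, 2 * qk_dim + self.attn_dim, 1)
        self.proj = nn.Conv1d(dim, dim, 1)

    def forward(self, x):  # x: (batch, channels, tokens)
        attn_part, rest = x.split([self.attn_dim, x.size(1) - self.attn_dim], 1)
        q, k, v = self.qkv(attn_part).split(
            [self.qk_dim, self.qk_dim, self.attn_dim], dim=1)
        attn = torch.softmax(q.transpose(1, 2) @ k * self.scale, dim=-1)
        out = v @ attn.transpose(1, 2)  # one head, no head splitting/merging
        return self.proj(torch.cat([out, rest], dim=1))

# y = SingleHeadAttention(dim=64)(torch.randn(2, 64, 196))  # (2, 64, 196)
```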
This thesis focuses on the implementation and evaluation of a volumetric SHViT model for 3D semantic segmentation. The model is tested on a real-world dataset of industrial CT scans of boxed shoes, which includes several segmentation tasks: separating the shoes from their surroundings and identifying individual components such as the insole, outsole, and upper [4]. As is typical for industrial CT data, the dataset is limited in size; yet its structural variability makes it an interesting benchmark for assessing model generalization. Because the segmentation classes are imbalanced, the F1-score is used as the primary evaluation metric. The network is also evaluated in terms of memory and computational resource use.
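As a sketch of the planned evaluation, per-class F1 (Dice) scores can be computed directly on the label volumes and macro-averaged so that small components such as the insole are not drowned out by the background. The class count below is an assumption.

```python
# Minimal sketch: per-class F1 (Dice) on integer label volumes; the macro
# average prevents the background class from dominating the score. The
# number of classes is an assumption.
import numpy as np

def f1_per_class(pred, target, num_classes):
    """pred, target: integer label volumes of identical shape."""
    scores = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        tp = np.logical_and(p, t).sum()
        denom = p.sum() + t.sum()
        scores.append(2.0 * tp / denom if denom > 0 else np.nan)
    return scores

# macro_f1 = np.nanmean(f1_per_class(pred, target, num_classes=4))
```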
The SHViT model will be compared to a CNN-based baseline with respect to accuracy, robustness, and computational efficiency in the context of 3D industrial segmentation. While the study aims to inform the selection of neural architectures for iCT applications, its conclusions are limited by the use of a single dataset. Nonetheless, SHViT shows potential for broader use in iCT, as it could enable the efficient application of transformer-based models to volumetric segmentation across diverse industrial datasets.
Literature
[1] S. Bellens, P. Guerrero, P. Vandewalle, and W. Dewulf, “Machine learning in industrial X-ray computed tomography – a review,” CIRP Journal of Manufacturing Science and Technology, pp. 324–341, 2024.
[2] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, “Swin Transformer: Hierarchical Vision Transformer using Shifted Windows,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, Canada, 2021.
[3] S. Yun and Y. Ro, “SHViT: Single-Head Vision Transformer with Memory Efficient Macro Design,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5756–5767, 2024.
[4] M. Leipert, G. Herl, J. Stebani, S. Zabler, and A. Maier, “Three Step Volumetric Segmentation for Automated Shoe Fitting,” e-Journal of Nondestructive Testing, vol. 28, no. 3, 2023.