Shallow Networks and AI Explainability in the Context of vDCE for Breast MRI

Type: MA thesis

Status: running

Supervisors: Julian Hoßbach, Tri-Thien Nguyen, Sebastian Bickelhaupt (Universitätsklinikum Erlangen), Andrzej Liebert (Universitätsklinikum Erlangen), Andreas Maier

Introduction
Dynamic Contrast-Enhanced MRI (DCE-MRI) is a key tool in breast cancer diagnostics, offering detailed vascular information essential for identifying and evaluating tumors [1]. However, the contrast agents used in this process can pose risks, particularly for patients with kidney disease or allergies [2]. Virtual Dynamic Contrast Enhancement (vDCE) provides a promising alternative by generating contrast-enhanced images computationally, removing the need for actual contrast agents [3]. This thesis explores improving vDCE through smaller, more interpretable neural network architectures, including dynamic networks, focusing on better resource efficiency and explainability.

Motivation
Smaller, shallow neural networks offer several advantages, such as:
• Lower Computational Needs: Shallow models require less processing power, making them ideal for limited-resource environments [4].
• Localized Analysis: These models can focus on specific regions, such as individual breast areas, which improves diagnostic accuracy [5].
• Enhanced Transparency: Simpler architectures provide greater clarity in their decision-making process, making results easier for clinicians to interpret [6].
Since insights derived from one breast often do not affect the other, this localized and interpretable approach is particularly well suited to breast MRI analysis.

Objectives
• Develop and Test Shallow Neural Network Models for vDCE: Design models that balance accuracy with simplicity [4].
• Implement Explainability Tools: Apply LIME and SHAP to make model decisions clearer to clinicians [6].
• Explore the Efficiency-Accuracy Trade-off: Examine how smaller models can maintain diagnostic accuracy while being computationally efficient.
• Explore Patch-Based Approaches: Assess how patch-based processing and patch size affect spatial context and interpretability.

Methodology
• New Network Architectures: Investigate linear models, dynamic convolutions, hypernetworks, and attention mechanisms to optimize shallow networks (a dynamic-convolution sketch follows this list).
• Explainability Methods: Apply LIME and SHAP for clearer decision insights. This includes exploring the impact of patch size on capturing spatial context and analyzing how strongly specific input features influence the model's decision making (see the patch and SHAP sketches below).
• Performance Metrics: Compare shallow models against deeper models for accuracy, efficiency, and interpretability. The evaluation will include a metrics-based comparison with state-of-the-art methods [3] and a reader study involving radiologists to assess the clinical relevance and usability of the outputs (a PSNR/SSIM sketch closes this section).
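
To make the architectural direction concrete, below is a minimal PyTorch sketch of a dynamic convolution layer inside a two-layer vDCE network. All names, channel counts, and the choice of four candidate kernels are illustrative assumptions, not the thesis' final design; the layer blends several candidate kernels with input-dependent attention weights, one of the mechanisms listed above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Dynamic convolution (sketch): a small gating branch predicts
    softmax weights over K candidate kernels, which are mixed per
    input sample before a single grouped convolution is applied."""
    def __init__(self, in_ch, out_ch, kernel_size=3, num_kernels=4):
        super().__init__()
        self.out_ch, self.padding = out_ch, kernel_size // 2
        # K candidate kernels; the gate below decides how to blend them
        self.weight = nn.Parameter(
            0.02 * torch.randn(num_kernels, out_ch, in_ch,
                               kernel_size, kernel_size))
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, num_kernels))

    def forward(self, x):
        b, c, h, w = x.shape
        attn = torch.softmax(self.gate(x), dim=1)                    # (B, K)
        mixed = torch.einsum('bk,koiuv->boiuv', attn, self.weight)   # per-sample kernels
        # Run all samples at once as one grouped convolution
        y = F.conv2d(x.reshape(1, b * c, h, w),
                     mixed.reshape(b * self.out_ch, c, *mixed.shape[-2:]),
                     padding=self.padding, groups=b)
        return y.reshape(b, self.out_ch, h, w)

class ShallowVDCE(nn.Module):
    """Hypothetical two-layer mapping from non-contrast input channels
    (e.g. T1w + DWI, an assumption) to one virtual contrast-enhanced image."""
    def __init__(self, in_ch=2, hidden=16):
        super().__init__()
        self.body = nn.Sequential(
            DynamicConv2d(in_ch, hidden), nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=1))

    def forward(self, x):
        return self.body(x)

model = ShallowVDCE()
print(model(torch.randn(2, 2, 64, 64)).shape)  # torch.Size([2, 1, 64, 64])
```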
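
For the patch-based direction, a simple way to study the effect of patch size is to slice each image into overlapping patches and train or explain the model per patch. A sketch using torch.Tensor.unfold; the patch size, stride, and channel semantics are assumptions to be varied in the experiments:

```python
import torch

def extract_patches(image, patch=32, stride=16):
    """Slice a multi-channel 2D image into overlapping patches so a
    shallow model can be trained and explained locally.
    image: (C, H, W); returns (N, C, patch, patch)."""
    c, _, _ = image.shape
    p = image.unfold(1, patch, stride).unfold(2, patch, stride)  # (C, nH, nW, p, p)
    return p.permute(1, 2, 0, 3, 4).reshape(-1, c, patch, patch)

x = torch.randn(2, 128, 128)   # stand-in for two non-contrast input channels
patches = extract_patches(x)   # vary `patch` to study spatial context
print(patches.shape)           # torch.Size([49, 2, 32, 32])
```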
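
Because SHAP and LIME explainers expect a scalar or vector output rather than a full image, one possible workaround for image-to-image vDCE is to explain a scalar summary of the prediction, e.g. the mean virtual enhancement inside a region of interest. The sketch below assumes shap's GradientExplainer, which accepts PyTorch modules; the ROI, the stand-in model, and all shapes are hypothetical, and a LIME variant would wrap the same scalar function.

```python
import torch
import torch.nn as nn
import shap  # pip install shap

class ROIMean(nn.Module):
    """Wraps an image-to-image vDCE model so it emits one scalar per
    sample: the mean predicted enhancement inside a fixed ROI mask.
    SHAP then attributes this scalar back to the input voxels/channels."""
    def __init__(self, vdce, roi_mask):
        super().__init__()
        self.vdce = vdce
        self.register_buffer('roi', roi_mask)  # (1, 1, H, W), binary

    def forward(self, x):
        pred = self.vdce(x)                                         # (B, 1, H, W)
        score = (pred * self.roi).sum(dim=(1, 2, 3)) / self.roi.sum()
        return score.unsqueeze(1)                                   # (B, 1)

# Stand-in shallow model and data; replace with the trained vDCE network.
vdce = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(8, 1, kernel_size=1))
roi = torch.zeros(1, 1, 64, 64)
roi[..., 24:40, 24:40] = 1.0               # hypothetical tumour ROI
background = torch.randn(16, 2, 64, 64)    # reference inputs for SHAP
test_batch = torch.randn(4, 2, 64, 64)

explainer = shap.GradientExplainer(ROIMean(vdce, roi).eval(), background)
shap_values = explainer.shap_values(test_batch)  # per-voxel attributions
```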
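
For the metrics-based comparison, image-quality scores such as PSNR and SSIM between acquired DCE images and vDCE predictions are a common starting point, complemented by the reader study. A minimal sketch using scikit-image; the random arrays are placeholders for real image pairs:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(reference, prediction):
    """PSNR/SSIM between a real DCE image and a vDCE prediction,
    both 2D arrays on a shared intensity scale."""
    rng = reference.max() - reference.min()
    return {
        'psnr': peak_signal_noise_ratio(reference, prediction, data_range=rng),
        'ssim': structural_similarity(reference, prediction, data_range=rng),
    }

ref = np.random.rand(64, 64).astype(np.float32)                    # stand-in ground truth
pred = ref + 0.05 * np.random.randn(64, 64).astype(np.float32)     # stand-in prediction
print(image_quality(ref, pred))
```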

References
1. Turnbull, L.W. Dynamic contrast-enhanced MRI in the diagnosis and management of breast cancer. NMR Biomed. 22, 28-39 (2009). https://doi.org/10.1002/nbm.1273
2. Andreucci, M., Solomon, R., Tasanarong, A. Side effects of radiographic contrast media: pathogenesis, risk factors, and prevention. Biomed Res Int. 2014, 741018 (2014). https://doi.org/10.1155/2014/741018
3. Schreiter, H., et al. Virtual dynamic contrast enhanced breast MRI using 2D U-Net architectures. medRxiv (2024): 2024-08.
4. Prinzi, F., Currieri, T., Gaglio, S., et al. Shallow and deep learning classifiers in medical image analysis. Eur Radiol Exp 8, 26 (2024). https://doi.org/10.1186/s41747-024-00428-2
5. van der Velden, B.H.M., Janse, M.H.A., Ragusi, M.A.A., et al. Volumetric breast density estimation on MRI using explainable deep learning regression. Sci Rep 10, 18095 (2020). https://doi.org/10.1038/s41598-020-75167-6
6. Gulum, M.A., Trombley, C.M., Kantardzic, M. A Review of Explainable Deep Learning Cancer Detection Models in Medical Imaging. Appl. Sci. 11, 4573 (2021). https://doi.org/10.3390/app11104573