Pattern Recognition Lab with Outstanding Success at BVM 2026
The Pattern Recognition Lab at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) reports exceptional success in this year’s submission round for “Bildverarbeitung für die Medizin (BVM) 2026”. Of 19 submitted contributions, 18 were accepted, an acceptance rate of 94.7 percent. The accepted works span the full breadth of the lab’s research activities in medical imaging, image and signal processing, and AI-driven decision support in medicine.
In total, 14 full papers and 4 abstracts from the lab have been accepted to the conference. All 14 submitted full papers were accepted, underlining both the scientific quality and the consistency of the lab’s output. Of these, 5 were selected for oral presentation and 9 for poster presentation. All 4 accepted abstracts will be presented in the poster sessions, giving visibility to several emerging research directions and collaborations.
The selection of five contributions for oral presentation in BVM’s competitive program is a particular highlight. The paper “BE-WISE: Breast MRI Evaluation with Weakly-Informed Slice-level Explanation” advances the use of explainable AI in breast MRI, providing slice-level interpretability that can help clinicians better understand and trust algorithmic decisions. “Self-supervised dual-domain Swin transformer for sparse-view CT reconstruction” explores cutting-edge transformer architectures for improving CT image reconstruction from limited data, with direct impact on dose reduction and image quality. With “Filter2Noise-4D: An Interpretable Framework for Zero-Shot 4D Low-Dose CT Denoising”, the lab presents an innovative, interpretable approach to denoising dynamic CT data without the need for retraining on each new protocol, combining physical insight with modern deep learning. The paper “Vision-Language Models for Structured Medical Report Generation: Towards Consistent and Reliable Chest X-Ray Reporting” shows how large vision–language models can support standardized, structured reporting in radiology, aiming to reduce variability and increase reliability in clinical documentation. Finally, “Parameter-Efficient Finetuning of Foundational Models for Text-Guided X-Ray Image Segmentation” demonstrates how foundational segmentation models can be adapted efficiently for text-guided tasks, lowering the barrier for deploying powerful image understanding tools in specialized clinical scenarios.
The strong oral program is complemented by a broad portfolio of accepted poster contributions. In X-ray and CT imaging, the lab will present work on “Prediction of Patient and Mobile C-arm System Orientation in Orthopedic Trauma Procedures” as well as “Opportunistic Breast Cancer Risk Stratification from Low-Dose Chest CT Using Multiple Instance Learning”, both illustrating how imaging data collected in routine workflows can be turned into actionable guidance and risk assessment. Methodological contributions such as “Differentiable Approximate Truncation Robust CBCT Reconstruction via Known Operator Learning” and “AI-Based Dual-Domain Framework for Gridline Suppression in Digital Radiography” highlight the lab’s long-standing expertise in combining physical modeling with modern machine learning to solve challenging reconstruction and image quality problems.
Further accepted posters underline the lab’s impact across modalities and clinical questions. “A Comparative Study of Deep Learning Models for Brain Metastases Autosegmentation” and “Hybrid vessel wall segmentation for assisted annotation in CT Angiography” contribute to automated segmentation in neuro-oncology and cardiovascular imaging, easing annotation efforts and enabling large-scale studies. In MRI, “Automatic Patient Positioning Control and Correction on MRI Localizer Images” tackles the practical but crucial problem of reliable patient positioning, aiming to improve robustness and efficiency in everyday clinical workflows.
The lab’s growing activities in multiscale and multimodal imaging are visible in works such as “BigReg: An Efficient Registration Pipeline for High-Resolution X-Ray and Light-Sheet Fluorescence Microscopy” and “DINO Adapted to X-Ray (DAX) – Foundation Models for Intraoperative X-Ray Imaging”, which connect advanced registration and foundation models to high-resolution and intraoperative imaging scenarios. “Anomaly Detection in Thoracic CT” adds to the toolbox of unsupervised and weakly supervised methods for detecting unexpected findings, a key ingredient for scalable AI deployment when fully annotated datasets are scarce. The abstract “DRACO: Differentiable Reconstruction for Arbitrary CBCT Orbits” showcases ongoing work on differentiable reconstruction frameworks that can flexibly accommodate non-standard imaging geometries.
Beyond imaging, the lab also pursues cross-modal and audio-visual research directions. “Audio–Vision Contrastive Learning for Phonological Class Recognition” bridges medical signal processing and computer vision by exploring contrastive learning setups that fuse auditory and visual information, building on the lab’s long tradition in medical audio and speech analysis.
“Securing 18 acceptances from 19 submissions and five oral presentations in such a competitive environment is a fantastic achievement by our team,” says Prof. Andreas Maier, head of the Pattern Recognition Lab at FAU. “The results reflect not only our methodological strength in reconstruction, image analysis, and interpretable AI, but also the close collaboration with clinical partners across modalities and disciplines. We are looking forward to lively discussions at BVM 2026 and to further strengthening the link between foundational research and clinical translation.”
With this strong presence at BVM 2026, the Pattern Recognition Lab once again underlines its leading role in the fields of medical image and signal processing, interpretable machine learning, and data-driven medicine. The breadth of accepted work, from foundational models and reconstruction theory to clinical applications in oncology, cardiology, radiology, and speech, illustrates the lab’s mission to advance medical imaging and digital health for the benefit of patients and healthcare professionals alike.

