Index
Multimodal Speech MRI
Multimodal Aphasia Detection
Distributed neural networks for hearing aid processing
Towards Autonomous Knowledge Evolution: A Self-Evolving Knowledge Graph-Based Retrieval Framework for Domain-Specific Intelligent Systems
LLM-Based Similarity Search for Industrial Software Test Failures
An LLM Framework for Scalable Software Trace Analysis and Summarization
Radiology Report Classification
Evaluating few-shot detection on VinDr-CXR
Accurate localization of thoracic abnormalities in chest X-ray images remains a major challenge due to the limited
availability of large-scale, finely annotated datasets. Few-shot learning has recently emerged as a promising strategy to
address this problem by enabling models to generalize to unseen categories with only a small number of labeled
examples. In this work, we propose an improved few-shot localization approach for VinDr-CXR images by leveraging
the DINO-DETR model, a transformer-based detection framework with self-supervised pretraining. Our method
adapts DINO-DETR to the few-shot setting through task-specific fine-tuning and optimization strategies designed to
improve feature alignment between support and query samples. Experimental results demonstrate
that the proposed method achieves competitive localization accuracy compared to baseline approaches, while reducing
the reliance on large annotated datasets. Although certain predictions remain imperfect, particularly in cases with subtle
or overlapping pathologies, the approach shows clear potential for scaling to broader medical imaging applications.
This study highlights both the opportunities and limitations of applying state-of-the-art transformer-based detection
architectures to few-shot medical image localization and suggests directions for future improvements, such as data
augmentation and cross-domain pretraining.
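Few-shot detection experiments like the one described above are typically organized into N-way K-shot episodes, where a small support set per class is used to adapt the model and a disjoint query set is used for evaluation. As a minimal, illustrative sketch (not the authors' code), episode construction over image-level labels might look like the following; the function name `sample_episode` and the parameters `n_way`, `k_shot`, and `q_query` are assumptions for illustration:

```python
import random
from collections import defaultdict

def sample_episode(annotations, n_way=3, k_shot=5, q_query=5, seed=0):
    """Sample one N-way K-shot episode from (image_id, label) pairs.

    Returns (support, query), each a list of (image_id, label) tuples.
    The support set would drive task-specific fine-tuning; the query
    set is held out for evaluating localization on unseen examples.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for image_id, label in annotations:
        by_label[label].append(image_id)
    # Keep only classes with enough examples for both support and query.
    eligible = [c for c, ids in by_label.items() if len(ids) >= k_shot + q_query]
    classes = rng.sample(eligible, n_way)
    support, query = [], []
    for c in classes:
        ids = rng.sample(by_label[c], k_shot + q_query)  # without replacement
        support += [(i, c) for i in ids[:k_shot]]
        query += [(i, c) for i in ids[k_shot:]]
    return support, query
```

Sampling without replacement within each class keeps support and query images disjoint, which is what makes the query-set accuracy a fair estimate of generalization from only K labeled examples.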
Hierarchy-Aware Deep Learning for Tironian Notes Recognition
Deep Learning-Based Classification and Explainability of Cytomegalovirus Encephalitis in Longitudinal MRI Data
This master's project focuses on developing and evaluating advanced deep learning methodologies for the automated detection and classification of Cytomegalovirus (CMV)-induced encephalitis using clinical Magnetic Resonance Imaging (MRI) data.
Motivation and Goal
CMV encephalitis is a challenging condition to diagnose, and advanced, non-invasive computational methods are required to assist clinical decision-making. The primary goal is to leverage the temporal and multi-modal information within longitudinal MR scans to classify the presence or stage of inflammation (encephalitis).
Data and Scope
The project utilizes a unique, high-quality, pre-selected longitudinal dataset comprising MRI scans from approximately 300 patients, with an average of six scan time points and multiple MR sequences available per visit.
Key Tasks and Research Questions
- Literature Review: Conduct a targeted literature review of similar projects focused on MR classification, specifically those dealing with longitudinal data (e.g., the BraTS Challenge: Predicting the Tumor Response During Therapy) and methods for medical image classification and prediction.
- Model Development: Adapt and implement state-of-the-art deep learning architectures (e.g., 3D Convolutional Neural Networks, Recurrent Neural Networks, or hybrid models) suitable for processing longitudinal and multi-sequence volumetric data.
- Explainability (XAI): A critical component of the project is the integration of Explainable AI techniques (e.g., Grad-CAM, Saliency Mapping, or LRP). The student will implement and evaluate these methods to highlight which anatomical regions or temporal patterns contribute most significantly to the model’s classification decision, thereby increasing clinical trust and interpretability.
- Evaluation: The core research question is whether robust classification performance can be achieved on the available data, considering potential constraints such as fewer advanced sequences or less acute disease stages compared to published reference literature. Performance will be measured using metrics like AUC, sensitivity, and specificity.
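The metrics named in the evaluation task have standard definitions that are worth keeping explicit when reporting results. As a minimal sketch (pure Python, not tied to any particular framework), sensitivity and specificity follow directly from the confusion-matrix counts, and ROC AUC can be computed via its rank (Mann-Whitney) formulation:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true positive rate) and specificity (true negative
    rate) for binary labels, from raw confusion-matrix counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """ROC AUC as the probability that a random positive case scores
    higher than a random negative one (ties count as half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

In practice a library implementation (e.g. scikit-learn's `roc_auc_score`) would be used; the point of the sketch is that AUC depends only on the ranking of the model's scores, whereas sensitivity and specificity additionally depend on the chosen decision threshold.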