In this thesis, we aim to investigate multi-modal fusion techniques for breast lesion malignancy detection. In clinical practice, a radiologist acquires different image sequences (mammograms, ultrasound, and MRI) to precisely characterize the lesion type. Relying on a single modality carries the risk of missed tumors or a false diagnosis, whereas combining information from different modalities can significantly improve the detection rate.
For example, the evaluation of mammograms of relatively dense breasts is known to be difficult, and ultrasound is then used to provide the information needed for a diagnosis. In other cases, ultrasound is inconclusive while mammograms offer clarity. Many computer-aided detection (CAD) models have been proposed that use either mammograms or sonograms. However, relatively few studies consider both modalities simultaneously for breast cancer diagnosis. With this in mind, we hypothesize that deep neural networks can likewise incorporate complementary features from the two domains to improve the breast cancer detection rate.
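To make this idea concrete, the sketch below shows one common fusion pattern under assumed settings: two convolutional branches extract features from a mammogram patch and an ultrasound patch independently, and their feature vectors are concatenated before a shared classification head. The class name, layer sizes, and input resolutions are illustrative assumptions, not the architecture developed in this thesis.

```python
import torch
import torch.nn as nn

class TwoBranchFusionNet(nn.Module):
    """Hypothetical late-fusion classifier: separate CNN branches for
    mammogram and ultrasound inputs, concatenated before the head."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # One small convolutional encoder per modality (illustrative sizes).
        self.mammo_branch = self._make_branch()
        self.us_branch = self._make_branch()
        # Shared classification head over the concatenated feature vectors.
        self.head = nn.Sequential(
            nn.Linear(2 * 64, 32),
            nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    @staticmethod
    def _make_branch() -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (N, 64, 1, 1)
            nn.Flatten(),             # -> (N, 64)
        )

    def forward(self, mammo: torch.Tensor, us: torch.Tensor) -> torch.Tensor:
        # Concatenation fuses the complementary per-modality features.
        fused = torch.cat([self.mammo_branch(mammo), self.us_branch(us)], dim=1)
        return self.head(fused)

# Example: a batch of four grayscale mammogram/ultrasound pairs.
model = TwoBranchFusionNet()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```

Concatenation is only the simplest fusion point; the same two-branch layout admits earlier (feature-map level) or later (decision level) fusion, which is the design space this thesis explores.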