Index
A Robust Intrusive Perceptual Audio Quality Assessment Based on a Convolutional Neural Network
Abstract
The goal of a perceptual audio quality predictor is to capture the auditory experience of listeners and score audio excerpts without creating a massive workload for the listeners. Methods such as PESQ and ViSQOL serve as computational proxies for subjective listening tests. ViSQOLAudio, the Virtual Speech Quality Objective Listener in audio mode, is a signal-based, full-reference, intrusive metric that models human audio quality perception using a gammatone spectro-temporal measure of similarity between a reference and a degraded audio signal. Here we propose an end-to-end model based on a convolutional neural network with a self-attention mechanism that predicts the perceived quality of audio given a clean reference signal, with improved robustness to adversarial examples. The model is trained and evaluated on a corpus of unencoded 48 kHz audio totaling 12 hours, labeled by ViSQOLAudio to derive a Mean Opinion Score (MOS) for each excerpt.
Keywords: perceptual audio quality assessment, MOS, ViSQOLAudio,
full reference, deep learning, self-attention, end-to-end model
Introduction
Digital audio systems and services use codecs to encode and decode a digital data stream or signal in order to minimize bandwidth and maximize users' quality of experience. Different codecs introduce different quality degradations and artefacts, which affect the perceived audio quality. To evaluate codec performance, a Mean Opinion Score (MOS) is obtained by asking listeners to assess the quality of an audio clip on a scale from one to five. This method is tedious and expensive, so several computational approaches have been designed to predict MOS automatically. Intrusive methods, i.e. those using a full reference signal, calculate a perceptually weighted distance between the clean (unencoded) reference and the degraded (coded) signal. PEAQ, POLQA, PEMO-Q and ViSQOLAudio are four major full-reference models.

Figure 1: A representation of ViSQOLAudio

ViSQOLAudio, which will be the focus and inspiration of this thesis, is an adaptation of ViSQOL that functions as a perceptual audio quality prediction model. ViSQOLAudio introduces a series of novel improvements and has shown outstanding performance against POLQA, PEAQ and PEMO-Q. Inspired and motivated by ViSQOLAudio, we design an end-to-end deep learning network that predicts MOS from gammatone spectrograms, resembling the ViSQOLAudio algorithm while improving prediction performance and robustness to adversarial examples.
Background
The process of ViSQOLAudio consists of four phases: preprocessing, pairing, comparison, and finally the mapping of the similarity measure to MOS. In the preprocessing stage, the mid channel of the reference and degraded signals is extracted, misalignment caused by zero padding is removed, and gammatone spectrograms are computed for both signals. Gammatone filters are a popular linear approximation to the filtering performed by the human auditory system; the audio signal is visualized as a time-varying distribution of energy in frequency, which is one way of describing the information the brain receives from the ears via the auditory nerve. A conventional spectrogram differs from how the ear analyzes sound: the ear's frequency sub-bands grow wider at higher frequencies, whereas the usual spectrogram keeps a constant bandwidth across all frequency channels.
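To make the preprocessing concrete, the following is a simplified Python sketch of a gammatone spectrogram, assuming SciPy's gammatone filter design (scipy.signal.gammatone, SciPy >= 1.6) is available; the band spacing and framing below are illustrative simplifications, not ViSQOLAudio's exact implementation.

```python
# A simplified sketch of a gammatone spectrogram; assumes SciPy >= 1.6.
import numpy as np
from scipy.signal import gammatone, lfilter

def gammatone_spectrogram(x, fs, n_bands=32, fmin=50.0,
                          win_s=0.08, hop_s=0.02):
    # Space the centre frequencies logarithmically (real implementations
    # typically use ERB spacing); stay safely below the Nyquist frequency.
    cfs = np.geomspace(fmin, 0.9 * fs / 2, n_bands)
    win, hop = int(win_s * fs), int(hop_s * fs)
    n_frames = 1 + (len(x) - win) // hop
    spec = np.empty((n_frames, n_bands))
    for j, cf in enumerate(cfs):
        b, a = gammatone(cf, 'iir', fs=fs)      # 4th-order IIR gammatone
        y = lfilter(b, a, x)                    # filter the whole signal
        for i in range(n_frames):
            frame = y[i * hop : i * hop + win]
            spec[i, j] = np.mean(frame ** 2)    # per-frame band energy
    return spec                                  # [time frames, frequency bands]
```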
The pairing step first segments the reference spectrogram into a sequence of patches of 32 frequency bands by 30 frames (i.e., a 32 x 30 matrix). Patches of the same size are then iteratively extracted from the degraded spectrogram to calculate reference-degraded distances and build a set of most similar reference-degraded patch pairs. The similarity of each pair is calculated in the comparison step and averaged across all frequency bands. In the last step the mean frequency band similarity score is mapped to MOS using a support vector regression model.
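A minimal NumPy sketch of the pairing idea follows, under simplifying assumptions: an L2 patch distance stands in for ViSQOLAudio's actual similarity measure (NSIM), and the exhaustive search is unoptimized.

```python
# A simplified sketch of the patch pairing step; ViSQOLAudio's actual
# alignment and similarity computation are more involved than this L2 search.
import numpy as np

PATCH_T = 30   # frames per patch (32 frequency bands x 30 frames)

def pair_patches(ref_spec, deg_spec):
    """ref_spec, deg_spec: [time frames, 32 bands]. Returns best-match pairs."""
    pairs = []
    n_ref_patches = ref_spec.shape[0] // PATCH_T
    for p in range(n_ref_patches):
        ref_patch = ref_spec[p * PATCH_T : (p + 1) * PATCH_T]
        # Slide over the degraded spectrogram and keep the closest patch.
        best_d, best_q = np.inf, 0
        for q in range(deg_spec.shape[0] - PATCH_T + 1):
            d = np.linalg.norm(ref_patch - deg_spec[q : q + PATCH_T])
            if d < best_d:
                best_d, best_q = d, q
        pairs.append((ref_patch, deg_spec[best_q : best_q + PATCH_T]))
    return pairs
```

In ViSQOLAudio itself, the per-pair similarity is then averaged per frequency band before the support vector regression maps the mean similarity to MOS.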
Dataset
The dataset used by the Microsoft team for full-reference speech quality evaluation consists of 2010 clean speech samples at 16 kHz, each up to 20 seconds long with 3 utterances, approximately 33 hours in total. The speech data for the attentional Siamese neural networks were collected from 11 different databases in the POLQA pool, with 5000 clean reference signals totaling up to 16 hours. Building a dataset of between 10 and 30 hours should therefore be adequate as well as efficient for unbiased training in our case.
We collected 48 kHz sampled mono audio files to build our clean reference dataset, which consists of 4500 music excerpts and 900 speech excerpts; each excerpt is exactly 8 seconds long, adding up to 12 hours in total. The reference audio clips are then encoded and decoded with the HE-AAC and AAC codecs at the following bitrates: 16, 20, 24, 32 and 48 kbps with HE-AAC, and 64, 96 and 128 kbps with plain AAC. Coding above 128 kbps is hardly audibly different from the uncoded signal, while coding below 16 kbps degrades the audio so strongly that it is irrelevant in common practical applications. In this way 43,200 degraded signals are generated from the 5400 clean reference signals, and they are expected to fall into 8 quality intervals corresponding to the coding bitrates.
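As an illustration of this encode/decode pipeline, the following sketch drives ffmpeg from Python; it assumes an ffmpeg build that includes the libfdk_aac encoder, and all file names are hypothetical.

```python
# A minimal sketch of the encode/decode pipeline, assuming an ffmpeg build
# with the libfdk_aac encoder; file names and paths are illustrative.
import subprocess

HE_AAC_KBPS = [16, 20, 24, 32, 48]   # encoded with HE-AAC
AAC_KBPS = [64, 96, 128]             # encoded with plain AAC-LC

def encode_decode(ref_wav: str, out_wav: str, kbps: int, he_aac: bool) -> None:
    """Encode the clean reference at the given bitrate, then decode back to WAV."""
    tmp = out_wav + ".m4a"
    cmd = ["ffmpeg", "-y", "-i", ref_wav, "-c:a", "libfdk_aac", "-b:a", f"{kbps}k"]
    if he_aac:
        cmd += ["-profile:a", "aac_he"]  # SBR-based HE-AAC for low bitrates
    subprocess.run(cmd + [tmp], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", tmp, out_wav], check=True)  # decode

for kbps in HE_AAC_KBPS + AAC_KBPS:
    encode_decode("ref_0001.wav", f"deg_0001_{kbps}k.wav", kbps, kbps in HE_AAC_KBPS)
```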
The reference and degraded signals are then paired, aligned and fed into ViSQOLAudio, whose MOS estimates serve as ground-truth labels in place of human-annotated MOS scores. Gammatone spectrograms of the reference and degraded signals are extracted using the MATLAB implementation of the gammatone spectrogram by Daniel Ellis, which runs inside ViSQOLAudio. The gammatone spectrogram of an audio signal is computed with a window size of 80 ms, a hop size of 20 ms, and 32 frequency bands from 50 Hz up to half the sample rate. The gammatone spectrograms of reference and degraded signals are paired and concatenated channel-wise into the shape [channels, time frames, frequency bands] and used as inputs to our neural network.
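For concreteness, one network input could be assembled as follows; gammatone_spectrogram refers to the illustrative helper sketched earlier, and ref_audio/deg_audio are assumed to be already-loaded sample arrays. The frame count follows directly from the stated window and hop sizes.

```python
# A sketch of assembling one network input from a reference/degraded pair,
# reusing the illustrative gammatone_spectrogram helper defined earlier.
import numpy as np

fs = 48000
# 8 s of audio, 80 ms window, 20 ms hop -> 1 + (8.0 - 0.08) / 0.02 = 397 frames
ref_spec = gammatone_spectrogram(ref_audio, fs)   # [397, 32]
deg_spec = gammatone_spectrogram(deg_audio, fs)   # [397, 32]

# Concatenate channel-wise: [channels, time frames, frequency bands] = [2, 397, 32]
x = np.stack([ref_spec, deg_spec], axis=0)
```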
Architecture
Existing deep learning architectures for speech and audio quality assessment generally consist of CNN blocks, RNN blocks or attention layers. The model proposed by the Microsoft team consists of several convolutional layers, batch normalization layers, max pooling layers and fully connected layers with dropout. Other models, such as the attentional Siamese neural network proposed by Gabriel Mittag and Sebastian Möller, add LSTM and attention layers to capture the influence of features over long time sequences.
Self-attention was proposed by Google in 2017 for natural language processing without RNNs. The essence of the attention mechanism is that when human sight or hearing detects an item, it does not scan the entire scene or excerpt end to end; rather, it focuses on a specific portion according to its needs. The attention mechanism dynamically creates a weight matrix between keys and queries, which can be applied to the feature maps or the original input spatially or channel-wise. Interesting and promising applications of attention in computer vision include fine-grained classification, image segmentation and image captioning. Compared to conventional classification with CNNs, an attention module adds a parallel branch of successive down-sampling and up-sampling operations to gain a wider receptive field. The attention map widens the receptive field of the lower layers and highlights the core features that are crucial to the classification task.
Apart from conventional convolutional layers, attention layers as well as squeeze-and-excitation networks (SENet) will be explored in our model. While normal self-attention layers are applied spatially, SENet is a special attention mechanism that applies different weights channel-wise. The appropriate design and parameters of the architecture remain to be determined and tested in further work.
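The following is a minimal PyTorch sketch of the two attention variants under consideration, a channel-wise SE block and a spatial self-attention layer; the channel counts and reduction ratios are illustrative assumptions, not the final design.

```python
# Minimal sketches of channel-wise (SE) and spatial self-attention;
# assumes channels >= 8 so the bottleneck dimensions stay positive.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight feature maps channel-wise."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                      # x: [batch, C, T, F]
        w = x.mean(dim=(2, 3))                 # squeeze: global average pool
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(w))))
        return x * w[:, :, None, None]         # excite: per-channel scaling

class SpatialSelfAttention(nn.Module):
    """Dot-product self-attention over the time-frequency positions."""
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                      # x: [batch, C, T, F]
        b, c, t, f = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)        # [b, T*F, C//8]
        k = self.k(x).flatten(2)                        # [b, C//8, T*F]
        attn = torch.softmax(q @ k / (c // 8) ** 0.5, dim=-1)
        v = self.v(x).flatten(2).transpose(1, 2)        # [b, T*F, C]
        out = (attn @ v).transpose(1, 2).reshape(b, c, t, f)
        return x + out                                  # residual connection
```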
Conclusion
Although state-of-the-art methods have proposed several intrusive deep learning models that learn from the waveform, the spectrogram or other transformed features, most of these models were trained on 16 kHz speech signals and none of them use gammatone spectrograms as input. Our model is the first end-to-end neural network that predicts MOS from gammatone spectrograms derived from a 48 kHz audio dataset. Perceptual audio quality assessment remains a new and promising application of deep learning algorithms, and we expect this approach to carry over to related audio quality tasks.
References
1. Michael Chinen, Felicia S. C. Lim, Jan Skoglund, Nikita Gureev, Feargus O'Gorman and Andrew Hines, "ViSQOL v3: an open source production ready objective speech and audio metric", arXiv:2004.09584v1 [eess.AS], 20 Apr. 2020.
2. Colm Sloan, Naomi Harte, Damien Kelly, Anil C. Kokaram and Andrew Hines, "Objective assessment of perceptual audio quality using ViSQOLAudio", IEEE Transactions on Broadcasting, vol. 63, no. 4, Dec. 2017.
3. Hannes Gamper, Chandan K. A. Reddy, Ross Cutler, Ivan J. Tashev and Johannes Gehrke, "Intrusive and non-intrusive perceptual speech quality assessment using a convolutional neural network", 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics.
4. Gabriel Mittag and Sebastian Möller, "Full-reference speech quality estimation with attentional Siamese neural networks", ICASSP 2020, IEEE.
Synergistic Radiomics and CNN Features for Multiparametric MRI Lesion Classification
Breast cancer is the most frequent cancer among women, impacting 2.1 million women each year. To assist in diagnosing patients with breast cancer, measuring the size of existing breast tumors and checking for tumors in the opposite breast, breast magnetic resonance imaging (MRI) can be applied. MRI has the advantages that patients are not exposed to ionizing radiation during the examination and that it captures the entire breast volume. Meanwhile, machine learning methods have been shown to classify images accurately in many fields by assigning a probability score that estimates the likelihood of an image belonging to a certain category. Given these properties, this project aims to investigate whether applying machine learning approaches to breast tumor MRI can provide an accurate prediction of the tumor type (malignant or benign) for diagnostic purposes.
Dilated deeply supervised networks for hippocampus segmentation in MR
Tissue loss in the hippocampi has been strongly correlated with the progression of Alzheimer's disease (AD). The shape and structure of the hippocampus are important factors for early AD diagnosis and prognosis by clinicians. However, manual segmentation of such subcortical structures in MR studies is a challenging and subjective task. In this paper, we investigate variants of the well-known 3D U-Net, a type of convolutional neural network (CNN) for semantic segmentation tasks. We propose an alternative form of the 3D U-Net that uses dilated convolutions and deep supervision to incorporate multi-scale information into the model; a sketch of these two ingredients follows below. The proposed method is evaluated on the task of hippocampus head and body segmentation on an MRI dataset provided as part of the MICCAI 2018 segmentation decathlon challenge. The experimental results show that our approach outperforms other conventional methods in terms of different segmentation accuracy metrics.
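As an illustration only (not the exact architecture of this work), a dilated 3D convolution block with a deep-supervision side output might look as follows in PyTorch; all layer sizes are arbitrary assumptions.

```python
# An illustrative sketch of a dilated 3D convolution block with a
# deep-supervision side output; not the authors' exact architecture.
import torch.nn as nn

class DilatedBlock3D(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilation: int, n_classes: int):
        super().__init__()
        # Dilation widens the receptive field without additional pooling.
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Deep supervision: an auxiliary 1x1x1 head so that intermediate
        # scales receive a segmentation loss directly during training.
        self.aux_head = nn.Conv3d(out_ch, n_classes, kernel_size=1)

    def forward(self, x):
        features = self.conv(x)
        return features, self.aux_head(features)
```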
Multimodal Breast Cancer Detection using a Fusion of Ultrasound and Mammogram Features
In this thesis, we aim to investigate multi-modal fusion techniques for breast lesion malignancy detection. In clinical settings, a radiologist acquires different image sequences (mammograms, US, and MRI) to precisely identify the lesion type. Relying on a single modality carries the risk of missed tumors or false diagnoses, whereas combining information from different modalities can significantly improve the detection rate.
For example, the evaluation of mammograms on relatively dense breasts is known to be difficult, so ultrasound is then used to provide the information needed for a diagnosis. In other cases, ultrasound is inconclusive while mammograms offer clarity. Many computer-aided detection (CAD) models have been proposed that use either mammograms or sonograms. However, relatively few studies consider both modalities simultaneously for breast cancer diagnosis. With this in mind, we assume that deep neural networks can also incorporate complementary features from the two domains to improve the breast cancer detection rate.
Classification of Breast Density in Mammograms Using Deep Machine Learning
The female breast is mainly composed of adipose and fibroglandular tissue. In a mammogram, fibroglandular tissue appears brighter than fatty tissue and is therefore called "dense". Current clinical protocol requires radiologists not only to detect possible cancer tumors but also to evaluate breast density in a mammogram [Wockel 2018], which corresponds to the relative amount of fibroglandular tissue. Breast density is an important characteristic of a mammogram because it is a breast cancer risk marker and it affects the mammogram's sensitivity. The evaluation is done via classification into one of the four categories defined by the "Breast Imaging – Reporting and Data System" guidelines of the American College of Radiology (ACR BI-RADS).
In this thesis, the application of convolutional neural networks to the classification of breast density in mammograms is investigated. Several neural network architectures and training methods are tested and the results are compared against classical machine learning methods. A strategy for the removal of possibly noisy labels in the training data is presented and an analysis of inter-observer variability among radiologists is carried out. It is found that the algorithm with the best classification performance provides breast density assessment on a par with an average experienced radiologist.
COPD Classification in CT Images Using a 3D Convolutional Neural Network
Chronic obstructive pulmonary disease (COPD) is a lung disease that is not fully reversible and one of the leading causes of morbidity and mortality in the world. Early detection and diagnosis of COPD can increase the survival rate and reduce the risk of COPD progression in patients. Currently, the primary examination tool to diagnose COPD is spirometry, while computed tomography (CT) is used for detecting symptoms and for sub-type classification of COPD. Interpreting the different imaging modalities is a difficult and tedious task even for physicians and is subject to inter- and intra-observer variations. Hence, developing methods that can automatically distinguish COPD patients from healthy ones is of great interest. In this thesis we propose a 3D deep learning approach to classify COPD and emphysema using volume-wise annotations only. We also investigate the impact of transfer learning on the classification of emphysema using knowledge transfer from a pre-trained COPD classification model.
Tumor Detection & Classification in Breast Cancer Histology Images using Deep Neural Networks
Among females, breast cancer is one of the most frequently diagnosed cancers and a leading cause of cancer-related death, both worldwide and in more economically developed countries. Early diagnosis significantly increases treatment success, since treatment is more difficult and uncertain when the disease is detected at an advanced stage. For this purpose, proper analysis of histology images is essential. Histology is the study of the microanatomy of cells, tissues, and organs as seen through a microscope.
One of the most common types of histology image used as the basis of contemporary cancer diagnosis for at least a century is the hematoxylin and eosin (H&E) stained breast histology microscopy image [4]. During this diagnostic procedure, trained specialists evaluate both the overall and the local tissue organization of the images. However, due to the large amount of data and the complexity of the images, this task becomes very time-consuming and non-viable. Therefore, the development of software tools for automatic detection and diagnosis is a promising prospect in this field. The subject has been a rather active field of research, and the automatic detection of breast cancer from histology images is part of the ICIAR 2018 challenge on BreAst Cancer Histology (BACH), which consists of two parts: classification and segmentation.
The aim of this thesis is to first design a classifier network that can recognize types of breast cancer. Then, using another network, we will try to classify the WSIs and perform segmentation on the images. Afterwards, we want to investigate how weakly-supervised training affects our results on both image-wise (first part) and pixel-wise (second part) labeled images. For this purpose, we will start by reproducing the results of the winning paper, which is the state of the art, and then build the rest on top of that.
Deep Learning-based Denoising of Mammographic Images using Physics-driven Data Augmentation
Mammography uses low-energy X-rays to screen the human breast and is used by radiologists to detect breast cancer. Due to the complexity of the task, a radiologist needs impeccable image quality. For this reason, the possibility of using deep learning to denoise mammograms and thereby help radiologists detect breast cancer more easily will be examined. In this thesis, we aim to investigate and develop different deep learning methods for mammogram denoising.
A physically motivated noise model will be simulated on the ground truth images to generate training data. Thereafter the variance-stabilizing Anscombe transformation is applied to obtain approximately white Gaussian noise. Using these data, different network architectures are trained and examined. For training, a novel loss function will be designed that helps to preserve the fine image details crucial for breast cancer detection.
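For reference, a minimal sketch of the Anscombe transform, which maps approximately Poisson-distributed noise to roughly unit-variance Gaussian noise:

```python
# A minimal sketch of the Anscombe variance-stabilizing transform.
import numpy as np

def anscombe(x: np.ndarray) -> np.ndarray:
    """A(x) = 2 * sqrt(x + 3/8); Var[A(x)] ~= 1 for Poisson-distributed x."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y: np.ndarray) -> np.ndarray:
    """Simple algebraic inverse; unbiased inverses exist but are more involved."""
    return (y / 2.0) ** 2 - 3.0 / 8.0
```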
The effectiveness of this loss function is investigated, and its performance is compared against other state-of-the-art loss functions. It can be shown that the proposed method outperforms state-of-the-art algorithms like BM3D for mammography denoising. Finally, it will be shown that the network is able to remove not only simulated but also real noise.
Solution to extend the Field of View of Computed Tomography using Deep Learning approaches
Deep learning has been successfully applied in various applications of computed tomography (CT). Due to limited detector size and low-dose requirements, the problem of data truncation is inherently present in CT. Images reconstructed from such limited field-of-view (FoV) projections suffer from cupping artifacts inside the FoV and from distortion or loss of anatomical structures outside the FoV [1]. One practical approach to the data truncation problem is to apply an extrapolation technique that increases the FoV and then apply an artifact removal technique. Reconstruction based on water cylinder extrapolation [2] is a promising method that estimates the projections outside the scan field-of-view (SFoV) using knowledge from the projections inside the SFoV. Alternatively, linear extrapolation is the simplest technique, which always increases the FoV without using any prior information; however, artifacts remain visible in the reconstructed image. Recently, Fournié et al. [3] proposed a deep learning based method, "Deep EFoV", to extend the FoV of CT images. First, the FoV is increased by linearly extrapolating the outer channels in sinogram space. The image reconstructed from this extended-FoV sinogram contains artifacts, so a U-Net model is then used to remove them. The output of a neural network might, however, alter the anatomical structures inside the SFoV. To compensate for this effect, a standard algorithm, "HDFoV", is used, in which the projections inside the SFoV are merged with the neural network's projections outside the FoV.
The aim of the master's thesis will be to integrate the "Deep EFoV" and "HDFoV" algorithms into the C#-based proprietary reconstruction tool "ReconCT" developed by Siemens Healthineers. The result of the integrated algorithms is to be compared with the result of the "Deep EFoV" algorithm alone. Another goal is to evaluate and improve the deep learning model proposed in "Deep EFoV" for CT FoV extension. The model is to be improved with respect to tweaking the architecture, adapting parameters, or even using a different architecture. The dataset and software provided by Siemens Healthineers will be used in the thesis. The final software needs to be integrated into "ReconCT" and presented to the supervisors.
The thesis will include the following points:
• Review of the state-of-the-art method and deep learning approaches to extend the FoV
• Comparison of the proposed method “Deep EFoV” with the integrated “Deep EFoV” and “HDFoV” method
• Improvement and simplification of the proposed deep learning model in “Deep EFoV”
• Integration of the proposed model in the reconstruction tool.
References
[1] Y. Huang, L. Gao, A. Preuhs, and A. Maier, "Field of View Extension in Computed Tomography Using Deep Learning Prior," in Bildverarbeitung für die Medizin: Algorithmen – Systeme – Anwendungen, pp. 186–191, Springer, 2020.
[2] J. Hsieh, E. Chao, J. Thibault, B. Grekowicz, A. Horst, S. McOlash, and T. J. Myers, "A novel reconstruction algorithm to extend the CT scan field-of-view," Medical Physics, vol. 31, no. 9, pp. 2385–2391, 2004.
[3] É. Fournié, M. Baer-Beck, and K. Stierstorfer, "CT field of view extension using combined channels extension and deep learning methods," in International Conference on Medical Imaging with Deep Learning – Extended Abstract Track, (London, United Kingdom), 08–10 Jul 2019.
Geometric Deep Learning for Multifocal Diseases
Diseases are classified as multifocal if they relate to or arise from many foci. They are present in various medical disciplines, e.g. multifocal atrial tachycardia [1], breast cancer [2] or multifocal motor neuropathy [3]. However, analyzing diseases with multiple centers brings several challenges for conventional deep learning architectures. On the technical side, it is complex to handle a varying number of centers that have no unique ordering. From a medical view, it is important to model the structures and relationships between the foci. The grid structure used in convolutional neural networks cannot handle non-regular neighborhoods. A suitable approach for this task is to convert the data into graph structures, where the nodes describe the properties of the foci and the edges model their mutual relationships. With geometric deep learning, it is possible to learn from graph structures. It is an emerging field of research with many possible applications, e.g. classifying documents in citation graphs or analyzing molecular structures [4]. Several medical applications also exist, e.g. for the analysis of Parkinson's disease [5] or artery segmentation [6]. This thesis aims to investigate the applicability of this method to the relatively small graphs arising from multifocal diseases. The networks are trained to predict time to event of failure as a metric for the severity of the disease. Different geometric layer architectures, such as Graph Attention Networks [7] and Differentiable Pooling [8], are investigated and compared to the performance of a conventional neural network. As we aim to create explainable models, we intend to provide visualizations of salient sub-graphs and features of the results. In addition, methods to incorporate prior knowledge from the medical domain into the training process are tested to improve the speed of convergence and strengthen the medical validity of the predictions. Finally, the networks are tested on liver data.
Summary:
1. Transfer multifocal diseases to meaningful graph structures
2. Provide a conventional neural network for time-to-event regression as a baseline
3. Investigate and tune different geometric deep learning architectures (see the sketch below)
4. Visualize salient graph structures
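As a minimal illustration of point 3, a small graph-attention network for graph-level time-to-event regression could look as follows, assuming PyTorch Geometric is available; the layer sizes and class name are arbitrary assumptions.

```python
# An illustrative graph-attention model for graph-level time-to-event
# regression; assumes the torch_geometric package is installed.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, global_mean_pool

class FociGAT(torch.nn.Module):
    def __init__(self, n_node_features: int, hidden: int = 32):
        super().__init__()
        self.gat1 = GATConv(n_node_features, hidden, heads=4)  # concatenated heads
        self.gat2 = GATConv(hidden * 4, hidden, heads=1)
        self.out = torch.nn.Linear(hidden, 1)   # predicted time to event

    def forward(self, x, edge_index, batch):
        # Nodes describe foci; edges model their mutual relationships.
        x = F.elu(self.gat1(x, edge_index))
        x = F.elu(self.gat2(x, edge_index))
        x = global_mean_pool(x, batch)           # one vector per graph
        return self.out(x).squeeze(-1)
```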
References
[1] Jane F. Desforges and John A. Kastor. Multifocal Atrial Tachycardia. New England Journal of Medicine, 322(24):1713–1717, Jun 1990.
[2] John Boyages and Nathan J. Coombs. Multifocal and Multicentric Breast Cancer: Does Each Focus Matter? Journal of Clinical Oncology, 23:7497–7502, 2005.
[3] Eduardo Nobile-Orazio. Multifocal motor neuropathy. Journal of Neuroimmunology, 115(1-2):4–18, Apr 2001.
[4] Michael Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric Deep Learning: Going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
[5] Xi Zhang, Lifang He, Kun Chen, Yuan Luo, Jiayu Zhou, and Fei Wang. Multi-View Graph Convolutional Network and Its Applications on Neuroimage Analysis for Parkinson's Disease. AMIA Annual Symposium Proceedings, 2018:1147–1156, 2018.
[6] Jelmer M. Wolterink, Tim Leiner, and Ivana Isgum. Graph Convolutional Networks for Coronary Artery Segmentation in Cardiac CT Angiography. In Lecture Notes in Computer Science, volume 11849 LNCS, pages 62–69. Springer, Oct 2019.
[7] Petar Velickovic, Arantxa Casanova, Pietro Lio, Guillem Cucurull, Adriana Romero, and Yoshua Bengio. Graph attention networks. 6th International Conference on Learning Representations, ICLR 2018 – Conference Track Proceedings, pages 1–12, 2018.
[8] Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, and Jure Leskovec. Hierarchical Graph Representation Learning with Differentiable Pooling. Advances in Neural Information Processing Systems, 2018:4800–4810, Jun 2018.