Index
Characterizing ultrasound images through breast-density-related features using traditional and deep learning approaches
Breast cancer is the most common cancer in women worldwide, accounting for almost a quarter of all new female cancer cases [1]. In order to improve the chances of recovery and reduce mortality, it is crucial to detect and diagnose it as early as possible. Mammography is the standard method when screening for breast cancer. While mammography images play an important role in cancer diagnosis, it has been shown that their sensitivity decreases with high mammographic density [2]. The mammographic density (MD) refers to the amount of fibroglandular tissue in the breast in proportion to the amount of fatty tissue. MD is an established risk factor for breast cancer: the general risk of developing breast cancer increases with higher density. Women with a density of 25% or higher are twice as likely to develop breast cancer, and those with a density of 75% or higher even five times as likely, compared to women with an MD of less than 5% [3]. In addition, in a dense breast a tumor may be masked on a mammogram [2]. It is therefore necessary to consider the breast’s density when screening for breast cancer. Several studies have aimed at supporting and improving breast cancer diagnosis with computer-aided systems and feature evaluation, and such studies have taken the MD into consideration when evaluating mammography images [4][5].
In order to detect those tumors that are masked on mammography, or to support inconclusive findings, an additional ultrasound (US) examination is often conducted on women with high MD [6]. However, US images are subject to high inter-observer variability. Computer-aided diagnosis aims to provide methods to analyze US images and support diagnosis, with the aim of reducing this variability. The approach of this thesis is to transfer and adjust the methods designed by Häberle et al. [4] for characterizing 2-D mammographic images in order to use them on 3-D ultrasound images, focusing only on features correlating with the MD.
Additionally, more features will be generated using deep learning, as most recent computer-aided diagnosis tools no longer rely on traditional methods alone. Over the last years, deep learning has become the standard in medical imaging, and several studies have shown promising performance on breast ultrasound images [7][8].
Using both traditional and deep learning methods for extracting features aims to improve the classification of possibly cancerous tissue by building a reliable set of features which characterize the MD of the patient. Furthermore, the traditional features may help to interpret those generated through deep learning approaches; in turn, the latter may help to show the benefit of using deep learning when analyzing medical images.
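To make the notion of a “traditional” feature concrete, the following is a minimal sketch of texture feature extraction on a single ultrasound slice, assuming scikit-image as the library; the chosen features and parameters are illustrative, not the thesis’s actual feature set.

```python
# Minimal sketch of traditional texture feature extraction on a 2-D
# ultrasound slice, assuming scikit-image. Feature names and parameters
# are illustrative, not the thesis's actual feature set.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(slice_2d: np.ndarray) -> dict:
    """Compute simple GLCM texture features from an 8-bit image slice."""
    glcm = graycomatrix(slice_2d, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# For a 3-D ultrasound volume, one option is to aggregate per-slice features:
# features = [texture_features(s) for s in volume.astype(np.uint8)]
```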
This thesis will cover the following points:
• Literature review of mammographic density as a risk factor for breast cancer and ultrasound as an additional screening method
• Extraction and evaluation of a variety of automated features in ultrasound images using traditional and deep learning approaches
• Analysis of the relationship between the extracted features and the mammographic density
References
[1] F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, “Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA: A Cancer Journal for Clinicians, vol. 68, no. 6, pp. 394–424, 2018.
[2] P. E. Freer, “Mammographic breast density: impact on breast cancer risk and implications for screening,” Radiographics: a review publication of the Radiological Society of North America, Inc, vol. 35, no. 2, pp. 302–315, 2015.
[3] V. A. McCormack, “Breast density and parenchymal patterns as markers of breast cancer risk: A meta-analysis,” Cancer Epidemiology and Prevention Biomarkers, vol. 15, no. 6, pp. 1159–1169, 2006.
[4] L. Häberle, F. Wagner, P. A. Fasching, S. M. Jud, K. Heusinger, C. R. Loehberg, A. Hein, C. M. Bayer, C. C. Hack, M. P. Lux, K. Binder, M. Elter, C. Münzenmayer, R. Schulz-Wendtland, M. Meier-Meitinger, B. R. Adamietz, M. Uder, M. W. Beckmann, and T. Wittenberg, “Characterizing mammographic images by using generic texture features,” Breast Cancer Research: BCR, vol. 14, no. 2, 2012.
[5] M. Tan, F. Aghaei, Y. Wang, and B. Zheng, “Developing a new case based computer-aided detection scheme and an adaptive cueing method to improve performance in detecting mammographic lesions,” Physics in Medicine and Biology, vol. 62, no. 2, pp. 358–376, 2017.
[6] L. Häberle, C. C. Hack, K. Heusinger, F. Wagner, S. M. Jud, M. Uder, M. W. Beckmann, R. Schulz-Wendtland, T. Wittenberg, and P. A. Fasching, “Using automated texture features to determine the probability for masking of a tumor on mammography, but not ultrasound,” European Journal of Medical Research, vol. 22, no. 1, 2017.
[7] H. Tanaka, S.-W. Chiu, T. Watanabe, S. Kaoku, and T. Yamaguchi, “Computer-aided diagnosis system for breast ultrasound images using deep learning,” Physics in Medicine and Biology, vol. 64, no. 23, 2019.
[8] M. H. Yap, G. Pons, J. Marti, S. Ganau, M. Sentis, R. Zwiggelaar, A. K. Davison, R. Marti, and H. Y. Moi, “Automated breast ultrasound lesions detection using convolutional neural networks,” IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 4, pp. 1218–1226, 2018.
Comparing and Aggregating Face Presentation Attack Detection Methods
Dynamic technology trend monitoring from unstructured data using machine learning
New technologies are enablers for product and process innovations. However, given the multitude of technologies available on the market, identifying the relevant and new technologies for one’s own company and one’s own problem requires considerable effort. ROKIN, as a technology platform, offers a key component for the rapid identification of new technologies and thus for the acceleration of innovation processes in companies. For this purpose, new technologies are identified on the Internet, profiles are created, and these are made available to companies via an online platform. Companies are provided with suitable solution proposals for their specific problem.
ROKIN automates the individual steps of this process, from data collection via web crawlers, through the matching process, to the visualization of information in technology profiles. A central point in this process is detecting the newest technological trends in the collected data. This allows companies to keep up with upcoming technological shifts.
Due to the recent successes of so-called transformer models (e.g. Bidirectional Encoder Representations from Transformers, BERT), new possibilities in the recognition and understanding of texts are opening up like never before. These models were trained domain-independently on general text from Wikipedia and a book corpus. An open question is how these approaches perform in a domain-specific context like engineering. Can the semantic understanding of such models be used to improve existing classical NLP keyword analysis and topic modelling for trend detection? Especially at the early onset of a trend, where keywords provide little evidence, the semantic understanding of transformer-based approaches might help. The goal is therefore to implement and extend existing classical NLP algorithms with transformer models and use the new model to identify trends in large amounts of engineering text documents.
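As an illustration of how classical keyword statistics and transformer-based semantics could be combined, here is a minimal sketch; the model name, thresholds and helper functions are assumptions for illustration, not ROKIN’s actual pipeline.

```python
# Hedged sketch: combining classical keyword statistics with transformer
# embeddings for trend detection. Model name and thresholds are assumptions.
from collections import Counter
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose model

def embed_documents(docs):
    """Dense semantic representations, usable even when keywords are sparse."""
    return model.encode(docs, normalize_embeddings=True)

def keyword_counts(docs, vocabulary):
    """Classical signal: raw keyword frequencies per time window."""
    return Counter(tok for d in docs for tok in d.lower().split()
                   if tok in vocabulary)

def cluster_density(embeddings, threshold=0.7):
    """Fraction of document pairs that are semantically close (cosine sim)."""
    sims = embeddings @ embeddings.T
    return float((sims > threshold).mean())
```

An early trend might show up as a growing cluster of semantically similar documents (rising cluster density over successive time windows) before any single keyword spikes, which is exactly the gap the transformer-based signal is meant to fill.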
Tasks:
• Literature research and analysis of existing NLP tools for trend detection (transformers as well as classic keyword analysis and topic modelling approaches).
• Setting up an information database (via Web-Crawling and Google Search APIs) for a given problem out of the engineering environment of a company (topic provided by ROKIN).
• Semantic modelling and analysis of the information database for identifying technology trends using different NLP approaches.
• Evaluation of the strengths and weaknesses of the created algorithms based on the individual results.
• Development of a strategy or approach for ideal trend detection, specifically for early-stage trend detection.
• Evaluation and optimization of the algorithms and documentation of the results.
Machine-Learning-Based Status Monitoring of HVDC Converter Stations
Detection and semantic segmentation of human faces in low resolution thermal images
The detection and isolation of persons with elevated body core temperatures contribute to reducing the speed with which certain respiratory diseases spread through the population. Contactless temperature measurements with thermal cameras are used for fast screening of persons and for selecting those who should be checked more closely with accurate medical thermometers. In public areas, the only accessible source of temperature information is typically the face and its exposed skin segments. The offset between a person’s actual body core temperature and the skin temperature varies over a wide range. It depends on the ambient conditions, on what the person did in the last few minutes and where he or she came from, on the person’s body characteristics and, of course, on the location of the observed skin segment, not to mention technical limitations of the camera itself. Bosch Sicherheitssysteme Engineering GmbH is currently investigating the dependency of the body core temperature offset on the location of the measured skin segment.
In this master’s thesis, the detection and semantic segmentation of the human face in thermal images shall be investigated. In order to do this, the following points shall be addressed:
– Literature research for state-of-the-art methods of face detection within thermal images
– Identification of the most effective method for exact face position detection within a preselected image area, including a prototypical implementation (e.g. with OpenCV; see the sketch after this list)
– Preparation and annotation of thermal image data for use in face detection
– Comparison of neural-network-based methods for face detection with classical machine learning approaches for the application to low resolution thermal images, possibly including a prototypical implementation
– Identification of the most promising methods to correlate a hotspot pixel location with a face section (chin, cheek, nose, forehead, etc.), including prototypical implementations
– Optional: Identification of the most promising methods to detect certain facial occlusions like facial hair (forehead, beard) or glasses, including prototypical implementation
As input for the investigation, existing field test data is available for analysis, but further dedicated lab experiments will certainly be required.
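As a starting point for the prototypical implementation mentioned above, the following is a minimal OpenCV sketch, with the caveat that OpenCV’s stock Haar cascade was trained on visible-light faces, so its transfer to thermal imagery is exactly what the literature research has to assess.

```python
# Prototypical sketch of face detection on a low-resolution thermal image
# with OpenCV. The stock Haar cascade was trained on visible-light faces;
# whether it transfers to thermal imagery is an open question.
import cv2
import numpy as np

def detect_faces_thermal(thermal: np.ndarray):
    """thermal: single-channel temperature map (float), any resolution."""
    # Normalize raw temperatures to 8-bit grayscale so the detector can run.
    img = cv2.normalize(thermal, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Small minSize because the input images are low resolution.
    return cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=3,
                                    minSize=(16, 16))
```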
Quality Assurance and Clinical Integration of a Prototype for Intelligent 4DCT Sequence Scanning
With 1.8 million deaths worldwide in 2018 (353,000 deaths in Europe in 2012 [1]), lung cancer is the deadliest cancer [2]. The prognosis for lung cancer is quite poor: only 15% of men (21% of women) survive five years [3].
75% of these patients receive radiation therapy [4]. However, radiation therapy is challenged by breathing-related motion, which leads to artifacts that may cause both incorrect diagnoses and dosimetric errors in the therapy itself. As a result, the target volume might not be covered by the scheduled amount of radiation.
Computed tomography (CT) is an essential part of the treatment planning process. While 3D CT images can correctly display static anatomy, 4D imaging additionally records the breathing cycles and synchronizes them retrospectively with the acquired images. Thus, the result of a 4D CT scan is time-resolved data of a 3D volume.
4D CT imaging with fixed beam-on/off slots and irregular breathing can lead to missing data coverage in desired breathing states, known as a violation of the data sufficiency condition (DSC) [5]. The resulting artifacts appear in the image as strong blurring of anatomical structures and, in the worst case, require a second treatment planning CT, and consequently a delay of patient treatment as well as additional dose.
The idea of the intelligent 4D CT (i4DCT) algorithm is to improve data coverage in order to reduce these artifacts. During an initial learning period, the patient-specific respiratory cycle is analyzed. For every slice, the scanner acquires data for a whole respiratory cycle. Based on an online comparison of the reference and current breathing curves during data acquisition, the selection of beam-on/off periods is adjusted. If the data sufficiency condition is fulfilled, the scan is stopped and the table moves to the next z-position. This process is repeated until the targeted scan area is covered [5].
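The following is a simplified, illustrative sketch of the gating idea described above: compare the live breathing window to the learned reference cycle and keep the beam on only while they match. The matching rule and tolerance are invented for illustration and are not the vendor’s actual implementation.

```python
# Simplified illustration of the i4DCT gating idea; the matching rule and
# tolerance are invented, not the vendor's implementation.
import numpy as np

def beam_on(reference_cycle: np.ndarray, current_window: np.ndarray,
            tolerance: float = 0.15) -> bool:
    """Beam stays on only while the current breathing window still matches
    the learned reference cycle within a tolerance band."""
    n = len(current_window)
    # Best circular alignment of the window against the reference cycle;
    # assumes the reference cycle is at least as long as the window.
    errors = [np.mean(np.abs(np.roll(reference_cycle, -s)[:n] - current_window))
              for s in range(len(reference_cycle))]
    return min(errors) < tolerance * np.ptp(reference_cycle)
```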
To ensure the effective, safe and reliable use of the i4DCT algorithm in everyday clinical practice, quality assurance must be provided.
The aim of this Master’s thesis is to develop and perform quality tests. Subsequently, the results are evaluated and interpreted to draw conclusions for clinical application.
Phantom measurements are performed with the CIRS Motion Thorax Phantom (CIRS, Norfolk, USA), which contains a lung-equivalent solid epoxy rod with a soft-tissue target (representing the tumor). In order to come close to realistic circumstances, the target can be moved by the CIRS Motion Software according to an artificially created, irregular breathing pattern in three dimensions. The breathing curve is tracked by the Varian ‘respiratory gating for scanners’ system (RGSC, Varian Medical Systems, Inc., Palo Alto, CA), which consists of two main parts. All measurements are performed on a SOMATOM go Open Pro CT scanner (Siemens Healthcare, Forchheim, Germany).
The tests include different reconstruction methods (maximum intensity projection and amplitude-/phase-based reconstruction), investigating the dimensions of the artificial tumor along every body axis, verifying the match of the breathing pattern recorded by the RGSC and the CT, as well as testing the limits of the RGSC/i4DCT algorithm.
References
[1] J. Ferlay, E. Steliarova-Foucher, J. Lortet-Tieulent, S. Rosso, J. W. W. Coebergh, H. Comber, D. Forman and F. I. Bray, “Cancer incidence and mortality patterns in Europe: Estimates for 40 countries in 2012,” European Journal of Cancer, vol. 49, no. 6, pp. 1374–1403, 2013.
[2] World Health Organisation (WHO), “Cancer,” 2018. [Online]. Available: https://www.who.int/news-room/fact-sheets/detail/cancer. [Accessed 10 Sep. 2020].
[3] Zentrum für Krebsregisterdaten, “Lungenkrebs (Bronchialkarzinom),” 17 Dec. 2019. [Online]. Available: https://www.krebsdaten.de/Krebs/DE/Content/Krebsarten/Lungenkrebs/lungenkrebs_node.html. [Accessed 10 Sep. 2020].
[4] R. Werner, Strahlentherapie atmungsbewegter Tumoren: Bewegungsfeldschätzung und Dosisakkumulation anhand von 4D-Bilddaten. Springer Vieweg, 2013, p. 1.
[5] R. Werner, T. Sentker, F. Madesta, T. Gauer and C. Hofmann, “Intelligent 4D CT sequence scanning (i4DCT): Concept and performance,” Medical Physics, vol. 46, pp. 3462–3474, 2019.
CITA: An Android-based Application to Evaluate the Speech of Cochlear Implant Users
Cochlear implants (CI) are the most suitable devices for severe and profound deafness when hearing aids do not sufficiently improve speech perception. However, CI users often present altered speech production and limited understanding even after hearing rehabilitation. People suffering from severe to profound deafness may experience different speech disorders such as decreased intelligibility, changes in articulation, and a slower speaking rate, among others. Though the hearing outcome is regularly measured after cochlear implantation, speech production quality is seldom assessed in outcome evaluations. This project aims to develop an Android application suitable for analyzing the speech production and perception of CI users.
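As an illustration of one of the measures named above, the following sketch estimates a rough speaking rate from the energy envelope of a recording; the parameters are illustrative, and a clinical application would rely on validated measures.

```python
# Hedged sketch: rough speaking-rate estimate from the energy envelope
# (syllable-like energy peaks per second). Parameters are illustrative.
import numpy as np
from scipy.signal import find_peaks

def speaking_rate(signal: np.ndarray, fs: int) -> float:
    frame = int(0.02 * fs)                       # 20 ms frames
    energy = np.array([np.sum(signal[i:i + frame] ** 2)
                       for i in range(0, len(signal) - frame, frame)])
    energy /= energy.max() + 1e-12               # normalize to [0, 1]
    # Peaks at least 100 ms apart, above a small height threshold.
    peaks, _ = find_peaks(energy, height=0.15, distance=5)
    return len(peaks) / (len(signal) / fs)       # peaks per second
```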
A robust intrusive perceptual audio quality assessment based on a convolutional neural network
Abstract
The goal of a perceptual audio quality predictor is to capture the auditory experience of listeners and score audio excerpts without creating a massive workload for the listeners. Methods such as PESQ and ViSQOL serve as computational proxies for subjective listening tests. ViSQOLAudio, the Virtual Speech Quality Objective Listener in audio mode, is a signal-based, full-reference, intrusive metric that models human audio quality perception using a gammatone spectro-temporal measure of similarity between a reference and a degraded audio signal. Here we propose an end-to-end model based on a convolutional neural network with a self-attention mechanism to predict the perceived quality of audio given a clean reference signal, and to improve its robustness to adversarial examples. The model is trained and evaluated on a corpus of unencoded 48 kHz audio of up to 12 hours, labeled by ViSQOLAudio to derive a Mean Opinion Score (MOS) for each excerpt.
Keywords: perceptual audio quality assessment, MOS, ViSQOLAudio, full reference, deep learning, self-attention, end-to-end model
Introduction
Digital audio systems and services use codecs to encode and decode a digital data stream or signal in order to minimize bandwidth and maximize the users’ quality of experience. Different codecs introduce different quality degradations and artefacts, which affect the perceived audio quality. To evaluate codec performance, a MOS is obtained by asking listeners to assess the quality of an audio clip on a scale from one to five. This method is tedious and expensive, so several computational approaches have been designed to predict MOS automatically. Intrusive methods, i.e. those with a full reference signal, calculate a perceptually weighted distance between the clean (unencoded) reference and the degraded (coded) signal. PEAQ, POLQA, PEMO-Q and ViSQOLAudio are four major full-reference models. ViSQOLAudio, which will be the focus and inspiration of this thesis, is a model adapted from ViSQOL to function as a perceptual audio quality prediction model. ViSQOLAudio introduces a series of novel improvements and has shown outstanding performance compared to POLQA, PEAQ and PEMO-Q. Inspired and motivated by ViSQOLAudio, we design an end-to-end deep learning network that predicts MOS using gammatone spectrograms as input, resembling the algorithm of ViSQOLAudio while improving prediction performance and robustness to adversarial examples.

Figure 1: A representation of ViSQOLAudio
Background
The process of ViSQOLAudio consists of four phases: preprocessing, pairing, comparison, and finally the mapping of the similarity measure to a MOS. In the preprocessing stage, the middle channel of the reference and degraded signals is extracted, misalignment caused by zero padding is removed, and gammatone spectrograms are calculated on both signals. Gammatone filters are a popular linear approximation to the filtering performed by the human auditory system; the audio signal is visualized as a time-varying distribution of energy in frequency, which is one way of describing the information the brain gets from the ears via the auditory nerves. A conventional spectrogram differs from how sound is analyzed by the ear: the ear’s frequency sub-bands get wider for higher frequencies, whereas the usual spectrogram keeps a constant bandwidth across all frequency channels.
The pairing step first segments the spectrogram of the reference signal into a sequence of patches of size 32 frequency bands by 30 frames (i.e., a 32 × 30 matrix). Then patches of the same size are iteratively extracted from the degraded signal to calculate reference-degraded distances and create a set of the most similar reference-degraded patch pairs. The similarity of each pair is then calculated in the comparison step and averaged across all frequency bands. In the last step, the mean frequency band similarity score is mapped to MOS using a support vector regression model.
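A minimal sketch of this pairing step follows, assuming a plain Euclidean distance as a stand-in for ViSQOLAudio’s actual patch similarity measure (NSIM):

```python
# Sketch of the pairing step, with Euclidean distance standing in for NSIM.
import numpy as np

PATCH_BANDS, PATCH_FRAMES = 32, 30

def extract_patches(spec: np.ndarray):
    """Split a [32, T] gammatone spectrogram into non-overlapping 32x30 patches."""
    return [spec[:, t:t + PATCH_FRAMES]
            for t in range(0, spec.shape[1] - PATCH_FRAMES + 1, PATCH_FRAMES)]

def pair_patches(ref_spec: np.ndarray, deg_spec: np.ndarray):
    """For each reference patch, find the most similar degraded patch."""
    pairs = []
    for rp in extract_patches(ref_spec):
        dists = [np.linalg.norm(rp - deg_spec[:, t:t + PATCH_FRAMES])
                 for t in range(deg_spec.shape[1] - PATCH_FRAMES + 1)]
        t_best = int(np.argmin(dists))
        pairs.append((rp, deg_spec[:, t_best:t_best + PATCH_FRAMES]))
    return pairs
```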
Dataset
The dataset used by the Microsoft team for full-reference speech quality evaluation is 16 kHz sampled and comprises 2010 clean speech samples of up to 20 seconds with 3 utterances each, approximately 33 hours in total. The speech data for the attentional Siamese neural networks were collected from 11 different databases from the POLQA pool, with 5000 clean reference signals of up to 16 hours. Building a dataset of between 10 and 30 hours would be adequate as well as efficient for unbiased computation in our case.
We collected 48 kHz sampled mono audio files to build our clean reference dataset, which consists of 4500 music excerpts and 900 speech excerpts; each excerpt is exactly 8 seconds long, adding up to 12 hours in total. The reference audio clips are then encoded and decoded with the HE-AAC and AAC codecs at the following bitrates: 16, 20, 24, 32, 48, 64, 96, and 128 kbps; 16, 20, 24, 32, and 48 kbps were encoded with HE-AAC, and 64, 96, and 128 kbps with plain AAC. Coding above 128 kbps is hardly audibly different from uncoded signals, and coding below 16 kbps greatly reduces the audio quality and makes little sense in common practical applications. In this way, 43,200 degraded signals are generated from the 5400 clean reference signals, expected to be labelled into 8 quality intervals corresponding to the coded bitrates.
The reference and degraded signals are then paired, aligned, and fed into ViSQOLAudio to obtain a MOS for each pair as the ground-truth label, instead of human-annotated MOS scores. Gammatone spectrograms of the reference and degraded signals are extracted based on the MATLAB implementation of the gammatone spectrogram by Dan Ellis, which runs inside ViSQOLAudio. The gammatone spectrogram of an audio signal is calculated with a window size of 80 ms, a hop size of 20 ms, and 32 frequency bands from 50 Hz up to half the sample rate. The gammatone spectrograms of the reference and degraded signals are paired and concatenated channel-wise into the shape [channels, time frames, frequency bands] and later used as input to our neural network.
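A sketch of how one network input could be built, assuming the Python gammatone package (a port of Dan Ellis’s MATLAB toolbox) instead of the MATLAB code used inside ViSQOLAudio:

```python
# Sketch of building one network input; assumes the Python `gammatone`
# package rather than the MATLAB implementation used in the thesis.
import numpy as np
from gammatone.gtgram import gtgram

FS = 48000

def gammatone_spec(wave: np.ndarray) -> np.ndarray:
    # 80 ms window, 20 ms hop, 32 bands from 50 Hz (upper edge: Nyquist).
    return gtgram(wave, FS, window_time=0.08, hop_time=0.02,
                  channels=32, f_min=50)

def make_input(ref_wave: np.ndarray, deg_wave: np.ndarray) -> np.ndarray:
    """Stack reference and degraded spectrograms channel-wise:
    shape [channels=2, time frames, frequency bands]."""
    ref = gammatone_spec(ref_wave).T   # -> [time frames, frequency bands]
    deg = gammatone_spec(deg_wave).T
    return np.stack([ref, deg], axis=0)
```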
Architecture
Existing deep learning architectures for speech and audio quality assessment generally consist of CNN blocks, RNN blocks, or attention layers. The model proposed by the Microsoft team consists of several convolutional layers, batch normalization layers, max pooling layers, and fully connected layers with dropout. Other models, such as the attentional Siamese neural networks proposed by Gabriel Mittag and Sebastian Moeller, add LSTM layers and attention layers to capture the influence of features over long time sequences.
Self-attention was proposed by Google in 2017 for natural language processing without RNNs. The essence of the attention mechanism is that when human sight or hearing detects an item, it does not scan the entire scene or excerpt end to end; rather, it focuses on a specific portion according to its needs. The attention mechanism was designed to dynamically create a weight matrix between keys and queries. This weight matrix can be applied to the feature maps or to the original input, spatial-wise or channel-wise. Interesting and promising applications of the attention mechanism in computer vision include fine-grained classification, image segmentation, and image captioning. Compared to conventional classification tasks implemented with CNNs, an attention module adds a parallel branch consisting of successive down-sampling and up-sampling operations to gain a wider receptive field. The attention map increases the range of the receptive field from the lower layers and highlights the core features that are crucial to the classification task.
Apart from conventional convolutional layers, attention layers as well as squeeze-and-excitation networks (SENet) will be tried and utilized in our model. While normal self-attention layers are applied spatial-wise, SENet is a special attention mechanism which applies different weights channel-wise. The appropriate design and parameters of the architecture remain to be discussed and tested in further work.
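To make the channel-wise mechanism concrete, a sketch of a squeeze-and-excitation block in PyTorch follows; the reduction ratio is a common default, not a design decision of this thesis.

```python
# Sketch of channel-wise attention: a squeeze-and-excitation (SE) block.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze: global average pool over the time-frequency plane.
        w = x.mean(dim=(2, 3))                      # [batch, channels]
        # Excite: per-channel weights in (0, 1), then rescale the features.
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)  # [batch, channels, 1, 1]
        return x * w
```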
Conclusion
Although state-of-the-art methods have proposed a few intrusive deep learning models that learn from the waveform, the spectrogram, or other transformed features, most of these models were trained on 16 kHz speech signals and none of them use gammatone spectrograms as input. Our model is the first end-to-end neural network trained on gammatone spectrograms derived from a 48 kHz audio dataset to predict MOS. Perceptual audio quality assessment is still a young and promising application of deep learning algorithms, and this work could prove broadly versatile and impactful.
References
1. Michael Chinen, Felicia S. C. Lim, Jan Skoglund, Nikita Gureev, Feargus O’Gorman and Andrew Hines, “ViSQOL v3: an open source production ready objective speech and audio metric”, arXiv:2004.09584 [eess.AS], Apr. 2020.
2. Colm Sloan, Naomi Harte, Damien Kelly, Anil C. Kokaram and Andrew Hines, “Objective assessment of perceptual audio quality using ViSQOLAudio”, IEEE Transactions on Broadcasting, vol. 63, no. 4, Dec. 2017.
3. Hannes Gamper, Chandan K. A. Reddy, Ross Cutler, Ivan J. Tashev, and Johannes Gehrke, “Intrusive and non-intrusive perceptual speech quality assessment using a convolutional neural network”, 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics.
4. Gabriel Mittag and Sebastian Moeller, “Full-reference speech quality estimation with attentional Siamese neural networks”, ICASSP 2020, IEEE, 2020.
Synergistic Radiomics and CNN Features for Multiparametric MRI Lesion Classification
Breast cancer is the most frequent cancer among women, impacting 2.1 million women each year. In order to assist in diagnosing patients with breast cancer, to measure the size of existing breast tumors, and to check for tumors in the opposite breast, breast magnetic resonance imaging (MRI) can be applied. MRI has the advantages that patients are not exposed to ionizing radiation during the examination and that it captures the entire breast volume. Meanwhile, machine learning methods have proven able to classify images accurately in many fields by assigning a probability score that estimates the likelihood of an image belonging to a certain category. Given these properties, this project aims to investigate whether applying machine learning approaches to breast tumor MRI can provide an accurate prediction of the tumor type (malignant or benign) for diagnostic purposes.
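A hedged sketch of the fusion idea in the title: concatenating hand-crafted radiomics features with CNN-derived features and training a classifier that outputs a malignancy probability. The feature extraction itself is assumed to happen upstream, and all shapes and names are illustrative.

```python
# Hedged sketch: fuse radiomics and CNN features, then predict a
# malignancy probability. Shapes and names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fuse(radiomics: np.ndarray, cnn_feats: np.ndarray) -> np.ndarray:
    """radiomics: [n_lesions, n_radiomics]; cnn_feats: [n_lesions, n_cnn]."""
    return np.concatenate([radiomics, cnn_feats], axis=1)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# clf.fit(fuse(radiomics_train, cnn_train), labels_train)
# p_malignant = clf.predict_proba(fuse(radiomics_test, cnn_test))[:, 1]
```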
Dilated deeply supervised networks for hippocampus segmentation in MR
Tissue loss in the hippocampi has been strongly correlated with the progression of Alzheimer’s disease (AD). The shape and structure of the hippocampus are important factors for early AD diagnosis and prognosis by clinicians. However, manual segmentation of such subcortical structures in MR studies is a challenging and subjective task. In this paper, we investigate variants of the well-known 3D U-Net, a type of convolutional neural network (CNN) for semantic segmentation tasks. We propose an alternative form of the 3D U-Net which uses dilated convolutions and deep supervision to incorporate multi-scale information into the model. The proposed method is evaluated on the task of hippocampus head and body segmentation in an MRI dataset provided as part of the MICCAI 2018 segmentation decathlon challenge. The experimental results show that our approach outperforms other conventional methods in terms of different segmentation accuracy metrics.
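For illustration, minimal PyTorch sketches of the two ingredients named above follow: a dilated 3D convolution block (wider receptive field at unchanged resolution) and a deep supervision head that attaches an auxiliary loss to an intermediate decoder level. These are illustrative building blocks, not the paper’s exact architecture.

```python
# Illustrative building blocks: dilated 3-D convolution and deep supervision.
import torch
import torch.nn as nn

class DilatedBlock3D(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.block = nn.Sequential(
            # padding = dilation keeps the spatial size for a 3x3x3 kernel.
            nn.Conv3d(in_ch, out_ch, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)

class DeepSupervisionHead(nn.Module):
    """Auxiliary segmentation output from an intermediate decoder level;
    its loss is added (with a small weight) to the main loss."""
    def __init__(self, in_ch: int, n_classes: int, scale: int):
        super().__init__()
        self.head = nn.Conv3d(in_ch, n_classes, kernel_size=1)
        self.up = nn.Upsample(scale_factor=scale, mode="trilinear",
                              align_corners=False)

    def forward(self, feats):
        return self.up(self.head(feats))
```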