Index

Thrombus Detection in Non-Contrast Head CT using Graph Deep Learning

Thesis Description
Stroke is a severe cerebrovascular disease and one of the major causes of death and disability worldwide [1].
For patients suffering from acute stroke, rapid diagnosis and immediate execution of therapeutic measures are
crucial for a successful recovery. In clinical routine, Non-Contrast Computed Tomography (NCCT) is typically
acquired as a first-line imaging tool to identify the type of stroke. In the case of an acute ischemic infarct,
appropriate therapy planning requires accurate detection and localization of the occluding blood clot. An
automated detection system would decrease the probability of missing an obstruction, save time, and improve the
overall clinical outcome.
Several methods have been proposed to detect large vessel occlusion (LVO) using enhanced CT data like CT
angiography (CTA) [2, 3, 4]. CTA is mainly used in addition to NCCT and enables accurate evaluation of the
occlusion [5]. Nevertheless, studies have shown that the thrombus which causes the occlusion can be detected in
NCCT images due to its abnormal high density structure [6]. Classi cation from NCCT data can be achieved
by using Convolutional Neural Networks (CNNs) [7]. However, LVOs account for only 24% to 46% of acute
ischemic strokes [8]. Recent approaches for automated intracranial thrombus detection in NCCT are based on
Random Forest classi cation or CNNs [9, 10]. The results are promising, but further improvement is required
to ensure utility in clinical routine.
This thesis aims to achieve higher reliability in detecting the thrombus on NCCT data, assuming that the clot may be located anywhere in the cerebrovascular system. More specifically, the goal is to build and improve upon an existing detection model which applies a 2D U-Net to the slices of a volumetric dataset consisting of multiple channels extracted from the raw CT dataset. The locations of the 15 local maxima with the highest probability in the resulting prediction map are used as potential candidates for the final prediction of the thrombus location. The model to be developed shall classify each candidate (as clot / no clot) while comprehensively considering all candidates found in the patient as well as the corresponding regions on the opposite hemisphere, as this is considered crucial context for the decision. To this end, a region of interest is extracted around each candidate position and its opposite position obtained by mirroring at the brain mid-plane. Each such region is considered a node and connected with others to form a graph that describes all regions of interest in a patient. As such, the problem is formulated as a (partial) node classification task, and graph neural network models will be investigated to solve it.
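As a rough illustration of this formulation, the following sketch (not the existing pipeline) shows how the candidate ROIs and their mirrored counterparts could be assembled into a per-patient graph and classified jointly with a graph neural network, here using PyTorch Geometric; the feature dimension, the edge rule (candidates fully connected, each candidate linked to its mirrored region), and the two-layer GCN are illustrative assumptions.

```python
# Minimal sketch of joint clot / no-clot candidate classification as (partial)
# node classification with a GNN. Names, dimensions and the edge rule are
# illustrative assumptions, not the thesis implementation.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv


def build_patient_graph(cand_feats, mirror_feats):
    """cand_feats, mirror_feats: [K, F] tensors of ROI features for the K
    candidates and their mirrored counterparts (K = 15 in the pipeline above)."""
    k, _ = cand_feats.shape
    x = torch.cat([cand_feats, mirror_feats], dim=0)   # nodes 0..K-1: candidates, K..2K-1: mirrors

    edges = []
    for i in range(k):
        edges.append((i, k + i))                        # candidate <-> its mirrored region
        edges.append((k + i, i))
        for j in range(k):                              # fully connect the candidates
            if i != j:
                edges.append((i, j))
    edge_index = torch.tensor(edges, dtype=torch.long).t().contiguous()

    # Only the K candidate nodes receive a clot / no-clot label.
    cand_mask = torch.zeros(2 * k, dtype=torch.bool)
    cand_mask[:k] = True
    return Data(x=x, edge_index=edge_index, cand_mask=cand_mask)


class CandidateGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.head = torch.nn.Linear(hidden_dim, 2)      # clot / no clot

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        h = F.relu(self.conv2(h, data.edge_index))
        return self.head(h)[data.cand_mask]             # logits for candidate nodes only


if __name__ == "__main__":
    feats = torch.randn(15, 128), torch.randn(15, 128)  # e.g. CNN embeddings of the ROIs
    graph = build_patient_graph(*feats)
    logits = CandidateGNN(in_dim=128)(graph)
    print(logits.shape)                                 # torch.Size([15, 2])
```

Only the candidate nodes carry labels, so the loss and the final clot / no-clot decision are restricted to them, which is what makes the task a partial node classification.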
In summary, this thesis will comprise the following work items:
• Literature research of state-of-the-art methods for automated thrombus detection
• Extraction of suitable regions of interest based on previously detected clot candidates
• Design and implementation of a (graph) neural network architecture for joint classification of all clot candidates in a patient
• Investigation of multiple graph structures and model architectures
• Optimization and evaluation of the deep learning model
References
[1] Walter Johnson, Oyere Onuma, Mayowa Owolabi, and Sonal Sachdev. Stroke: a global response is needed. Bulletin of the World Health Organization, 94:634–634A, 2016.
[2] Sunil A. Sheth, Victor Lopez-Rivera, Arko Barman, James C. Grotta, Albert J. Yoo, Songmi Lee, Mehmet E. Inam, Sean I. Savitz, and Luca Giancardo. Machine learning-enabled automated determination of acute ischemic core from computed tomography angiography. Stroke, 50(11):3093–3100, 2019.
[3] Matthew T. Stib, Justin Vasquez, Mary P. Dong, Yun Ho Kim, Sumera S. Subzwari, Harold J. Triedman, Amy Wang, Hsin-Lei Charlene Wang, Anthony D. Yao, Mahesh Jayaraman, Jerrold L. Boxerman, Carsten Eickhoff, Ugur Cetintemel, Grayson L. Baird, and Ryan A. McTaggart. Detecting large vessel occlusion at multiphase CT angiography by using a deep convolutional neural network. Radiology, page 200334, 2020.
[4] Midas Meijs, Frederick J. A. Meijer, Mathias Prokop, Bram van Ginneken, and Rashindra Manniesing. Image-level detection of arterial occlusions in 4D-CTA of acute stroke patients using deep learning. Medical image analysis, 66:101810, 2020.
[5] Michael Knauth, Rudiger von Kummer, Olav Jansen, Stefan Hahnel, Arnd Dorfler, and Klaus Sartor. Potential of CT angiography in acute ischemic stroke. American journal of neuroradiology, 18(6):1001–1010, 1997.
[6] G. Gacs, A. J. Fox, H. J. Barnett, and F. Vinuela. CT visualization of intracranial arterial thromboembolism. Stroke, 14(5):756–762, 1983.
[7] Manon L. Tolhuisen, Elena Ponomareva, Anne M. M. Boers, Ivo G. H. Jansen, Miou S. Koopman, Renan Sales Barros, Olvert A. Berkhemer, Wim H. van Zwam, Aad van der Lugt, Charles B. L. M. Majoie, and Henk A. Marquering. A convolutional neural network for anterior intra-arterial thrombus detection and segmentation on non-contrast computed tomography of patients with acute ischemic stroke. Applied Sciences, 10(14):4861, 2020.
[8] Robert C. Rennert, Arvin R. Wali, Jeffrey A. Steinberg, David R. Santiago-Dieppa, Scott E. Olson, J. Scott Pannell, and Alexander A. Khalessi. Epidemiology, natural history, and clinical presentation of large vessel ischemic stroke. Neurosurgery, 85(suppl 1):S4–S8, 2019.
[9] Patrick Lober, Bernhard Stimpel, Christopher Syben, Andreas Maier, Hendrik Ditt, Peter Schramm, Boy Raczkowski, and Andre Kemmling. Automatic thrombus detection in non-enhanced computed tomography images in patients with acute ischemic stroke. Visual Computing for Biology and Medicine, 2017.
[10] Aneta Lisowska, Erin Beveridge, Keith Muir, and Ian Poole. Thrombus detection in CT brain scans using a convolutional neural network. In Margarida Silveira, Ana Fred, Hugo Gamboa, and Mario Vaz, editors, Bioimaging, BIOSTEC 2017, pages 24–33. SCITEPRESS – Science and Technology Publications Lda, Setubal, 2017.

Deep Learning-based motion correction of free-breathing diffusion-weighted imaging in the abdomen

Since diffusion is particularly disturbed in tissues with high cell densities such as tumors, diffusion-weighted imaging (DWI) constitutes an essential tool for the detection and characterization of lesions in modern MRI-based diagnostics. However, despite the great influence and frequent use of DWI, the image quality obtained is still variable, which can lead to false diagnoses or costly follow-up examinations.

A common way to increase the signal-to-noise ratio (SNR) in MR imaging is to repeat the acquisition several times, i.e. to use a higher number of excitations (NEX). The final image is then calculated by ordinary averaging. While the single images are relatively unaffected by bulk motion due to the short acquisition time, relative motion between the excitations and subsequent averaging will lead to motion blurring in the final image. One way to mitigate this is to perform prospective gating (also known as triggering) using a respiratory signal. However, triggered acquisitions come at the cost of significantly increased scan time. Retrospective gating (also known as binning) constitutes an alternative approach in which data is acquired continuously and subsequently assigned to discrete motion states. The drawback of this approach is that there is no guarantee that data is collected for a given slice within the target motion state. In previous works, mapping of the images from other motion states onto the target motion state was achieved using a motion model derived from an additional navigator acquisition.
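To make the binning idea concrete, here is a minimal sketch, under assumed names and an equal-width amplitude binning, of how continuously acquired repetitions of a slice could be retrospectively assigned to discrete motion states using the respiratory signal; empty bins illustrate the missing-data problem mentioned above.

```python
# Minimal sketch of retrospective gating ("binning"): repetitions of one slice
# are assigned to discrete motion states based on the amplitude of a recorded
# respiratory signal. Names and the equal-width binning are illustrative assumptions.
import numpy as np

def bin_repetitions(images, resp_amplitude, n_bins=4):
    """images: [N, H, W] array of N repetitions of one slice,
    resp_amplitude: [N] respiratory signal amplitude at each acquisition."""
    edges = np.linspace(resp_amplitude.min(), resp_amplitude.max(), n_bins + 1)
    labels = np.clip(np.digitize(resp_amplitude, edges) - 1, 0, n_bins - 1)
    # Average within each motion state; empty bins mean missing data for that state.
    return {b: images[labels == b].mean(axis=0) if np.any(labels == b) else None
            for b in range(n_bins)}

# Example: 12 repetitions, 4 motion states; some states may end up empty.
imgs = np.random.rand(12, 64, 64)
resp = np.random.rand(12)
states = bin_repetitions(imgs, resp)
print([b for b, im in states.items() if im is None])  # motion states with missing data
```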

In recent years, deep learning has shown great potential in the field of MRI in a wide variety of applications. The goal of this thesis is the development of a deep learning-based algorithm which performs navigator-free registration of DW images given a respiratory signal only. Missing data for certain motion states as well as the inherently low SNR of DW images constitute the main challenges of this work. Successful completion of this work promises significant improvements in image quality for diffusion-weighted imaging in motion-sensitive body regions such as the abdomen.
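A possible starting point for such an algorithm, sketched below under assumptions about the architecture and conditioning (none of which are prescribed by the thesis description), is a small network that predicts a dense displacement field and warps an image from another motion state onto the target state; training would then minimize an image similarity loss plus a smoothness penalty on the field.

```python
# Minimal sketch of deep-learning-based, navigator-free registration: a small CNN
# predicts a dense 2D displacement field that warps a moving DW image (from another
# motion state) onto the target motion state. The respiratory signal of both states
# could be appended as extra input channels. Architecture is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegistrationNet(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2, 3, padding=1),            # displacement field (dx, dy)
        )

    def forward(self, moving, target):
        flow = self.net(torch.cat([moving, target], dim=1))  # [B, 2, H, W]
        b, _, h, w = flow.shape
        # Identity sampling grid in [-1, 1], plus the normalised displacement.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        offset = flow.permute(0, 2, 3, 1) / torch.tensor([w / 2.0, h / 2.0])
        warped = F.grid_sample(moving, grid + offset, align_corners=True)
        return warped, flow

moving = torch.rand(1, 1, 96, 96)   # DW image in a non-target motion state
target = torch.rand(1, 1, 96, 96)   # (low-SNR) reference in the target motion state
warped, flow = RegistrationNet()(moving, target)
# Training could minimise an image similarity loss plus a smoothness penalty on `flow`.
```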

Deep Learning-based Pitch Estimation and Comb Filter Construction

Typically, clean speech consists of two components: a locally periodic component and a stochastic component. If a speech signal has only a stochastic component, the difference between the signal enhanced with the corresponding ideal ratio mask and the clean speech signal is barely perceptible. However, if the speech contains a strongly periodic component, the signal enhanced with the corresponding ideal ratio mask is still affected by inter-harmonic noise.
A comb filter based on the speech signal’s pitch period is able to attenuate noise between the pitch harmonics. Thus, a robust pitch estimate is of fundamental importance. In this work, a deep learning-based method for robust pitch estimation in noisy environments will be investigated.
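For illustration, a minimal feed-forward comb filter driven by a pitch estimate could look as follows; the filter form, blending weight, and the synthetic example signal are assumptions for this sketch, not the method to be developed.

```python
# Minimal sketch of a feed-forward comb filter driven by a pitch estimate: with the
# delay set to one pitch period T (in samples), harmonics of 1/T are reinforced
# while inter-harmonic noise is attenuated.
import numpy as np
from scipy.signal import lfilter

def comb_filter(x, pitch_period, alpha=0.8):
    """x: noisy speech frame, pitch_period: estimated pitch period in samples."""
    b = np.zeros(pitch_period + 1)
    b[0], b[pitch_period] = 1.0, alpha          # y[n] = (x[n] + alpha * x[n - T]) / (1 + alpha)
    return lfilter(b / (1.0 + alpha), [1.0], x)

# Example: 200 Hz pitch at 16 kHz sampling -> period of 80 samples.
fs, f0 = 16000, 200
t = np.arange(fs) / fs
noisy = np.sign(np.sin(2 * np.pi * f0 * t)) + 0.3 * np.random.randn(fs)
enhanced = comb_filter(noisy, pitch_period=fs // f0)
```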

Characterizing ultrasound images through breast density-related features using traditional and deep learning approaches

Breast cancer is the most common cancer in women worldwide, accounting for almost a quarter of all new female cancer cases [1]. In order to improve the chances of recovery and reduce mortality, it is crucial to detect and diagnose it as early as possible. Mammography is the standard procedure when screening for breast cancer. While mammography images play an important role in cancer diagnosis, it has been shown that their sensitivity decreases with high mammographic density [2]. The mammographic density (MD) refers to the amount of fibroglandular tissue in the breast in proportion to the amount of fatty tissue. MD is an established risk factor for breast cancer: the general risk of developing breast cancer increases with higher density. Women with a density of 25% or higher are twice as likely to develop breast cancer, and women with a density of 75% or higher even five times as likely, compared to women with an MD of less than 5% [3]. In addition, in dense breasts a tumor may be masked on a mammogram [2]. Therefore, it is necessary to consider the breast's density when screening for breast cancer. Several studies have aimed at supporting and improving breast cancer diagnosis with computer-aided systems and feature evaluation, and such studies have taken the MD into consideration when evaluating mammography images [4][5].
In order to detect tumors that are masked on mammography or to support inconclusive findings, an additional ultrasound (US) examination is often conducted on women with high MD [6]. However, US images are subject to high inter-observer variability. Computer-aided diagnosis aims to provide methods to analyze US images and support diagnosis with the aim of reducing this variability. The approach of this thesis is to transfer and adjust the methods designed by Häberle et al. [4] for characterizing 2-D mammographic images in order to use them on 3-D ultrasound images, focusing only on features correlating with the MD.
Additionally, more features will be generated using deep learning, as most recent computer-aided diagnosis tools no longer rely on traditional methods alone. Over the last years, deep learning has become the standard in medical imaging, and several studies have shown promising performance when working with breast ultrasound images [7][8].
Using both traditional and deep learning methods for extracting features aims to improve the classification of possibly cancerous tissue by building a reliable set of features which characterize the MD of the patient. Furthermore, the traditional features may help to interpret those generated through deep learning approaches; in turn, the latter may help to show the benefit of using deep learning when analyzing medical images.
This thesis will cover the following points:

• Literature review of mammographic density as a risk factor for breast cancer and ultrasound as an additional screening method
• Extraction and evaluation of a variety of automated features in ultrasound images using traditional and deep learning approaches (see the sketch after this list)
• Analyzing the relationship of the extracted features with the mammographic density
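As a sketch of the traditional-feature side of this work item, the following illustrates slice-wise extraction of classical GLCM texture statistics from a 3-D ultrasound volume; the chosen distances, angles, properties, and aggregation are assumptions for illustration and not the feature set of Häberle et al. [4].

```python
# Minimal sketch of classical texture feature extraction (GLCM statistics) applied
# slice-wise to a 3-D ultrasound volume. Parameters are illustrative assumptions
# (requires scikit-image >= 0.19 for the graycomatrix/graycoprops names).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features_3d(volume, levels=32):
    """volume: [D, H, W] ultrasound volume scaled to [0, 1]."""
    quantised = (volume * (levels - 1)).astype(np.uint8)
    props = ("contrast", "homogeneity", "energy", "correlation")
    feats = []
    for slc in quantised:                                   # one GLCM per slice
        glcm = graycomatrix(slc, distances=[1, 3], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        feats.append([graycoprops(glcm, p).mean() for p in props])
    return np.asarray(feats).mean(axis=0)                   # aggregate over slices

vol = np.random.rand(20, 128, 128)                          # stand-in for a 3-D US volume
print(dict(zip(("contrast", "homogeneity", "energy", "correlation"),
               glcm_features_3d(vol))))
```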

References

[1] F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, “Global cancer statistics 2018: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA: a cancer journal for clinicians, vol. 68, no. 6, pp. 394–424, 2018.
[2] P. E. Freer, “Mammographic breast density: impact on breast cancer risk and implications for screening,” Radiographics: a review publication of the Radiological Society of North America, Inc, vol. 35, no. 2, pp. 302–315, 2015.
[3] V. A. McCormack, “Breast density and parenchymal patterns as markers of breast cancer risk: A meta-analysis,” Cancer Epidemiology and Prevention Biomarkers, vol. 15, no. 6, pp. 1159–1169, 2006.
[4] L. Häberle, F. Wagner, P. A. Fasching, S. M. Jud, K. Heusinger, C. R. Loehberg, A. Hein, C. M. Bayer, C. C. Hack, M. P. Lux, K. Binder, M. Elter, C. Münzenmayer, R. Schulz-Wendtland, M. Meier-Meitinger, B. R. Adamietz, M. Uder, M. W. Beckmann, and T. Wittenberg, “Characterizing mammographic images by using generic texture features,” Breast cancer research: BCR, vol. 14, no. 2, 2012.
[5] M. Tan, F. Aghaei, Y. Wang, and B. Zheng, “Developing a new case based computer-aided detection scheme and an adaptive cueing method to improve performance in detecting mammographic lesions,” Physics in medicine and biology, vol. 62, no. 2, pp. 358–376, 2017.
[6] L. Häberle, C. C. Hack, K. Heusinger, F. Wagner, S. M. Jud, M. Uder, M. W. Beckmann, R. Schulz-Wendtland, T. Wittenberg, and P. A. Fasching, “Using automated texture features to determine the probability for masking of a tumor on mammography, but not ultrasound,” European journal of medical research, vol. 22, no. 1, 2017.
[7] H. Tanaka, S.-W. Chiu, T. Watanabe, S. Kaoku, and T. Yamaguchi, “Computer-aided diagnosis system for breast ultrasound images using deep learning,” Physics in medicine and biology, vol. 64, no. 23, 2019.
[8] M. H. Yap, G. Pons, J. Marti, S. Ganau, M. Sentis, R. Zwiggelaar, A. K. Davison, R. Marti, and H. Y. Moi, “Automated breast ultrasound lesions detection using convolutional neural networks,” IEEE journal of biomedical and health informatics, vol. 22, no. 4, pp. 1218–1226, 2018.

Comparing and Aggregating Face Presentation Attack Detection Methods

Dynamic Technology trend monitoring from unstructured data using Machine learning

New technologies are enablers for product and process innovations. However, given the multitude of technologies available on the market, identifying the relevant new technologies for one's own company and one's own problem involves considerable effort. ROKIN, as a technology platform, offers a key component for the rapid identification of new technologies and thus for the acceleration of innovation processes in companies. For this purpose, new technologies are identified on the Internet, profiles are created, and these are made available to companies via an online platform. Companies are provided with suitable solution proposals for their specific problem.
ROKIN automates the individual steps of this process, from data collection via web crawler, through the matching process, to the visualization of information in technology profiles. A central point in this process is detecting the newest technological trends in the collected data. This allows companies to keep up with upcoming technological shifts.
Due to recent successes with so-called “Transformer models” (e.g. “Bidirectional Encoder Representations from Transformers”, BERT), new possibilities in the recognition and understanding of texts are opening up like never before. These models were trained domain-independently on general corpora such as Wikipedia and the BookCorpus. An open question is how these approaches perform in a domain-specific context like engineering. Can the semantic understanding of such models be used to improve existing classical NLP keyword analysis and topic modelling for trend detection? Especially at the early onset of a trend, where keywords provide little evidence, the semantic understanding of transformer-based approaches might help. The goal is therefore to implement and extend existing classical NLP algorithms with Transformer models and to use the new model to identify trends in large amounts of engineering text documents.
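As an illustration of how classical topic analysis could be augmented with transformer models, the sketch below embeds documents with a pretrained sentence transformer, clusters them into topics, and tracks topic prevalence over time; the model name, the clustering step, and the trend criterion are assumptions, not ROKIN's pipeline.

```python
# Minimal sketch of transformer-assisted trend detection: documents are embedded
# with a pretrained sentence transformer, clustered into topics, and topic
# prevalence is tracked per year. All choices here are illustrative assumptions.
from collections import Counter
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def topic_trends(docs, years, n_topics=10):
    """docs: list of text documents, years: publication year per document."""
    embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)
    topics = KMeans(n_clusters=n_topics, random_state=0).fit_predict(embeddings)

    # Count how often each topic appears per year; a rising share over recent
    # years could then be flagged as an emerging trend.
    per_year = {}
    for topic, year in zip(topics, years):
        per_year.setdefault(year, Counter())[topic] += 1
    return {year: {t: c / sum(counts.values()) for t, c in counts.items()}
            for year, counts in per_year.items()}
```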
Tasks:
• Literature research and analysis of existing NLP tools for trend detection (transformers as well as classic keyword analysis and topic modelling approaches).
• Setting up an information database (via Web-Crawling and Google Search APIs) for a given problem out of the engineering environment of a company (topic provided by ROKIN).
• Semantic modelling and analysis of the information database to identify technology trends using different NLP approaches.
• Evaluation of the strengths and weaknesses of the created algorithms based on the individual results.
• Development of an approach for ideal trend detection, with particular focus on early-stage trend detection.
• Evaluation and optimization of the algorithms and documentation of the results.

Machine-Learning-Based Status Monitoring of HVDC Converter Stations

Detection and semantic segmentation of human faces in low resolution thermal images

The detection and isolation of persons with elevated body core temperature help to slow the spread of certain respiratory diseases in the population. Contactless temperature measurements with thermal cameras are used for fast screening of persons and for selecting those that should be checked more closely with accurate medical thermometers. In public areas, the only accessible source of temperature information is typically the face and its exposed skin segments. The offset between a person's actual body core temperature and the skin temperature varies over a wide range. It depends on the ambient conditions, on what the person did in the last few minutes and where he or she came from, on the person's body characteristics, and of course on the location of the observed skin segment, not to mention technical limitations of the camera itself. Currently, Bosch Sicherheitssysteme Engineering GmbH is investigating the dependency of the body core temperature offset on the location of the measured skin segment.
In this master thesis, a reasonable detection and semantic segmentation of the human face on a thermal image should be investigated. In order to do this, the following points shall be addressed:
– Literature research for state-of-the-art methods of face detection within thermal images
– Identification of the most effective method for exact face position detection within a preselected image area, including a prototypical implementation (e.g. with OpenCV; see the sketch after this list)
– Preparation and annotation of thermal image data for the usage of face detection
– Comparison of neural network based methods for face detection with classical machine learning approaches for application to low-resolution thermal images, possibly including a prototypical implementation
– Identification of the most promising methods to correlate a hotspot pixel location with a face section (chin, cheek, nose, forehead, etc.), including prototypical implementations
– Optional: Identification of the most promising methods to detect certain facial occlusions like facial hair (forehead, beard) or glasses, including prototypical implementation
As input for the investigation, existing field test data is available for analysis, but further dedicated lab experiments will certainly be required.
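As a classical baseline for the comparison above, the following sketch runs a Haar cascade face detector on a normalized thermal frame with OpenCV; note that the bundled cascades are trained on visible-light images, so this is only an assumed starting point, and all parameters are illustrative.

```python
# Minimal baseline sketch of face detection on a low-resolution thermal frame with
# OpenCV: the raw temperature image is normalised to 8 bit and passed to a Haar
# cascade. The bundled cascades are trained on visible-light images, so this is
# only a classical starting point; parameters are illustrative assumptions.
import cv2
import numpy as np

def detect_faces_thermal(thermal_frame):
    """thermal_frame: 2-D array of raw temperature or intensity values."""
    # Stretch the thermal dynamic range to 0..255 for the classical detector.
    img8 = cv2.normalize(thermal_frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(img8, scaleFactor=1.1, minNeighbors=3,
                                    minSize=(24, 24))

frame = np.random.rand(120, 160).astype(np.float32)   # stand-in for a low-res thermal image
for (x, y, w, h) in detect_faces_thermal(frame):
    print("face candidate at", x, y, w, h)
```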

Quality Assurance and Clinical Integration of a Prototype for Intelligent 4DCT Sequence Scanning

With 1.8 million deaths worldwide in 2018 (353,000 deaths in Europe in 2012 [1]), lung cancer is the deadliest cancer [2]. The prognosis for lung cancer is quite poor: only 15% of men (21% of women) survive five years [3].
75% of these patients receive radiation therapy [4]. Nevertheless, radiation therapy is challenged by breathing-related motion, which leads to artifacts that can cause both incorrect diagnoses and dosimetric errors in the therapy itself. As a result, the target volume might not be covered by the scheduled amount of radiation.
Computed tomography (CT) is an essential part of the treatment planning process. While 3D CT images can correctly display static anatomy, 4D imaging additionally records the breathing cycle and retrospectively synchronizes it with the acquired images. Thus, the result of a 4D CT scan is time-resolved data of a 3D volume.
4D CT imaging with fixed beam on/off slots and irregular breathing can lead to missing data coverage in desired breathing states, known as a violation of the data sufficiency condition (DSC) [5]. The resulting artifacts appear in the image as strong blurring of anatomical structures and, in the worst case, require a second treatment planning CT, consequently delaying patient treatment and adding dose.
The idea of the intelligent 4D CT (i4DCT) algorithm is to improve data coverage in order to reduce these artifacts. During an initial learning period, the patient-specific respiratory cycle is analyzed. For every slice, the scanner acquires data for a whole respiratory cycle. Based on an online comparison of the reference and current breathing curves during data acquisition, the beam on/off periods are adjusted. Once the data sufficiency condition is fulfilled, the scan is stopped and the table moves to the next z-position. This process is repeated until the targeted scan area is covered [5].
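For illustration, the beam on/off decision described above can be sketched conceptually as follows; this is not the vendor implementation, and the matching tolerance and phase-coverage bookkeeping are assumptions.

```python
# Conceptual sketch of the i4DCT beam on/off decision: after a learning period
# that yields a reference breathing curve, the current respiratory signal is
# compared online with the reference, the beam is only on while the curves match,
# and scanning at the current z-position stops once the data sufficiency condition
# (full coverage of the respiratory cycle) is met. Thresholds are assumptions.
import numpy as np

def scan_z_position(reference_cycle, signal_stream, n_phase_bins=10, tolerance=0.1):
    """reference_cycle: amplitudes of one learned breathing cycle (normalised 0..1),
    signal_stream: iterator over (amplitude, phase_bin) samples during acquisition."""
    covered = np.zeros(n_phase_bins, dtype=bool)
    beam_on_log = []
    for amplitude, phase_bin in signal_stream:
        expected = reference_cycle[int(phase_bin / n_phase_bins * len(reference_cycle))]
        beam_on = abs(amplitude - expected) <= tolerance   # breathing matches the reference
        beam_on_log.append(beam_on)
        if beam_on:
            covered[phase_bin] = True                      # data acquired for this breathing state
        if covered.all():                                  # data sufficiency condition fulfilled
            break
    return beam_on_log                                     # then the table moves to the next z-position
```

In the sketch the decision is made per respiratory sample; the actual algorithm additionally handles the learning period and the table motion between z-positions [5].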
To ensure the effective, safe and reliable use of the i4DCT algorithm in everyday clinical practice, quality assurance must be provided.
The aim of this Master's thesis is to develop and perform quality tests. Subsequently, the results are evaluated and interpreted to draw conclusions for clinical application.
Phantom measurements are performed with the CIRS Motion Thorax Phantom (CIRS, Norfolk, USA), a lung-equivalent solid epoxy rod containing a soft-tissue target (representing the tumor). In order to approximate realistic conditions, the target can be moved in three dimensions by the CIRS Motion Software according to an artificially created, irregular breathing pattern. The breathing curve is tracked by the Varian 'respiratory gating for scanners' system (RGSC, Varian Medical Systems, Inc., Palo Alto, CA), which consists of two main parts. All measurements are performed on a SOMATOM go Open Pro CT scanner (Siemens Healthcare, Forchheim, Germany).
The tests include different reconstruction methods (Maximum Intensity Projection and amplitude/phase based reconstruction), investigating the dimensions of the artificial tumor along every body axis, verifying the match of the recorded breathing pattern in RGSC and CT, and testing the limits of the RGSC/i4DCT algorithm.

 

References

[1] J. Ferlay, E. Steliarova-Foucher, J. Lortet-Tieulent, S. Rosso, J. W. W. Coebergh, H. Comber, D. Forman and F. I. Bray, “Cancer incidence and mortality patterns in Europe: Estimates for 40 countries in 2012,” European Journal of Cancer, vol. 49, no. 6, pp. 1374–1403, 2013.
[2] World Health Organisation (WHO), “Cancer,” 2018. [Online]. Available: https://www.who.int/news-room/fact-sheets/detail/cancer. [Accessed 10 Sep. 2020].
[3] Zentrum für Krebsregisterdaten, “Lungenkrebs (Bronchialkarzinom),” 17 Dec. 2019. [Online]. Available: https://www.krebsdaten.de/Krebs/DE/Content/Krebsarten/Lungenkrebs/lungenkrebs_node.html. [Accessed 10 Sep. 2020].
[4] R. Werner, Strahlentherapie atmungsbewegter Tumoren: Bewegungsfeldschätzung und Dosisakkumulation anhand von 4D-Bilddaten. Springer Vieweg, 2013, p. 1.
[5] R. Werner, T. Sentker, F. Madesta, T. Gauer and C. Hofman, “Intelligent 4D CT sequence scanning (i4DCT): Concept and performance,” Medical Physics, vol. 46, pp. 3462–3474, 2019.

CITA: An Android-based Application to Evaluate the Speech of Cochlear Implant Users

Cochlear Implants (CI) are the most suitable devices for severe and profound deafness when hearing aids do not sufficiently improve speech perception. However, CI users often present altered speech production and limited understanding even after hearing rehabilitation. People suffering from severe to profound deafness may experience different speech disorders such as decreased intelligibility, changes in articulation, and a slower speaking rate, among others. Though hearing outcome is regularly measured after cochlear implantation, speech production quality is seldom assessed in outcome evaluations. This project aims to develop an Android application suitable for analyzing the speech production and perception of CI users.
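As one example of the kind of speech-production measure such an application could compute, the sketch below estimates speaking rate from a recording using acoustic onsets as a rough proxy for syllable nuclei; the proxy, parameters, and file name are assumptions, and a deployed Android app would use an on-device audio pipeline rather than this desktop Python code.

```python
# Minimal sketch of one speech-production measure: an estimate of speaking rate,
# using acoustic onsets as a rough proxy for syllable nuclei. The proxy and the
# parameters are illustrative assumptions, not the project's analysis pipeline.
import librosa

def speaking_rate(path):
    """Returns estimated onsets (syllable proxies) per second of the recording."""
    y, sr = librosa.load(path, sr=16000)
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    duration = librosa.get_duration(y=y, sr=sr)
    return len(onsets) / duration if duration > 0 else 0.0

# Usage with a hypothetical recording file:
# print(speaking_rate("ci_user_recording.wav"))
```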