Index

CITA: An Android-based Application to Evaluate the Speech of Cochlear Implant Users

Cochlear Implants (CI) are the most suitable devices for severe and profound deafness when hearing aids do not sufficiently improve speech perception. However, CI users often present altered speech production and limited understanding even after hearing rehabilitation. People suffering from severe to profound deafness may experience different speech disorders such as decreased intelligibility, changes in articulation, and a slower speaking rate, among others. Though hearing outcome is regularly measured after cochlear implantation, speech production quality is seldom assessed in outcome evaluations. This project aims to develop an Android application suitable to analyze the speech production and perception of CI users.

Multimodal Breast Cancer Detection using a Fusion of Ultrasound and Mammogram Features

In this thesis, we aim to investigate multi-modal fusion techniques for breast lesion malignancy detection. In clinical settings, a radiologist acquires different image sequences (mammograms, ultrasound (US), and MRI) to precisely identify the lesion type. Relying on a single modality carries the risk of missed tumors or false diagnoses. Combining information from different modalities, however, can significantly improve the detection rate.

For example, the evaluation of mammograms on relatively dense breasts is known to be difficult, and ultrasound is then used to provide the information needed for a diagnosis. In other cases, ultrasound is inconclusive while mammograms offer clarity. Many computer-aided detection (CAD) models have been proposed that use either mammograms or sonograms. However, relatively few studies consider both modalities simultaneously for breast cancer diagnosis. With this in mind, we hypothesize that deep neural networks can likewise incorporate complementary features from the two domains to improve the breast cancer detection rate.
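As an illustration of this fusion idea, the sketch below shows feature-level (late) fusion in plain NumPy: two stand-in encoders produce feature vectors for a mammogram and an ultrasound patch, which are concatenated and passed to a linear classifier. All sizes and weights here are illustrative placeholders, not the actual model to be developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image, weights):
    """Stand-in for a modality-specific CNN encoder: one linear layer + ReLU."""
    return np.maximum(image @ weights, 0.0)

def fuse_and_classify(mammo_feat, us_feat, w_cls, b_cls):
    """Concatenate features from both modalities and apply a linear classifier."""
    fused = np.concatenate([mammo_feat, us_feat])
    logit = fused @ w_cls + b_cls
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> malignancy probability

# Toy inputs: flattened mammogram and ultrasound patches (sizes are illustrative).
mammo = rng.standard_normal(64)
us = rng.standard_normal(32)
w_m = rng.standard_normal((64, 16))
w_u = rng.standard_normal((32, 16))
w_cls = rng.standard_normal(32)

p = fuse_and_classify(encode(mammo, w_m), encode(us, w_u), w_cls, 0.0)
print(f"fused malignancy score: {p:.3f}")
```

In a trained model the two encoder branches would be CNNs and the fusion layer would be learned jointly, so that complementary evidence from both modalities contributes to the final score.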

Age Estimation on Panoramic Dental X-ray Images Using Deep Learning

X-rays are widely used in diagnostic medical imaging; in this Bachelor thesis, they will be used for automatic age determination. Radiographs of the jaw can provide important clues about a person's age because dental growth is less influenced by diet and hormones than skeletal growth. Compared with histological and biochemical methods, X-ray imaging is significantly faster and easier to apply.

As dental tissue is usually very well preserved after death and remains fairly unchanged for thousands of years, its analysis is widely used in forensics. Age determination on living persons is carried out, for instance, to determine whether a child has reached the age of criminal responsibility or the age of majority when no birth certificate is available.

However, the accuracy of age determination by physicians is often questioned. On average, age estimates for children and adolescents differ from the true age by about half a year, and by about two years in particularly inaccurate cases. For adults, the result is usually even less accurate. Therefore, in the context of this bachelor thesis, an attempt will be made to develop a deep learning algorithm for age estimation. Since promising results have already been achieved with deep learning in other areas of medical image analysis, automated solutions could support physicians in estimating the age of a patient in order to achieve more reliable results. The neural networks will be trained with a data set of 12 000 panoramic dental X-rays, provided by the University Hospital Erlangen and labeled with the age of the patients in days; the aim is thus a supervised approach. Since convolutional neural networks (CNNs) have already achieved good results in other areas of medical image analysis [4], they will also be used for this task.
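Since the labels are given in days, a natural evaluation metric is the mean absolute error converted to years, which can then be compared directly against the roughly half-year accuracy reported for physicians. A minimal sketch (the prediction values below are made up for illustration):

```python
import numpy as np

DAYS_PER_YEAR = 365.25

def mae_years(pred_days, true_days):
    """Mean absolute error of age predictions, converted from days to years."""
    pred = np.asarray(pred_days, dtype=float)
    true = np.asarray(true_days, dtype=float)
    return float(np.mean(np.abs(pred - true)) / DAYS_PER_YEAR)

# Hypothetical network predictions vs. ground-truth labels (age in days).
true_days = np.array([4000, 5500, 7300])
pred_days = np.array([4180, 5320, 7400])

print(f"MAE: {mae_years(pred_days, true_days):.2f} years")  # prints: MAE: 0.42 years
```

A model whose MAE in years stays below the physicians' typical half-year deviation would already be a useful second opinion.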

Synthetic Image Rendering for Deep Learning License Plate Recognition

The recognition of license plates is usually considered a rather simple task that a human is perfectly capable of. However, many factors (e.g. fog or rain) can significantly worsen image quality and therefore increase the difficulty of recognizing a license plate. Further factors, e.g. low resolution or a small license plate section, may increase the difficulty up to a point where even humans are unable to identify it.
A possible approach to this problem is to build and train a neural network using collected image data. In theory, this should yield a high success rate and outperform a human. However, a huge number of images that also fulfill certain criteria is needed in order to reliably recognize plates in different situations.
For that reason, this thesis aims at building and training a neural network, based on an existing CNN [1], for recognizing license plates using artificially created training data. This ensures that enough images are available, while facilitating the addition of image effects to simulate many possible situations. The needed images can be created using Blender: it offers the option to create a 3D model of a license plate, as well as options to simulate certain weather conditions like fog or rain, while also providing an API to automate the creation process. This way, nearly all cases can be covered, and the described procedure maximizes the success rate of the license plate detection.
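The kinds of degradation mentioned above (low resolution, small plate section, noise) can also be approximated directly on a rendered image. The following NumPy sketch is an illustrative stand-in for effects that would in practice be produced in Blender's render pipeline; the image content is a synthetic placeholder:

```python
import numpy as np

def degrade(img, factor=4, noise_std=0.05, seed=0):
    """Simulate low resolution (block averaging) plus sensor noise
    on a grayscale image with values in [0, 1]."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor
    # Average factor x factor blocks to mimic a low-resolution capture.
    small = img[:h2, :w2].reshape(h2 // factor, factor,
                                  w2 // factor, factor).mean(axis=(1, 3))
    rng = np.random.default_rng(seed)
    noisy = small + rng.normal(0.0, noise_std, small.shape)
    return np.clip(noisy, 0.0, 1.0)

plate = np.ones((32, 128))   # stand-in for a rendered plate image
plate[12:20, 30:34] = 0.0    # a dark stroke, e.g. part of a character
low_res = degrade(plate)
print(low_res.shape)         # (8, 32)
```

Applying such degradations to the synthetic training set lets the network see exactly the hard cases the thesis targets.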

The thesis consists of the following steps:

• Creating a training data set consisting of generated license plate images (Blender Python API)

• Fitting the parameters of the deep learning model

• Evaluation of the model fit on datasets with real license plate images

References
[1] Benedikt Lorch, Shruti Agarwal, and Hany Farid. Forensic Reconstruction of Severely Degraded License Plates. In Electronic Imaging. Society for Imaging Science & Technology, Jan 2019.

Learning projection matrices for marker free motion compensation in weight-bearing CT scans

The integration of known operators into neural networks has recently received more and more attention. A theoretical proof of its benefits has been given by Maier et al. and Syben et al. in [1, 2]. Reducing the number of trainable weights by replacing trainable layers with known operators reduces the overall approximation error and makes it easier to interpret the layers' function. This is of special interest in the context of medical imaging, where it is crucial to understand the effects of layers or operators on the resulting image. Several use cases of known operators in medical imaging have been explored in the past few years [3][4][5]. An API that makes such experiments easier is the PYRO-NN API by Syben et al., which comes with several forward and backward projectors for different geometries as well as with helpers such as filters [6].

Cone Beam CT (CBCT) is a widely used X-ray imaging technology which uses a point source of X-rays and a 2D flat-panel detector. Using a reconstruction algorithm such as the FDK algorithm, a complete 3D reconstruction can be estimated from just one rotation around the patient [7]. This modality is of great use in orthopedics, where so-called weight-bearing CT scans primarily image knee joints under weight-bearing conditions to picture the cartilage tissue under stress. The main drawback of this modality is motion artifacts, caused by involuntary movement of the patient's knee and by inaccuracies in the trajectory of the scanner. In order to correct those artifacts, the extrinsic camera parameters, which describe the position and orientation of the object relative to the detector, have to be adjusted [8].

To get one step closer to reducing motion artifacts without additional cameras or markers, it is of special interest to study the feasibility of training extrinsic camera parameters as part of a reconstruction pipeline. Before an algorithm for estimating those parameters can be assessed, the general feasibility of training the extrinsic camera parameters of a projection matrix will be studied. The patient's motion will be estimated iteratively using adapted gradient descent algorithms known from the training of neural networks.
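The iterative estimation can be illustrated with a deliberately simplified NumPy sketch: it optimizes only the translational part of the extrinsics of a pinhole projection (identity rotation, normalized intrinsics) by gradient descent on a reprojection error, using finite-difference gradients. All data are synthetic, and the actual thesis work would instead differentiate through the PYRO-NN projection layers:

```python
import numpy as np

def project(points, t):
    """Pinhole projection of 3D points after translating by t
    (simplified extrinsics: rotation = identity, intrinsics = identity)."""
    shifted = points + t
    return shifted[:, :2] / shifted[:, 2:3]

def reprojection_error(t, points, target):
    return float(np.mean((project(points, t) - target) ** 2))

def estimate_translation(points, target, steps=200, lr=0.5, eps=1e-5):
    """Iterative gradient descent on the translation, with finite-difference gradients."""
    t = np.zeros(3)
    for _ in range(steps):
        grad = np.zeros(3)
        for i in range(3):
            d = np.zeros(3); d[i] = eps
            grad[i] = (reprojection_error(t + d, points, target)
                       - reprojection_error(t - d, points, target)) / (2 * eps)
        t -= lr * grad
    return t

rng = np.random.default_rng(1)
points = rng.uniform([-1, -1, 4], [1, 1, 6], size=(20, 3))  # synthetic scene points
t_true = np.array([0.2, -0.1, 0.3])                         # unknown patient motion
target = project(points, t_true)                            # observed projections

t_est = estimate_translation(points, target)
print(np.round(t_est, 2))
```

Replacing the finite differences with automatic differentiation through a full forward projector is exactly the step the thesis investigates.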

The Bachelor’s thesis covers the following aspects:

1. Discussion of the general idea of motion compensation in CBCT, as well as a quick overview of the PYRO-NN API and thus of known operators in general.

2. Studying the feasibility of learning the projection matrix of a single forward projection:

• Assessing the ability to train single parameters

• Training of translations and rotations

• Attempting to estimate the complete rigid motion parameters

3. Training of a simple trajectory:

• Assessing the motion estimation of the back projection using the volume as ground truth

• Assessing the motion estimation using an undistorted sinogram

• Estimating the trajectory based only on the distorted sinogram

4. Evaluation of the training results of the experiments and description of potential applications of the results.

All implementations will be integrated into the PYRO-NN API [6].

References
[1] A. Maier, F. Schebesch, C. Syben, T. Würfl, S. Steidl, J. Choi, and R. Fahrig, "Precision learning: Towards use of known operators in neural networks," in 2018 24th International Conference on Pattern Recognition (ICPR), pp. 183–188, 2018.

[2] A. K. Maier, C. Syben, B. Stimpel, T. Würfl, M. Hoffmann, F. Schebesch, W. Fu, L. Mill, L. Kling, and S. Christiansen, "Learning with known operators reduces maximum error bounds," Nature Machine Intelligence, vol. 1, no. 8, pp. 373–380, 2019.

[3] W. Fu, K. Breininger, R. Schaffert, N. Ravikumar, T. Würfl, J. G. Fujimoto, E. M. Moult, and A. Maier, "Frangi-Net: A Neural Network Approach to Vessel Segmentation," in Bildverarbeitung für die Medizin (BVM) 2018, (Berlin, Heidelberg), pp. 341–346, Springer Vieweg, 2018.

[4] C. Syben, B. Stimpel, K. Breininger, T. Würfl, R. Fahrig, A. Dörfler, and A. Maier, "Precision Learning: Reconstruction Filter Kernel Discretization," in Proceedings of the 5th International Conference on Image Formation in X-ray Computed Tomography, pp. 386–390, 2018.

[5] T. Würfl, F. C. Ghesu, V. Christlein, and A. Maier, "Deep learning computed tomography," in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016 (S. Ourselin, L. Joskowicz, M. R. Sabuncu, G. Unal, and W. Wells, eds.), (Cham), pp. 432–440, Springer International Publishing, 2016.

[6] C. Syben, M. Michen, B. Stimpel, S. Seitz, S. Ploner, and A. K. Maier, "Technical note: PYRO-NN: Python reconstruction operators in neural networks," Medical Physics, 2019.

[7] L. Feldkamp, L. C. Davis, and J. Kress, "Practical cone-beam algorithm," J. Opt. Soc. Am. A, vol. 1, pp. 612–619, 1984.

[8] J. Maier, M. Nitschke, J.-H. Choi, G. Gold, R. Fahrig, B. M. Eskofier, and A. Maier, "Inertial measurements for motion compensation in weight-bearing cone-beam CT of the knee," 2020.

Clustering of HPC jobs using Unsupervised Machine Learning on job performance metric time series data

Deep Learning-based Matching of Chest X-Ray Scans

Human identification has become increasingly important over the past years, with facial recognition being perhaps the most common form used in daily life. But the face is not the only biometric identifier that can be used for identification. In this work, we will investigate chest X-rays as biometric identifiers. If they prove viable, this would for example allow identification post mortem, where common techniques currently have shortcomings [1]. Moreover, success in this form of identification may have far-reaching consequences and implications concerning data protection and anonymity in the medical field.
In pattern recognition, the use of deep learning has proven successful in improving or even entirely replacing classical methods. To test the limits of what is currently possible, a neural network will be created that takes two different X-ray scans as inputs and outputs a score measuring their similarity.
To increase the chances of success, a registration step will be incorporated into the preprocessing. It will be implemented as a neural network layer, as this has proven effective in the past [2].
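Conceptually, such a network is a siamese model: both scans pass through a shared encoder, and the similarity of the resulting embeddings is the output score. The sketch below illustrates this with an untrained random linear embedding and cosine similarity; in the thesis the shared encoder would be a trained CNN, and all sizes here are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 32))  # shared (random, untrained) embedding weights

def embed(xray):
    """Shared encoder applied to both inputs; here a stand-in linear+ReLU embedding."""
    feat = np.maximum(xray @ W, 0.0)
    return feat / (np.linalg.norm(feat) + 1e-9)

def similarity(xray_a, xray_b):
    """Cosine similarity of the two embeddings, in [-1, 1]; higher means more likely the same person."""
    return float(embed(xray_a) @ embed(xray_b))

scan = rng.standard_normal(256)                  # flattened chest X-ray (illustrative size)
same = scan + 0.01 * rng.standard_normal(256)    # slightly perturbed re-scan of the same person
other = rng.standard_normal(256)                 # scan of a different person

print(similarity(scan, same), similarity(scan, other))
```

Training would push embeddings of the same patient together and those of different patients apart, e.g. with a contrastive or triplet loss.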
The thesis consists of the following milestones:
• Testing out the capabilities of different network architectures concerning the task of finding
matches in chest X-Ray scans
• Further enhancing the functionality by incorporating a layer into the network that is capable of
affine registrations, e.g. by means of a spatial transformer network [3]
The implementation should be done in Python.


References
[1] Ryudo Ishigami, Thi Thi Zin, Norihiro Shinkawa, and Ryuichi Nishii. Human identification using x-ray
image matching. In Proceedings of The International MultiConference of Engineers and Computer Scientists
2017, volume 1, pages 415–418, 2017.
[2] Grant Haskins, Uwe Kruger, and Pingkun Yan. Deep learning in medical image registration: a survey.
Machine Vision and Applications, 31(1–2), Jan 2020.
[3] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks.
In Advances in Neural Information Processing Systems 28, pages 2017–2025. Curran Associates, Inc., 2015.

Start, follow, read, stop: Incorporating new steps into an end-to-end full-page handwriting recognition method

In this work, new steps are incorporated into a known offline recognition method [1] as an attempt to
improve the transcription of degraded and poor-quality historical documents. The previously proposed
model consists of three components:
1. Start-of-line (SOL)
This network predicts the starting points of lines, together with an indication of the size and
direction of the handwriting.
2. Line-follower (LF)
Given a starting point, the LF network follows the handwriting line in incremental steps and
outputs a dewarped line image that is suitable for text recognition purposes.
3. Handwriting recognition (HWR)
After having the LF network produce several normalized line images, these can then be fed to a
CNN-LSTM HWR network [2] to produce transcriptions of the detected lines.
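For illustration, the final transcription step of such a CNN-LSTM network is typically a CTC decoding of per-timestep character scores. A minimal greedy variant (argmax per time step, collapse repeats, drop blanks) is sketched below; this is a generic sketch of the decoding idea, not the exact decoder used in [1, 2]:

```python
import numpy as np

def ctc_greedy_decode(log_probs, alphabet, blank=0):
    """Greedy CTC decoding: take the argmax per time step, collapse repeats, drop blanks."""
    best = np.argmax(log_probs, axis=1)
    chars, prev = [], blank
    for idx in best:
        if idx != blank and idx != prev:
            chars.append(alphabet[idx])
        prev = idx
    return "".join(chars)

alphabet = ["-", "a", "b", "c"]  # index 0 is the CTC blank
# Per-timestep scores for a toy line image (6 time steps, 4 classes).
scores = np.array([
    [0.1, 0.8, 0.05, 0.05],   # a
    [0.1, 0.8, 0.05, 0.05],   # a (repeat, collapsed)
    [0.9, 0.04, 0.03, 0.03],  # blank
    [0.1, 0.05, 0.8, 0.05],   # b
    [0.1, 0.05, 0.05, 0.8],   # c
    [0.9, 0.04, 0.03, 0.03],  # blank
])
print(ctc_greedy_decode(np.log(scores), alphabet))  # prints: abc
```

The quality of the decoded text depends directly on how well the LF network dewarps the line images that the HWR network receives, which is why the modifications below target the line extraction stage.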
The method performed well on warped lines and has the advantage of outputting polygonal regions
instead of bounding boxes [3], but it still has several shortcomings, especially for
documents where unrelated pieces of information are frequently horizontally adjacent to one another.
It also cannot detect and adapt to changes in handwriting size, relying solely on the initial prediction
made by the SOL network to extract lines.
Modifications are to be made to the network architecture of the model in order to address these
shortcomings, and the thesis would then consist of the following milestones:
• Extending the SOL network architecture in order to include End-of-Line (EOL) detection.
• Modifying the LF network architecture to capture variations in handwriting size.
• Applying the LF network backwards from EOL predictions and finding an effective way of
merging the line information from both directions.
• Evaluating performance on historical full page datasets.
• Further experiments regarding procedure and network architecture.

The implementation should be done in Python.

References
[1] C. Wigington, C. Tensmeyer, B. Davis, W. Barrett, B. Price, and S. Cohen. Start, follow, read: End-to-end full-page handwriting recognition. In Computer Vision – European Conference on Computer Vision 2018 (ECCV), pages 372–388, 2018.
[2] C. Wigington, S. Stewart, B. Davis, W. Barrett, B. Price, and S. Cohen. Data augmentation for recognition of handwritten words and lines using a CNN-LSTM network. In 14th International Conference on Document Analysis and Recognition (ICDAR), pages 639–645, 2017.
[3] B. Moysset, C. Kermorvant, and C. Wolf. Full-page text recognition: Learning where to start and when to stop. In 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), 2017.

Development of a pre-processing/simulation Framework for Multi-Channel Audio Signals

The goal of this thesis is to develop a framework that simulates multi-channel audio signals in a 2D/3D environment for hearing aids. For this purpose, existing head-related transfer functions (HRTFs) will be used to simulate source direction and hearing aid microphone characteristics. Furthermore, source movement as well as microphone movement and rotation will be implemented. The latter is mandatory for hearing aids, since head rotation in particular might change the relative direction of the different sources significantly. The framework will be able to simulate multiple speakers as well as multiple noise sources. To calculate a clean speech target, a provided reference beamformer will be applied to the target speech only, neglecting noise and non-target speakers. Optionally, an opening angle that defines the target directions can be used to extract the clean speech targets. As a second optional aspect, the room environment, including absorption and reverberation, will be simulated; for this, a reference implementation can be used.
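The core rendering step can be sketched as follows: each output channel is the convolution of the mono source signal with the head-related impulse response (HRIR) for the source's direction, and a head rotation simply shifts which direction's HRIR is used. The HRIR grid below is random placeholder data, not a measured HRTF set, and the 10-degree azimuth grid is an illustrative assumption:

```python
import numpy as np

def render_binaural(source, hrirs, azimuth_idx):
    """Convolve a mono source with the left/right HRIRs for one direction."""
    left = np.convolve(source, hrirs[azimuth_idx, 0])
    right = np.convolve(source, hrirs[azimuth_idx, 1])
    return np.stack([left, right])

def rotate_head(azimuth_idx, rotation_steps, n_directions):
    """Head rotation changes the source's relative direction: shift the HRIR index."""
    return (azimuth_idx - rotation_steps) % n_directions

rng = np.random.default_rng(0)
n_dir, hrir_len = 36, 64                  # e.g. a 10-degree azimuth grid (illustrative)
hrirs = rng.standard_normal((n_dir, 2, hrir_len)) * 0.1  # placeholder HRIR set
speech = rng.standard_normal(1000)        # stand-in for a speech signal

out = render_binaural(speech, hrirs, azimuth_idx=9)       # source at 90 degrees
idx_after = rotate_head(9, rotation_steps=3, n_directions=n_dir)
out_rotated = render_binaural(speech, hrirs, idx_after)   # after a 30-degree head turn
print(out.shape)  # (2, 1063)
```

Summing several such renderings (target speakers plus noise sources at their respective directions) yields the simulated multi-channel hearing aid input; more microphones per device mean more HRIR channels per direction.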

Semantic Segmentation of the Human Eye for Driver Monitoring