Automatic Pathological Speech Intelligibility Assessment Using Speech Disentanglement Without Bottleneck Information

DL-based 3D co-registration of data with strongly varying contrast — applied to CEST MRI

Multi-exponential analysis of compartments in MRI images with quantitative T2-mapping

Matrix Operations for Applications in Quantum Annealing

Quantum annealing is a promising quantum computing technology for solving quadratic optimization problems. D-Wave builds quantum annealers and provides an open-source Python interface: Ocean [1]. Ocean's hybrid models do not yet support matrix problem formulations. Current approaches are based on SymPy [2], which is slow for matrix problems.


Speed up matrix operations for problem formulations
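As an illustration of what a matrix-native formulation could look like, the sketch below builds the QUBO matrix for a binary least-squares problem directly with NumPy instead of symbolic manipulation; the problem and variable names are illustrative and not taken from the Ocean API:

```python
import numpy as np

# Binary least-squares min ||A x - b||^2 over x in {0, 1}^n, rewritten as a
# QUBO x^T Q x: since x_i^2 = x_i for binary variables, Q = A^T A with the
# linear term -2 A^T b folded onto the diagonal (the constant b^T b is dropped).
def qubo_from_least_squares(A, b):
    Q = A.T @ A
    Q[np.diag_indices_from(Q)] += -2.0 * (A.T @ b)
    return Q

A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([3.0, 7.0])  # chosen so that x = (1, 1) solves A x = b exactly
Q = qubo_from_least_squares(A, b)

# Brute-force check of the minimizer over all binary vectors.
best = min(
    ([x0, x1] for x0 in (0, 1) for x1 in (0, 1)),
    key=lambda x: np.array(x) @ Q @ np.array(x),
)
```

The resulting Q could then be handed to an Ocean sampler; building it with dense linear algebra avoids the symbolic expansion that makes SymPy-based formulations slow.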



Development of a comprehensive SPECT phantom dataset using Monte Carlo Simulation


Single Photon Emission Computed Tomography (SPECT) [1] is a medical imaging technique used to study biological function and to detect various diseases in humans and animals. Due to the low amount of radioactivity typically used in SPECT scans, the acquired images are very noisy, and because reconstruction is an inverse problem, no exact ground truth is available. For this reason, we simulate objects with a numerical ground truth, which will be used to create our simulated dataset. The created dataset can then be used to train a neural network, analyze noise, test multiple reconstruction techniques, or evaluate the effects of acquisition geometry.
The objective of this research laboratory is to generate a large dataset of SPECT images that will be useful for deep learning applications in medical image processing.
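As a minimal illustration of a phantom with an exact numerical ground truth, the sketch below generates a spherical activity map and a matching attenuation map with NumPy; the grid size and the attenuation value are hypothetical placeholders, not parameters of the actual dataset:

```python
import numpy as np

# Hypothetical sphere phantom: the analytic shape is the exact ground truth.
def sphere_phantom(size=64, radius=16, activity=1.0, mu=0.15):
    z, y, x = np.mgrid[:size, :size, :size]
    c = (size - 1) / 2.0                              # grid center
    inside = (x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2 <= radius ** 2
    activity_map = np.where(inside, activity, 0.0)    # emission (a.u.)
    attenuation_map = np.where(inside, mu, 0.0)       # 1/cm (placeholder value)
    return activity_map, attenuation_map

act, att = sphere_phantom()
```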


We simulate 100 phantoms with different shapes and properties, e.g., attenuation and activity maps. Simulating simple geometric phantoms such as spheres, cubes, and cylinders is the first step of this research laboratory. In the following step, we generate phantoms of alphabetic letters. Finally, we simulate more realistic physical phantoms such as the Shepp-Logan or XCAT phantoms. To simulate measurements of these phantoms, we use SIMIND, a Monte Carlo based simulation program [2]. SIMIND can describe different scintillation cameras, which can be used to obtain sets of projection images of the simulated phantom. SIMIND allows the adjustment of different acquisition parameters, e.g., photon energy, number of projections, detector size, and energy resolution, allowing the creation of a database of SPECT acquisitions that is comprehensive in terms of geometry and acquisition configuration. After postprocessing the projection data, we obtain reconstructed 3D images by applying iterative reconstruction techniques such as Ordered Subset Expectation Maximization (OSEM) and Ordered Subset Conjugate Gradient Minimization (OSCGM).
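To make the reconstruction step concrete, the following sketch runs the basic MLEM update on a toy system; OSEM is the same update applied cyclically to subsets of the projections. The matrix A here is a random stand-in for the real SPECT forward model, not the geometry used in SIMIND:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the forward model: a nonnegative system matrix A maps a
# 16-voxel image to 40 projection bins (noise-free for clarity).
A = rng.random((40, 16))
x_true = rng.random(16) + 0.1   # strictly positive ground-truth image
y = A @ x_true                  # simulated projection data

# MLEM update: x <- x / (A^T 1) * A^T (y / (A x)).
# OSEM applies the same update cyclically to subsets of the projection rows.
x = np.ones(16)
sensitivity = A.T @ np.ones(40)
for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sensitivity
```

The multiplicative form keeps the estimate nonnegative at every iteration, which is one reason this family of algorithms suits emission tomography.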

Expected Results

At the end of this research laboratory, the student shall have a deeper knowledge of Monte Carlo simulation (MCS) and reconstruction for SPECT imaging. Further, the student shall have created a dataset that will be available for future projects, including denoising, reconstruction, and other image processing tasks. Additionally, the student shall summarize their findings in a short report and write documentation describing the database and how to use it.


[1] Miles N Wernick and John N Aarsvold. Emission tomography: the fundamentals of PET and
SPECT. Elsevier, 2004.
[2] Michael Ljungberg and Sven-Erik Strand. A Monte Carlo program for the simulation of scintillation camera characteristics. Computer Methods and Programs in Biomedicine, 29(4):257–272.

A Review of Rheumatoid Arthritis Diagnosis, Evaluating Parameters of Micro-CT Scanners and Laboratory Measurements

Continuous Non-Invasive Blood Pressure Measurement Using 60GHz-Radar – A Feasibility Study

Hypertension – high blood pressure (BP) – is known to be a silent killer. Untreated, it can cause severe damage to human organs, mainly the heart and kidneys [5, 6]. BP is usually classified using the highest – systolic – and the lowest – diastolic – pressures during one cardiac cycle [2]. The gold standard for measuring BP remains the oscillometric method, which is employed in traditional arm cuffs [4]. This method, however, suffers from extensive deficiencies: discomfort leads to unreliable measurements [2]. Additionally, it only captures the static status of the very dynamic arterial BP and thus loses important variation information, leading to poor time resolution [2, 3, 4, 7]. However, there is a strong need for continuous beat-to-beat BP readings [4], as they are more reliable predictors of the aforementioned cardiovascular risks than single readings [1].
The goal of this master thesis is to show whether it is feasible to use a 60 GHz radar device to continuously estimate BP. Radar is chosen as it has a very small form factor and very low power consumption – both favorable characteristics for integration into a wearable device. The radar is put into a 3D-printed enclosure which is fastened to the left wrist using a velcro strap. It is capable of extracting the skin displacement caused by the expansion of the underlying artery, which is localized using a beamforming algorithm. The extracted skin displacement contains the pulse waveforms which are used for extracting the BP.
In the literature, mainly two methods have been used to design continuous BP devices: one is based on Pulse Wave Velocity – and, in that context, Pulse Transit Time – the other on Pulse Wave Analysis [4]. Since the first method depends on the use of an electrocardiograph, it was not employed in this work, as the goal is to implement a stand-alone solution which does not require additional devices. Therefore, the second method is implemented.
For that, the extracted skin displacement is split into individual pulse waveforms. Each is used as input for a support vector machine, which decides whether it is good enough as an input for the neural network, such that only sufficiently good waveforms are used. Then, 21 distinctive features are extracted from each good waveform. These features, together with the calibration parameters gender, age, height, and weight, are used as input features for a neural network. The network is then used to predict systolic and diastolic values.
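To illustrate the kind of morphological features involved, the sketch below extracts a few from a synthetic pulse waveform with NumPy; these example features are hypothetical and not necessarily among the 21 used in the thesis:

```python
import numpy as np

def pulse_features(wave, fs):
    """A few illustrative morphological features of a single pulse waveform
    (hypothetical examples, not the thesis's actual feature set)."""
    t = np.arange(len(wave)) / fs
    peak_idx = int(np.argmax(wave))
    peak_amp = float(wave[peak_idx])              # systolic peak amplitude
    rise_time = float(t[peak_idx])                # foot-to-peak time
    above = np.nonzero(wave >= peak_amp / 2.0)[0]
    width_half = (above[-1] - above[0]) / fs      # width at half amplitude
    auc = float(wave.sum() / fs)                  # area under the pulse
    return np.array([peak_amp, rise_time, width_half, auc])

fs = 100.0                         # sampling rate in Hz
t = np.arange(100) / fs
wave = np.sin(np.pi * t) ** 2      # one smooth synthetic pulse over 1 s
feats = pulse_features(wave, fs)
```

In the real pipeline, such a feature vector (plus the calibration parameters) would form one input sample for the neural network.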
It is expected that some correlation between the skin displacement, captured by the radar, and the corresponding BP will become apparent, allowing future research to further improve the approach.



[1] D. Buxi, J.-M. Redouté, and M. R. Yuce. Blood pressure estimation using pulse transit time from bioimpedance and continuous wave radar. IEEE Transactions on Biomedical Engineering, 64(4):917–927, 2016.
[2] Y. Kurylyak, F. Lamonaca, and D. Grimaldi. A neural network-based method for continuous blood pressure estimation from a PPG signal. In 2013 IEEE International Instrumentation and Measurement Technology Conference (I2MTC). IEEE, 2013.
[3] M. Proença, G. Bonnier, D. Ferrario, C. Verjus, and M. Lemay. PPG-based blood pressure monitoring by pulse wave analysis: calibration parameters are stable for three months. In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 5560–5563. IEEE, 2019.
[4] J. Solà and R. Delgado-Gonzalo. The Handbook of Cuffless Blood Pressure Monitoring. Springer, 2019.
[5] WHO. Hypertension. World Health Organization.
[6] X. Xing, Z. Ma, M. Zhang, Y. Zhou, W. Dong, and M. Song. An unobtrusive and calibration-free blood pressure estimation method using photoplethysmography and biometrics. Scientific Reports, 9(1):1–8, 2019.
[7] Y. Yoon, J. H. Cho, and G. Yoon. Non-constrained blood pressure monitoring using ECG and PPG for personal healthcare. Journal of Medical Systems, 33(4):261–266, 2009.

Optimizing the Preprocessing Pipeline for “Virtual Dynamic Contrast Enhancement” in Breast MRI

Detection of the Positions of K-Wire Tips in X-Ray Images Using Deep Learning

Semi-supervised learning for multi-modal bone segmentation

Since AlexNet won the ImageNet challenge by a wide margin in 2012, the popularity of deep learning has been steadily increasing. In recent years, a technique that has been especially popular is semantic segmentation, as it is used in self-driving cars and medical image analysis. A big challenge that arises when training neural networks (NNs) for this task is the acquisition of adequate segmentation masks, because the labeling often has to be performed by industry experts and is very time-consuming. As a result, solutions circumventing this problem had to be found. A popular one is semi-supervised learning, where only a certain amount of the data is annotated. This approach has the obvious advantage of reducing the time needed for the data acquisition process, but NNs trained this way still perform worse than ones trained fully supervised.

A common disease, affecting one in three women and one in twelve men, is osteoporosis. Its symptoms include low bone mass and a deterioration of bone tissue, leading to an increased fracture risk. The disease especially affects elderly people, and for their protection, providing diagnostic tools and suitable treatments is important [1]. Structures that can be found in the bone include lacunae containing osteocytes and trans-cortical vessels (TCVs). Murine and human tibiae consist of two parts: the inner trabecular bone and the outer cortical bone, where TCVs can be found. To study them and their importance for the development of osteoporosis, we are trying to automatically segment the cortical bone from the surrounding tissue. Additionally, we will attempt to build a NN for the detection of TCVs and lacunae.

We want to achieve this using a model based on convolutional neural networks (CNNs) for semantic segmentation. Similar tasks have already been performed [2], but our approach differs in that we try to use as few labels as possible for the training process. Methods we want to incorporate are pre-training and the use of image transformations to make the most of a limited number of segmentation masks. If these approaches do not yield the desired results, we will also try to incorporate techniques from weakly- and self-supervised learning.
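As a sketch of the image-transformation idea, the example below applies the same random geometric transform to an image and its segmentation mask so that the pair stays aligned; the transforms shown (flips and 90-degree rotations) are just one plausible choice, not necessarily those used here:

```python
import numpy as np

# Paired geometric augmentation: the SAME random transform is applied to the
# image and its mask so the segmentation labels stay aligned with the pixels.
def augment(image, mask, rng):
    if rng.random() < 0.5:                          # random horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    k = int(rng.integers(4))                        # random 90-degree rotation
    return np.rot90(image, k).copy(), np.rot90(mask, k).copy()

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)
msk = (img > 7).astype(np.uint8)      # toy "bone" mask
aug_img, aug_msk = augment(img, msk, rng)
```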

In detail, the thesis will consist of the following parts:

• implementation of multiple CNN-based architectures [3, 4] to find a suitable model for our task,

• optimization of this model using different approaches,

• evaluation of the usefulness of pre-training and different semi-supervised learning techniques,

• integration of different techniques to increase the accuracy.
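One common semi-supervised technique that could be evaluated in the third part is pseudo-labeling: the network's own predictions on unlabeled data are kept as training targets only where they are confident. A minimal sketch, with hand-written probabilities standing in for a CNN's per-pixel softmax output:

```python
import numpy as np

# Keep a prediction as a pseudo-label only where the per-pixel confidence
# (maximum softmax probability) exceeds a threshold.
def pseudo_labels(probs, threshold=0.9):
    confidence = probs.max(axis=-1)      # per-pixel confidence
    labels = probs.argmax(axis=-1)       # per-pixel predicted class
    keep = confidence >= threshold       # pixels trusted for training
    return labels, keep

# 2x2 "image", 2 classes; hypothetical probabilities for illustration.
probs = np.array([[[0.95, 0.05], [0.60, 0.40]],
                  [[0.10, 0.90], [0.50, 0.50]]])
labels, keep = pseudo_labels(probs)
```

Only the pixels flagged by `keep` would then contribute to the loss on unlabeled images.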

[1] S P Tuck and R M Francis. Osteoporosis. Postgraduate Medical Journal, 78(923):526–532, 2002.
[2] Oliver Aust, Mareike Thies, Daniela Weidner, Fabian Wagner, Sabrina Pechmann, Leonid Mill, Darja Andreev, Ippei Miyagawa, Gerhard Krönke, Silke Christiansen, Stefan Uderhardt, Andreas Maier, and Anika Grüneboom. Tibia cortical bone segmentation in micro-ct and x-ray microscopy data using a single neural network. In Klaus Maier-Hein, Thomas M. Deserno, Heinz Handels, Andreas Maier, Christoph Palm, and Thomas Tolxdorff, editors, Bildverarbeitung für die Medizin 2022, pages 333–338, Wiesbaden, 2022. Springer Fachmedien Wiesbaden.
[3] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. CoRR, abs/1505.04597, 2015.
[4] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. CoRR, abs/1411.4038, 2014.
[5] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc., 2019.