
Two-Dimensional-Dwell-Time Analysis of Ion-Channel Kinetics using Deep Learning

In this project, we want to explore the capability of neural networks to infer the kinetics of electrophysiological time series with Markov models. Patch-clamp recordings of single ion channels provide a wealth of information on the functional properties of proteins that far exceeds what macroscopic measurements can offer. However, modelling single-channel data remains a major challenge, and not only because it is very time-consuming. Probably the most sophisticated way to relate kinetics and protein function is to utilize hidden Markov models (Huth et al., 2008). Scientists have developed several different methods for this purpose (Sakmann and Neher, 1995). All these methods share at least some of the following disadvantages: they require specific assumptions or corrections dependent on the time series, are sensitive to noise (Huth et al., 2006), are limited to the bandwidth of the recording system (Huth et al., 2006; Qin, 2014), or do not provide statistics to estimate how well they have approximated the data.

We have developed a simulation-based 2D fit algorithm and have improved it over the years (Huth et al., 2006). The algorithm is based on the idealization of time series and the generation of two-dimensional dwell-time distributions from neighboring events. To a certain extent, it does not share the aforementioned limitations, and it has some unique features that make it superior to other tools. It captures gating kinetics despite a high background of noise and can extract rate constants even beyond the recording bandwidth. That could make the 2D fit exceptionally valuable for relating electrophysiological kinetics to data from simulations of single protein molecules. In addition, 2D distributions preserve the coherency of connected states. Thereby, the algorithm can extract the full complexity of the underlying models and distinguish different Markov models. However, the computational requirements are enormous, and they recur for each time series that is analyzed. Neural networks invert this approach: once datasets are generated and the networks are trained, time series could be analyzed in real time during experiments. It remains to be determined whether deep networks can outperform the powerful simulation approach.
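To make the data representation concrete, the following minimal NumPy sketch builds such a 2D dwell-time histogram from pairs of adjacent dwell times of an idealized time series. The logarithmic binning, the bin count, and the synthetic dwell times are our own illustrative assumptions, not the published implementation:

```python
import numpy as np

def two_d_dwell_time_hist(dwell_times, n_bins=30):
    """2D histogram of log10 dwell times of neighboring events.

    dwell_times: durations (s) of an idealized, alternating
    open/closed event sequence.
    """
    # Pair every dwell time with that of the following event.
    t_i = np.log10(dwell_times[:-1])
    t_j = np.log10(dwell_times[1:])
    # Logarithmic bins spread the roughly exponentially distributed
    # dwell times evenly over the histogram axes.
    lo = min(t_i.min(), t_j.min())
    hi = max(t_i.max(), t_j.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    hist, _, _ = np.histogram2d(t_i, t_j, bins=[edges, edges])
    return hist, edges

# Synthetic example: exponentially distributed dwells, mean 2 ms.
rng = np.random.default_rng(0)
dwells = rng.exponential(scale=2e-3, size=10_000)
hist, edges = two_d_dwell_time_hist(dwells)
print(hist.shape)  # (30, 30) image, a natural input for a CNN
```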

The basic aim of this thesis is to analyze two-dimensional dwell-time histograms with neural networks, a task of image analysis, to extract the underlying kinetics of Markov models. In parallel, another master student will explore the direct analysis of time series. To our knowledge, neither approach has been investigated for patch-clamp data, so it will be very interesting to compare the results of both.

In the first part of the project, the objective is to generate training datasets with the 2D fit algorithm (already implemented) and to deploy networks capable of analyzing simple Markov models (preliminary results are available for a 3-state model; a toy simulation of such a model is sketched below). The master student will evaluate the capabilities of the networks with respect to the bandwidth and noise of the time series. The next step will be to find strategies to increase the number of states of the underlying models that the network is able to distinguish. Finally, and not directly related, the capability of networks to distinguish different Markov models will be explored. We expect that networks could really excel in this task of pattern recognition.
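For illustration, a toy Gillespie-style simulation of such a 3-state scheme is given below; the C1-C2-O topology and all rate constants are placeholders, not the models used to generate the actual training data:

```python
import numpy as np

# Hypothetical scheme C1 <-> C2 <-> O with rate constants in 1/s.
rates = {("C1", "C2"): 500.0, ("C2", "C1"): 300.0,
         ("C2", "O"): 1000.0, ("O", "C2"): 800.0}
level = {"C1": 0.0, "C2": 0.0, "O": 1.0}  # C1 and C2 are indistinguishable

def simulate(n_events=5000, state="C1", seed=1):
    """Return (sojourn time, conductance level) pairs of a Markov chain."""
    rng = np.random.default_rng(seed)
    dwells, levels = [], []
    for _ in range(n_events):
        out = {s2: k for (s1, s2), k in rates.items() if s1 == state}
        k_tot = sum(out.values())
        dwells.append(rng.exponential(1.0 / k_tot))  # exponential sojourn
        levels.append(level[state])
        # The next state is chosen proportionally to its rate constant.
        targets, ks = zip(*out.items())
        state = rng.choice(targets, p=np.array(ks) / k_tot)
    return np.array(dwells), np.array(levels)

dwells, levels = simulate()
```

Note that consecutive sojourns with the same conductance level (here C1 and C2) would still have to be merged into single dwells, and noise plus bandwidth limitation added, before idealization and histogramming.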


Sources

Huth T, Schmidtmayer J, Alzheimer C, Hansen UP (2008) Four-mode gating model of fast inactivation of sodium channel Nav1.2a. Pflugers Archiv European Journal of Physiology 457:103–119.

Huth T, Schroeder I, Hansen UP (2006) The power of two-dimensional dwell-time analysis for model discrimination, temporal resolution, multichannel analysis and level detection. Journal of Membrane Biology 214:19–32.

Qin F (2014) Principles of single-channel kinetic analysis. Methods in Molecular Biology 1183:371–399.

Sakmann B, Neher E, eds (1995) Single-Channel Recording, 2nd ed. New York and London: Plenum Press.

DeepTechnome – Mitigating Bias Related to Image Formation in Deep Learning Based Assessment of CT Images

Multi-task Learning for Historical Document Classification with Transformers

Description

Recently, transformer models [1] have started to outperform classic deep convolutional neural networks in many standard computer vision tasks. These transformer models consist of multi-headed self-attention layers followed by linear layers. The attention layer soft-routes value information based on three matrix embeddings: query, key, and value. The inner product of query and key is passed through a softmax function for normalization, and the resulting similarity matrix is multiplied with the value embedding. Multi-headed self-attention creates multiple sets of query, key, and value matrices that are computed independently, then concatenated and projected back into the original embedding dimension. Visual transformers excel in their ability to incorporate non-local information into their latent representation, allowing for better results when classification-relevant information is scattered across the entire image.
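The following PyTorch sketch restates that computation in code; the embedding size and head count are arbitrary examples:

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """Minimal multi-headed self-attention as described above."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.heads, self.d_head = heads, dim // heads
        self.qkv = nn.Linear(dim, 3 * dim)   # query, key, value embeddings
        self.proj = nn.Linear(dim, dim)      # projection back to `dim`

    def forward(self, x):                    # x: (batch, tokens, dim)
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Split into heads: (batch, heads, tokens, d_head).
        q, k, v = (t.view(b, n, self.heads, self.d_head).transpose(1, 2)
                   for t in (q, k, v))
        # Scaled inner product of query and key, normalized by softmax.
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5,
                             dim=-1)
        out = attn @ v                        # soft-route value information
        out = out.transpose(1, 2).reshape(b, n, d)  # concatenate heads
        return self.proj(out)                # project to embedding dim
```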

The downside of pure attention models like ViT [2], which treat image patches as sequence tokens, is that they require large amounts of samples to make up for their lack of inductive priors. This makes them unsuitable for low-data regimes like historical document analysis. Further, the similarity matrix is quadratic in the input length, which makes high-resolution computations expensive.

One solution that promises to alleviate the data hunger of transformers while still profiting from their global representation ability is the use of hybrid methods that combine CNN and self-attention layers. These models jointly train a network comprised of a number of convolutional layers that preprocess and downsample the inputs, followed by a form of multi-headed self-attention. [3] differentiates hybrid self-attention models into “transformer blocks” and “non-local blocks”, the latter of which is equivalent to single-headed self-attention except for the lack of value embeddings and positional encodings.

The objective of this thesis is the joint classification of script type, date, and location of historical documents using a single multi-headed hybrid self-attention model.
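One possible shape of such a model, sketched under our own assumptions (the backbone depth, embedding size, and the three class counts are placeholders), is:

```python
import torch
import torch.nn as nn

class HybridDocumentClassifier(nn.Module):
    """CNN stem for downsampling, followed by self-attention and
    one classification head per task (script type, date, location)."""
    def __init__(self, dim=256, heads=8, n_script=12, n_date=15, n_loc=13):
        super().__init__()
        self.stem = nn.Sequential(             # convolutional preprocessing
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU())
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.heads = nn.ModuleDict({           # one linear head per task
            "script": nn.Linear(dim, n_script),
            "date": nn.Linear(dim, n_date),
            "location": nn.Linear(dim, n_loc)})

    def forward(self, img):                    # img: (batch, 3, H, W)
        f = self.stem(img)                     # (batch, dim, h, w)
        tokens = f.flatten(2).transpose(1, 2)  # feature map -> token sequence
        x, _ = self.attn(tokens, tokens, tokens)
        x = x.mean(dim=1)                      # global average over tokens
        return {k: head(x) for k, head in self.heads.items()}
```

During training, the three cross-entropy losses would typically be combined, e.g. as a weighted sum, which is what makes the setup multi-task.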

The thesis consists of the following milestones:

  • Construction of hybrid models for classification
  • Benchmarking on the ICDAR 2021 competition dataset
  • Further architectural analyses of hybrid self-attention models

References

[1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
[2] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16×16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.
[3] Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, and Ashish Vaswani. Bottleneck transformers for visual recognition, 2021.

3D Segmentation of metal objects based on Cone-Beam CT Projection Images for Metal Artefact Removal


Computed tomography (CT) imaging from intraoperative mobile C-arms is commonly used to validate tool and implant placement during surgery. As a majority of tools and implants are composed of metal, physical effects such as beam hardening, photon scattering, and high absorption induce artefacts in the volume domain. These metal artefacts arise from a loss of signal in the projection images which is not accounted for in standard reconstruction algorithms. Metal artefact reduction (MAR) techniques rely on an accurate segmentation of the metal volume [1], [2]. This first segmentation step is commonly based on thresholding in the volume domain, which makes it prone to errors induced by the very metal artefacts it is meant to remove. This thesis investigates an end-to-end trainable segmentation model which produces 3D metal masks from the 2D projection data of a 3D cone-beam scan of a Cios Spin system. Its robustness against metal artefacts shall be evaluated and compared to common volume-domain metal segmentation approaches.


Low Dose Helical CBCT denoising by domain filtering with deep reinforcement learning, improved by a Neural Ordinary Differential Equations approach

In previous research, we have developed a reinforcement-learning-based method to denoise cone-beam CT. The method uses denoisers in both the sinogram domain and the reconstructed image domain. The denoisers are bilateral filters whose sigma parameters are tuned by a convolutional agent.
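The interplay of agent and filter can be illustrated with OpenCV's bilateral filter and a small convolutional network; this is a schematic sketch of the idea, not the trained agent from our previous work:

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

class SigmaAgent(nn.Module):
    """Tiny convolutional agent that predicts bilateral filter sigmas."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 2), nn.Softplus())  # two positive sigmas

    def forward(self, img):  # img: (batch, 1, H, W)
        return self.net(img)

agent = SigmaAgent()
img = np.random.rand(256, 256).astype(np.float32)  # stand-in slice/sinogram
with torch.no_grad():
    sigmas = agent(torch.from_numpy(img)[None, None]).squeeze().tolist()
sigma_color, sigma_space = sigmas
# Apply the filter with the agent's parameters (d=-1: derive from sigmaSpace).
denoised = cv2.bilateralFilter(img, d=-1, sigmaColor=sigma_color,
                               sigmaSpace=sigma_space)
```

In the reinforcement learning setting, the agent's sigma choices would be refined from a reward computed on the denoised output.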

Recent research has shown that neural ODEs can improve the speed of convergence of neural network training. Neural ODEs have been applied to tasks which can be modelled by differential equations, such as fluid mechanics, and they have also been extended to classical deep learning tasks such as image segmentation.
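As an illustration, a stack of residual layers can be replaced by a single ODE block using the `torchdiffeq` package; the convolutional dynamics below are a generic example, not the architecture to be developed:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

class ODEFunc(nn.Module):
    """dh/dt = f(t, h): the dynamics learned by the network."""
    def __init__(self, ch=16):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.Tanh(),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, t, h):
        return self.f(h)

class ODEBlock(nn.Module):
    """One ODE solve in place of a stack of residual layers."""
    def __init__(self, func):
        super().__init__()
        self.func = func
        self.t = torch.tensor([0.0, 1.0])  # integration interval

    def forward(self, h):
        # Integrate the dynamics and return the state at t = 1.
        return odeint(self.func, h, self.t, rtol=1e-3, atol=1e-3)[-1]

block = ODEBlock(ODEFunc())
out = block(torch.randn(1, 16, 32, 32))
```

Because the dynamics share one set of weights across the whole integration interval, the block behaves like a deep residual stack at a fraction of the parameter count.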

In this thesis we aim to complete the following tasks:

  1. Experiment with different reconstruction kernels (B40, B70, etc.) to observe the effect of sharpness-dependent noise.
  2. Implement a neural ODE to speed up reinforcement learning convergence and to reduce the parameter count.
  3. Implement a data-consistent reward to ensure correct reconstruction and data-consistent denoising.
  4. Experiment with deep-learned quality metrics as additional reward functions for parameter tuning.

As a dataset, we will use the Mayo Clinic TCIA dataset for testing the quality of our denoising algorithms. Quality can be compared against standard-dose images using PSNR and SSIM, and can be calculated reference-free using the IRQM. If time permits, we can use deep model observers to assess low-contrast preservation.
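PSNR and SSIM are available in scikit-image, for instance; the IRQM and the deep model observers would have to be implemented separately:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(denoised, reference):
    """Full-reference quality of a denoised slice vs. the standard dose."""
    data_range = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, denoised, data_range=data_range)
    ssim = structural_similarity(reference, denoised, data_range=data_range)
    return psnr, ssim

ref = np.random.rand(512, 512)                  # stand-in standard-dose slice
den = ref + 0.01 * np.random.randn(512, 512)    # stand-in denoised low dose
print(compare(den, ref))
```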

Requirements:

  • Knowledge of CT reconstruction techniques. Knowledge of the ASTRA toolbox is a plus.
  • Understanding of reinforcement learning
  • Experience with PyTorch for developing neural networks
  • Experience with image processing

Interpolation of deformation field for brain-shift compensation using Gaussian Process

Brain shift is the change in position and shape of the brain during a neurosurgical procedure, caused by the additional space that becomes available after opening the skull. This intraoperative soft-tissue deformation limits the use of neuroanatomical overlays that were produced prior to the surgery. Consequently, intraoperative image updates are necessary to compensate for brain shift.

Comprehensive reviews concerning different aspects of intraoperative brain shift compensation can be found in [1][2]. Recently, feature-based registration frameworks using SIFT features [3] or vessel centerlines [4] have been proposed to update the preoperative image in a deformable fashion, where point matching algorithms such as coherent point drift [5] or a hybrid mixture model [4] are used to establish point correspondences between the source and target feature point sets. In order to estimate a dense deformation field from the point correspondences, B-spline [6] and thin-plate-spline [7] interpolation techniques are commonly used.

Gaussian processes (GPs) [8] are a powerful machine learning tool that has been applied to image denoising, interpolation, and segmentation. In this work, we aim at the application of different GP kernels for brain shift compensation. Furthermore, GP-based interpolation of the deformation field is compared with state-of-the-art methods.
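As a minimal starting point, scikit-learn's GP regressor can interpolate a dense 2D deformation field from sparse displacement vectors; the RBF-plus-noise kernel, its length scale, and the toy data below are example choices among the kernels to be implemented:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Sparse correspondences: 50 feature points with 2D displacements (toy data).
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(50, 2))    # feature locations
disp = np.sin(pts / 20.0)                  # stand-in displacement vectors

kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel).fit(pts, disp)

# Dense field: query every pixel of a 100 x 100 grid.
gx, gy = np.meshgrid(np.arange(100), np.arange(100))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
dense, std = gp.predict(grid, return_std=True)   # field plus uncertainty
dense_field = dense.reshape(100, 100, 2)
```

Unlike B-spline or thin-plate-spline interpolation, the GP additionally returns a point-wise predictive uncertainty, which may help in assessing the compensation quality.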

In detail, this thesis includes the following aspects:

  • Literature review of state-of-the-art methods for brain shift compensation using feature-based algorithms
  • Literature review of state-of-the-art methods for the interpolation of deformation/vector fields
  • Introduction of Gaussian processes (GP)
  • Integration of a GP-based interpolation technique into a feature-based brain shift compensation framework
    • Estimate a dense deformation field from a sparse deformation field using a GP
    • Implementation of at least three different GP kernels
    • Compare the performance of GP and state-of-the-art interpolation techniques on various datasets, including synthetic, phantom and clinical data, with respect to accuracy, usability and run time.

[1] Bayer, S., Maier, A., Ostermeier, M., & Fahrig, R. (2017). Intraoperative Imaging Modalities and Compensation for Brain Shift in Tumor Resection Surgery. International Journal of Biomedical Imaging, 2017.

[2] I. J. Gerard, M. Kersten-Oertel, K. Petrecca, D. Sirhan, J. A. Hall, and D. L. Collins, “Brain shift in neuronavigation of brain tumors: a review,” Medical Image Analysis, vol. 35, pp. 403–420, 2017.

[3] Luo J. et al. (2018) A Feature-Driven Active Framework for Ultrasound-Based Brain Shift Compensation. In: Frangi A., Schnabel J., Davatzikos C., Alberola-López C., Fichtinger G. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. MICCAI 2018. Lecture Notes in Computer Science, vol 11073. Springer, Cham

[4] Bayer S, Zhai Z, Strumia M, Tong XG, Gao Y, Staring M, Stoel B, Fahrig R, Arya N, Maier A, Ravikumar N. Registration of vascular structures using a hybrid mixture model. International Journal of Computer Assisted Radiology and Surgery, June 2019.

[5] Myronenko, A., Song, X.: Point set registration: Coherent point drift. IEEE Trans. Pattern Anal. Mach. Intell. 32(12), 2262–2275 (2010)

[6] D. Rueckert, L. I. Sonoda, C. Hayes, D. L. G. Hill, M. O. Leach and D. J. Hawkes, “Nonrigid registration using free-form deformations: application to breast MR images,” in IEEE Transactions on Medical Imaging, vol. 18, no. 8, pp. 712-721, Aug. 1999.

[7] F. L. Bookstein, “Principal warps: thin-plate splines and the decomposition of deformations,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 6, pp. 567-585, June 1989.

[8] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning, The MIT Press, 2006.

Covert Channel Vulnerabilities of Online Marketplaces – Impact on Antitrust Laws

Antitrust laws (also referred to as competition laws) are designed to promote vigorous competition and to protect consumers from predatory business practices. The paramount objectives of antitrust law are to guarantee working market mechanisms and to ensure fair competition. A prominent example of an infringement of antitrust law is illegal price fixing. By definition, this is an agreement among competitors that stabilizes prices or other competitive terms, thereby violating the principle that prices are established through free-market forces. A typical attribute of illegal price fixing is provable communication (written or oral) between human market participants.

However, in the era of digitalization and e-commerce, the detection of this illegal practice faces new challenges, since the price establishing mechanism is partially or fully automated (i.e., automated dynamic pricing) and the market participants are not necessarily human beings. Consequently, new technological opportunities are available to hide illegal pricing policies. One possible risk scenario is the use of so-called covert channels to transfer the information that facilitates illegal price fixing.

A communication channel is called covert if it was not originally designed for communication purposes [1]. Generally, covert channels can be categorized into two groups, namely resource (storage) channels and timing channels. To date, they are known as one of the most challenging phenomena in cyber security. Several publications have demonstrated applications that use covert channels to transfer critical information [2][3]. The goal of this thesis is therefore to investigate the vulnerability of online marketplaces with regard to illegal price fixing practices under covert channel attacks. The following aspects have to be included in this work:

  • Literature review of the state of the art with regard to covert channels,
  • Simulation of a price fixing scenario on an e-commerce marketplace utilizing a covert channel to transfer information (a toy timing-channel example follows this list),
  • Comparison of covert channels and conventional communication channels,
  • Derivation of implications and consequences for antitrust law.
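To make the notion of a timing channel concrete, the toy sketch below encodes bits purely in the delays between otherwise innocuous requests (e.g., routine price queries between two pricing bots); the delay values, threshold, and message are invented for illustration:

```python
import time

SHORT, LONG = 0.05, 0.2          # inter-request delays (s) encoding 0 and 1
THRESHOLD = (SHORT + LONG) / 2

def send(bits, emit_request):
    """Covert sender: the requests carry no payload; only their timing does."""
    for bit in bits:
        time.sleep(LONG if bit else SHORT)
        emit_request()           # e.g., an ordinary price query

def decode(timestamps):
    """Covert receiver: recover bits from gaps between observed requests."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return [1 if g > THRESHOLD else 0 for g in gaps]

# Demo with both ends in one process.
stamps = [time.monotonic()]      # reference point shared by both parties
send([1, 0, 1, 1, 0], lambda: stamps.append(time.monotonic()))
print(decode(stamps))            # -> [1, 0, 1, 1, 0]
```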

[1] Hans-Georg Eßer, Felix C. Freiling. Kapazitätsmessung eines verdeckten Zeitkanals über HTTP, Univ. Mannheim, Technical Report TR-2005-10, November 2005.

[2] Freiling F.C., Schinzel S. (2011) Detecting Hidden Storage Side Channel Vulnerabilities in Networked Applications. In: Camenisch J., Fischer-Hübner S., Murayama Y., Portmann A., Rieder C. (eds) Future Challenges in Security and Privacy for Academia and Industry. SEC 2011. IFIP Advances in Information and Communication Technology, vol 354. Springer, Berlin, Heidelberg.

[3] Davide B. Bartolini, Philipp Miedl, and Lothar Thiele. 2016. On the capacity of thermal covert channels in multicores. In Proceedings of the Eleventh European Conference on Computer Systems (EuroSys ’16). Association for Computing Machinery, New York, NY, USA, Article 24, 1–16.

Restoring lung CT images from photographs for AI applications

Motivation: Interstitial lung diseases (ILD) describe a group of acute or chronic diseases of the interstitium or the alveoli [1]. The diagnosis of ILD is very challenging since there are more than 200 different diseases, each of them occurring only rarely. The modality of choice for diagnosing ILD is computed tomography (CT), even though the different diseases cause similar or sometimes even identical imaging signs in the lung. Therefore, the results of the CT scan have to be combined with additional information like the history of the patient, the symptoms, and the laboratory values [2]. Approaches to assist doctors by including machine learning algorithms like a similar patient search (SPS) already exist [3]. The idea is to develop an app to take a photograph of the CT scan and process the image in order to start an SPS. The main focus of this work will be on the processing of the photograph in order to restore the CT properties of the original scan.
Methods: Taking photographs of a CT scan on a screen leads to a loss of the Hounsfield units and introduces artifacts like moiré patterns, light and mirroring artifacts, and imbalanced illumination. To restore the lung CT image from a photograph, a traditional approach using filters will be investigated in contrast to a deep learning approach. The new approach subtracts the screen pixel array in order to avoid moiré patterns, removes the other most critical artifacts from the photograph, and restores the lung CT window by converting the pixel values of the photograph back into Hounsfield units. The processed photograph can then be sent to the SPS tool in order to help doctors find the right diagnosis.
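The window inversion mentioned above can be illustrated as follows, assuming a linear display mapping with a typical lung window (center -600 HU, width 1500 HU); in practice the window settings would have to be estimated from the photographed screen:

```python
import numpy as np

def pixels_to_hu(pixels, center=-600.0, width=1500.0):
    """Invert a linear CT display window: map 8-bit screen intensities
    back to Hounsfield units, assuming the window settings are known."""
    low = center - width / 2.0                 # HU value displayed as black
    frac = pixels.astype(np.float32) / 255.0   # normalized intensity
    return low + frac * width

photo_gray = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # stand-in
hu = pixels_to_hu(photo_gray)                  # values in [-1350, 150] HU
```

The inversion is only exact for pixels inside the window; values clipped to black or white during display cannot be recovered.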
The Master’s thesis covers the following aspects:
1. Identification of the most critical artifacts appearing in photographs
2. Investigation of traditional and deep learning based approaches for artifact reduction
3. Determination of reading room conditions
4. Determination of an adequate framework and test criteria
5. Implementation of an image processing algorithm based on a literature research and the identified artifacts
6. Evaluation of the proposed method
Supervisors: Dr. Daniel Stromer, Dr. Christian Tietjen, Dr. Christoph Speier, Dr. med. Johannes Haubold, Prof. Dr.-Ing. habil. Andreas Maier

References
[1] B. Schönhofer and M. Kreuter, “Interstitielle Lungenerkrankungen,” in Referenz Intensivmedizin (G. Marx, K. Zacharowski, and S. Kluge, eds.), pp. 287–293, Stuttgart: Georg Thieme Verlag, 2020.
[2] M. Kreuter, U. Costabel, F. Herth, and D. Kirsten, eds., Seltene Lungenerkrankungen. Berlin and Heidelberg: Springer, 2015.
[3] Siemens Healthcare GmbH, “Similar patient search: syngo.via: VA20A,” 2021.

Automation of the flow cytometry workflow for leukemia diagnostics by leveraging machine learning

Background: Flow cytometry (FCM) is a technique for measuring the physical and chemical properties of individual cells suspended in a fluid stream. FCM is widely used in immunology and in many clinical and biomedical laboratories for the diagnosis, subclassification, and post-treatment monitoring of blood cancers (leukemias). Generally, a single FCM session produces multidimensional readouts of 10,000 to 1,000,000 cells with 4 to 12 parameters.
The conventional diagnostic workflow involves visualizing the FCM dataset in a series of 2D scatter plots in which experts evaluate the different characteristics of the cell populations. Based on this inspection, the pathologists identify a sub-population of cells (gating) and quantify it for further analysis and diagnosis.
Motivation: This conventional analytic process is performed manually on a sequence of two-dimensional scatter plots. Repeating it on multiple datasets is very time-consuming and labour-intensive, and it leads to clinical decisions that depend on the individual who performs the analysis, which causes further challenges.
Approach: Our approach is to automate these conventional workflows by leveraging machine learning techniques, thereby supporting pathologists and clinicians in their daily routine and research work. The main objective of this thesis is the automated identification of small amounts of residual atypical cells in patients with leukemia (minimal residual disease, MRD).
The following is an overview of the tasks involved in the development of the project:
1. Data selection: finding an unsupervised algorithm to search for “islands” that contain mainly events from the same sample, but only a few events from different samples.
2. Dimensionality reduction algorithms [1]: implementing further algorithms (e.g., UMAP) and validating their effect against the existing t-SNE algorithm (see the sketch following this list).
3. Optimization: performing an optimization of t-SNE based on the opt-SNE algorithm [2].
4. Performing evaluation and testing.
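For task 2, a minimal comparison of the two embeddings could look like the sketch below; the `umap-learn` package and the synthetic stand-in events are assumptions:

```python
import numpy as np
from sklearn.manifold import TSNE
import umap  # pip install umap-learn

# Stand-in for one FCM session: 10,000 events with 10 parameters each.
events = np.random.rand(10_000, 10).astype(np.float32)

# Embed the same events with both methods for a side-by-side comparison.
emb_tsne = TSNE(n_components=2, perplexity=30).fit_transform(events)
emb_umap = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(events)

print(emb_tsne.shape, emb_umap.shape)  # (10000, 2) each, ready to plot
```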
References
[1] Y. Saeys, S. Van Gassen, and B. Lambrecht, “Computational flow cytometry: Helping to make sense of high-dimensional immunology data,” Nature Reviews Immunology, vol. 16, 06 2016.
[2] A. C. Belkina, C. O. Ciccolella, R. Anno, R. Halpert, J. Spidlen, and J. E. Snyder-Cappione, “Automated optimized parameters for t-distributed stochastic neighbor embedding improve visualization and allow analysis of large datasets,” bioRxiv, 2019. [Online]. Available: https://www.biorxiv.org/content/early/2019/05/17/451690

Prostate Lesion Detection using Multi-Parametric Magnetic Resonance Imaging