Index

Manifold Forests

Random Forests for Manifold Learning

 

Description: There are many different methods for manifold learning, such as Locally Linear Embedding, MDS, ISOMAP, or Laplacian Eigenmaps. All of them build a type of local neighborhood that approximates the relationships in the data locally, and then find a lower-dimensional representation that preserves these local relationships. One way to learn a partitioning of the feature space is to train a density forest on the data [1]. The goal of this project is to implement a Manifold Forest algorithm that finds a 1-D signal of length N in a series of N input images by learning a density forest on the data and afterwards applying Laplacian Eigenmaps. Existing frameworks, such as [2], [3], or [4], can be used as the forest implementation. The Laplacian Eigenmaps algorithm is already implemented and can be integrated.
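As a rough sketch of the intended pipeline, the following uses scikit-learn's RandomTreesEmbedding (reference [4]) as the forest, builds a leaf-co-occurrence affinity averaged over trees, and applies Laplacian Eigenmaps via SpectralEmbedding. The toy 2-D data stands in for image features; all names and data here are illustrative assumptions, not the project's code.

```python
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
# Toy stand-in for the N input images: feature vectors along a hidden 1-D curve.
N = 200
t = np.sort(rng.uniform(0.0, 1.0, N))
X = np.c_[t, np.sin(3.0 * t)] + 0.02 * rng.normal(size=(N, 2))

# 1) Partition the feature space with a forest; two samples count as similar
#    if they fall into the same leaf, averaged over all trees.
forest = RandomTreesEmbedding(n_estimators=100, random_state=0).fit(X)
leaves = forest.apply(X)                       # shape (N, n_trees)
W = np.zeros((N, N))
for tree in range(leaves.shape[1]):
    W += (leaves[:, tree:tree + 1] == leaves[:, tree]).astype(float)
W /= leaves.shape[1]                           # forest affinity in [0, 1]

# 2) Laplacian Eigenmaps on the forest affinity yields the 1-D signal.
embedding = SpectralEmbedding(n_components=1, affinity="precomputed")
signal = embedding.fit_transform(W)            # shape (N, 1)
```

The affinity matrix is symmetric with unit diagonal by construction, so it can be fed directly to the precomputed-affinity spectral embedding.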

The concept of Manifold Forests is also introduced in the FAU lecture Pattern Analysis by Christian Riess; candidates who have already attended this lecture are therefore preferred.

This project is intended for students wanting to do a 5 ECTS module such as a research internship, starting now or as soon as possible. The project will be implemented in Python.

 

References:

[1]: Criminisi, A., Shotton, J., & Konukoglu, E. (2012). Decision Forests: A Unified Framework for Classification, Regression, Density Estimation, Manifold Learning and Semi-Supervised Learning. Foundations and Trends® in Computer Graphics and Vision, 7(2–3), 81–227. https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/CriminisiForests_FoundTrends_2011.pdf

[2]: https://github.com/CyrilWendl/SIE-Master

[3]: https://github.com/ksanjeevan/randomforest-density-python

[4]: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomTreesEmbedding.html#sklearn.ensemble.RandomTreesEmbedding

 

Transfer Learning for Re-identification on Chest Radiographs

Helical CT Reconstruction with Bilateral Sinogram/Volume Domain Denoisers

Helical CT is the most commonly used scan protocol in clinical CT today. It generally applies a cone-beam scan along a spiral trajectory over the object to be scanned. The collected sinograms, and subsequently the reconstructed volumes, contain some amount of noise due to fluctuations in the line integrals. Removing this noise is necessary for diagnostic image quality.

In previous research, we developed a reinforcement learning based method to denoise cone-beam CT. It uses denoisers in both the sinogram and the reconstructed image domain: bilateral filters whose sigma parameters are tuned by a convolutional agent. Reconstruction was carried out with the FDK algorithm from the ASTRA toolbox.

Due to time constraints, our previous work was limited to the simpler problem of circular cone-beam CT. In this research internship, we aim to extend the method to helical CT as well. Since helical CT also uses cone-beam projections, we expect the method to work out of the box without any retraining.
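For illustration, a minimal (unoptimized) 2-D bilateral filter of the kind whose sigma parameters the agent tunes might look as follows. The parameter names, window size, and toy sinogram are our assumptions, not the project's implementation:

```python
import numpy as np

def bilateral_filter(img, sigma_spatial=2.0, sigma_intensity=0.1, radius=3):
    """Edge-preserving smoothing: weights combine spatial and intensity closeness."""
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_spatial ** 2))
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_intensity ** 2))
            w = spatial * rangew
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# Noisy toy "sinogram": the filter should reduce the error against the clean image.
rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, np.pi, 64)), np.ones(64))
noisy = clean + 0.05 * rng.normal(size=clean.shape)
denoised = bilateral_filter(noisy)
```

In the actual method, separate filters of this kind act in the sinogram and volume domains, with the two sigmas predicted per image by the convolutional agent.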

The following tasks are to be conducted as part of this research internship:

  1. Develop methods to reconstruct helical CT from the given sinograms, e.g. ADMM or WFBP.
  2. Formulate a reinforcement learning task and train denoisers for helical CT in the sinogram and volume domains.
  3. Investigate ways to train without ground-truth volumes while obtaining image quality better than currently existing methods.
  4. Train current volume-based neural network solutions (GAN-3D, WGAN-VGG, CPCE3D, QAE, etc.) and compare against them.
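To give a feel for the reward structure behind task 2, the sketch below replaces the convolutional agent with an exhaustive one-step search over a single filter parameter, using a Gaussian filter as a cheap stand-in for the bilateral denoiser and negative MSE against a clean toy image as the reward. Everything here is a deliberately simplified assumption, not the project's formulation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = gaussian_filter(rng.normal(size=(64, 64)), 4.0)   # smooth toy image
noisy = clean + 0.1 * rng.normal(size=clean.shape)

def reward(sigma):
    # Stand-in reward: negative MSE of the denoised image vs. the clean one.
    denoised = gaussian_filter(noisy, sigma)
    return -np.mean((denoised - clean) ** 2)

# "One-step policy": pick the sigma that maximizes the reward.
sigmas = np.linspace(0.1, 4.0, 20)
best_sigma = max(sigmas, key=reward)
```

In the real setting the reward cannot be computed against a clean volume (see task 3), which is exactly why unsupervised or self-supervised reward formulations are of interest.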

Requirements:

  • Knowledge of CT reconstruction techniques
  • Understanding of reinforcement learning
  • Experience with PyTorch for developing neural networks
  • Experience with image processing. Knowledge of the ASTRA toolbox is a plus.

Deep Learning based Model Observers for Multi-modal Imaging

Task-based measurements are needed to assess the quality of medical images. Common task-based measures rely on model observers, which quantify the confidence that an object (e.g., a tumor or another structure) is present in a particular image. Common model observers in medical imaging include the Channelized Hotelling Observer for CT image quality and the Non-Prewhitening (NPW) Filter for SPECT image quality.

Current implementations of model observers for task-based measurements are executed on phantoms. Using phantoms makes the measurement a signal-known-exactly/background-known-exactly (SKE/BKE) task. However, this means that task-based measurements cannot be transferred directly to a clinical task without prior knowledge. Moreover, multiple noise realisations of a single phantom are needed to obtain meaningful results from a model observer.
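As a toy illustration of the SKE setting, a non-prewhitening matched-filter observer can be sketched as follows. The Gaussian signal, white-noise background, and detectability index are generic textbook choices, not this project's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 32
yy, xx = np.mgrid[:H, :W]
# Signal known exactly (SKE): a Gaussian blob at the image center.
signal = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / (2 * 3.0 ** 2))

def npw_statistic(image, template):
    # NPW observer: correlate the image with the expected signal (lambda = s^T g).
    return float((template * image).sum())

# Multiple noise realisations, with and without the signal present.
n = 200
absent = rng.normal(0.0, 1.0, (n, H, W))
present = absent + signal
t_absent = np.array([npw_statistic(g, signal) for g in absent])
t_present = np.array([npw_statistic(g, signal) for g in present])

# Detectability index d' measures how well the two score distributions separate.
dprime = (t_present.mean() - t_absent.mean()) / np.sqrt(
    0.5 * (t_present.var() + t_absent.var()))
```

A deep-learning-based observer would replace the fixed template with a learned scoring function, ideally one that transfers across modalities.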

Deep learning has been used to replicate the behaviour of model observers. However, no work has been done on a general model observer that works across imaging modalities. In this project, we would be interested in investigating the following:

  1. The possibility of using deep learning to create a ‘general’ model observer
  2. Cross modality performance of a deep learned model observer
  3. Training this model observer with zero-, one-, or few-shot learning for greater future generalisation.

We would look for someone who could support us with the following skills:

  1. Knowledge in Python/C++ programming
  2. Some knowledge of image processing. Medical image processing and DICOM standards are a plus.
  3. Knowledge of relevant libraries like NumPy, OpenCV, PyTorch, TensorFlow
  4. Experience with model observers is a plus (not strictly necessary)

Augmentation of CT Images by Variation of Non-Rigid Deformation Vector Field Amplitudes

Synergistic Radiomics and CNN Features for Multiparametric MRI Lesion Classification

Breast cancer is the most frequent cancer among women, affecting 2.1 million women each year. Breast magnetic resonance imaging (MRI) can assist in diagnosing patients with breast cancer, measuring the size of existing breast tumors, and checking for tumors in the opposite breast. MRI has the advantages that patients are not exposed to ionizing radiation during the examination and that it captures the entire breast volume. Meanwhile, machine learning methods have been shown in many fields to classify images accurately by assigning a probability score that estimates the likelihood of an image belonging to a certain category. Building on these properties, this project investigates whether applying machine learning approaches to breast tumor MRI can accurately predict the tumor type (malignant or benign) for diagnostic purposes.
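A minimal sketch of the intended feature fusion, using synthetic stand-ins for radiomics and CNN features and a logistic-regression classifier that outputs a malignancy probability. The feature dimensions, classifier choice, and data are assumptions for illustration only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
radiomics = rng.normal(size=(n, 8))    # stand-in for shape/texture statistics
cnn_feats = rng.normal(size=(n, 16))   # stand-in for learned CNN activations
# Synthetic label: "malignant" depends on one feature from each source.
y = (radiomics[:, 0] + cnn_feats[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Synergistic fusion: concatenate both feature sets before classification.
X = np.hstack([radiomics, cnn_feats])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
proba = clf.predict_proba(Xte)[:, 1]   # per-lesion malignancy probability
```

The real pipeline would extract radiomics features from segmented lesions in the multiparametric MRI volumes and CNN features from a trained network, but the fusion-then-classify structure is the same.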

Dilated deeply supervised networks for hippocampus segmentation in MR

Tissue loss in the hippocampi is strongly correlated with the progression of Alzheimer's Disease (AD). The shape and structure of the hippocampus are important factors for early AD diagnosis and prognosis by clinicians. However, manual segmentation of such subcortical structures in MR studies is a challenging and subjective task. In this work, we investigate variants of the well-known 3D U-Net, a convolutional neural network (CNN) for semantic segmentation tasks. We propose an alternative form of the 3D U-Net that uses dilated convolutions and deep supervision to incorporate multi-scale information into the model. The proposed method is evaluated on hippocampus head and body segmentation in an MRI dataset provided as part of the MICCAI 2018 segmentation decathlon challenge. The experimental results show that our approach outperforms other conventional methods in terms of several segmentation accuracy metrics.
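The key property exploited by the dilated variant can be shown in one dimension: dilation enlarges the receptive field without adding parameters. A minimal NumPy sketch (the real model uses 3-D convolutions inside a U-Net; this is illustration only):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """Valid 1-D convolution with dilated taps: same kernel, wider view."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
# Both calls use 3 weights; dilation=2 covers a span of 5 samples instead of 3.
same_params = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=1)
wider_view = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2)
```

Stacking such layers grows the receptive field quickly, which is how the proposed network aggregates multi-scale context without additional downsampling.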

Analysis of NVIDIA OptiX Engine for Ray Tracing in SPECT

Looking for a student for the project: Analysis of NVIDIA OptiX as a ray tracing platform for SPECT forward projection.

Topic motivation

  • Ray tracing is widely used in video games to determine which objects in a scene are visible from the observer's viewpoint.
  • It is also used to compute the shadows, lighting, and reflections portrayed on the screen.
  • OptiX is a powerful ray tracing API developed by NVIDIA, notable for its modularity and flexibility. In 2015, OptiX was used to model a SPECT system, achieving a significant speed-up over other simulation frameworks for the same task [1].

Project description

The project would consist of five parts:

  • Part I: Set up Optix as a ray tracing framework for nuclear imaging, without physics
  • Part II: Run a simulation with a simple SPECT parallel hole collimator
  • Part III: Set up Optix as a ray tracing framework for nuclear imaging, with physics
  • Part IV: Validation of the tool with simulated data from SIMIND (data provided)
  • Part V: Validation of the tool with data acquired from a system (data provided)
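The geometry behind Parts I and II can be illustrated with a toy parallel-beam forward projection: with an ideal parallel-hole collimator, each detector bin integrates activity along one parallel ray, so a single view reduces to a sum along one axis of the rotated activity grid (attenuation and collimator blur ignored). The names and simplifications here are ours, not the project's; OptiX would replace this with hardware-accelerated ray traversal.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(activity, angles_deg):
    # One sinogram row per view: rotate the grid, then integrate along rays.
    return np.stack([
        rotate(activity, a, reshape=False, order=1).sum(axis=0)
        for a in angles_deg
    ])

phantom = np.zeros((64, 64))
phantom[24:40, 28:36] = 1.0                 # simple "hot" activity region
sino = forward_project(phantom, np.arange(0, 180, 30))   # 6 views, 64 bins each
```

Total activity is preserved across views (up to interpolation error), a basic sanity check any ray-traced forward projector should also pass.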

Success measurements:

  • The project is considered successful after Part II.
  • At Part IV, it could become a conference paper.

 

Other information:

  • The topic can be a 5 or 10 ECTS Research/Master Project and can also be extended to a thesis.
  • Contact: maximilian.reymann@fau.de
  • Applicants ideally have experience with C++ or GPU programming, or are looking to gain expertise in these areas.

GAN Generated Model Observer for one Class Detection in SPECT Imaging

CNN-Based Projected Gradient Descent for Consistent CT Image Reconstruction