Index

Development of Process Workflows for Research Collaboration in Data-Driven and Cross-Institutional Research Projects

This article addresses the development of process workflows in cross-institutional, data-driven research projects. It examines whether the process chains can be standardized, to what extent governance structures of the Medizininformatikinitiative (MII) can be incorporated for data-driven research projects, and, finally, whether this can increase the confidence with which those involved act. To this end, the as-is workflows of completed collaborations were compared with the standards recommended by the MII and, drawing on the expertise of the staff involved, translated into process chains. This resulted in process workflows that, through cascading process chains, explanatory notes, and checklists, form a standardized guideline for collaborative projects. The documents also help to avoid errors within the individual process elements in the future and make collaborative projects easier, more goal-oriented, and clearer to carry out.

Graph Augmentation using Cond.-GANs

Post-Processing of DTF-Skeletonizations

Cephalometric Landmark Re-annotation and Automatic Detection

Manifold Forests

Random Forests for Manifold Learning

 

Description: There are many different methods for manifold learning, such as Locally Linear Embedding, MDS, ISOMAP, or Laplacian Eigenmaps. All of them use some type of local neighborhood to approximate the relationship of the data locally, and then try to find a lower-dimensional representation which preserves this local relationship. One way to learn such a partitioning of the feature space is to train a density forest on the data [1]. The goal of this project is to implement a Manifold Forest algorithm that finds a 1-D signal of length N in a series of N input images by first learning a density forest on the data and then applying Laplacian Eigenmaps. Existing frameworks such as [2], [3], or [4] can be used as the forest implementation. The Laplacian Eigenmaps algorithm is already implemented and can be integrated.
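To make the intended pipeline concrete, below is a minimal sketch, assuming the scikit-learn forest from [4] as the partitioning stage and scikit-learn's SpectralEmbedding as the Laplacian Eigenmaps step; the random feature matrix, the forest parameters, and the leaf-sharing affinity are illustrative placeholders, not the project's final design.

  # Minimal Manifold Forest sketch: a random-tree ensemble [4] partitions the feature
  # space, samples that fall into the same leaves define an affinity matrix (in the
  # spirit of the manifold forests in [1]), and Laplacian Eigenmaps (spectral
  # embedding) then yields one value per image, i.e. the 1-D signal. Data is random here.
  import numpy as np
  from sklearn.ensemble import RandomTreesEmbedding
  from sklearn.manifold import SpectralEmbedding

  rng = np.random.default_rng(0)
  X = rng.normal(size=(200, 64))        # N images, flattened to feature vectors (placeholder)

  forest = RandomTreesEmbedding(n_estimators=100, max_depth=5, random_state=0).fit(X)
  leaves = forest.apply(X)              # (N, n_trees): leaf index of every sample in every tree

  # Affinity: fraction of trees in which two samples end up in the same leaf.
  affinity = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)

  # Laplacian Eigenmaps on the forest affinity -> 1-D embedding, one value per image.
  signal = SpectralEmbedding(n_components=1, affinity="precomputed").fit_transform(affinity).ravel()
  print(signal.shape)                   # (200,)

A density forest as described in [1] would replace the unsupervised RandomTreesEmbedding used here; the affinity construction and the embedding step stay the same.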

The concept of Manifold Forests is also introduced in the FAU lecture Pattern Analysis by Christian Riess; candidates who have already attended this lecture are therefore preferred.

This project is intended for students looking for a module worth 5 ECTS, such as a research internship, starting now or as soon as possible. The project will be implemented in Python.

 

References:

[1]: Criminisi, A., Shotton, J., & Konukoglu, E. (2012). Decision Forests: A Unified Framework for Classification, Regression, Density Estimation, Manifold Learning and Semi-Supervised Learning. Foundations and Trends® in Computer Graphics and Vision, 7(2–3), 81–227. https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/CriminisiForests_FoundTrends_2011.pdf

[2]: https://github.com/CyrilWendl/SIE-Master

[3]: https://github.com/ksanjeevan/randomforest-density-python

[4]: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomTreesEmbedding.html#sklearn.ensemble.RandomTreesEmbedding

 

Transfer Learning for Re-identification on Chest Radiographs

Helical CT Reconstruction with Bilateral Sinogram/Volume Domain Denoisers

Helical CT is the most commonly used scan protocol in clinical CT today. In helical CT, a cone-beam scan is generally performed along a spiral trajectory over the object to be scanned. The collected sinograms, and subsequently the reconstructed volumes, contain a certain amount of noise due to fluctuations in the line integrals. Removing this noise is necessary to achieve diagnostic image quality.

In previous research, we developed a reinforcement-learning-based method to denoise cone-beam CT. This method uses denoisers in both the sinogram domain and the reconstructed image domain. The denoisers are bilateral filters whose sigma parameters are tuned by a convolutional agent. The reconstruction is carried out with the FDK algorithm in the ASTRA toolbox.
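The two filtering stages can be sketched as follows; this is only a rough illustration, assuming skimage's bilateral filter and NumPy arrays for the projections and the volume, with the sigma pairs standing in for the per-image actions predicted by the convolutional agent (the FDK reconstruction with the ASTRA toolbox sits between the two stages and is omitted here).

  # Rough sketch of the two bilateral denoising stages, with agent-chosen sigmas.
  import numpy as np
  from skimage.restoration import denoise_bilateral

  def filter_sinogram(sinogram, sigmas):
      # sinogram: (det_rows, n_views, det_cols); one (sigma_color, sigma_spatial) pair
      # per projection view, as predicted by the agent. Values are clipped to be
      # non-negative, as expected by denoise_bilateral.
      sinogram = np.clip(sinogram, 0, None)
      views = [denoise_bilateral(sinogram[:, i, :],
                                 sigma_color=sigmas[i][0], sigma_spatial=sigmas[i][1])
               for i in range(sinogram.shape[1])]
      return np.stack(views, axis=1)

  def filter_volume(volume, sigmas):
      # volume: (n_slices, H, W), filtered slice by slice with agent-chosen sigmas.
      volume = np.clip(volume, 0, None)
      return np.stack([denoise_bilateral(volume[k],
                                         sigma_color=sigmas[k][0], sigma_spatial=sigmas[k][1])
                       for k in range(volume.shape[0])])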

Due to time constraints, our previous research was limited to the simpler problem of circular cone-beam CT. In this research internship, we hope to extend the method to denoise helical CT as well. Since helical CT uses cone-beam projections, we hope that our method will work out of the box, without any retraining.

The following tasks are to be conducted as part of this research internship:

  1. Develop methods to reconstruct helical CT from the given sinograms, e.g., ADMM or WFBP (see the geometry sketch after this list)
  2. Formulate and train a reinforcement learning task to train denoisers for helical CT in the sinogram and volume domains
  3. Find ways to train without ground-truth volumes while obtaining image quality better than currently existing methods
  4. Train current volume-based neural network solutions (GAN-3D, WGAN-VGG, CPCE3D, QAE, etc.) and compare them
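For task 1, the following is a minimal sketch of how a helical trajectory could be described in the ASTRA toolbox via its 'cone_vec' geometry; the pitch, distances, detector size, and the generic CGLS reconstruction used in place of ADMM or WFBP are illustrative assumptions only.

  # Sketch: helical cone-beam geometry as an ASTRA 'cone_vec' geometry, reconstructed
  # with a generic iterative GPU algorithm as a stand-in for ADMM/WFBP.
  import numpy as np
  import astra

  n_views, det_rows, det_cols = 720, 64, 736
  src_orig, orig_det, pitch = 595.0, 490.6, 30.0               # mm, illustrative values
  angles = np.linspace(0, 4 * np.pi, n_views, endpoint=False)  # two turns of the helix
  z = pitch * angles / (2 * np.pi)                             # table feed along the rotation axis

  # Each row: source, detector centre, detector u-axis (columns), detector v-axis (rows).
  vectors = np.zeros((n_views, 12))
  vectors[:, 0:3] = np.c_[np.sin(angles) * src_orig, -np.cos(angles) * src_orig, z]
  vectors[:, 3:6] = np.c_[-np.sin(angles) * orig_det, np.cos(angles) * orig_det, z]
  vectors[:, 6:9] = np.c_[np.cos(angles), np.sin(angles), np.zeros(n_views)]
  vectors[:, 9:12] = np.c_[np.zeros(n_views), np.zeros(n_views), np.ones(n_views)]

  proj_geom = astra.create_proj_geom('cone_vec', det_rows, det_cols, vectors)
  vol_geom = astra.create_vol_geom(256, 256, 128)

  sinogram = np.zeros((det_rows, n_views, det_cols), dtype=np.float32)  # placeholder projection data
  proj_id = astra.data3d.create('-sino', proj_geom, sinogram)
  rec_id = astra.data3d.create('-vol', vol_geom)

  cfg = astra.astra_dict('CGLS3D_CUDA')
  cfg['ReconstructionDataId'] = rec_id
  cfg['ProjectionDataId'] = proj_id
  alg_id = astra.algorithm.create(cfg)
  astra.algorithm.run(alg_id, 30)              # 30 CGLS iterations
  volume = astra.data3d.get(rec_id)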

Requirements:

  • Knowledge of CT reconstruction techniques
  • Understanding of reinforcement learning
  • Experience with PyTorch for developing neural networks
  • Experience with image processing. Knowledge of the ASTRA toolbox is a plus.

Deep Learning based Model Observers for Multi-modal Imaging

Task-based measurements are needed to measure the quality of medical images. Common task-based measures include the use of model observers. Model observers measure the confidence that an object (e.g., a tumor or another structure) is present in a particular image. Common model observers in medical imaging include the Channelised Hotelling Observer for CT image quality and the Non-Prewhitening Matched Filter for SPECT image quality.
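As an illustration of the idea, the following is a minimal sketch of a channelised Hotelling observer for an SKE/BKE task, assuming paired sets of signal-present and signal-absent noise realisations as NumPy arrays; the crude radial channels are a placeholder for the Laguerre-Gauss or Gabor channels normally used.

  # Sketch of a channelised Hotelling observer: channelise both image classes,
  # build the Hotelling template from the channel statistics, and report the
  # detectability index d' of the resulting decision statistics.
  import numpy as np

  def radial_channels(size, n_channels=5):
      # Crude radially symmetric spatial channels centred on the (known) signal
      # location; a stand-in for Laguerre-Gauss or Gabor channels.
      y, x = np.indices((size, size)) - size // 2
      r = np.sqrt(x**2 + y**2)
      edges = np.linspace(0, size // 2, n_channels + 1)
      return np.stack([((r >= lo) & (r < hi)).astype(float).ravel()
                       for lo, hi in zip(edges[:-1], edges[1:])])

  def cho_detectability(present_imgs, absent_imgs):
      # present_imgs / absent_imgs: (n_realisations, size, size) noise realisations
      # with and without the object of interest.
      size = present_imgs.shape[-1]
      U = radial_channels(size)                                  # (n_channels, size*size)
      vp = present_imgs.reshape(len(present_imgs), -1) @ U.T     # channelised responses
      va = absent_imgs.reshape(len(absent_imgs), -1) @ U.T
      S = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
      w = np.linalg.solve(S, vp.mean(axis=0) - va.mean(axis=0))  # Hotelling template
      tp, ta = vp @ w, va @ w                                    # decision statistics
      return (tp.mean() - ta.mean()) / np.sqrt(0.5 * (tp.var() + ta.var()))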

Current implementations of model observers for task-based measurements are executed on phantoms. The use of phantoms makes the task-based measurement a signal-known-exactly/background-known-exactly (SKE/BKE) task. However, this also means that task-based measurements cannot be transferred directly to a clinical task without prior knowledge. Moreover, multiple noise realisations of a single phantom are needed to obtain meaningful results from a model observer.

Deep learning has been used to replicate the behaviour of model observers. However, no work has been done on a general model observer that works across imaging modalities. In this project, we are interested in investigating the following:

  1. The possibility of using deep learning to create a ‘general’ model observer
  2. The cross-modality performance of a deep-learning-based model observer
  3. Training this model observer with zero-, one-, or few-shot learning for greater future generalisation

We are looking for someone who can support us with the following skills:

  1. Knowledge in Python/C++ programming
  2. Some knowledge of image processing. Knowledge of medical image processing and the DICOM standard is a plus.
  3. Knowledge of relevant libraries like NumPy, OpenCV, PyTorch, TensorFlow
  4. Experience with model observers is a plus (not strictly necessary)

Augmentation of CT Images by Variation of Non-Rigid Deformation Vector Field Amplitudes

Synergistic Radiomics and CNN Features for Multiparametric MRI Lesion Classification

Breast cancer is the most frequent cancer among women, affecting 2.1 million women each year. To assist in diagnosing patients with breast cancer, to measure the size of existing breast tumors, and to check for tumors in the opposite breast, breast magnetic resonance imaging (MRI) can be applied. MRI has the advantages that patients are not exposed to ionizing radiation during the examination and that it captures the entire breast volume. Meanwhile, machine learning methods have been shown in many fields to classify images accurately by assigning a probability score that estimates the likelihood of an image belonging to a certain category. Given these properties, this project aims to investigate whether applying machine learning approaches to breast tumor MRI can accurately predict the tumor type (malignant or benign) for diagnostic purposes.
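As a starting point, the kind of pipeline implied by the project title could look roughly like the sketch below; the lesion patches, radiomics feature vectors, and labels are random placeholders, the ResNet-18 backbone is only an assumed example, and the logistic regression stands in for whichever classifier is finally chosen.

  # Sketch: concatenate radiomics features with CNN features from a pretrained
  # backbone and train a classifier that outputs a malignancy probability.
  import numpy as np
  import torch
  from torchvision.models import resnet18
  from sklearn.linear_model import LogisticRegression

  n_lesions = 120
  patches = np.random.rand(n_lesions, 3, 224, 224).astype(np.float32)  # MRI lesion patches (placeholder)
  radiomics = np.random.rand(n_lesions, 100)                           # radiomics features (placeholder)
  labels = np.random.randint(0, 2, n_lesions)                          # 0 = benign, 1 = malignant (placeholder)

  # CNN features: activations of a pretrained backbone with the classification head removed.
  backbone = resnet18(weights="IMAGENET1K_V1")
  backbone.fc = torch.nn.Identity()
  backbone.eval()
  with torch.no_grad():
      cnn_feats = backbone(torch.from_numpy(patches)).numpy()          # (n_lesions, 512)

  # Fuse both feature families and train a probabilistic classifier.
  features = np.concatenate([cnn_feats, radiomics], axis=1)
  clf = LogisticRegression(max_iter=1000).fit(features, labels)
  malignancy_prob = clf.predict_proba(features)[:, 1]                  # per-lesion probability score

In practice the classifier would of course be evaluated on a held-out split rather than on the training data used here.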