
Heat Demand Forecasting with Multi-Resolutional Representation of Heterogeneous Temporal Ensemble

Accurate forecasting of heat consumption plays an important role in the effective management of a heating utility, for example in unit commitment, short-term maintenance, and optimization of the network's power flow. Inaccurate forecasting of consumption may increase operating costs: over-forecasting leads to unnecessary reserve costs and excess supply, while under-forecasted loads result in high expenditures for peaking units. Hence it is important for a utility, such as a district heating network, to forecast consumption accurately. Heat consumption patterns can vary depending on the external temperature and the day of usage, such as holidays or weekends.

The consumption pattern also varies with consumer type, operational factors, and consumer activities. This motivates capturing the load-profile characteristics of individual consumers in order to reduce model variance when using ML-based forecasters. The forecasting model should therefore capture features of a user's consumption pattern along three dimensions: time, frequency, and magnitude. The time series of heat consumption contains several discontinuities or abrupt jumps which may carry important information. Highly accurate prediction of an end-user's heat consumption could therefore be achieved by incorporating these discontinuities through the approximation of functional non-linearity. Moreover, to ensure the generalizability of the model across different types of end-users, the forecasting model should capture features of the consumption patterns of different end-user types. This project also investigates the research question of model generalizability by evaluating performance quantitatively and qualitatively. The thesis consists of the following aspects:
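
As a starting point for the time-frequency-magnitude view described above, a multi-resolutional representation can be obtained with a discrete wavelet transform, in the spirit of the wavelet-based forecasters in [1, 3]. The following is a minimal sketch, assuming an hourly consumption series and the PyWavelets library; the wavelet family, decomposition level, and window length are illustrative choices only, not prescribed by this thesis description.

```python
import numpy as np
import pywt

def multiresolution_features(consumption: np.ndarray, wavelet: str = "db4", level: int = 3):
    """Decompose a heat-consumption window into a multi-resolution feature vector.

    Combines three views of the signal:
      - magnitude: simple statistics of the raw window,
      - frequency: energy of each wavelet detail band (captures abrupt jumps),
      - time/trend: the coarse approximation coefficients.
    """
    coeffs = pywt.wavedec(consumption, wavelet, level=level)  # [cA_level, cD_level, ..., cD_1]
    approx, details = coeffs[0], coeffs[1:]

    magnitude = [consumption.mean(), consumption.std(), consumption.max(), consumption.min()]
    band_energy = [float(np.sum(d ** 2)) for d in details]    # one energy value per frequency band
    return np.concatenate([magnitude, band_energy, approx])

# Illustrative usage on a synthetic weekly profile containing a discontinuity
hours = np.arange(168)                                         # one week of hourly data
demand = 50.0 + 10.0 * np.sin(2 * np.pi * hours / 24)          # daily cycle
demand[120:] += 15.0                                           # abrupt jump (e.g. a cold front)
features = multiresolution_features(demand)
print(features.shape)
```

Such a feature vector could then be fed to any downstream regressor; the point of the sketch is only to show how the three dimensions can be represented jointly.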

  • Literature review of heat consumption classification in district heating networks.
  • Analysis and understanding of the heat consumption data from a utility.
  • Development of a sophisticated heat consumption forecaster.
  • Comprehensive evaluation of the forecasting performance by comparing with existing forecasting models.

References:
[1] Y. Zhao, Y. Shen, Y. Zhu and J. Yao, “Forecasting Wavelet Transformed Time Series with Attentive Neural Networks,” 2018 IEEE International Conference on Data Mining (ICDM), 2018, pp. 1452-1457, doi: 10.1109/ICDM.2018.00201.
[2] S. Chatterjee, S. Bayer and A. K. Maier, “Prediction of Household-level Heat-Consumption using PSO enhanced SVR Model,” NeurIPS 2021 Workshop on Tackling Climate Change with Machine Learning, 2021. https://www.climatechange.ai/papers/neurips2021/42
[3] S. Kováč, G. Michaľčonok, I. Halenár and P. Važan, “Comparison of Heat Demand Prediction Using Wavelet Analysis and Neural Network for a District Heating Network,” Energies, Special Issue “Artificial Intelligence in the Energy Industry,” 2021. https://www.mdpi.com/1996-1073/14/6/1545

Design and Evaluation of Machine Learning Applications for Space Systems

This thesis aims at designing and evaluating two open-source, representative machine learning applications for on-board data processing in space systems, as part of the OBPMark-ML benchmarking suite. Recently, there has been increased interest in the adoption of machine learning and artificial intelligence methods in on-board processing, as demonstrated by the European Space Agency’s (ESA) Phi-Sat-1 In-Orbit-Demonstration (IOD) mission launched in 2020 [1,2]. In addition, future missions are expected to rely on machine learning and deep learning methods to offer increased autonomy.

However, it is not clear which hardware architectures should be employed in such future systems. Currently, space systems use simple processors specifically designed for space, which cannot provide the required performance. Therefore, several alternatives are currently being investigated as future candidates for space, such as embedded GPUs, FPGAs, and custom AI accelerators.
However, different devices have significantly different properties, e.g. in terms of the number of computations per second, numerical format (integer or floating point), data width (1, 8, 16, 32 or 64 bits), and memory requirements, which makes the comparison and trade-off analysis of their computational performance and accuracy difficult.

In addition, there is a lack of representative application cases that can be used to perform such a comparison. MLPerf, the de facto benchmarking suite for machine learning, is not representative of the type of processing required in space. The only available space-related software is the open-source benchmarking suite GPU4S_Bench [3,4] (also known as OBPMark Kernels) and its evolution OBPMark (On-Board Data Processing Benchmarks) Applications, developed at the Barcelona Supercomputing Center (BSC) as part of the ESA-funded project GPU4S (GPU for Space) [5]. However, GPU4S_Bench provides software kernels only for algorithmic building blocks used in deep learning, such as matrix multiplication and convolution, as well as a simple inference chain targeting CIFAR-10, without evaluating performance and accuracy trade-offs or using real space data sets for training and/or evaluation. In contrast, the OBPMark suite implements a set of computational performance benchmarks developed specifically for spacecraft on-board data processing applications, like radar processing and data compression [6]. OBPMark-ML is going to be a third variant of OBPMark, which will include realistic space applications covering multiple types of machine learning and deep learning processing.

Two of these applications are going to be designed and implemented in this Master’s Thesis, which will be performed during a research visit at the Barcelona Supercomputing Center within the GPU4S ESA project. The applications will cover two different types of imaging tasks and will be trained with real space data. The design and implementation will cover both the training and the implementation of the applications in standard machine learning frameworks, in Python and C. It will also include the production of the trained models in various formats, as well as the material necessary to reproduce the training for potentially new architectures which are not covered by the generated pre-trained models. In particular:

  • Instance Segmentation: Cloud Screening: The first task uses instance segmentation for cloud detection on an open-source data set called Cloud95 [7]. U-Nets have become a standard approach for segmentation tasks and were also shown to be effective on cloud screening tasks [8]. However, their large number of parameters and high computational cost constrain their use in on-board processing. A good trade-off between computational complexity/memory footprint and prediction accuracy has to be found, which requires scaling down the number of parameters. This can be achieved by reducing the depth and the number of filters in the convolution layers. Another method is to take techniques from the MobileNetV2 architecture and adapt them to the U-Net architecture [9] (see the sketch after this list). Both methods will be evaluated.
  • Object Detection: Ship Detection: The second application will be an object detection task, such as ship detection on satellite pictures [10]. The same restrictions as in the segmentation case apply here. State-of-the-art architectures for object detection are single-shot detectors (SSD) [11] and “You Only Look Once” (YOLO) networks [12]. These architectures can use different backbone models. To reduce the number of parameters and the computational complexity, a MobileNet backbone will be used and compared against a heavier network such as ResNet [13].
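
To illustrate the parameter-reduction idea mentioned in the segmentation item above, the sketch below contrasts a conventional U-Net encoder block with a MobileNetV2-style inverted-residual block built from depthwise separable convolutions, written in TensorFlow/Keras (one of the frameworks named in this description). The filter counts, expansion factor, and block structure are illustrative assumptions, not the final design of the thesis.

```python
import tensorflow as tf
from tensorflow.keras import layers

def standard_unet_block(x, filters):
    """Conventional U-Net encoder block: two full 3x3 convolutions."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def mobile_unet_block(x, filters, expansion=4):
    """MobileNetV2-style inverted residual: expand (1x1) -> depthwise 3x3 -> project (1x1)."""
    in_channels = x.shape[-1]
    y = layers.Conv2D(in_channels * expansion, 1, padding="same", use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU(6.0)(y)
    y = layers.DepthwiseConv2D(3, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU(6.0)(y)
    y = layers.Conv2D(filters, 1, padding="same", use_bias=False)(y)  # linear projection
    y = layers.BatchNormalization()(y)
    if in_channels == filters:
        y = layers.Add()([x, y])  # residual connection when shapes match
    return y

# Compare parameter counts of the two block types on a 256x256 feature map
inp = layers.Input((256, 256, 32))
print(tf.keras.Model(inp, standard_unet_block(inp, 32)).count_params())
print(tf.keras.Model(inp, mobile_unet_block(inp, 32)).count_params())
```

Replacing every encoder/decoder block of a U-Net in this way trades a small amount of accuracy for a substantially smaller memory footprint, which is the trade-off the thesis is meant to quantify.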

The models will be trained in TensorFlow/Keras and PyTorch, and additionally converted to the ONNX format for portability and reproducibility. As some hardware, such as FPGAs, typically requires fixed-point arithmetic, all models will be trained and provided in different precisions, ranging from floating point (double/full/half) to integer (int8/int16). BSC will provide access to supercomputing resources for the training process. In terms of accuracy, the models are to be compared qualitatively with results from the literature. The computational performance will be tested on processors of interest, such as the Myriad VPU, a Xilinx FPGA leveraging Vitis AI, or embedded GPUs which have been identified as candidates for use in GPU4S; this depends on the availability of the respective development boards. The thesis is developed together with the Barcelona Supercomputing Center (BSC) in the GPU4S program co-funded by the European Space Agency, and all code and models will be published open source on GitHub under the current ESA-PL license.
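
As an illustration of the portability step described above, a trained Keras model could be exported to ONNX and then reduced to int8 weights, here sketched with the tf2onnx converter and ONNX Runtime's dynamic quantization. The file names, the hypothetical pre-trained model, and the choice of dynamic (rather than calibration-based static) quantization are assumptions for illustration only.

```python
import tensorflow as tf
import tf2onnx
from onnxruntime.quantization import quantize_dynamic, QuantType

# Hypothetical trained segmentation model loaded from disk
model = tf.keras.models.load_model("cloud_unet_fp32.h5")

# Export the float32 model to ONNX for framework-independent deployment
tf2onnx.convert.from_keras(model, output_path="cloud_unet_fp32.onnx")

# Produce an int8-weight variant; its accuracy must be re-evaluated afterwards
quantize_dynamic(
    model_input="cloud_unet_fp32.onnx",
    model_output="cloud_unet_int8.onnx",
    weight_type=QuantType.QInt8,
)
```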

 

[1] Jan-Gerd Meß, Frank Dannemann and Fabian Greif. Techniques of Artificial Intelligence for Space Applications – A Survey. In European Workshop on On-Board Data Processing (OBDP), 2019.

[2] https://www.esa.int/Applications/Observing_the_Earth/Ph-sat

[3] Ivan Rodriguez, Leonidas Kosmidis, Jerome Lachaize, Olivier Notebaert, David Steenari, GPU4S Bench: Design and Implementation of an Open GPU Benchmarking Suite for Space On-board Processing, Technical Report UPC-DAC-RR-CAP-2019-1, [online] Available: https://www.ac.upc.edu/app/research-reports/public/html/research_center_index-CAP-2019,en.html

[4] Leonidas Kosmidis, Iván Rodriguez, Alvaro Jover-Alvarez, Sergi Alcaide, Jérôme Lachaize, Olivier Notebaert, Antoine Certain, David Steenari. GPU4S: Major Project Outcomes, Lessons Learnt and Way Forward. Design Automation and Test in Europe Conference (DATE) 2021

[5] David Steenari, Leonidas Kosmidis, Ivan Rodriguez, Alvaro Jover, and Kyra Förster. OBPMark (On-Board Processing Benchmarks) – Open Source Computational Performance Benchmarks for Space Applications. In European Workshop on On-Board Data Processing (OBDP), 2021. [online], Available: https://zenodo.org/record/5638577

[6] OBPMark and GPU4S_Bench open source repositories. [online] Available: https://obpmark.github.io

[7] https://www.kaggle.com/sorour/95cloud-cloud-segmentation-on-satellite-images (accessed on 1.2.2022)

[8] Johannes Drönner, Nikolaus Korfhage, Sebastian Egli, Markus Mühling, Boris Thies, Jörg Bendix, Bernd Freisleben and Bernhard Seeger. Fast Cloud Segmentation Using Convolutional Neural Networks. Remote Sens. 2018

[9] Junfeng Jing, Zhen Wang, Matthias Rätsch and Huanhuan Zhang. Mobile-Unet: An efficient convolutional neural network for fabric defect detection. Textile Research Journal, 2020.

[10] https://www.kaggle.com/c/airbus-ship-detection/overview

[11] Liu W. et al. SSD: Single Shot MultiBox Detector. Computer Vision – ECCV 2016, 2016.

[12] Joseph Redmon, Santosh Kumar Divvala, Ross B. Girshick, Ali Farhadi. You Only Look Once: Unified, Real-Time Object Detection. CoRR 2015, 2015.

[13] MobileNetV2: The Next Generation of On-Device Computer Vision Networks, [online] Available: https://ai.googleblog.com/2018/04/mobilenetv2-next-generation-of-on.html.

Learning based methods for 3D hemodynamics estimation in the cerebral vasculature

CT Projection Inpainting Using Denoising Diffusion Probabilistic Models

Modeling of Randomized Cerebrovascular Trees for Artificial Data Generation using Blender

 

Thesis Description

The advent of deep learning in recent years has led to a multitude of new practical applications of machine learning in many fields, including the medical domain [1]. A main limiting factor in the implementation of deep learning algorithms for healthcare applications is the availability of representative training datasets of sufficient size [2].

Solving the data scarcity problem may prove particularly beneficial in the case of stroke. Globally, stroke is the leading cause of serious adult disability and the second-leading cause of death [3]. The main structure distributing blood flow to the brain is the Circle of Willis (CoW) [3, 4]. Several anatomical variations of the CoW can be observed in the population [3, 4]. These normal variants occur with different frequencies, and only about 40% of the population possesses a well-formed, complete CoW [4].

The multitude of CoW variants further exacerbates the need for more cerebrovascular training data for tasks like vessel labeling in stroke cases [5]. It may be possible to alleviate this problem by generating artificial data, and the open-source 3D graphics software Blender appears well suited for this purpose.

The aim of this thesis is to model a CoW graph which can be randomly and realistically deformed while probabilistically incorporating common normal variants. The thesis shall comprise the following points:

  1. Literature research regarding the distribution of CoW variants in the population
  2. Modeling of a standard variant of the cerebrovascular tree in Blender (a minimal scripting sketch follows this list)
  3. Creation of artificial trees by randomly sampling the model’s parameter values
  4. Probabilistic adaptation of the model to the normal variants
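
The following is a minimal sketch of how a single randomized vessel segment could be scripted with Blender’s Python API (bpy), in the spirit of points 2 and 3 above: a spline centerline with jittered control points is given a tubular surface via the curve’s bevel depth. All names, radii, and noise magnitudes are illustrative assumptions rather than the intended CoW model.

```python
import random
import bpy

def make_vessel(name, start, end, radius, n_points=8, jitter=0.05):
    """Create a randomized tubular vessel segment between two 3D points."""
    curve = bpy.data.curves.new(name, type='CURVE')
    curve.dimensions = '3D'
    curve.bevel_depth = radius            # sweep a circular cross-section along the curve
    curve.bevel_resolution = 4

    spline = curve.splines.new('NURBS')
    spline.points.add(n_points - 1)       # a new spline starts with one point
    for i, point in enumerate(spline.points):
        t = i / (n_points - 1)
        # linear interpolation between start and end, plus a random perturbation
        x = (1 - t) * start[0] + t * end[0] + random.uniform(-jitter, jitter)
        y = (1 - t) * start[1] + t * end[1] + random.uniform(-jitter, jitter)
        z = (1 - t) * start[2] + t * end[2] + random.uniform(-jitter, jitter)
        point.co = (x, y, z, 1.0)          # spline points are homogeneous (x, y, z, w)

    obj = bpy.data.objects.new(name, curve)
    bpy.context.collection.objects.link(obj)
    return obj

# Illustrative use inside Blender: a short segment standing in for one CoW artery
make_vessel("BasilarArtery", start=(0.0, 0.0, 0.0), end=(0.0, 0.5, 1.0), radius=0.08)
```

A full CoW model would chain such segments into the known graph topology and draw the jitter, radii, and the presence or absence of individual segments from the variant frequencies found in the literature review.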

 

References

[1] Neha Sharma, Reecha Sharma, and Neeru Jindal. Machine learning and deep learning applications-a vision. Global Transitions Proceedings, 2(1):24–28, 2021. 1st International Conference on Advances in Information, Computing and Trends in Data Engineering (AICDE – 2020).

[2] Martin J. Willemink, Wojciech A. Koszek, Cailin Hardell, Jie Wu, Dominik Fleischmann, Hugh Harvey, Les R. Folio, Ronald M. Summers, Daniel L. Rubin, and Matthew P. Lungren. Preparing medical imaging data for machine learning. Radiology, 295(1):4–15, 2020.

[3] Mohammed Oumer and Mekuriaw Alemayehu. Association between circle of willis and ischemic stroke: a systematic review and meta-analysis. BMC Neuroscience, 22(10), 2021.

[4] Debanjan Mukherjee, Neel D. Jani, Jared Narvid, and Shawn C. Shadden. The role of circle of willis anatomy variations in cardio-embolic stroke – a patient-specific simulation based study. bioRxiv, 2018.

[5] Florian Thamm, Markus Jürgens, Hendrik Ditt, and Andreas Maier. VirtualDSA++: Automated Segmentation, Vessel Labeling, Occlusion Detection and Graph Search on CT-Angiography Data. In Barbora Kozlíková, Michael Krone, Noeska Smit, Kay Nieselt, and Renata Georgia Raidou, editors, Eurographics Workshop on Visual Computing for Biology and Medicine. The Eurographics Association, 2020.

 

Cone-Beam CT X-Ray Image Simulation for the Generation of Training Data


Description

Deep learning methods can be used to reduce the severity of metal artefacts in cone-beam CT images. This thesis aims to design and validate a simulation pipeline that creates realistic X-ray projection images from available CT volumes and metal object meshes. Additionally, 2D and 3D binary masks segmenting the metal should be generated to serve as ground truth during training. The explicit focus of the data generation will be placed on the accuracy of the metal artefacts.
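
As a sketch of the projective-mapping aspect involved (see the qualifications below), the following illustrates how metal-mesh vertices could be mapped into the detector plane with a 3x4 cone-beam projection matrix in homogeneous coordinates in order to obtain a crude 2D metal mask. The matrix values, detector size, and point cloud are placeholder assumptions, not part of the actual pipeline.

```python
import numpy as np

def project_points(P, points_3d):
    """Project Nx3 world points to detector pixels with a 3x4 projection matrix P."""
    homogeneous = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])  # (N, 4)
    projected = homogeneous @ P.T                                           # (N, 3)
    return projected[:, :2] / projected[:, 2:3]                             # divide by w

def rasterize_mask(pixels, detector_shape):
    """Mark every detector pixel hit by a projected vertex (crude 2D metal mask)."""
    mask = np.zeros(detector_shape, dtype=np.uint8)
    cols = np.clip(np.round(pixels[:, 0]).astype(int), 0, detector_shape[1] - 1)
    rows = np.clip(np.round(pixels[:, 1]).astype(int), 0, detector_shape[0] - 1)
    mask[rows, cols] = 1
    return mask

# Placeholder geometry: a simple pinhole-like projection and a random "metal" point cloud
P = np.array([[1000.0, 0.0, 256.0, 0.0],
              [0.0, 1000.0, 256.0, 0.0],
              [0.0, 0.0, 1.0, 500.0]])
vertices = np.random.rand(200, 3) * 50.0 + np.array([0.0, 0.0, 400.0])
mask = rasterize_mask(project_points(P, vertices), detector_shape=(512, 512))
print(mask.sum(), "metal pixels")
```

A realistic pipeline would additionally rasterize the mesh faces (not only vertices) and generate the projection images themselves via ray casting through the CT volume, but the homogeneous-coordinate mapping above is the common core of both steps.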

Your qualifications

  • Fluent in Python and/or C++
  • Knowledge of Homogeneous Coordinates and Projective Mapping
  • Interest in Quality Software Development / Project Organisation
  • Experience with CUDA and interface to C++ / Python (optional, big plus)

You will learn

  • to organize a short-term project (report status and structured sub-goals)
  • to scientifically evaluate the developed methods
  • to report scientific findings in a thesis / a publication

 

The thesis is funded by Siemens Healthineers and can be combined with a working student position prior to or after the thesis (up to 12 h/week). If interested, please write a short motivational email to Maxi.Rohleder@fau.de highlighting your qualifications and describe one related code project you are proud of. Please also attach your CV and transcript of records from your current and previous studies.

Cardiac Functional Analysis – Automated Strain Analysis of the left Ventricle using Computed Tomography

The heart is one of the most important organs of the human body. It maintains the blood circulation, which is crucial to transport substances through the body, e.g. to oxygenate the brain or the muscles. Dysfunctions can lead to death; cardiovascular and circulatory diseases are among the most common causes of mortality [1]. To indicate such a disease, a cardiac functional analysis is often performed. A cardiac functional analysis assesses the vitality of the heart based on objective criteria such as ejection fraction and motion evaluation. This analysis helps to identify, localize and treat dysfunctions. Especially tracing the dynamics of the heart, referred to as myocardial strain analysis, gives diagnostic hints. The movement of the heart and of each chamber can be tracked using cardiac imaging. There are three possible directions of motion: radial, longitudinal and circumferential, which can be seen as intrinsic dynamics. For instance, in the case of acute myocarditis the intrinsic dynamics differ significantly from those of healthy patients. As the left ventricle is the largest of the four heart chambers, dysfunctions might have the highest impact on its function and therefore on its dynamics [2, 3].

To perform a cardiac functional analysis, or more precisely a strain analysis of the left ventricle, non-invasive imaging modalities are used. Cardiac magnetic resonance and echocardiography are commonly used and constitute the state-of-the-art methods [4, 5]. Up to now, computed tomography has barely been used for assessing cardiac function, but rather for identifying diseases such as coronary atherosclerosis [6]. The myocardium tends to have low contrast in computed tomography images of the heart, so tracking the endo- and epicardium becomes more complicated and the complexity of performing a cardiac functional analysis increases. However, aspects like high resolution as well as time and cost efficiency motivate overcoming these difficulties.

Within this project, it will be investigated whether computed tomography is capable of supporting a cardiac functional analysis. The goal is to automate the strain analysis of the left ventricle using computed tomography. Given several 3D CT images of a heartbeat cycle, the strain analysis will be performed in the following way:

1. The myocardium of the left ventricle has to be localized and segmented using an active shape model.
2. For the motion tracking, different registration algorithms will be evaluated. The range of approaches is broad. A possible method for landmark-based registration would be thin-plate splines [7]. As the complexity increases with the dimension, 2D intensity-based registration could be performed on sliced images [8]. Using the whole 3D image, both classic algorithms and deep learning based approaches are conceivable [9, 10].
3. The intrinsic dynamics, namely longitudinal, radial and circumferential strain, are calculated by projecting the transformation vectors provided by the registration algorithm onto the main axes of an intrinsic coordinate system (a minimal sketch of this projection follows this list).
4. The results will be visualized as color-coded strain magnitude within the CT image or in a polar map [3].
5. The results of the different registration algorithms are qualitatively and quantitatively compared using the visualizations as well as different metrics, possibly such as the correlation coefficient [11] and the Hausdorff distance [12].
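
The decomposition in step 3 can be sketched as follows: given a displacement vector at a myocardial point, its longitudinal component is the projection onto the left-ventricular long axis, the radial component points from the long axis outward through the point, and the circumferential component is perpendicular to both. The NumPy sketch below assumes the long axis and the ventricle center are already known from the segmentation; all names and numbers are illustrative.

```python
import numpy as np

def intrinsic_components(point, displacement, lv_center, long_axis):
    """Split a displacement vector into longitudinal, radial and circumferential parts."""
    e_long = long_axis / np.linalg.norm(long_axis)

    # Radial direction: from the long axis outward through the point
    offset = point - lv_center
    offset = offset - np.dot(offset, e_long) * e_long   # remove the along-axis part
    e_rad = offset / np.linalg.norm(offset)

    # Circumferential direction completes the right-handed local frame
    e_circ = np.cross(e_long, e_rad)

    return (np.dot(displacement, e_long),
            np.dot(displacement, e_rad),
            np.dot(displacement, e_circ))

# Illustrative values: a point on the lateral wall moving slightly inward and downward
long_c, rad_c, circ_c = intrinsic_components(
    point=np.array([30.0, 0.0, 20.0]),
    displacement=np.array([-2.0, 0.5, -1.0]),
    lv_center=np.array([0.0, 0.0, 0.0]),
    long_axis=np.array([0.0, 0.0, 1.0]),
)
print(long_c, rad_c, circ_c)
```

Applying this decomposition to the dense displacement field of the registration yields the per-voxel strain components that are then visualized in step 4.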

References
[1] Gregory A Roth et al. Global burden of cardiovascular diseases and risk factors, 1990–2019: update from the GBD 2019 study. Journal of the American College of Cardiology, 76(25):2982–3021, 2020.
[2] Otto A Smiseth, Hans Torp, Anders Opdahl, Kristina H Haugaa, and Stig Urheim. Myocardial strain imaging: how useful is it in clinical decision making? European Heart Journal, 37(15):1196–1207, 2016.
[3] Aldostefano Porcari et al. Strain analysis reveals subtle systolic dysfunction in confirmed and suspected myocarditis with normal LVEF. A cardiac magnetic resonance study. Clinical Research in Cardiology, 109(7):869–880, 2020.
[4] Dagmar F Hernandez-Suarez and Angel López-Candales. Strain imaging echocardiography: what imaging cardiologists should know. Current Cardiology Reviews, 13(2):118–129, 2017.
[5] Alessandra Scatteia, Anna Baritussio, and Chiara Bucciarelli-Ducci. Strain imaging using cardiac magnetic resonance. Heart Failure Reviews, 22(4):465–476, 2017.
[6] Marc R Dweck, Michelle C Williams, Alastair J Moss, David E Newby, and Zahi A Fayad. Computed tomography and cardiac magnetic resonance in ischemic heart disease. Journal of the American College of Cardiology, 68(20):2201–2216, 2016.
[7] Rainer Sprengel, Karl Rohr, and H Siegfried Stiehl. Thin-plate spline approximation for image registration. In Proceedings of 18th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, volume 3, pages 1190–1191. IEEE, 1996.
[8] Christoph Guetter, Hui Xue, Christophe Chefd’Hotel, and Jens Guehring. Efficient symmetric and inverse-consistent deformable registration through interleaved optimization. In 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pages 590–593. IEEE, 2011.
[9] Huajun Song and Peihua Qiu. Intensity-based 3D local image registration. Pattern Recognition Letters, 94:15–21, 2017.
[10] Guha Balakrishnan, Amy Zhao, Mert R Sabuncu, John Guttag, and Adrian V Dalca. VoxelMorph: a learning framework for deformable medical image registration. IEEE Transactions on Medical Imaging, 38(8):1788–1800, 2019.
[11] Richard Taylor. Interpretation of the correlation coefficient: a basic review. Journal of Diagnostic Medical Sonography, 6(1):35–39, 1990.
[12] D.P. Huttenlocher, G.A. Klanderman, and W.J. Rucklidge. Comparing images using the Hausdorff distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(9):850–863, 1993.

Thrombus detection using nnDetection

Efficient Methods for Post Myocardial Infarction Ventricular Tachycardia Modeling: from Image Processing to Electrophysiological Simulation

Cardiovascular diseases are the leading cause of death worldwide, with an estimated 17.9 million deaths in 2019, as reported by the World Health Organization (WHO) [1]. Among the numerous diseases included in this category, ischemic heart disease is one of the most frequent, with an estimated 8.8 million deaths in 2019 [1]. A common result of ischemic heart disease is myocardial infarction (MI), which occurs when the blood flow to an area of the heart is blocked. This damages the heart tissue, resulting in necrosis [2]. Approximately 10% of the patients that survive a previous MI event have an increased risk of death in the months or years following hospital discharge. In these cases, up to 50% of the deaths can be secondary to a sustained Ventricular Tachycardia (VT) or Ventricular Fibrillation (VF) event [2].

Ventricular Tachycardia is a heart arrhythmia that occurs when a fast and abnormal heart rate originates in the ventricles. A well-known mechanism for VT is an action potential wave re-entry caused by a unidirectional conduction block in slowly conducting areas of the myocardium [2]. These areas contain a complex mixture of scar (i.e. infarcted tissue) and surviving myocytes that is often referred to as heterogeneous or border zone tissue [3]. For these cases, ablation therapy is the preferred surgical approach. Approximately 50% of the patients that undergo ablation therapy show VT recurrence within five years after surgery [4]. In order to improve the outcome, a critical part of the procedure is to properly localize and ablate the arrhythmic substrate [4].

In this context, it is hypothesized that a combination of scar lesion imaging, cardiac electrophysiology modeling, and artificial intelligence can improve the localization of VT ablation targets, the detection of incomplete ablation procedures, and the selection of the ablation strategy. Previous studies have already investigated the usage of digital twin technologies for VT therapy planning [3, 5, 6]. However, the VT modeling pipeline is extensive and still presents several challenges.

First, the manual segmentation of the heart chambers from Late Gadolinium Enhanced (LGE) MRI images is a tedious procedure that is prone to inter- and intra-operator variability [7]. The segmentation of the heart chambers is a necessary step for the generation of anatomical 3D models. Furthermore, it allows to analyze, locate, and quantify myocardial scar, which can be used to guide the ablation procedure [8]. Lastly, simulating VT in silico is very dependent on the selected model and simulation parameters. Previous studies have addressed how different parameter combinations affect the inducibility of VT re-entrant activity [5, 6]. These studies usually rely on finite element method (FEM) simulations on very detailed geometries, which can require up to several hours of run-time per simulation. This inevitably constrains the number of parameter combinations that can be studied.

With these challenges in mind, the main contributions of this study will be:

• Literature review of the state-of-the-art methods in Late Gadolinium Enhanced (LGE) image processing and segmentation.
• Literature review of the state-of-the-art methods for electrophysiology modeling and simulation of virtual VT inducibility.
• Evaluation of a deep learning method for automatic myocardium segmentation from LGE images, with a possible extension to automatically locate and quantify scar tissue. This automatic segmentation method will focus on the potential advantages over manual segmentation approaches in terms of reproducibility and time savings [7].
• Study of virtual VT inducibility in a set of porcine models after MI, with a focus on the optimal selection of model parameters. This task will be carried out using the Lattice-Boltzmann method, a monodomain solver which allows performing electrophysiology simulations of VT re-entrant activity with the advantage of being faster than other FEM approaches [9] (the underlying monodomain model is sketched after this list).
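
For background, a common form of the monodomain model that Lattice-Boltzmann solvers such as [9] discretize is the reaction-diffusion system below; the notation follows the usual convention (with the surface-to-volume ratio absorbed into the diffusion tensor) and is given only as context, not as the specific formulation used in this project.

```latex
C_m \frac{\partial V_m}{\partial t}
    = \nabla \cdot \left( D \, \nabla V_m \right)
      - I_{\mathrm{ion}}(V_m, \mathbf{w}) + I_{\mathrm{stim}},
\qquad
\frac{\partial \mathbf{w}}{\partial t} = g(V_m, \mathbf{w})
```

Here V_m is the transmembrane potential, C_m the membrane capacitance, D the (anisotropic) diffusion tensor, I_ion the ionic current of the chosen cell model with state variables w, and I_stim the applied stimulus current. Slowly conducting border-zone tissue is typically represented by locally reducing D and modifying the cell-model parameters, which is exactly the parameter space to be explored in the inducibility study.
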
References
[1] World Health Statistics 2021: Monitoring Health for the SDGs: Sustainable Development Goals. Geneva, Switzerland: World Health Organization, 2021. Licence: CC BY-NC-SA 3.0 IGO.
[2] J. Bhar-Amato, W. Davies, and S. Agarwal, “Ventricular arrhythmia after acute myocardial infarction: ‘the perfect storm’,” Arrhythmia & Electrophysiology Review, vol. 6, no. 3, p. 134, 2017.
[3] H. Ashikaga, H. Arevalo, F. Vadakkumpadan, R. C. Blake, J. D. Bayer, S. Nazarian, M. Muz Zviman, H. Tandri, R. D. Berger, H. Calkins, D. A. Herzka, N. A. Trayanova, and H. R. Halperin, “Feasibility of image-based simulation to estimate ablation target in human ventricular arrhythmia,” Heart Rhythm, vol. 10, pp. 1109–1116, Aug. 2013.
[4] M. Wolf, F. Sacher, H. Cochet, and T. Kitamura, “Long-term outcome of substrate modification in ablation of post-myocardial infarction ventricular tachycardia,” Circulation: Arrhythmia and Electrophysiology, vol. 11, Feb. 2018.
[5] F. O. Campos, J. Whitaker, R. Neji, S. Roujol, M. O’Neill, G. Plank, and M. J. Bishop, “Factors promoting conduction slowing as substrates for block and reentry in infarcted hearts,” Biophysical Journal, vol. 117, pp. 2361–2374, Dec. 2019.
[6] A. Lopez-Perez, R. Sebastian, M. Izquierdo, R. Ruiz, M. Bishop, and J. M. Ferrero, “Personalized cardiac computational models: From clinical data to simulation of infarct-related ventricular tachycardia,” Frontiers in Physiology, vol. 10, p. 580, May 2019.
[7] Y. Wu, Z. Tang, B. Li, D. Firmin, and G. Yang, “Recent advances in fibrosis and scar segmentation from cardiac MRI: A state-of-the-art review and future perspectives,” Frontiers in Physiology, vol. 12, p. 709230, Aug. 2021.
[8] S. Toupin, T. Pezel, A. Bustin, and H. Cochet, “Whole-heart high-resolution late gadolinium enhancement: Techniques and clinical applications,” Journal of Magnetic Resonance Imaging, p. jmri.27732, June 2021.
[9] S. Rapaka, T. Mansi, B. Georgescu, M. Pop, G. A. Wright, A. Kamen, and D. Comaniciu, “LBM-EP: Lattice-Boltzmann method for fast cardiac electrophysiology simulation from 3D images,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2012, vol. 7511, pp. 33–40, Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Fruit Terminator – Annotation of Lung Fluid Cells via Gamification