Index
Investigating the Possibilities of CT Reconstruction using Fourier Neural Operator
Introduction:
The aim of this project is to explore the potential of Fourier Neural Operators (FNO) for Computed Tomography (CT) reconstruction tasks. FNO is a novel deep learning framework designed to approximate the solution of Partial Differential Equations (PDEs) by learning continuous operators in a Fourier basis. CT reconstruction, in turn, is the process of generating cross-sectional images of an object from projection data acquired by X-ray scans.
CT reconstruction is fundamentally an inverse problem: the objective is to recover the original image from the projection data. This process can be framed mathematically through the lens of a PDE, in which the Radon transform and its inversion play a central role. By treating CT reconstruction as a PDE-solving task, we aim to harness FNO to devise efficient and accurate reconstruction algorithms.
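To make the operator-learning idea concrete, the sketch below shows the core FNO building block, a 2D spectral convolution that mixes channels on a truncated set of Fourier modes. It is a minimal PyTorch illustration following standard FNO conventions; the class name and the number of retained modes (`modes1`, `modes2`) are illustrative choices, not a fixed project design.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """Minimal 2D spectral convolution, the core FNO building block (illustrative)."""
    def __init__(self, in_channels, out_channels, modes1, modes2):
        super().__init__()
        self.modes1, self.modes2 = modes1, modes2  # number of retained Fourier modes
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes1, modes2, dtype=torch.cfloat)
        )

    def forward(self, x):                 # x: (batch, channels, H, W)
        x_ft = torch.fft.rfft2(x)         # transform to Fourier space
        out_ft = torch.zeros(
            x.size(0), self.weights.size(1), x.size(-2), x.size(-1) // 2 + 1,
            dtype=torch.cfloat, device=x.device
        )
        # mix channels on the lowest retained frequency modes only
        out_ft[:, :, :self.modes1, :self.modes2] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :self.modes1, :self.modes2], self.weights
        )
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])  # back to image space
```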
Requirements:
- Completion of at least one course on Deep Learning is mandatory.
- Proficiency in PyTorch is essential.
- Strong analytical and problem-solving skills.
Prospective candidates are warmly invited to send their CV and transcript to yipeng.sun@fau.de.
References:
- Li, Zongyi, et al. “Fourier neural operator for parametric partial differential equations.” arXiv preprint arXiv:2010.08895 (2020).
- Ongie, Gregory, et al. “Deep learning techniques for inverse problems in imaging.” IEEE Journal on Selected Areas in Information Theory 1.1 (2020): 39-56.
Project SENSATION: Sidewalk Environment Detection System for Assistive NavigaTION
In the project entitled Sidewalk Environment Detection System for Assistive NavigaTION (hereinafter referred to as SENSATION), our research team is meticulously advancing the development of the components of SENSATION. The primary objective of this venture is to enhance the mobility capabilities of blind or visually impaired persons (BVIPs) by ensuring safer and more efficient navigation on pedestrian pathways.
For the implementation phase, a specialized prototype was engineered: a chest bag equipped with an NVIDIA Jetson Nano serving as the core computational unit. This device integrates several sensors including, but not limited to, tactile feedback mechanisms (vibration motors) for direction indication, optical sensors (webcam) for environmental data acquisition, wireless communication modules (Wi-Fi antenna) for internet connectivity, and geospatial positioning units (GPS sensors) for real-time location tracking.
Despite the promising preliminary design of the prototype, several technical challenges remain that demand investigation. These challenges are described as follows:
Sidewalk segmentation for direction estimation
To determine the location of a BVIP on the pedestrian pathway, our algorithms must achieve accurate segmentation of the sidewalk. To facilitate this, we continuously refine our proprietary dataset tailored to sidewalk segmentation and explore a variety of Deep Learning methodologies to enhance segmentation accuracy. The primary objective of this topic is to refine our sidewalk segmentation pipeline and to comprehensively evaluate its performance using metrics such as mean Intersection over Union (IoU) and precision for both sidewalks and roads. Additionally, we employ Active Learning techniques to further analyze our dataset, aiming to gain deeper insight into its characteristics.
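As a starting point for this evaluation, the snippet below sketches how per-class IoU and precision can be computed from integer label maps; the class indices used for sidewalk and road are assumptions for illustration, not the dataset's actual encoding.

```python
import numpy as np

def iou_and_precision(pred, target, class_id):
    """Compute IoU and precision for one class from integer label maps (illustrative)."""
    pred_c, target_c = (pred == class_id), (target == class_id)
    intersection = np.logical_and(pred_c, target_c).sum()
    union = np.logical_or(pred_c, target_c).sum()
    iou = intersection / union if union > 0 else float("nan")
    precision = intersection / pred_c.sum() if pred_c.sum() > 0 else float("nan")
    return iou, precision

# Hypothetical class indices for this sketch: 1 = sidewalk, 2 = road.
# Mean IoU over the classes of interest:
# miou = np.nanmean([iou_and_precision(pred, gt, c)[0] for c in (1, 2)])
```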
Distance estimation for obstacles avoidance
To convey information to a BVIP regarding the presence of an impediment on the pedestrian pathway, obstacles such as bicycles, e-scooters, or automobiles must first be identified via image segmentation techniques. After this identification, it is crucial to determine the distance to these detected objects. The SENSATION system employs a monocular camera to capture the surrounding environmental details of the pathway. In this domain of research, we are studying various algorithms tailored for depth estimation to determine the proximity to these impediments. The calculated distances are then conveyed to the BVIP through either tactile or auditory feedback mechanisms. A prominent challenge in this work lies in achieving precise distance measurements, particularly given the constraint of solely relying on a monocular camera.
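A simple way to turn a monocular depth estimate into an obstacle distance is to combine it with the obstacle's segmentation mask and take a robust statistic of the depth values inside the mask. The sketch below assumes a depth map is already available from some monocular depth network; note that without metric calibration such a map is only defined up to scale, which is part of the challenge mentioned above.

```python
import numpy as np

def obstacle_distance(depth_map, obstacle_mask, percentile=50):
    """Estimate the distance to an obstacle from depth values inside its mask (illustrative).

    depth_map: (H, W) metric or relative depth from a monocular depth network
    obstacle_mask: (H, W) boolean mask of the detected obstacle (e.g., bicycle, car)
    """
    depths = depth_map[obstacle_mask]
    if depths.size == 0:
        return None  # empty mask: obstacle not visible in this frame
    # the median is less sensitive than the mean to depth outliers at object boundaries
    return float(np.percentile(depths, percentile))
```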
Drift correction to improve orientation of a BVIP
While navigating pedestrian pathways, it is occasionally observed that a BVIP may lose orientation with respect to the sidewalk. To address this, it is essential to devise a detection system capable of promptly identifying a BVIP's deviation from the intended sidewalk. For the detection of such drifts, we employ Deep Learning algorithms that leverage optical flow or depth maps. The primary objective in this topic is to conceptualize and develop a drift correction mechanism utilizing either optical flow or depth maps to improve a BVIP's sidewalk orientation.
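One possible optical-flow-based detector is sketched below: dense Farnebäck flow between consecutive frames is reduced to its mean horizontal component, and a sustained lateral component is interpreted as drift. The threshold value and the left/right interpretation are assumptions for illustration; a learned drift detector would replace this heuristic.

```python
import cv2
import numpy as np

def lateral_drift(prev_gray, curr_gray, threshold=1.5):
    """Detect lateral drift from the mean horizontal component of dense optical flow.

    prev_gray, curr_gray: consecutive grayscale frames from the chest-mounted camera
    threshold: mean horizontal flow (pixels/frame) above which drift is flagged (assumed value)
    """
    # Farnebäck dense flow; positional arguments: pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_horizontal = float(np.mean(flow[..., 0]))  # +x: scene moves right, i.e., user drifts left
    if mean_horizontal > threshold:
        return "drifting left"
    if mean_horizontal < -threshold:
        return "drifting right"
    return "on course"
```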
Environmental information by image captioning
To augment a BVIP’s comprehension of their surrounding environment, descriptive captions derived from environmental observations are beneficial. Examples of such captions include: “Traffic light located on your right,” “Staircase descending with a total of 5 steps,” and “Vehicle parked obstructing the sidewalk.”
In this topic, we are examining Deep Learning algorithms that possess the capacity to generate such descriptive annotations. Concurrently, we are refining our caption generation pipeline to ascertain the spectrum of captions that can be formulated to enhance the mobility and spatial understanding of a BVIP.
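As an off-the-shelf baseline, a pretrained captioning model can already produce generic scene descriptions. The sketch below uses BLIP from the Hugging Face `transformers` library purely as an example; the project's own caption generation pipeline and its domain-specific captions (traffic lights, staircases, blocked sidewalks) are not tied to this particular model.

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Example off-the-shelf captioner; the actual SENSATION pipeline may use a different model.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("sidewalk_scene.jpg").convert("RGB")   # a frame from the webcam
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(output_ids[0], skip_special_tokens=True)
print(caption)  # a short scene description, to be conveyed via audio or tactile feedback
```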
If you are interested in one of the above topics, please send your request to: hakan.calim@fau.de
For the development of the solutions, it will be beneficial to have experience with implementing neural networks in Python with PyTorch or TensorFlow.
Neural fields for 2D-3D transformations
Fetal Re-Identification in Multiple Pregnancy Ultrasound Images Using Deep Learning
Word Embeddings Applied to Alzheimer’s Disease
Contrastive Learning for Glacier Segmentation
Learning Reconstruction Filters for CBCT Geometry
This project focuses on advancing computed tomography (CT) image reconstruction by utilizing neural networks. The project is divided into three main parts.
In the first part, a versatile iterative reconstruction algorithm is introduced for Cone Beam CT (CBCT). The algorithm gradually converges toward the true values through backpropagation, driven by the disparity between the reconstructed result and the ground truth. TV-norm regularization is also employed to reduce noise while preserving image edges.
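The sketch below illustrates the general form of such a reconstruction loop: the volume is treated as a trainable tensor, a differentiable forward projector maps it to projection space, and backpropagation of a data term plus a TV-norm penalty updates the volume. Here `forward_project` is a hypothetical placeholder for a differentiable CBCT projector and is not defined; the loss formulation is a common generic choice, not necessarily the exact one used in the project.

```python
import torch

def tv_norm(volume):
    """Anisotropic total-variation norm of a 3D volume (illustrative)."""
    dz = (volume[1:, :, :] - volume[:-1, :, :]).abs().sum()
    dy = (volume[:, 1:, :] - volume[:, :-1, :]).abs().sum()
    dx = (volume[:, :, 1:] - volume[:, :, :-1]).abs().sum()
    return dx + dy + dz

def iterative_recon(projections, forward_project, shape,
                    n_iters=200, lr=1e-2, tv_weight=1e-4):
    """Gradient-descent reconstruction with TV regularization (sketch).

    forward_project: hypothetical differentiable CBCT projector (e.g., from an operator library).
    """
    volume = torch.zeros(shape, requires_grad=True)
    optimizer = torch.optim.Adam([volume], lr=lr)
    for _ in range(n_iters):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(forward_project(volume), projections) \
               + tv_weight * tv_norm(volume)
        loss.backward()          # backpropagate through the projector to the volume
        optimizer.step()
    return volume.detach()
```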
Recognizing the limitations of iterative reconstruction methods, the second part employs a data-driven CT reconstruction approach. A trainable filtered back-projection (FBP) reconstruction neural network is designed to improve filter design, yielding a filter that suppresses high-frequency noise and thereby improves reconstruction quality.
In the third part, the data-driven FBP approach is extended to the Feldkamp-Davis-Kress (FDK) reconstruction algorithm for CBCT. Through neural network training, a latent mapping is learned that progressively converges towards the Ram-Lak filter. This lays the groundwork for learning more complex filters tailored to non-circular trajectories.
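A minimal version of such a trainable filter is sketched below: the projections are filtered in the frequency domain with learnable coefficients initialized to the Ram-Lak ramp, while the subsequent back-projection (FBP or FDK) step is omitted. Names and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LearnableRampFilter(nn.Module):
    """Frequency-domain filter with trainable coefficients, initialized to Ram-Lak (illustrative)."""
    def __init__(self, num_detector_pixels):
        super().__init__()
        freqs = torch.fft.rfftfreq(num_detector_pixels)   # one-sided frequency axis
        self.filter = nn.Parameter(freqs.clone())          # |f| ramp = Ram-Lak initialization

    def forward(self, sinogram):                            # sinogram: (angles, detector_pixels)
        proj_ft = torch.fft.rfft(sinogram, dim=-1)          # FFT along the detector axis
        filtered = torch.fft.irfft(proj_ft * self.filter, n=sinogram.shape[-1], dim=-1)
        return filtered  # filtered projections, to be back-projected (FBP/FDK step not shown)
```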
Tackling Travelling Salesman Problem with Graph Neural Network and Reinforcement Learning
This project focuses on solving the Travelling Salesman Problem (TSP) using Graph Neural Networks combined with Reinforcement Learning algorithms. Two Graph Neural Network variants are tested, the Graph Pointer Network and the Hybrid Pointer Network, each trained separately with an Actor-Critic algorithm and with double Q-learning. Double Q-learning is examined with particular care, as it is rarely applied to the training of Graph Neural Networks compared with Actor-Critic methods. The models are evaluated on various types of TSP instances, showing that the double Q-learning algorithm is a potential competitor for improving Graph Neural Networks.
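To make the reinforcement-learning setup concrete, the sketch below shows the tour-length computation that typically supplies the (negative) reward for such policies, together with a commented outline of an actor-critic update; the policy and critic networks themselves are omitted and all names are illustrative.

```python
import torch

def tour_length(coords, tour):
    """Total length of a closed TSP tour.

    coords: (batch, n_cities, 2) city coordinates
    tour:   (batch, n_cities) permutation of city indices produced by the policy
    """
    ordered = torch.gather(coords, 1, tour.unsqueeze(-1).expand(-1, -1, 2))
    rolled = torch.roll(ordered, shifts=-1, dims=1)    # next city, wrapping back to the start
    return (ordered - rolled).norm(dim=-1).sum(dim=1)  # (batch,)

# Actor-critic style update (outline): the critic's value estimate serves as the baseline.
# reward = -tour_length(coords, tour)
# advantage = reward - critic_value
# actor_loss = -(advantage.detach() * log_prob_of_tour).mean()
# critic_loss = advantage.pow(2).mean()
```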
Synthetic Projection Generation with Angle Conditioning
Computed Tomography (CT) plays a vital role in medical imaging, offering cross-sectional views of internal structures, yet the radiation exposure during CT scans poses health risks. This study explores the application of existing deep learning models to synthesize CT projections at arbitrary, previously unmeasured angles. Multiple input images from varying angles, along with their corresponding ground truth data, are used to train different network architectures to reproduce target images from different view angles. This approach can potentially reduce radiation exposure and addresses the challenge of obtaining specific missing angular views. Experimental results confirm the effectiveness and feasibility of the methodology, establishing it as a valuable tool in CT imaging.
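One common way to condition a generator on the target view is to encode the angle as sine/cosine channels broadcast over the image grid and concatenated to the input projections. The toy module below illustrates this idea in PyTorch; the small convolutional stack merely stands in for whatever architecture is actually used and is not the study's network.

```python
import torch
import torch.nn as nn

class AngleConditionedGenerator(nn.Module):
    """Toy conditioning scheme: broadcast sin/cos of the target angle as extra input channels."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(                     # stand-in for a U-Net style generator
            nn.Conv2d(in_channels + 2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, projections, angle_rad):
        # projections: (batch, in_channels, H, W) input views; angle_rad: (batch,) target angle
        b, _, h, w = projections.shape
        angle_maps = torch.stack(
            [torch.sin(angle_rad), torch.cos(angle_rad)], dim=1
        ).view(b, 2, 1, 1).expand(b, 2, h, w)
        return self.net(torch.cat([projections, angle_maps], dim=1))
```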