Learning Approach for Segmentation

Multi-organ segmentation

To detect lesions in medical images, deep learning models commonly require information about the extent of each lesion, either as a bounding box or as a pixel-/voxel-wise annotation, both of which are extremely expensive to produce in most cases.

In this paper, we aim to demonstrate that accurate deep learning models for lesion detection can be trained with only a single central point per lesion as ground truth in 3D ultrasound, leading to precise visualizations using Grad-CAM. From a set of breast ultrasound volumes, healthy and diseased patches were used to train a deep convolutional neural network. Each diseased patch contained, in its central area, a lesion center annotated by experts, whereas healthy patches were extracted from random regions of ultrasound volumes taken from healthy patients.
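The setup described above amounts to a binary patch classifier trained from point annotations. The following is a minimal, illustrative PyTorch sketch of such a pipeline; the patch size, network layout, hyperparameters, and synthetic stand-in data are assumptions and not taken from the paper, and the Grad-CAM step at the end is only a bare-bones approximation of the described visualization.

```python
# Minimal sketch (not the authors' code): patch-based lesion detection in 3D
# ultrasound using only point annotations, plus a bare-bones Grad-CAM pass.
# Patch size, network layout, and all hyperparameters are assumptions.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH = 32  # cubic patch edge length (illustrative)

def lesion_patch(volume, center, size=PATCH):
    """Crop a cube whose central area contains the expert-annotated lesion center."""
    half = size // 2
    lo = [max(0, c - half) for c in center]
    hi = [min(d, l + size) for d, l in zip(volume.shape, lo)]
    lo = [h - size for h in hi]  # shift back inside the volume at borders
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

def healthy_patch(volume, size=PATCH, rng=np.random):
    """Crop a cube at a random location from a healthy volume."""
    lo = [rng.randint(0, d - size + 1) for d in volume.shape]
    return volume[lo[0]:lo[0]+size, lo[1]:lo[1]+size, lo[2]:lo[2]+size]

class PatchCNN(nn.Module):
    """Small 3D CNN classifying a patch as healthy (0) or diseased (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64, 2)

    def forward(self, x):
        f = self.features(x)                     # B x 64 x D x H x W
        return self.head(f.mean(dim=(2, 3, 4)))  # global average pooling

# Toy training loop on synthetic data standing in for the real patch sets.
model = PatchCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
volume = np.random.rand(64, 64, 64).astype(np.float32)  # fake ultrasound volume
center = (32, 32, 32)                                    # fake point annotation
for step in range(10):
    pos = torch.from_numpy(lesion_patch(volume, center))[None, None]
    neg = torch.from_numpy(healthy_patch(volume))[None, None]
    x, y = torch.cat([pos, neg]), torch.tensor([1, 0])
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Grad-CAM on the last convolutional block for one diseased patch.
acts = {}
hook = model.features.register_forward_hook(lambda m, i, o: acts.update(a=o))
score = model(torch.from_numpy(lesion_patch(volume, center))[None, None])[0, 1]
grads = torch.autograd.grad(score, acts["a"])[0]
weights = grads.mean(dim=(2, 3, 4), keepdim=True)  # channel importance
cam = F.relu((weights * acts["a"]).sum(dim=1))     # coarse 3D heat volume
hook.remove()
```

In practice, real volumes, many patches per patient, and a proper validation split would replace the synthetic loop; the sketch only illustrates how single-point supervision can drive a standard patch classifier and its Grad-CAM visualization.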

