Pattern Recognition Lab

Modelling the progression of neurological diseases

(Third Party Funds Group – Sub project)

Overall project: Training Network on Automatic Processing of PAthological Speech
Project leader: Juan Vasquez Correa, Elmar Nöth
Project members:
Start date: May 1, 2018
End date:
Acronym:
Funding source: Innovative Training Networks (ITN)
URL:

Abstract

The goal is to develop speech technology that allows unobtrusive monitoring of
many kinds of neurological diseases. The state of a patient can degrade
slowly between medical check-ups. We want to track the state of a patient
unobtrusively, without creating a feeling of constant supervision, while at
the same time respecting the patient's privacy. We will concentrate on
Parkinson's disease (PD) and thus on acoustic cues of change. The algorithms
should run on a smartphone and track acoustic changes during regular phone
conversations over time; they therefore have to be low-resource. No speech
recognition will be used; only a few analysis parameters of each conversation
are stored on the phone and transferred to the server.
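The privacy-preserving idea described above, computing a handful of cheap acoustic descriptors on-device and transmitting only those summary statistics, can be sketched roughly as follows. This is an illustrative sketch, not the project's actual implementation: the choice of features (short-time energy and zero-crossing rate) and the function name `acoustic_summary` are assumptions made for the example.

```python
import numpy as np

def acoustic_summary(signal, sr=8000, frame_ms=25):
    """Compute a compact acoustic summary of a speech signal.

    Illustrative sketch: only a few summary statistics are returned,
    so neither the speech content nor reconstructible audio needs to
    leave the phone.
    """
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)

    # Short-time energy per frame (a rough loudness proxy).
    energy = np.mean(frames ** 2, axis=1)
    # Zero-crossing rate per frame (a rough voicing/noisiness proxy).
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

    # Only these few numbers would be stored and sent to the server.
    return {
        "energy_mean": float(np.mean(energy)),
        "energy_std": float(np.std(energy)),
        "zcr_mean": float(np.mean(zcr)),
    }

# Example with a synthetic one-second "utterance" (a 150 Hz tone
# standing in for recorded speech).
sr = 8000
t = np.arange(sr) / sr
signal = 0.1 * np.sin(2 * np.pi * 150 * t)
summary = acoustic_summary(signal, sr=sr)
```

Per-call summaries like these could then be compared over weeks or months to look for gradual acoustic drift, which is the longitudinal signal the project is after.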

Publications

Friedrich-Alexander-Universität
Erlangen-Nürnberg

Schlossplatz 4
91054 Erlangen