Invited Talk: Prof. Dr. Jonghye Woo (Harvard University) – AI in Medical Imaging-based Diagnosis and Treatment of Speech Disorders, Wednesday, Nov. 9th, 2022, 5 PM CET


We are very happy to announce Prof. Dr. Jonghye Woo from Harvard University as an invited speaker at our lab!

Title: AI in Medical Imaging-based Diagnosis and Treatment of Speech Disorders
Date: Wednesday, Nov. 9th, 2022, 5 PM CET
Location: https://fau.zoom.us/j/63564286962?pwd=ODBwaUF3Yjk5bFJZVWVnMmgyMlJKQT09

Short Bio: Dr. Woo has been an Assistant Professor in the Department of Radiology at Harvard Medical School and Massachusetts General Hospital in Boston since 2015. He received the B.S. degree from Seoul National University, Seoul, Korea, in 2005, and the M.S. and Ph.D. degrees from the University of Southern California (USC), Los Angeles, in 2007 and 2009, respectively, all in electrical engineering. He worked at Philips Research North America in Briarcliff Manor (now in Cambridge) as a Summer Research Intern in 2009, and at Cedars-Sinai Medical Center in Los Angeles as a Research Associate in 2010. He was a Postdoctoral Fellow and later a Research Associate at the University of Maryland and Johns Hopkins University in Baltimore from 2010 to 2014. His research interests include medical image analysis and machine/deep learning for numerous clinical applications. He has received numerous awards, including the USC Viterbi School of Engineering Best Dissertation Award in 2010 and the NIH/NIDCD K99/R00 Pathway to Independence Award in 2013.

Abstract: The human tongue is a highly complex and deformable muscular structure responsible for speech, swallowing, and breathing. In particular, the production of intelligible speech requires spatiotemporally varying internal muscle groupings, or functional units, that are formed in a highly coordinated fashion. The functional units of tongue motion during speech are altered by a range of disorders, such as tongue cancer. In recent years, machine/deep learning has become an active area of research in the speech and health domain, with a focus on developing robust clinical decision support tools to objectively and reproducibly extract, measure, and interpret such disease effects for better diagnosis and treatment. In addition, machine/deep learning is transforming traditional signal analysis approaches, achieving near-human performance in a wide range of tasks, including classification, recognition, and prediction.

In this talk, Dr. Woo will first touch on state-of-the-art MR imaging and analysis techniques to capture and analyze the motion of the tongue and to identify its functional units during speech. Second, he will present a new deep learning method that differentiates post-cancer from healthy tongue muscle coordination patterns during speech. Finally, he will present a deep learning technique to identify common and subject-specific functional units using a deep joint sparse non-negative matrix factorization framework. Overall, this new suite of techniques can potentially offer new therapeutic or rehabilitative strategies to better manage speech-related disorders.
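For readers unfamiliar with the factorization idea behind functional-unit identification: the sketch below is a minimal, plain (non-deep, single-subject) sparse NMF with multiplicative updates, not Dr. Woo's actual deep joint framework. All names, the L1 penalty weight, and the toy data are illustrative assumptions; the point is only that a nonnegative motion-feature matrix X is factored as X ≈ W·H, where the columns of W can be read as candidate muscle groupings and the sparsity penalty on H encourages each sample to activate only a few of them.

```python
import numpy as np

def sparse_nmf(X, k, lam=0.1, n_iter=200, seed=0):
    """Sparse NMF via multiplicative updates: X ~= W @ H.
    An L1 penalty (weight lam) on H encourages each motion
    sample to load on only a few basis components ("units")."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))   # basis: candidate functional units
    H = rng.random((k, n))   # activations over samples/time
    eps = 1e-9               # guard against division by zero
    for _ in range(n_iter):
        # Multiplicative updates preserve nonnegativity of W and H.
        H *= (W.T @ X) / (W.T @ W @ H + lam + eps)
        W *= (X @ H.T) / (W @ (H @ H.T) + eps)
    return W, H

# Toy demo: factor a synthetic rank-3 nonnegative "feature" matrix.
rng = np.random.default_rng(1)
X = rng.random((20, 3)) @ rng.random((3, 50))
W, H = sparse_nmf(X, k=3)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In the deep joint variant described in the talk, such factorizations are learned jointly across subjects so that common and subject-specific units can be separated; the toy above omits that coupling entirely.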