The human brain can focus its auditory attention on a single stimulus while filtering out a range of competing ones, a phenomenon known as the cocktail party effect. Reducing unwanted environmental noise is therefore an important feature of today’s hearing aids. Most traditional noise-reduction and source-segregation schemes rely on limited signal properties of noisy environments, effectively exploiting only first- and second-order statistics. Deep networks, in contrast, can learn complex, nonlinear mapping functions, which makes them well suited to noise-reduction tasks, where complex priors (the properties of the speech and distortion signals) must be modeled.
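As an illustration of what “second-order statistics” means here, the following is a minimal sketch of a classical Wiener filter gain, which uses only power spectral densities (the function name and toy PSD values are ours, not from any specific system):

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=1e-3):
    """Classical Wiener filter gain per frequency bin.

    Uses only second-order statistics: the power spectral densities
    of the noisy observation and of the noise.
    """
    # Crude speech PSD estimate by spectral subtraction, clipped at zero.
    speech_psd = np.maximum(noisy_psd - noise_psd, 0.0)
    # Wiener gain, with a small floor to avoid fully muting bins.
    return np.maximum(speech_psd / (speech_psd + noise_psd + 1e-12), floor)

# Toy per-bin PSD estimates (illustrative values only).
noisy = np.array([4.0, 1.0, 9.0])
noise = np.ones(3)
gain = wiener_gain(noisy, noise)
# Gain is high where speech dominates and drops to the floor where noise dominates.
```

Such a filter has no model of speech structure beyond these power estimates, which is exactly the limitation that learned, nonlinear mappings aim to overcome.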
Single-channel noise reduction attempts to solve this problem using a single microphone only. It is known, however, that the signal-to-noise ratio can typically be improved by using directional microphones, which exploit multi-channel signals.
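A simple numerical sketch of why multiple channels help (an idealised model we assume for illustration: the same signal on every microphone, independent noise per microphone) shows the roughly 10·log10(M) dB SNR gain from averaging M channels:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_db(signal, noise):
    """SNR in dB from separate signal and noise waveforms."""
    return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

# Idealised model: identical target signal on M mics, independent noise per mic.
n_samples, n_mics = 16000, 4
t = np.arange(n_samples) / 16000
s = np.sin(2 * np.pi * 440 * t)                      # target tone
noise = rng.normal(scale=0.5, size=(n_mics, n_samples))

single = snr_db(s, noise[0])                         # one microphone
averaged = snr_db(s, noise.mean(axis=0))             # average of all mics
# averaged - single is close to 10*log10(4) ≈ 6 dB for 4 microphones.
```

Real microphone noise is partly correlated across channels, so practical gains are smaller; directional processing (beamforming, below) exploits the spatial structure more carefully.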
Deep-learning-based noise reduction has already been explored and yields good results on single-channel signals. We aim to support the hearing impaired in noisy environments by extending an existing deep-learning-based noise-reduction framework to multi-channel signals, which makes it possible to exploit directional information.
To incorporate multi-channel signals into a deep-learning framework for noise reduction, we plan to use beamforming. Beamforming is a signal-processing technique for directional signal transmission or reception: it suppresses undesirable interference sources and focuses the received (or transmitted) signal on a specific location.
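The simplest instance of this idea is the narrowband delay-and-sum beamformer, sketched below for an assumed small linear array (the geometry, frequency, and function names are illustrative, not part of the proposed system):

```python
import numpy as np

def delay_and_sum_weights(mic_positions, direction, freq, c=343.0):
    """Narrowband delay-and-sum beamformer weights for one frequency bin.

    mic_positions: (M, 3) microphone coordinates in metres
    direction:     unit vector pointing toward the desired source
    freq:          frequency in Hz; c: speed of sound in m/s
    """
    delays = mic_positions @ direction / c           # per-mic propagation delay (s)
    steering = np.exp(-2j * np.pi * freq * delays)   # phase-align the look direction
    return steering / len(mic_positions)             # unit gain toward the source

# Assumed example: 4-mic linear array along x with 1 cm spacing, steered broadside.
mics = np.array([[i * 0.01, 0.0, 0.0] for i in range(4)])
look = np.array([0.0, 1.0, 0.0])
w = delay_and_sum_weights(mics, look, freq=1000.0)
# Applying w^H to the steering vector of the look direction yields gain 1,
# while signals from other directions add with mismatched phases and are attenuated.
```

Signals arriving from the look direction are summed coherently; interference from other directions combines incoherently and is attenuated.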
As data, we propose to use multi-channel noisy speech signals recorded by hearing aids. The speech signals are cleaned using signal processing and spatialized with head-related transfer functions (HRTFs). The multiple channels and the positional information of the microphones can then be used to estimate the beamforming coefficients.
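One common way to estimate such coefficients from multi-channel statistics, which we sketch here as an assumption rather than as the proposal's fixed method, is the MVDR beamformer: its weights follow in closed form from a noise spatial covariance matrix (which a deep network can help estimate, e.g. via time-frequency masks) and a steering vector:

```python
import numpy as np

def mvdr_weights(noise_cov, steering):
    """MVDR beamformer weights: w = R^{-1} a / (a^H R^{-1} a).

    noise_cov: (M, M) spatial covariance of the noise; in mask-based
               systems this is estimated from network-predicted masks
               (assumed given here)
    steering:  (M,) steering vector toward the target
    """
    r_inv_a = np.linalg.solve(noise_cov, steering)   # R^{-1} a without explicit inverse
    return r_inv_a / (steering.conj() @ r_inv_a)     # normalise for distortionless response

# Toy 2-mic example: with spatially white noise (R = I) and an in-phase target,
# MVDR reduces to simple averaging of the two channels.
R = np.eye(2)
a = np.array([1.0 + 0j, 1.0 + 0j])
w = mvdr_weights(R, a)
# w == [0.5, 0.5], and w^H a = 1 (the target passes undistorted).
```

The constraint w^H a = 1 keeps the target undistorted while the noise covariance term steers nulls toward interference, which is the property that makes MVDR a popular back end for learned multi-channel noise reduction.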