Abstract:
Techniques are described herein that suppress noise using multiple sensors (e.g., microphones) of a communication device. Noise modeling (e.g., estimation of noise basis vectors and noise weighting vectors) is performed with respect to a noise signal during operation of the communication device to provide a noise model. The noise model includes noise basis vectors and noise coefficients that represent noise provided by audio sources other than a user of the communication device. Speech modeling (e.g., estimation of speech basis vectors and speech weighting vectors) is performed to provide a speech model. The speech model includes speech basis vectors and speech coefficients that represent speech of the user. A noisy speech signal is processed using the noise basis vectors, the noise coefficients, the speech basis vectors, and the speech coefficients to provide a clean speech signal.
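The basis-vector-plus-coefficient modeling described above resembles supervised non-negative matrix factorization (NMF). The sketch below is a simplified illustration of that general idea, not the patented method: it holds pre-trained speech and noise bases fixed, estimates the coefficients with the standard KL multiplicative-update rule, and applies a Wiener-style mask. All function names and the toy bases are assumptions.

```python
import numpy as np

def nmf_coefficients(V, W, n_iter=100, seed=0):
    """Estimate nonnegative coefficients H such that V ~= W @ H, holding
    the basis matrix W fixed (standard KL multiplicative-update rule)."""
    rng = np.random.default_rng(seed)
    H = rng.uniform(0.1, 1.0, size=(W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        WH = W @ H + 1e-12
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + 1e-12)
    return H

def suppress_noise(V_noisy, W_speech, W_noise):
    """Build a Wiener-style mask from the speech and noise models and
    apply it to the noisy magnitude spectrogram."""
    W = np.hstack([W_speech, W_noise])
    H = nmf_coefficients(V_noisy, W)
    S = W_speech @ H[: W_speech.shape[1]]   # modeled speech magnitudes
    N = W_noise @ H[W_speech.shape[1]:]     # modeled noise magnitudes
    return V_noisy * S / (S + N + 1e-12)
```

With disjoint toy bases (speech energy confined to the low bins, noise to the high bins), the resulting mask passes the speech bins nearly unchanged and zeroes the noise bins.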
Abstract:
Multi-channel noise suppression systems and methods are described that omit the traditional delay-and-sum fixed beamformer in devices that include a primary speech microphone and at least one noise reference microphone, with the desired speech in the near-field of the device. The multi-channel noise suppression systems and methods use a blocking matrix (BM) to remove the desired speech from the input signal received by the noise reference microphone, yielding a “cleaner” background noise component. An adaptive noise canceler (ANC) then removes the background noise from the input signal received by the primary speech microphone based on the “cleaner” background noise component to achieve noise suppression. The filters implemented by the BM and the ANC are derived using closed-form solutions that require calculation of time-varying statistics of complex frequency-domain signals in the noise suppression system.
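The closed-form BM/ANC structure can be pictured per frequency bin as a pair of single-tap Wiener filters computed from time-averaged statistics. The sketch below is a one-bin toy illustration, not the patented derivation: the acoustic leakage coefficients, the segment-gated statistics, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def wiener_coeff(d, x, eps=1e-12):
    """Closed-form single-tap Wiener filter for one frequency bin:
    minimizing E|d - h*x|^2 gives h = E[d * conj(x)] / E[|x|^2]."""
    return np.vdot(x, d) / (np.vdot(x, x).real + eps)

# Complex frame sequences for a single frequency bin (illustrative).
T = 1000
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)   # desired speech
n = rng.standard_normal(T) + 1j * rng.standard_normal(T)   # background noise

# Assumed acoustic paths: the near-field speech leaks weakly into the
# reference mic; the noise reaches both microphones.
b = wiener_coeff(0.2 * s, s)                 # BM tap, learned on speech-active frames
a = wiener_coeff(0.8 * n, n - b * 0.8 * n)   # ANC tap, learned on noise-only frames

# Apply both stages to frames containing speech plus noise.
y_prim = s + 0.8 * n                  # primary speech microphone
y_ref = 0.2 * s + n                   # noise reference microphone
noise_est = y_ref - b * y_prim        # BM output: desired speech removed
y_clean = y_prim - a * noise_est      # ANC output: background noise removed
```

Because the toy mixing model is exactly single-tap, the two closed-form solutions recover the speech component essentially perfectly; real systems gate the statistics with voice-activity decisions rather than using labeled segments.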
Abstract:
A technique is described herein for reducing audible artifacts in an audio output signal generated by decoding a received frame in a series of frames representing an encoded audio signal in a predictive coding system. In accordance with the technique, it is determined if the received frame is one of a predefined number of received frames that follow a lost frame in the series of frames. Responsive to determining that the received frame is one of the predefined number of received frames, at least one parameter or signal associated with the decoding of the received frame is altered from a state associated with normal decoding. The received frame is then decoded in accordance with the at least one parameter or signal to generate a decoded audio signal. The audio output signal is then generated based on the decoded audio signal.
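One way to picture the altered decoding is a decoder that, for a fixed number of good frames after a loss, attenuates an internal gain and ramps it back toward normal. The toy sketch below is an assumption-laden stand-in for the patented parameter alteration: the frame payload is reduced to a single gain value, and the ramp shape and recovery length are arbitrary.

```python
RECOVERY_FRAMES = 3   # predefined number of received frames (assumed)

class RecoveryDecoder:
    """Toy decoder: a frame is a plain gain value; a lost frame is None."""

    def __init__(self):
        self.good_since_loss = RECOVERY_FRAMES   # start in the normal state

    def decode(self, frame):
        if frame is None:                 # lost frame: conceal with silence here
            self.good_since_loss = 0
            return 0.0
        if self.good_since_loss < RECOVERY_FRAMES:
            # Altered decoding: attenuate and ramp back up to mask audible
            # artifacts from a desynchronized decoder state after the loss.
            ramp = (self.good_since_loss + 1) / (RECOVERY_FRAMES + 1)
            self.good_since_loss += 1
            return frame * ramp
        return frame                      # normal decoding
```

Feeding the decoder a loss followed by good frames shows the output ramping back to full scale over the recovery window.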
Abstract:
A technique is described for concealing the effect of a lost frame in a series of frames representing an encoded audio signal in a sub-band predictive coding system. In accordance with the technique, a first synthesized sub-band audio signal is synthesized, wherein synthesizing the first synthesized sub-band audio signal comprises performing waveform extrapolation based on a stored first sub-band decoded audio signal. A second synthesized sub-band audio signal is also synthesized, wherein synthesizing the second synthesized sub-band audio signal comprises performing waveform extrapolation based on a stored second sub-band decoded audio signal. The first synthesized sub-band audio signal and the second synthesized sub-band audio signal are combined to generate a synthesized full-band output audio signal corresponding to the lost frame.
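The per-band extrapolation step can be sketched as periodic waveform extrapolation: find the lag at which the stored signal best repeats, then continue the last cycle across the lost frame. The code below is a simplified illustration with assumed lag ranges and synthetic sub-band histories; a real sub-band coder would combine the bands through a QMF synthesis filterbank, for which a plain sum stands in here.

```python
import numpy as np

def extrapolate(history, n, min_lag=20, max_lag=120):
    """Periodic waveform extrapolation: pick the lag whose last cycle best
    matches the preceding cycle, then repeat that cycle for n samples."""
    def score(lag):
        a, b = history[-lag:], history[-2 * lag:-lag]
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    lag = max(range(min_lag, max_lag + 1), key=score)
    cycle = history[-lag:]
    return np.tile(cycle, n // lag + 1)[:n]

# Stored sub-band decoded signals (synthetic stand-ins), one lost frame.
t = np.arange(400)
low_hist = np.sin(2 * np.pi * t / 40)            # stored low-band signal
high_hist = 0.3 * np.sin(2 * np.pi * t / 25)     # stored high-band signal
frame = 80
# Combine the two synthesized sub-band signals into a full-band output
# (a QMF synthesis filterbank would normally perform this step).
full = extrapolate(low_hist, frame) + extrapolate(high_hist, frame)
```

On periodic inputs the extrapolated frame continues each sub-band seamlessly, which is the property that makes the concealment inaudible for steady voiced speech.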
Abstract:
A system and method for performing speaker localization are described. The system and method utilize speaker recognition to provide an estimate of the direction of arrival (DOA) of speech sound waves emanating from a desired speaker with respect to a microphone array included in the system. Candidate DOA estimates may be preselected or generated by one or more other DOA estimation techniques. The system and method are suited to support steerable beamforming as well as other applications that utilize or benefit from DOA estimation. The system and method provide robust performance even in systems and devices having small microphone arrays and thus may advantageously be implemented to steer a beamformer in a cellular telephone or other mobile telephony terminal featuring a speakerphone mode.
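The selection among candidate DOAs can be sketched as steering a simple beamformer toward each candidate and keeping the one whose output scores highest under the enrolled-speaker model. In the toy example below a two-mic delay-and-sum beamformer stands in for the array, an integer inter-mic delay stands in for a DOA angle, and template correlation stands in for a speaker-recognition score; all of these are assumptions, not the patented components.

```python
import numpy as np

rng = np.random.default_rng(2)

def delay_and_sum(x1, x2, delay):
    """Two-mic delay-and-sum steered by an integer inter-mic delay (sketch)."""
    return (x1 + np.roll(x2, delay)) / 2.0

def estimate_doa(x1, x2, candidate_delays, speaker_score):
    """Return the candidate delay (a stand-in for a DOA) whose steered
    output scores highest under the desired-speaker model."""
    return max(candidate_delays,
               key=lambda d: speaker_score(delay_and_sum(x1, x2, d)))

# Illustrative scene: the desired talker arrives with a +3 sample inter-mic
# delay; a louder interferer arrives with a -2 sample delay.
talker = rng.standard_normal(2000)
interferer = 2.0 * rng.standard_normal(2000)
x1 = talker + interferer
x2 = np.roll(talker, -3) + np.roll(interferer, 2)

# Correlation against an enrolled template stands in for the recognizer.
score = lambda y: np.dot(y, talker)
best = estimate_doa(x1, x2, [-2, 0, 3], score)
```

Even though the interferer is stronger, the speaker-model score peaks at the delay that aligns the desired talker, which is what makes the approach robust for small arrays.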
Abstract:
Typical communication systems operate with a single channel decoder, and hence would have to settle for the performance of the single channel decoder regardless of the conditions of the communications channel. The present invention uses a hybrid channel decoder comprising multiple channel decoders, each configured to optimize the quality of the reconstructed signal for different channel conditions. Therefore, the desired decoder can be selected as conditions of the communications channel, or the data signal, change over time, so as to optimize the reconstructed data signal. In embodiments, the data signal is a speech signal.
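The selection logic can be pictured as a wrapper that re-evaluates an estimate of the channel condition on every frame and routes the data to the decoder tuned for that condition. The bit-error-rate threshold and the decoder interfaces below are illustrative assumptions, not the patent's selection criteria.

```python
class HybridChannelDecoder:
    """Switches between channel decoders tuned for different channel
    conditions (the BER-threshold policy is an illustrative assumption)."""

    def __init__(self, clean_decoder, robust_decoder, ber_threshold=0.01):
        self.clean_decoder = clean_decoder     # optimized for a good channel
        self.robust_decoder = robust_decoder   # optimized for a noisy channel
        self.ber_threshold = ber_threshold

    def decode(self, bits, estimated_ber):
        # Re-evaluate the channel each frame so the selection can change
        # over time as channel conditions change.
        if estimated_ber < self.ber_threshold:
            return self.clean_decoder(bits)
        return self.robust_decoder(bits)
```

Passing two stub decoders shows the routing: low estimated BER selects the clean-channel decoder, high estimated BER the robust one.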
Abstract:
A speech intelligibility enhancement (SIE) system and method are described that improve the intelligibility of a speech signal to be played back by an audio device when the audio device is located in an environment with loud acoustic background noise. In an embodiment, the audio device comprises a near-end telephony terminal and the speech signal comprises a speech signal received over a communication network from a far-end telephony terminal for playback at the near-end telephony terminal.
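A core SIE operation is choosing how much gain to apply so the played-back far-end speech stays intelligible above the measured near-end background noise. The sketch below is a minimal illustration under assumed target-SNR and gain-cap values; practical SIE systems also shape the spectrum and guard against clipping rather than applying a single broadband gain.

```python
import math

def sie_gain(speech_rms, noise_rms, target_snr_db=15.0, max_gain_db=20.0):
    """Linear gain needed for played-back speech to sit target_snr_db above
    the measured background noise, capped at max_gain_db (values assumed)."""
    if speech_rms <= 0 or noise_rms <= 0:
        return 1.0
    current_snr_db = 20.0 * math.log10(speech_rms / noise_rms)
    needed_db = max(0.0, target_snr_db - current_snr_db)
    return 10.0 ** (min(needed_db, max_gain_db) / 20.0)
```

When the speech already clears the target SNR the gain stays at unity; in very loud noise the cap prevents the boost from growing without bound.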
Abstract:
Systems and methods are described for performing packet loss concealment using an extrapolation of an excitation waveform in a sub-band predictive speech coder, such as an ITU-T Recommendation G.722 wideband speech coder. The systems and methods are useful for concealing the quality-degrading effects of packet loss in a sub-band predictive coder and address some sub-band architectural issues when applying excitation extrapolation techniques to such sub-band predictive coders.
Abstract:
A filter controller processes a decoded speech (DS) signal. The DS signal has a spectral envelope including a first plurality of formant peaks having different respective amplitudes. From the DS signal, the controller produces a spectrally flattened time-domain DS signal whose spectral envelope includes a second plurality of formant peaks. Each of the second plurality of formant peaks approximately coincides in frequency with a respective one of the first plurality of formant peaks, and the second plurality of formant peaks have approximately equal respective amplitudes. The controller then derives, from the spectrally flattened time-domain DS signal, a set of filter coefficients representative of a filter response that is to be used to filter the DS signal.
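Spectral flattening of a speech signal is commonly performed by inverse-filtering the signal with its own LPC fit. Note the difference from the controller above, which only equalizes the formant-peak amplitudes while preserving the peaks; full LPC inverse filtering removes the envelope entirely, so the sketch below is a simplified stand-in shown on a synthetic first-order signal, with all names assumed.

```python
import numpy as np

def lpc(x, order):
    """Autocorrelation-method LPC via Levinson-Durbin; returns the
    inverse-filter coefficients a[0..order] with a[0] = 1."""
    r = np.array([np.dot(x[: len(x) - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -np.dot(a[:i], r[i:0:-1]) / err
        prev = a.copy()
        for j in range(1, i + 1):
            a[j] = prev[j] + k * prev[i - j]
        err *= 1.0 - k * k
    return a

def flatten(x, order):
    """Inverse-filter x with its own LPC fit to flatten the spectral
    envelope; the fitted coefficients are then available for reuse."""
    return np.convolve(x, lpc(x, order))[: len(x)]

# Synthetic first-order signal with a strongly tilted spectral envelope.
rng = np.random.default_rng(3)
e = rng.standard_normal(5000)
x = np.zeros(5000)
for m in range(1, 5000):
    x[m] = 0.9 * x[m - 1] + e[m]

coeffs = lpc(x, 1)        # expected to be close to [1, -0.9]
residual = flatten(x, 1)  # approximately white, i.e., flat envelope
```

The fitted coefficient recovers the generating pole, and the flattened residual is nearly uncorrelated from sample to sample, confirming the envelope has been removed.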