Microphone array based deep learning for time-domain speech signal extraction

    Publication Number: US11508388B1

    Publication Date: 2022-11-22

    Application Number: US17100802

    Filing Date: 2020-11-20

    Applicant: Apple Inc.

    Abstract: A device for processing audio signals in the time domain includes a processor configured to receive multiple audio signals corresponding to respective microphones of at least two or more microphones of the device, at least one of the multiple audio signals comprising speech of a user of the device. The processor is configured to provide the multiple audio signals to a machine learning model, the machine learning model having been trained based at least in part on an expected position of the user of the device and expected positions of the respective microphones on the device. The processor is configured to provide an audio signal that is enhanced with respect to the speech of the user relative to the multiple audio signals, wherein the audio signal is a waveform output from the machine learning model.
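
    The abstract describes an end-to-end, time-domain model: raw waveforms from the device's microphones go in, and an enhanced speech waveform comes out. The PyTorch sketch below is only an illustration of that encoder/mask/decoder pattern under assumed settings; the class name, microphone count, and layer sizes are hypothetical and not taken from the patent.

```python
# Minimal sketch (not the patented architecture): a time-domain model mapping
# multi-channel microphone waveforms to a single enhanced speech waveform.
# NUM_MICS, CHANNELS, and KERNEL are illustrative assumptions.
import torch
import torch.nn as nn

NUM_MICS = 3        # e.g., microphones at fixed, known positions on the device
CHANNELS = 64
KERNEL = 16

class MultiMicTimeDomainEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: strided 1-D convolution over the raw multi-channel waveform.
        self.encoder = nn.Conv1d(NUM_MICS, CHANNELS, KERNEL, stride=KERNEL // 2)
        # Separator: a small temporal network producing a mask on encoded frames.
        self.separator = nn.Sequential(
            nn.Conv1d(CHANNELS, CHANNELS, 3, padding=1), nn.ReLU(),
            nn.Conv1d(CHANNELS, CHANNELS, 3, padding=1), nn.Sigmoid(),
        )
        # Decoder: transposed convolution back to a single waveform.
        self.decoder = nn.ConvTranspose1d(CHANNELS, 1, KERNEL, stride=KERNEL // 2)

    def forward(self, mics):                 # mics: (batch, NUM_MICS, samples)
        feats = self.encoder(mics)           # (batch, CHANNELS, frames)
        mask = self.separator(feats)         # mask emphasizing the user's speech
        return self.decoder(feats * mask)    # (batch, 1, samples) enhanced waveform

if __name__ == "__main__":
    model = MultiMicTimeDomainEnhancer()
    x = torch.randn(1, NUM_MICS, 16000)      # 1 s of 3-mic audio at 16 kHz
    print(model(x).shape)                    # torch.Size([1, 1, 16000])
```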

    End-To-End Time-Domain Multitask Learning for ML-Based Speech Enhancement

    Publication Number: US20220366927A1

    Publication Date: 2022-11-17

    Application Number: US17321411

    Filing Date: 2021-05-15

    Applicant: Apple Inc.

    Abstract: Disclosed is a multi-task machine learning model, such as a time-domain deep neural network (DNN), that jointly generates an enhanced target speech signal and target audio parameters from a mixed signal of target speech and an interference signal. The DNN may encode the mixed signal, determine masks used to jointly estimate the target signal and the target audio parameters based on the encoded mixed signal, apply the masks to separate the target speech from the interference signal and jointly estimate the target signal and the target audio parameters, and decode the masked features to enhance the target speech signal and to estimate the target audio parameters. The target audio parameters may include a voice activity detection (VAD) flag of the target speech. The DNN may leverage multi-channel audio signals and multi-modal signals, such as video signals of the target speaker, to improve the robustness of the enhanced target speech signal.
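
    To make the multitask structure concrete, the sketch below shows one hypothetical way a single time-domain network could emit both an enhanced waveform and a per-frame VAD probability from shared masked features. The architecture, layer sizes, and names (MultitaskEnhancer, vad_head, etc.) are assumptions for illustration, not the claimed design.

```python
# Minimal sketch (assumed architecture, not the patent's): a time-domain DNN that
# jointly outputs an enhanced target-speech waveform and a per-frame VAD probability.
import torch
import torch.nn as nn

CH, KERNEL, STRIDE = 64, 16, 8   # illustrative encoder settings

class MultitaskEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv1d(1, CH, KERNEL, stride=STRIDE)
        self.mask_net = nn.Sequential(
            nn.Conv1d(CH, CH, 3, padding=1), nn.ReLU(),
            nn.Conv1d(CH, CH, 3, padding=1), nn.Sigmoid(),
        )
        self.decoder = nn.ConvTranspose1d(CH, 1, KERNEL, stride=STRIDE)
        # Auxiliary head: per-frame voice activity detection (VAD) probability.
        self.vad_head = nn.Sequential(nn.Conv1d(CH, 1, 1), nn.Sigmoid())

    def forward(self, mixture):                  # mixture: (batch, 1, samples)
        feats = self.encoder(mixture)            # encode the mixed signal
        mask = self.mask_net(feats)              # mask separating target speech
        masked = feats * mask
        speech = self.decoder(masked)            # enhanced target waveform
        vad = self.vad_head(masked)              # (batch, 1, frames) VAD probability
        return speech, vad

if __name__ == "__main__":
    speech, vad = MultitaskEnhancer()(torch.randn(1, 1, 16000))
    print(speech.shape, vad.shape)
```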

    DEEP LEARNING DRIVEN MULTI-CHANNEL FILTERING FOR SPEECH ENHANCEMENT

    Publication Number: US20190172476A1

    Publication Date: 2019-06-06

    Application Number: US15830955

    Filing Date: 2017-12-04

    Applicant: Apple Inc.

    Abstract: A number of features are extracted from a current frame of a multi-channel speech pickup and from side information that is a linear echo estimate, a diffuse signal component, or a noise estimate of the multi-channel speech pickup. A deep neural network (DNN)-based speech presence probability (SPP) value is produced for the current frame in response to the extracted features being input to the DNN. The DNN-based SPP value is applied to configure a multi-channel filter whose input is the multi-channel speech pickup and whose output is a single audio signal. In one aspect, the system is designed to run online, at low enough latency for real-time applications such as voice trigger detection. Other aspects are also described and claimed.
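
    The key mechanism in the abstract is that a DNN-derived SPP value configures a multi-channel filter. The NumPy sketch below shows one common way such a probability could drive a filter: it gates the recursive update of a noise covariance estimate that then parameterizes an MVDR-style beamformer. The update rule, smoothing constant, and steering vector are illustrative assumptions, not the patented filter.

```python
# Minimal NumPy sketch (illustrative only): a per-frame speech presence probability
# (SPP) from a DNN controls how a multi-channel noise covariance estimate is
# updated, which in turn configures an MVDR-style multi-channel filter.
import numpy as np

def update_noise_cov(phi_nn, x, spp, alpha=0.95):
    """Recursively update the noise covariance; low SPP -> frame treated as noise.
    x: (mics,) STFT vector for one frequency bin, spp: scalar in [0, 1]."""
    beta = alpha + (1.0 - alpha) * spp          # update slowly when speech is present
    return beta * phi_nn + (1.0 - beta) * np.outer(x, x.conj())

def mvdr_weights(phi_nn, steering):
    """w = Phi^-1 d / (d^H Phi^-1 d) for one frequency bin."""
    num = np.linalg.solve(phi_nn, steering)
    return num / (steering.conj() @ num)

# Toy usage with random data standing in for one STFT frame of a 3-mic pickup.
rng = np.random.default_rng(0)
mics = 3
x = rng.standard_normal(mics) + 1j * rng.standard_normal(mics)
phi_nn = np.eye(mics, dtype=complex)            # running noise covariance
spp = 0.8                                       # DNN-estimated speech presence probability
phi_nn = update_noise_cov(phi_nn, x, spp)
w = mvdr_weights(phi_nn, steering=np.ones(mics, dtype=complex))
single_channel_output = w.conj() @ x            # multi-channel pickup -> one signal
```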

    Hybrid learning-based and statistical processing techniques for voice activity detection

    Publication Number: US11341988B1

    Publication Date: 2022-05-24

    Application Number: US16578802

    Filing Date: 2019-09-23

    Applicant: Apple Inc.

    Abstract: A hybrid machine learning-based and DSP statistical post-processing technique is disclosed for voice activity detection. The hybrid technique may use a DNN model with a small context window to estimate the probability of speech frame by frame. The DSP statistical post-processing stage operates on the frame-based speech probabilities from the DNN model to smooth the probabilities and to reduce transitions between speech and non-speech states. The hybrid technique may estimate a soft decision on detected speech in each frame based on the smoothed probabilities, generate a hard decision using a threshold, detect a complete utterance that may include brief pauses, and estimate the end point of the utterance. The hybrid voice activity detection technique may incorporate a target directional probability estimator to estimate the direction of the speech source. The DSP statistical post-processing module may use the direction of the speech source to inform the estimates of the voice activity.
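
    As a rough illustration of the DSP post-processing stage described above, the sketch below smooths per-frame DNN probabilities, applies a threshold with a hangover counter so brief pauses stay inside one utterance, and reports an end point. The smoothing constant, threshold, and hangover length are arbitrary assumptions, not values from the patent.

```python
# Minimal sketch (assumed post-processing, not the patented DSP stage): smooth
# per-frame DNN speech probabilities, derive hard decisions with a hangover that
# bridges brief pauses, and estimate the utterance end point.
import numpy as np

def smooth_and_decide(probs, alpha=0.9, threshold=0.5, hangover_frames=20):
    smoothed = np.zeros_like(probs)
    decisions = np.zeros(len(probs), dtype=bool)
    state, hangover, s = False, 0, 0.0
    for i, p in enumerate(probs):
        s = alpha * s + (1.0 - alpha) * p        # recursive smoothing (soft decision)
        smoothed[i] = s
        if s >= threshold:
            state, hangover = True, hangover_frames
        elif state:
            hangover -= 1                        # bridge brief pauses within an utterance
            if hangover <= 0:
                state = False
        decisions[i] = state                     # hard decision per frame
    end_point = int(np.nonzero(decisions)[0].max()) if decisions.any() else -1
    return smoothed, decisions, end_point

# Toy usage: speech-like probabilities with a short pause in the middle.
probs = np.concatenate([np.full(30, 0.9), np.full(5, 0.2),
                        np.full(30, 0.9), np.full(40, 0.05)])
_, decisions, end = smooth_and_decide(probs)
print(decisions.sum(), end)                      # frames flagged as speech, end frame
```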

    System and method for performing speech enhancement using a deep neural network-based signal

    Publication Number: US10074380B2

    Publication Date: 2018-09-11

    Application Number: US15227885

    Filing Date: 2016-08-03

    Applicant: Apple Inc.

    CPC classification number: G10L21/0232 G10L25/30 G10L25/87 G10L2021/02082

    Abstract: A method for performing speech enhancement using a Deep Neural Network (DNN)-based signal starts with training the DNN offline by exciting a microphone with a target training signal that includes a signal approximation of clean speech. A loudspeaker is driven with a reference signal and outputs a loudspeaker signal. The microphone then generates a microphone signal based on at least one of: a near-end speaker signal, an ambient noise signal, or the loudspeaker signal. An acoustic echo canceller (AEC) generates an AEC echo-cancelled signal based on the reference signal and the microphone signal. A loudspeaker signal estimator generates an estimated loudspeaker signal based on the microphone signal and the AEC echo-cancelled signal. The DNN receives the microphone signal, the reference signal, the AEC echo-cancelled signal, and the estimated loudspeaker signal and generates a speech reference signal that includes signal statistics for residual echo or for noise. A noise suppressor generates a clean speech signal by suppressing noise or residual echo in the microphone signal based on the speech reference signal. Other embodiments are described.
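
    The sketch below illustrates the data flow in the abstract with a toy model: per-frame spectral features of the microphone signal, far-end reference, AEC output, and estimated loudspeaker signal feed a small DNN whose output (a stand-in for the speech reference's residual echo/noise statistics) drives a Wiener-like suppressor. The feature choice, network shape, and gain rule are assumptions for illustration only.

```python
# Minimal sketch (hypothetical, not the patented system): a DNN consumes magnitude
# spectra of the microphone, reference, AEC output, and estimated loudspeaker
# signals and produces per-bin residual echo / noise statistics that a simple
# suppressor turns into a spectral gain.
import torch
import torch.nn as nn

N_BINS = 257                      # e.g., 512-point STFT magnitude bins (assumption)

class SpeechReferenceDNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Inputs: [mic, reference, AEC output, estimated loudspeaker] magnitudes.
        self.net = nn.Sequential(
            nn.Linear(4 * N_BINS, 512), nn.ReLU(),
            nn.Linear(512, N_BINS), nn.Softplus(),   # per-bin residual echo / noise estimate
        )

    def forward(self, mic, ref, aec_out, est_spk):
        feats = torch.cat([mic, ref, aec_out, est_spk], dim=-1)
        return self.net(feats)

def suppress(mic_mag, noise_est, floor=0.1):
    """Wiener-like gain driven by the DNN's speech-reference statistics."""
    gain = torch.clamp(1.0 - noise_est / (mic_mag + 1e-8), min=floor)
    return gain * mic_mag

# Toy usage: one frame of magnitude spectra for each input signal.
dnn = SpeechReferenceDNN()
mic, ref, aec, spk = (torch.rand(1, N_BINS) for _ in range(4))
clean = suppress(mic, dnn(mic, ref, aec, spk))
```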

    End-to-end time-domain multitask learning for ML-based speech enhancement

    Publication Number: US11996114B2

    Publication Date: 2024-05-28

    Application Number: US17321411

    Filing Date: 2021-05-15

    Applicant: Apple Inc.

    CPC classification number: G10L21/0216 G06N20/00 G10L15/16 G10L2021/02166

    Abstract: Disclosed is a multi-task machine learning model, such as a time-domain deep neural network (DNN), that jointly generates an enhanced target speech signal and target audio parameters from a mixed signal of target speech and an interference signal. The DNN may encode the mixed signal, determine masks used to jointly estimate the target signal and the target audio parameters based on the encoded mixed signal, apply the masks to separate the target speech from the interference signal and jointly estimate the target signal and the target audio parameters, and decode the masked features to enhance the target speech signal and to estimate the target audio parameters. The target audio parameters may include a voice activity detection (VAD) flag of the target speech. The DNN may leverage multi-channel audio signals and multi-modal signals, such as video signals of the target speaker, to improve the robustness of the enhanced target speech signal.
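
    This granted patent shares the abstract of the earlier publication above, so rather than repeat the multitask sketch, the snippet below illustrates only the multi-modal aspect: a hypothetical fusion of a video embedding of the target speaker with encoded audio features before mask estimation. Sizes and names are assumptions, not the patented design.

```python
# Minimal sketch (assumed, not the patent's architecture) of the multi-modal idea:
# a video embedding of the target speaker is fused with encoded audio features
# before the mask for the target speech is computed.
import torch
import torch.nn as nn

AUDIO_CH, VIDEO_DIM = 64, 32     # illustrative sizes

class AudioVisualFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.audio_enc = nn.Conv1d(1, AUDIO_CH, 16, stride=8)
        self.video_proj = nn.Linear(VIDEO_DIM, AUDIO_CH)
        self.mask_net = nn.Sequential(nn.Conv1d(2 * AUDIO_CH, AUDIO_CH, 1), nn.Sigmoid())

    def forward(self, audio, video_emb):
        feats = self.audio_enc(audio)                       # (batch, AUDIO_CH, frames)
        v = self.video_proj(video_emb).unsqueeze(-1)        # (batch, AUDIO_CH, 1)
        v = v.expand(-1, -1, feats.size(-1))                # broadcast over frames
        mask = self.mask_net(torch.cat([feats, v], dim=1))  # fused mask for target speech
        return feats * mask

if __name__ == "__main__":
    out = AudioVisualFusion()(torch.randn(1, 1, 16000), torch.randn(1, VIDEO_DIM))
    print(out.shape)
```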
