EFFICIENT STREAMING NON-RECURRENT ON-DEVICE END-TO-END MODEL

    Publication number: US20230343328A1

    Publication date: 2023-10-26

    Application number: US18336211

    Application date: 2023-06-16

    Applicant: Google LLC

    CPC classification number: G10L15/063 G10L15/02 G10L15/22 G10L15/30

    Abstract: An ASR model includes a first encoder configured to receive a sequence of acoustic frames and generate, at each of a plurality of output steps, a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The ASR model also includes a second encoder configured to receive the first higher order feature representation generated by the first encoder at each of the plurality of output steps and generate a second higher order feature representation for a corresponding first higher order feature frame. The ASR model also includes a decoder configured to receive the second higher order feature representation generated by the second encoder at each of the plurality of output steps and generate a first probability distribution over possible speech recognition hypotheses. The ASR model also includes a language model configured to receive the first probability distribution over possible speech recognition hypotheses and generate a rescored probability distribution.
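
    As a rough illustration of this architecture, the sketch below wires the two encoders, the decoder, and the rescoring language model together in PyTorch. The layer types (LSTM encoders, a linear decoder, a linear rescoring model), dimensions, and class and attribute names are assumptions made for readability; the abstract does not specify any of these details.

    # Minimal sketch of the cascaded-encoder ASR model with LM rescoring described
    # in the abstract above. All layer choices and sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class CascadedASRWithLMRescoring(nn.Module):
        def __init__(self, feat_dim=80, enc_dim=512, vocab_size=4096):
            super().__init__()
            # First encoder: consumes acoustic frames, emits first higher order features.
            self.first_encoder = nn.LSTM(feat_dim, enc_dim, num_layers=2, batch_first=True)
            # Second encoder: consumes the first encoder's output at every output step.
            self.second_encoder = nn.LSTM(enc_dim, enc_dim, num_layers=2, batch_first=True)
            # Decoder: maps second higher order features to a distribution over hypotheses.
            self.decoder = nn.Linear(enc_dim, vocab_size)
            # Stand-in language model that rescores the first-pass distribution.
            self.language_model = nn.Linear(vocab_size, vocab_size)

        def forward(self, acoustic_frames):            # (batch, time, feat_dim)
            first_feats, _ = self.first_encoder(acoustic_frames)
            second_feats, _ = self.second_encoder(first_feats)
            # First probability distribution over possible speech recognition hypotheses.
            first_pass = torch.log_softmax(self.decoder(second_feats), dim=-1)
            # Rescored probability distribution produced by the language model.
            return torch.log_softmax(self.language_model(first_pass), dim=-1)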

    Optimizing Personal VAD for On-Device Speech Recognition

    Publication number: US20230298591A1

    Publication date: 2023-09-21

    Application number: US18123060

    Application date: 2023-03-17

    Applicant: Google LLC

    CPC classification number: G10L17/06 G10L17/22

    Abstract: A computer-implemented method includes receiving a sequence of acoustic frames corresponding to an utterance and generating a reference speaker embedding for the utterance. The method also includes receiving a target speaker embedding for a target speaker and generating feature-wise linear modulation (FiLM) parameters including a scaling vector and a shifting vector based on the target speaker embedding. The method also includes generating an affine transformation output that scales and shifts the reference speaker embedding based on the FiLM parameters. The method also includes generating a classification output indicating whether the utterance was spoken by the target speaker based on the affine transformation output.
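
    The FiLM step can be pictured with a short PyTorch sketch: one layer predicts the scaling and shifting vectors from the target speaker embedding, the reference embedding is scaled and shifted, and a classifier scores whether the utterance came from the target speaker. The layer choices, dimensions, and names (film_generator, classifier, embed_dim) are illustrative assumptions, not details taken from the abstract.

    # Minimal sketch of the FiLM-based personal VAD classification step described above.
    import torch
    import torch.nn as nn

    class PersonalVAD(nn.Module):
        def __init__(self, embed_dim=256):
            super().__init__()
            # Predicts the FiLM parameters (scaling and shifting vectors) from the
            # target speaker embedding.
            self.film_generator = nn.Linear(embed_dim, 2 * embed_dim)
            # Scores whether the utterance was spoken by the target speaker.
            self.classifier = nn.Linear(embed_dim, 1)

        def forward(self, reference_embedding, target_embedding):
            # Scaling vector (gamma) and shifting vector (beta) from the target embedding.
            gamma, beta = self.film_generator(target_embedding).chunk(2, dim=-1)
            # Affine transformation output: scale and shift the reference embedding.
            affine_output = gamma * reference_embedding + beta
            # Classification output for the target-speaker decision.
            return torch.sigmoid(self.classifier(affine_output))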

    STFT-Based Echo Muter

    Publication number: US20230079828A1

    Publication date: 2023-03-16

    Application number: US17643825

    Application date: 2021-12-11

    Applicant: Google LLC

    Abstract: A method for Short-Time Fourier Transform-based echo muting includes receiving a microphone signal including acoustic echo captured by a microphone and corresponding to audio content from an acoustic speaker, and receiving a reference signal including a sequence of frames representing the audio content. For each frame in the sequence of frames, the method includes processing, using an acoustic echo canceler, the respective frame to generate a respective output signal frame that cancels the acoustic echo from the respective frame, and determining, using a Double-talk Detector (DTD), based on the respective frame and the respective output signal frame, whether the respective frame includes a double-talk frame or an echo-only frame. The method also includes muting the respective output signal frame for each respective frame that includes the echo-only frame, and performing speech processing on the respective output signal frame for each respective frame that includes the double-talk frame.
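
    The per-frame control flow can be summarized in a short Python sketch. The aec, dtd, and speech_processor callables are placeholders standing in for the acoustic echo canceler, the Double-talk Detector, and downstream speech processing; they are assumptions for illustration, not real library APIs.

    # Minimal sketch of the frame-by-frame echo-muting loop described above.
    import numpy as np

    def mute_echo(mic_frames, ref_frames, aec, dtd, speech_processor):
        """For each frame: cancel the echo, detect double-talk, then either mute
        the output (echo-only frame) or pass it on (double-talk frame)."""
        outputs = []
        for mic_frame, ref_frame in zip(mic_frames, ref_frames):
            # Cancel the acoustic echo from the microphone frame using the reference.
            out_frame = aec(mic_frame, ref_frame)
            # DTD decides, from the input frame and the canceler output, whether
            # this is a double-talk frame or an echo-only frame.
            if dtd(mic_frame, out_frame):
                speech_processor(out_frame)        # double-talk frame: keep and process
                outputs.append(out_frame)
            else:
                outputs.append(np.zeros_like(out_frame))   # echo-only frame: mute
        return outputs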

    Cascaded Encoders for Simplified Streaming and Non-Streaming ASR

    Publication number: US20220122622A1

    Publication date: 2022-04-21

    Application number: US17237021

    Application date: 2021-04-21

    Applicant: Google LLC

    Abstract: An automated speech recognition (ASR) model includes a first encoder, a second encoder, and a decoder. The first encoder receives, as input, a sequence of acoustic frames, and generates, at each of a plurality of output steps, a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The second encoder receives, as input, the first higher order feature representation generated by the first encoder at each of the plurality of output steps, and generates, at each of the plurality of output steps, a second higher order feature representation for a corresponding first higher order feature frame. The decoder receives, as input, the second higher order feature representation generated by the second encoder at each of the plurality of output steps, and generates, at each of the plurality of time steps, a first probability distribution over possible speech recognition hypotheses.
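
    The title suggests the same cascade can serve both streaming and non-streaming recognition from one model, and the sketch below shows one simple way to picture that in PyTorch. The layer types (LSTMs, a linear decoder), their sizes, and the streaming flag that decodes the first (causal) encoder's output directly are illustrative assumptions; the abstract itself only specifies decoding the second encoder's output.

    # Minimal sketch of the cascaded encoders with a shared decoder described above.
    import torch
    import torch.nn as nn

    class CascadedEncoders(nn.Module):
        def __init__(self, feat_dim=80, enc_dim=512, vocab_size=4096):
            super().__init__()
            self.first_encoder = nn.LSTM(feat_dim, enc_dim, num_layers=2, batch_first=True)
            self.second_encoder = nn.LSTM(enc_dim, enc_dim, num_layers=2, batch_first=True)
            self.decoder = nn.Linear(enc_dim, vocab_size)

        def forward(self, acoustic_frames, streaming=False):   # (batch, time, feat_dim)
            # First higher order feature representation, one per acoustic frame.
            first_feats, _ = self.first_encoder(acoustic_frames)
            if streaming:
                # Assumed streaming pass: decode the causal first encoder directly.
                return torch.log_softmax(self.decoder(first_feats), dim=-1)
            # Second higher order feature representation, one per first-encoder frame.
            second_feats, _ = self.second_encoder(first_feats)
            # First probability distribution over possible speech recognition hypotheses.
            return torch.log_softmax(self.decoder(second_feats), dim=-1)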
