Advancing the Use of Text and Speech in ASR Pretraining With Consistency and Contrastive Losses

    Publication No.: US20230013587A1

    Publication Date: 2023-01-19

    Application No.: US17722264

    Filing Date: 2022-04-15

    Applicant: Google LLC

    Abstract: A method includes receiving training data that includes unspoken text utterances, un-transcribed non-synthetic speech utterances, and transcribed non-synthetic speech utterances. Each unspoken text utterance is not paired with any corresponding spoken utterance of non-synthetic speech. Each un-transcribed non-synthetic speech utterance is not paired with a corresponding transcription. Each transcribed non-synthetic speech utterance is paired with a corresponding transcription. The method also includes generating a corresponding synthetic speech representation for each unspoken textual utterance of the received training data using a text-to-speech model. The method also includes pre-training an audio encoder on the synthetic speech representations generated for the unspoken textual utterances, the un-transcribed non-synthetic speech utterances, and the transcribed non-synthetic speech utterances to teach the audio encoder to jointly learn shared speech and text representations.
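The three training sources the abstract enumerates can be illustrated with a minimal sketch. This is not the patented method; `tts_model` is a hypothetical callable standing in for the text-to-speech model, and the data layout is an assumption for illustration only.

```python
def assemble_pretraining_data(unspoken_texts, untranscribed_speech,
                              transcribed_pairs, tts_model):
    """Assemble the three data sources described in the abstract.

    tts_model is a hypothetical callable mapping a text string to a
    synthetic speech representation (any opaque object here).
    """
    # 1. Synthesize a speech representation for each unspoken text utterance.
    synthetic_speech = [tts_model(text) for text in unspoken_texts]
    # 2. Un-transcribed non-synthetic speech contributes audio-only data.
    # 3. Transcribed speech keeps its (audio, transcription) pairing.
    return {
        "synthetic": synthetic_speech,
        "untranscribed": list(untranscribed_speech),
        "transcribed": list(transcribed_pairs),
    }
```

The audio encoder would then be pre-trained on all three buckets jointly, per the abstract.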

    Phonemes And Graphemes for Neural Text-to-Speech

    Publication No.: US20220310059A1

    Publication Date: 2022-09-29

    Application No.: US17643684

    Filing Date: 2021-12-10

    Applicant: Google LLC

    Abstract: A method includes receiving a text input including a sequence of words represented as an input encoder embedding. The input encoder embedding includes a plurality of tokens, with the plurality of tokens including a first set of grapheme tokens representing the text input as respective graphemes and a second set of phoneme tokens representing the text input as respective phonemes. The method also includes, for each respective phoneme token of the second set of phoneme tokens: identifying a respective word of the sequence of words corresponding to the respective phoneme token and determining a respective grapheme token representing the respective word of the sequence of words corresponding to the respective phoneme token. The method also includes generating an output encoder embedding based on a relationship between each respective phoneme token and the corresponding grapheme token determined to represent a same respective word as the respective phoneme token.
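The per-word pairing of phoneme and grapheme tokens that the abstract describes can be sketched with plain word indices. The function name and data layout are illustrative assumptions, not the patent's encoding.

```python
def pair_phonemes_with_graphemes(phoneme_word_ids, grapheme_word_ids,
                                 grapheme_tokens):
    """For each phoneme token, return the grapheme token determined to
    represent the same word, using per-token word indices.

    Where a word spans several grapheme tokens, the first one is used
    as its representative (an illustrative simplification).
    """
    word_to_grapheme = {}
    for token, word_id in zip(grapheme_tokens, grapheme_word_ids):
        word_to_grapheme.setdefault(word_id, token)
    return [word_to_grapheme[word_id] for word_id in phoneme_word_ids]
```

For example, with grapheme tokens for two words and five phoneme tokens, each phoneme token is mapped to its word's first grapheme token.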

    Speech recognition with sequence-to-sequence models

    Publication No.: US11335333B2

    Publication Date: 2022-05-17

    Application No.: US16717746

    Filing Date: 2019-12-17

    Applicant: Google LLC

    Abstract: A method includes obtaining audio data for a long-form utterance and segmenting the audio data for the long-form utterance into a plurality of overlapping segments. The method also includes, for each overlapping segment of the plurality of overlapping segments: providing features indicative of acoustic characteristics of the long-form utterance represented by the corresponding overlapping segment as input to an encoder neural network; processing an output of the encoder neural network using an attender neural network to generate a context vector; and generating word elements using the context vector and a decoder neural network. The method also includes generating a transcription for the long-form utterance by merging the word elements from the plurality of overlapping segments and providing the transcription as an output of the automated speech recognition system.
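The segmentation step can be sketched as follows. Segment length and overlap are hypothetical parameters; the merge of word elements across overlaps is only noted in a comment, since the abstract does not specify how it is done.

```python
def segment_overlapping(frames, seg_len, overlap):
    """Split a long-form feature sequence into overlapping segments.

    Each segment after the first shares `overlap` frames with its
    predecessor; downstream, word elements emitted in the overlapping
    regions would be merged into a single transcription.
    """
    step = seg_len - overlap
    segments = []
    for start in range(0, max(len(frames) - overlap, 1), step):
        segments.append(frames[start:start + seg_len])
    return segments
```

Each segment would then be fed through the encoder, attender, and decoder networks independently before merging.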

    TEXT-TO-SPEECH USING DURATION PREDICTION

    Publication No.: US20220108680A1

    Publication Date: 2022-04-07

    Application No.: US17492543

    Filing Date: 2021-10-01

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, synthesizing audio data from text data using duration prediction. One of the methods includes processing an input text sequence that includes a respective text element at each of multiple input time steps using a first neural network to generate a modified input sequence comprising, for each input time step, a representation of the corresponding text element in the input text sequence; processing the modified input sequence using a second neural network to generate, for each input time step, a predicted duration of the corresponding text element in the output audio sequence; upsampling the modified input sequence according to the predicted durations to generate an intermediate sequence comprising a respective intermediate element at each of a plurality of intermediate time steps; and generating an output audio sequence using the intermediate sequence.
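The upsampling step maps each text-element representation to a run of output frames of its predicted duration. A minimal sketch, assuming integer frame counts; the actual patented system predicts durations with a neural network.

```python
def upsample_by_duration(representations, durations):
    """Repeat each element's representation `duration` times so the
    intermediate sequence has one entry per predicted output frame."""
    intermediate = []
    for rep, dur in zip(representations, durations):
        intermediate.extend([rep] * dur)
    return intermediate
```

The resulting intermediate sequence has length equal to the sum of the predicted durations, and is what the final stage converts into audio.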

    Processing text sequences using neural networks

    Publication No.: US11182566B2

    Publication Date: 2021-11-23

    Application No.: US16338174

    Filing Date: 2017-10-03

    Applicant: Google LLC

    Abstract: A computer-implemented method trains a neural network configured to generate a score distribution over a set of multiple output positions. The neural network is configured to process a network input to generate a respective score distribution for each of a plurality of output positions, including a respective score for each token in a predetermined set of tokens that includes n-grams of multiple different sizes. Example methods described herein provide trained neural networks that produce results with improved accuracy compared to the state of the art, e.g., more accurate translations or more accurate speech recognition.

    MULTILINGUAL SPEECH SYNTHESIS AND CROSS-LANGUAGE VOICE CLONING

    Publication No.: US20200380952A1

    Publication Date: 2020-12-03

    Application No.: US16855042

    Filing Date: 2020-04-22

    Applicant: Google LLC

    Abstract: A method includes receiving an input text sequence to be synthesized into speech in a first language and obtaining a speaker embedding, the speaker embedding specifying specific voice characteristics of a target speaker for synthesizing the input text sequence into speech that clones a voice of the target speaker. The target speaker includes a native speaker of a second language different than the first language. The method also includes generating, using a text-to-speech (TTS) model, an output audio feature representation of the input text by processing the input text sequence and the speaker embedding. The output audio feature representation includes the voice characteristics of the target speaker specified by the speaker embedding.

    Dynamic Adjustment of Delivery Location Based on User Location

    Publication No.: US20200265383A1

    Publication Date: 2020-08-20

    Application No.: US16865970

    Filing Date: 2020-05-04

    Applicant: Google LLC

    Inventor: Yu Zhang

    Abstract: A user places an order on a merchant website associated with a merchant system via a user computing device. The user selects an option for delivery to the user computing device's location within a delivery area during a delivery time window, and authorizes a delivery system to log the location of the user computing device during the delivery time window and/or for a period of time before it. When the delivery time window arrives, the delivery system provides a delivery route to a delivery agent computing device. When the delivery agent arrives at the user computing device's location, the user receives an alert that the delivery agent has arrived and receives a package from the delivery agent. If the user does not remain within the delivery area, the user may cancel the order and the delivery, reschedule the delivery, and/or accept delivery of the order to a fixed shipping address.

    VERY DEEP CONVOLUTIONAL NEURAL NETWORKS FOR END-TO-END SPEECH RECOGNITION

    Publication No.: US20200090044A1

    Publication Date: 2020-03-19

    Application No.: US16692538

    Filing Date: 2019-11-22

    Applicant: Google LLC

    Abstract: A speech recognition neural network system includes an encoder neural network and a decoder neural network. The encoder neural network generates an encoded sequence from an input acoustic sequence that represents an utterance. The input acoustic sequence includes a respective acoustic feature representation at each of a plurality of input time steps, the encoded sequence includes a respective encoded representation at each of a plurality of time-reduced time steps, and the number of time-reduced time steps is less than the number of input time steps. The encoder neural network includes a time-reduction subnetwork, a convolutional LSTM subnetwork, and a network-in-network subnetwork. The decoder neural network receives the encoded sequence and processes it to generate, for each position in an output sequence order, a set of substring scores that includes a respective substring score for each substring in a set of substrings.
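The effect of the time-reduction subnetwork — fewer, wider time steps — can be sketched by concatenating adjacent feature frames. The reduction factor and the concatenation scheme here are illustrative assumptions, not the patent's architecture.

```python
def time_reduce(features, factor):
    """Concatenate every `factor` consecutive frames, shortening the
    sequence by that factor (a trailing remainder is dropped)."""
    reduced = []
    for i in range(0, len(features) - factor + 1, factor):
        merged = []
        for frame in features[i:i + factor]:
            merged.extend(frame)
        reduced.append(merged)
    return reduced
```

With a factor of 2, a sequence of N frames of dimension d becomes roughly N/2 frames of dimension 2d, which is what makes the encoded sequence shorter than the input.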
