-
Publication No.: US20210390271A1
Publication Date: 2021-12-16
Application No.: US17459111
Filing Date: 2021-08-27
Applicant: Google LLC
Inventor: Mohammad Norouzi, Zhifeng Chen, Yonghui Wu, Michael Schuster, Quoc V. Le
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural machine translation. The method comprises obtaining a first sequence of words in a source language, generating a modified sequence of words in the source language by inserting a word boundary symbol only at the beginning of each word in the first sequence of words and not at the end of each word, dividing the modified sequence of words into wordpieces using a wordpiece model, generating, from the wordpieces, an input sequence of input tokens for a neural machine translation system; and generating an output sequence of words using the neural machine translation system based on the input sequence of input tokens.
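The boundary-marking and wordpiece-splitting steps described in this abstract can be sketched as follows. The marker symbol `_`, the toy vocabulary, and the greedy longest-match strategy are illustrative assumptions, not the patent's actual wordpiece model.

```python
def mark_boundaries(words):
    """Prepend a boundary symbol to each word; never append one."""
    return ["_" + w for w in words]

def greedy_wordpieces(marked_word, vocab):
    """Greedy longest-match split of one marked word into wordpieces."""
    pieces, i = [], 0
    while i < len(marked_word):
        for j in range(len(marked_word), i, -1):
            if marked_word[i:j] in vocab:
                pieces.append(marked_word[i:j])
                i = j
                break
        else:
            pieces.append(marked_word[i])  # fall back to a single character
            i += 1
    return pieces

vocab = {"_the", "_play", "er", "_run", "s"}
tokens = [p for w in mark_boundaries(["the", "player", "runs"])
          for p in greedy_wordpieces(w, vocab)]
print(tokens)  # ['_the', '_play', 'er', '_run', 's']
```

Because the boundary symbol marks only word beginnings, the original word sequence can be recovered by concatenating pieces and splitting on the marker.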
-
Publication No.: US10713593B2
Publication Date: 2020-07-14
Application No.: US15394708
Filing Date: 2016-12-29
Applicant: Google LLC
Inventor: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model, wherein the machine learning model has been trained on training data to perform a plurality of machine learning tasks including the first machine learning task, and wherein the machine learning model has been configured through training to process the augmented model input to generate a machine learning model output of the first type for the model input.
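The augmentation step above — prefixing the model input with a task identifier — can be sketched minimally. The `<2es>`-style token mirrors the convention used in multilingual translation systems but is an assumption here, not the patent's required format.

```python
def augment_input(model_input, task_id):
    """Prefix the input token sequence with an identifier for the task."""
    return [f"<{task_id}>"] + list(model_input)

# A single trained multi-task model would then consume the augmented input;
# only the identifier tells it which task to perform.
augmented = augment_input(["Hello", "world"], "2es")
print(augmented)  # ['<2es>', 'Hello', 'world']
```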
-
Publication No.: US10679148B2
Publication Date: 2020-06-09
Application No.: US16402787
Filing Date: 2019-05-03
Applicant: Google LLC
Inventor: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model. An exemplary system applying implicit bridging for machine learning tasks, as described in this specification, trains a machine learning model to perform certain types of machine learning tasks without requiring explicit training data for the certain types of machine learning tasks to be used during training.
-
Publication No.: US12148444B2
Publication Date: 2024-11-19
Application No.: US17222736
Filing Date: 2021-04-05
Applicant: Google LLC
Inventor: Yonghui Wu, Jonathan Shen, Ruoming Pang, Ron J. Weiss, Michael Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, Russell John Wyatt Skerry-Ryan, Ryan M. Rifkin, Ioannis Agiomyrgiannakis
Abstract: Methods, systems, and computer program products for generating, from an input character sequence, an output sequence of audio data representing the input character sequence. The output sequence of audio data includes a respective audio output sample for each of a number of time steps. One example method includes, for each of the time steps: generating a mel-frequency spectrogram for the time step by processing a representation of a respective portion of the input character sequence using a decoder neural network; generating a probability distribution over a plurality of possible audio output samples for the time step by processing the mel-frequency spectrogram for the time step using a vocoder neural network; and selecting the audio output sample for the time step from the possible audio output samples in accordance with the probability distribution.
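The per-time-step loop in this abstract — decoder produces a mel-spectrogram frame, vocoder turns it into a distribution over possible audio samples, one sample is selected — can be sketched with stand-in stubs. Both "networks" here are random linear maps for illustration only, not the patented architectures, and the 256-sample output space is an assumed 8-bit quantization.

```python
import numpy as np

rng = np.random.default_rng(0)
N_MELS, N_SAMPLES = 80, 256  # assumed mel-bin count and 8-bit sample space

def decoder_step(char_repr):
    """Stub decoder: map a character representation to a mel frame."""
    return np.tanh(char_repr @ rng.standard_normal((4, N_MELS)))

def vocoder_step(mel_frame):
    """Stub vocoder: map a mel frame to a distribution over samples."""
    logits = mel_frame @ rng.standard_normal((N_MELS, N_SAMPLES))
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    return probs / probs.sum()

audio = []
for t in range(3):  # three time steps
    mel = decoder_step(rng.standard_normal(4))
    dist = vocoder_step(mel)
    audio.append(rng.choice(N_SAMPLES, p=dist))  # select per the distribution
print(audio)
```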
-
Publication No.: US20220083746A1
Publication Date: 2022-03-17
Application No.: US17459041
Filing Date: 2021-08-27
Applicant: Google LLC
Inventor: Zhifeng Chen, Macduff Richard Hughes, Yonghui Wu, Michael Schuster, Xu Chen, Llion Owen Jones, Niki J. Parmar, George Foster, Orhan Firat, Ankur Bapna, Wolfgang Macherey, Melvin Jose Johnson Premkumar
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for machine translation using neural networks. In some implementations, a text in one language is translated into a second language using a neural network model. The model can include an encoder neural network comprising a plurality of bidirectional recurrent neural network layers. The encoder produces encoding vectors, which are processed using a multi-headed attention module configured to generate multiple attention context vectors for each encoding vector. A decoder neural network generates a sequence of decoder output vectors using the attention context vectors. The decoder output vectors can represent distributions over various language elements of the second language, allowing a translation of the text into the second language to be determined based on the sequence of decoder output vectors.
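The multi-headed attention step above can be sketched as follows: each head projects the query and the encoder vectors with its own weights and yields its own attention context vector. The dimensions and random weights are illustrative assumptions, not the patented model's parameters.

```python
import numpy as np

def multi_head_attention(query, encodings, n_heads=4, d_head=8):
    """One query attends over encoder vectors; one context vector per head."""
    rng = np.random.default_rng(1)
    d_model = encodings.shape[1]
    contexts = []
    for _ in range(n_heads):
        wq = rng.standard_normal((d_model, d_head))
        wk = rng.standard_normal((d_model, d_head))
        wv = rng.standard_normal((d_model, d_head))
        scores = (query @ wq) @ (encodings @ wk).T / np.sqrt(d_head)
        weights = np.exp(scores - scores.max())  # softmax over source positions
        weights /= weights.sum()
        contexts.append(weights @ (encodings @ wv))
    return np.concatenate(contexts)  # heads concatenated into one vector

enc = np.random.default_rng(2).standard_normal((5, 16))  # 5 source positions
ctx = multi_head_attention(enc[0], enc)
print(ctx.shape)  # (32,) = 4 heads x 8 dims
```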
-
Publication No.: US20210295858A1
Publication Date: 2021-09-23
Application No.: US17222736
Filing Date: 2021-04-05
Applicant: Google LLC
Inventor: Yonghui Wu, Jonathan Shen, Ruoming Pang, Ron J. Weiss, Michael Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, Russell John Wyatt Skerry-Ryan, Ryan M. Rifkin, Ioannis Agiomyrgiannakis
Abstract: Methods, systems, and computer program products for generating, from an input character sequence, an output sequence of audio data representing the input character sequence. The output sequence of audio data includes a respective audio output sample for each of a number of time steps. One example method includes, for each of the time steps: generating a mel-frequency spectrogram for the time step by processing a representation of a respective portion of the input character sequence using a decoder neural network; generating a probability distribution over a plurality of possible audio output samples for the time step by processing the mel-frequency spectrogram for the time step using a vocoder neural network; and selecting the audio output sample for the time step from the possible audio output samples in accordance with the probability distribution.
-
Publication No.: US20190325308A1
Publication Date: 2019-10-24
Application No.: US16458506
Filing Date: 2019-07-01
Applicant: Google LLC
Inventor: Junyoung Chung, Melvin Jose Johnson Premkumar, Michael Schuster, Wolfgang Macherey
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing multi-task learning. In one method a system obtains a respective set of training data for each of multiple machine learning tasks. For each of the machine learning tasks, the system configures a respective teacher machine learning model to perform the machine learning task by training the teacher machine learning model on the training data. The system trains a single student machine learning model to perform the multiple machine learning tasks using (i) the configured teacher machine learning models, and (ii) the obtained training data.
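The recipe above — one teacher per task, then a single student fit to all teachers — can be sketched as a toy distillation. The "teachers" here are fixed linear stubs and the student is fit by least squares on their soft targets; this is an illustrative assumption, not the patented training procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_teacher(task_seed, d_in=4, d_out=3):
    """Stub 'configured teacher': a fixed linear map for one task."""
    w = np.random.default_rng(task_seed).standard_normal((d_in, d_out))
    return lambda x: x @ w

teachers = {task: make_teacher(seed) for task, seed in [("pos", 10), ("ner", 11)]}
inputs = {task: rng.standard_normal((32, 4)) for task in teachers}

# Single student: one shared weight matrix, fit to every teacher's
# outputs on that teacher's training data, pooled across tasks.
X = np.vstack([inputs[t] for t in teachers])
Y = np.vstack([teachers[t](inputs[t]) for t in teachers])
student_w, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(student_w.shape)  # (4, 3)
```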
-
Publication No.: US20190188566A1
Publication Date: 2019-06-20
Application No.: US16328207
Filing Date: 2017-08-25
Applicant: Google LLC
Inventor: Michael Schuster, Samuel Bengio, Navdeep Jaitly, Zhifeng Chen, Dale Eric Schuurmans, Mohammad Norouzi, Yonghui Wu
Abstract: A method includes obtaining data identifying a machine learning model to be trained to perform a machine learning task, the machine learning model being configured to receive an input example and to process the input example in accordance with current values of a plurality of model parameters to generate a model output for the input example; obtaining initial training data for training the machine learning model, the initial training data comprising a plurality of training examples and, for each training example, a ground truth output that should be generated by the machine learning model by processing the training example; generating modified training data from the initial training data; and training the machine learning model on the modified training data.
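The "generating modified training data" step can be sketched as follows: some ground-truth outputs are replaced with perturbed versions before training. The specific perturbation here (a random adjacent-token swap) is purely an assumption for illustration; the abstract does not specify how the data is modified.

```python
import random

def modify_training_data(examples, swap_prob=0.5, seed=7):
    """Return examples with some targets perturbed by an adjacent swap."""
    rng = random.Random(seed)
    modified = []
    for inp, target in examples:
        target = list(target)
        if len(target) > 1 and rng.random() < swap_prob:
            i = rng.randrange(len(target) - 1)
            target[i], target[i + 1] = target[i + 1], target[i]
        modified.append((inp, target))
    return modified

data = [("src1", ["a", "b", "c"]), ("src2", ["x", "y"])]
modified = modify_training_data(data)
print(modified)
```

The model would then be trained on `modified` in place of the initial examples.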
-
Publication No.: US20250021889A1
Publication Date: 2025-01-16
Application No.: US18897967
Filing Date: 2024-09-26
Applicant: Google LLC
Inventor: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model, wherein the machine learning model has been trained on training data to perform a plurality of machine learning tasks including the first machine learning task, and wherein the machine learning model has been configured through training to process the augmented model input to generate a machine learning model output of the first type for the model input.
-
Publication No.: US11138392B2
Publication Date: 2021-10-05
Application No.: US16521780
Filing Date: 2019-07-25
Applicant: Google LLC
Inventor: Zhifeng Chen, Macduff Richard Hughes, Yonghui Wu, Michael Schuster, Xu Chen, Llion Owen Jones, Niki J. Parmar, George Foster, Orhan Firat, Ankur Bapna, Wolfgang Macherey, Melvin Jose Johnson Premkumar
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for machine translation using neural networks. In some implementations, a text in one language is translated into a second language using a neural network model. The model can include an encoder neural network comprising a plurality of bidirectional recurrent neural network layers. The encoder produces encoding vectors, which are processed using a multi-headed attention module configured to generate multiple attention context vectors for each encoding vector. A decoder neural network generates a sequence of decoder output vectors using the attention context vectors. The decoder output vectors can represent distributions over various language elements of the second language, allowing a translation of the text into the second language to be determined based on the sequence of decoder output vectors.
-