TRAINING GIANT NEURAL NETWORKS USING PIPELINE PARALLELISM

    Publication Number: US20220121945A1

    Publication Date: 2022-04-21

    Application Number: US17567740

    Application Date: 2022-01-03

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
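
    The training scheme in the abstract can be made concrete with a small single-process simulation: split a mini-batch into micro-batches, run every micro-batch forward through the sequence of composite layers until the final layer's output activations exist, then propagate gradients backward from the final composite layer to the first. In the sketch below, each composite layer is a single tanh-linear map and the loss is a made-up quadratic; the layer contents, sizes, and loss are illustrative assumptions, not details from the patent.

        import numpy as np

        N = 4      # composite layers, one per device in the patent's setup
        M = 8      # micro-batches per mini-batch
        DIM = 16   # feature width (hypothetical)

        rng = np.random.default_rng(0)
        weights = [rng.normal(scale=0.1, size=(DIM, DIM)) for _ in range(N)]

        def forward(k, x):
            # Composite layer k, here a single linear map with tanh.
            return np.tanh(x @ weights[k])

        mini_batch = rng.normal(size=(32, DIM))
        micro_batches = np.split(mini_batch, M)   # partition into micro-batches

        # Forward pass: run each micro-batch through the layer sequence until
        # output activations exist for the final composite layer.
        traces = [[mb] for mb in micro_batches]
        for k in range(N):
            for trace in traces:
                trace.append(forward(k, trace[-1]))

        # Backward pass: push gradients from the final composite layer back to
        # the first for every micro-batch (hand-rolled here; a real system
        # would use autodiff), accumulating weight gradients across micro-batches.
        grads = [np.zeros_like(w) for w in weights]
        for trace in traces:
            g = trace[-1] / M                      # gradient of an illustrative loss
            for k in reversed(range(N)):
                g = g * (1.0 - trace[k + 1] ** 2)  # back through the tanh
                grads[k] += trace[k].T @ g         # dL/dW_k
                g = g @ weights[k].T               # gradient for layer k - 1

        for k in range(N):                         # one update per mini-batch
            weights[k] -= 0.01 * grads[k]

    Accumulating micro-batch gradients and applying one update per mini-batch keeps the result equivalent to ordinary mini-batch training, while in a real deployment the N devices can overlap work on different micro-batches.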

    CONVOLUTIONAL NEURAL NETWORKS WITH SOFT KERNEL SELECTION

    Publication Number: US20220129740A1

    Publication Date: 2022-04-28

    Application Number: US17425283

    Application Date: 2020-01-23

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing inputs using neural networks that include one or more conditional convolutional layers. A conditional convolutional layer has a plurality of kernels and determines a respective input-dependent weight for each of the plurality of kernels and generates an input-dependent kernel by computing a weighted sum of the plurality of kernels in accordance with the respective input-dependent weights.
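
    The conditional convolution in the abstract reduces to three steps per input: compute a weight for each of the K kernels from the input itself, mix the kernels by that weighted sum, and convolve once with the mixed kernel. Below is a minimal NumPy sketch; the routing function (global average pool, a linear map, then softmax) and all shapes are assumptions chosen for illustration, not details from the patent.

        import numpy as np

        rng = np.random.default_rng(0)
        K, C_IN, C_OUT, KSIZE = 4, 3, 8, 3   # hypothetical layer shape
        kernels = rng.normal(scale=0.1, size=(K, C_OUT, C_IN, KSIZE, KSIZE))
        routing_w = rng.normal(scale=0.1, size=(C_IN, K))

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        def conv2d(x, w):
            # Plain valid-mode 2-D convolution: x is (C_in, H, W),
            # w is (C_out, C_in, k, k).
            c_out, _, k, _ = w.shape
            h, wdt = x.shape[1] - k + 1, x.shape[2] - k + 1
            out = np.empty((c_out, h, wdt))
            for i in range(h):
                for j in range(wdt):
                    patch = x[:, i:i + k, j:j + k]
                    out[:, i, j] = np.tensordot(w, patch, axes=3)
            return out

        def conditional_conv(x):
            pooled = x.mean(axis=(1, 2))              # global average pool -> (C_in,)
            alpha = softmax(pooled @ routing_w)       # input-dependent kernel weights
            mixed = np.tensordot(alpha, kernels, axes=1)  # weighted sum of K kernels
            return conv2d(x, mixed)                   # one conv with the mixed kernel

        y = conditional_conv(rng.normal(size=(C_IN, 32, 32)))
        print(y.shape)   # (8, 30, 30)

    Mixing the kernels before convolving is what keeps the layer cheap: the cost of a single convolution is paid regardless of K, while the effective kernel still varies per input.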

    TRAINING GIANT NEURAL NETWORKS USING PIPELINE PARALLELISM

    Publication Number: US20210042620A1

    Publication Date: 2021-02-11

    Application Number: US16989787

    Application Date: 2020-08-10

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
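
    The device-level schedule implied by this abstract can also be sketched. With composite layer k assigned to device k, device k can begin micro-batch m at forward step k + m, and the backward pass mirrors this from the last device once the final layer's output activations exist for every micro-batch. The clock-step bookkeeping below is purely illustrative and performs no computation.

        # N devices (one composite layer each), M micro-batches.
        N, M = 4, 6

        for t in range(N + M - 1):   # forward pipeline clock
            busy = [(k, t - k) for k in range(N) if 0 <= t - k < M]
            print(f"fwd step {t}: " + ", ".join(f"dev{k}:mb{m}" for k, m in busy))

        for t in range(N + M - 1):   # backward pipeline, last device first
            busy = [(k, t - (N - 1 - k)) for k in range(N)
                    if 0 <= t - (N - 1 - k) < M]
            print(f"bwd step {t}: " + ", ".join(f"dev{k}:mb{m}" for k, m in busy))

    Steps at which a device has no micro-batch are the pipeline "bubbles"; raising M relative to N shrinks their share of the schedule, which is the point of partitioning the mini-batch into micro-batches in the first place.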

    TRAINING GIANT NEURAL NETWORKS USING PIPELINE PARALLELISM

    Publication Number: US11232356B2

    Publication Date: 2022-01-25

    Application Number: US16989787

    Application Date: 2020-08-10

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
