-
Publication No.: US11586920B2
Publication Date: 2023-02-21
Application No.: US16915161
Filing Date: 2020-06-29
Applicant: Google LLC
Inventor: Jonathan Ross, Norman Paul Jouppi, Andrew Everett Phelps, Reginald Clifford Young, Thomas Norrie, Gregory Michael Thorson, Dan Luu
Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of neural network layers, the circuit comprising: a matrix computation unit configured to, for each of the plurality of neural network layers: receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer, and generate a plurality of accumulated values based on the plurality of weight inputs and the plurality of activation inputs; and a vector computation unit communicatively coupled to the matrix computation unit and configured to, for each of the plurality of neural network layers: apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
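A minimal NumPy sketch of the per-layer data flow this abstract describes; the layer sizes and the choice of ReLU as the activation function are assumptions for illustration, not taken from the patent.

import numpy as np

def matrix_computation_unit(activation_inputs, weight_inputs):
    # Multiply-accumulate the activation inputs against the weight inputs.
    return activation_inputs @ weight_inputs          # accumulated values

def vector_computation_unit(accumulated_values):
    # Apply an activation function to every accumulated value (ReLU assumed).
    return np.maximum(accumulated_values, 0.0)        # activated values

activations = np.random.rand(1, 256)   # activation inputs for one layer (assumed size)
weights = np.random.rand(256, 128)     # weight inputs for the same layer (assumed size)
activated = vector_computation_unit(matrix_computation_unit(activations, weights))
print(activated.shape)                 # (1, 128)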
-
Publication No.: US20230010897A1
Publication Date: 2023-01-12
Application No.: US17368374
Filing Date: 2021-07-06
Applicant: Google LLC
Inventor: Reginald Clifford Young, Trevor John Gale
IPC: G06F17/16
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for loading a matrix into a circuit having an array having M×N cells. One of the methods includes: receiving a plurality of non-zero input values from a first input matrix; receiving index metadata that indicates, for each non-zero input value in the plurality of input values, which cell of the M×N cells in the array the non-zero input value should be loaded into; sending the non-zero input values and the index metadata to the M×N cells; and at a particular cell of the M×N cells in the array: receiving a particular non-zero input value and corresponding index metadata; and determining from the corresponding index metadata for the particular non-zero input value whether to store the particular non-zero input value at the cell or to shift the particular non-zero input value to another cell.
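A rough sketch of the loading behaviour described above. The (row, column, value) metadata format and the left-to-right shift direction are assumptions; the abstract only says each cell uses the index metadata to decide whether to store the value or shift it onward.

import numpy as np

def load_sparse(m, n, nonzeros):
    # nonzeros: (row, col, value) triples, i.e. the non-zero input values plus
    # the index metadata naming the cell each value should be loaded into.
    array = np.zeros((m, n))
    for row, col, value in nonzeros:
        cell = 0
        while cell < n:
            if cell == col:              # metadata names this cell: store locally
                array[row, cell] = value
                break
            cell += 1                    # otherwise shift the value to the next cell
    return array

nonzeros = [(0, 2, 5.0), (1, 0, -1.5), (2, 3, 4.0)]
print(load_sparse(3, 4, nonzeros))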
-
Publication No.: US10909447B2
Publication Date: 2021-02-02
Application No.: US15455024
Filing Date: 2017-03-09
Applicant: Google LLC
Inventor: Reginald Clifford Young, Geoffrey Irving
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium. In one aspect, a method includes the actions of receiving a request to perform computations for a neural network on a hardware circuit having a matrix computation unit, the request specifying a transpose operation to be performed on a first neural network matrix; and generating instructions that when executed by the hardware circuit cause the hardware circuit to transpose the first neural network matrix by performing first operations, wherein the first operations include repeatedly performing the following second operations: for a current subdivision of the first neural network matrix that divides the first neural network matrix into one or more current submatrices, updating the first neural network matrix by swapping an upper right quadrant and a lower left quadrant of each current submatrix, and subdividing each current submatrix into respective new submatrices to update the current subdivision.
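A software sketch of the "second operations" loop in the abstract, assuming a square matrix whose side is a power of two: swap the upper-right and lower-left quadrants of every current submatrix, then subdivide and repeat until the submatrices are single elements.

import numpy as np

def quadrant_swap_transpose(m):
    # Repeatedly swap the upper-right and lower-left quadrants of
    # progressively smaller submatrices; the result is the transpose.
    m = m.copy()
    n = m.shape[0]
    size = n
    while size > 1:
        half = size // 2
        for r in range(0, n, size):          # visit every current submatrix
            for c in range(0, n, size):
                ur = m[r:r+half, c+half:c+size].copy()
                ll = m[r+half:r+size, c:c+half].copy()
                m[r:r+half, c+half:c+size] = ll
                m[r+half:r+size, c:c+half] = ur
        size = half                          # subdivide for the next round
    return m

a = np.arange(16).reshape(4, 4)
assert np.array_equal(quadrant_swap_transpose(a), a.T)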
-
Publication No.: US10810483B2
Publication Date: 2020-10-20
Application No.: US16717341
Filing Date: 2019-12-17
Applicant: Google LLC
Inventor: Reginald Clifford Young, Jonathan Ross
Abstract: Methods, systems, and apparatus for efficiently performing a computation of a convolutional neural network layer. One of the methods includes transforming a X by Y by Z input tensor into a X′ by Y′ by Z′ input tensor; obtaining one or more modified weight matrices, wherein the modified weight matrices operate on the X′ by Y′ by Z′ input tensor to generate a U′ by V′ by W′ output tensor, and the U′ by V′ by W′ output tensor comprises a transformed U by V by W output tensor; and processing the X′ by Y′ by Z′ input tensor using the modified weight matrices to generate the U′ by V′ by W′ output tensor, wherein the U′ by V′ by W′ output tensor comprises the U by V by W output tensor.
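The abstract does not spell out the transformation. One familiar reshaping in the same spirit is im2col, where patches of the X by Y by Z input are flattened into rows of a matrix so the convolution becomes a single multiplication against a reshaped ("modified") weight matrix; the sketch below assumes that interpretation rather than reproducing the patented transform.

import numpy as np

def im2col_conv(x, w):
    # x: (X, Y, Z) input tensor; w: (kh, kw, Z, W) convolution kernel.
    X, Y, Z = x.shape
    kh, kw, _, oc = w.shape
    U, V = X - kh + 1, Y - kw + 1
    # Transformed input: every (kh, kw, Z) patch flattened into one row.
    patches = np.stack([x[i:i+kh, j:j+kw, :].ravel()
                        for i in range(U) for j in range(V)])
    modified_w = w.reshape(kh * kw * Z, oc)          # modified weight matrix
    return (patches @ modified_w).reshape(U, V, oc)  # U by V by W output tensor

x = np.random.rand(5, 5, 3)
w = np.random.rand(2, 2, 3, 4)
print(im2col_conv(x, w).shape)   # (4, 4, 4)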
-
Publication No.: US10699188B2
Publication Date: 2020-06-30
Application No.: US15686615
Filing Date: 2017-08-25
Applicant: Google LLC
Inventor: Jonathan Ross, Norman Paul Jouppi, Andrew Everett Phelps, Reginald Clifford Young, Thomas Norrie, Gregory Michael Thorson, Dan Luu
Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of neural network layers, the circuit comprising: a matrix computation unit configured to, for each of the plurality of neural network layers: receive a plurality of weight inputs and a plurality of activation inputs for the neural network layer, and generate a plurality of accumulated values based on the plurality of weight inputs and the plurality of activation inputs; and a vector computation unit communicatively coupled to the matrix computation unit and configured to, for each of the plurality of neural network layers: apply an activation function to each accumulated value generated by the matrix computation unit to generate a plurality of activated values for the neural network layer.
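Complementing the single-layer sketch for the related publication above, this sketch chains the same two units across several layers, with each layer's activated values feeding the next layer's activation inputs. The layer shapes and tanh activation are assumptions.

import numpy as np

def run_network(activation_inputs, weight_matrices, activation_fn=np.tanh):
    # For each neural network layer: the matrix unit accumulates, the vector
    # unit activates, and the activated values become the next layer's inputs.
    values = activation_inputs
    for weights in weight_matrices:
        accumulated = values @ weights       # matrix computation unit
        values = activation_fn(accumulated)  # vector computation unit
    return values

layer_weights = [np.random.rand(8, 16), np.random.rand(16, 4)]   # assumed shapes
print(run_network(np.random.rand(1, 8), layer_weights).shape)    # (1, 4)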
-
Publication No.: US20180307970A1
Publication Date: 2018-10-25
Application No.: US16017708
Filing Date: 2018-06-25
Applicant: Google LLC
Inventor: Reginald Clifford Young
CPC classification number: G06N3/0454, G06N3/04, G06N3/063, G06N7/005
Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium for processing a network input through a neural network having one or more initial neural network layers followed by a softmax output layer. In one aspect, the methods include obtaining a layer output generated by the one or more initial neural network layers and processing the layer output through the softmax output layer to generate a neural network output. Processing the layer output through the softmax output layer includes determining, for each possible output value, a number of occurrences in the layer output values; for each possible output value occurring in the layer output values, determining a respective exponentiation measure; determining a normalization factor for the layer output by combining the exponentiation measures in accordance with the number of occurrences of the possible output values; and determining, for each of the layer output values, a softmax probability value.
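A small sketch of the counting trick the abstract describes, assuming the layer outputs are drawn from a small set of possible (e.g. quantized integer) values: each distinct value is exponentiated once, and the normalization factor is the count-weighted sum of those exponentiation measures.

import numpy as np
from collections import Counter

def counting_softmax(layer_output):
    # One exponentiation per distinct value; the normalization factor is the
    # count-weighted combination of the exponentiation measures.
    counts = Counter(layer_output)
    exp_by_value = {v: np.exp(v) for v in counts}
    norm = sum(c * exp_by_value[v] for v, c in counts.items())
    return np.array([exp_by_value[v] / norm for v in layer_output])

logits = [3, 1, 3, 0, 1, 3]   # e.g. quantized (integer) layer output values
probs = counting_softmax(logits)
assert np.allclose(probs, np.exp(logits) / np.exp(logits).sum())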
-
Publication No.: US10083395B2
Publication Date: 2018-09-25
Application No.: US14844431
Filing Date: 2015-09-03
Applicant: Google LLC
Inventor: Reginald Clifford Young
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a respective neural network output for each of a plurality of inputs, the method comprising, for each of the neural network layers: receiving a plurality of inputs to be processed at the neural network layer; forming one or more batches of inputs from the plurality of inputs, each batch having a number of inputs up to the respective batch size for the neural network layer; selecting a number of the one or more batches of inputs to process, where a count of the inputs in the number of the one or more batches is greater than or equal to the respective associated batch size of a subsequent layer in the sequence; and processing the number of the one or more batches of inputs to generate the respective neural network layer output.
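A toy sketch of the batch-selection rule, assuming plain Python lists stand in for layer inputs: batches are formed per layer, and enough of them are selected so that the total input count meets the subsequent layer's batch size. The helper names and sizes are illustrative only.

def form_batches(inputs, batch_size):
    # Batches of at most `batch_size` inputs for one layer.
    return [inputs[i:i + batch_size] for i in range(0, len(inputs), batch_size)]

def select_batches(batches, next_layer_batch_size):
    # Select batches until their total input count reaches the batch size
    # associated with the subsequent layer in the sequence.
    selected, count = [], 0
    for batch in batches:
        if count >= next_layer_batch_size:
            break
        selected.append(batch)
        count += len(batch)
    return selected

inputs = list(range(7))
batches = form_batches(inputs, batch_size=2)                  # [[0, 1], [2, 3], [4, 5], [6]]
to_process = select_batches(batches, next_layer_batch_size=3)
print(to_process)                                             # [[0, 1], [2, 3]] -> 4 inputs >= 3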
-
Publication No.: US11704547B2
Publication Date: 2023-07-18
Application No.: US17162745
Filing Date: 2021-01-29
Applicant: Google LLC
Inventor: Reginald Clifford Young, Geoffrey Irving
CPC classification number: G06N3/063, G06F7/78, G06F17/16, G06N3/04, G06F2207/4824, G06N3/045, G06N3/084
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium. In one aspect, a method includes the actions of receiving a request to perform computations for a neural network on a hardware circuit having a matrix computation unit, the request specifying a transpose operation to be performed on a first neural network matrix; and generating instructions that when executed by the hardware circuit cause the hardware circuit to transpose the first neural network matrix by performing first operations, wherein the first operations include repeatedly performing the following second operations: for a current subdivision of the first neural network matrix that divides the first neural network matrix into one or more current submatrices, updating the first neural network matrix by swapping an upper right quadrant and a lower left quadrant of each current submatrix, and subdividing each current submatrix into respective new submatrices to update the current subdivision.
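The same quadrant-swap idea sketched iteratively for the related US10909447B2 entry above, written here recursively so the subdivision into new submatrices is explicit; a power-of-two side length is still assumed.

import numpy as np

def transpose_by_quadrants(m):
    n = m.shape[0]
    if n == 1:
        return m.copy()
    h = n // 2
    out = np.empty_like(m)
    out[:h, :h] = transpose_by_quadrants(m[:h, :h])   # upper-left stays in place
    out[h:, h:] = transpose_by_quadrants(m[h:, h:])   # lower-right stays in place
    out[:h, h:] = transpose_by_quadrants(m[h:, :h])   # lower-left -> upper-right
    out[h:, :h] = transpose_by_quadrants(m[:h, h:])   # upper-right -> lower-left
    return out

a = np.arange(64).reshape(8, 8)
assert np.array_equal(transpose_by_quadrants(a), a.T)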
-
Publication No.: US10896367B2
Publication Date: 2021-01-19
Application No.: US15624629
Filing Date: 2017-06-15
Applicant: Google LLC
Inventor: William John Gulland, Reginald Clifford Young
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for depth concatenation using a matrix computation unit. One of the methods includes: receiving a request to process network inputs to a neural network using an integrated circuit, the neural network comprising a depth concatenation neural network layer; and generating instructions that, when executed by the integrated circuit, cause the integrated circuit to perform operations comprising: for each spatial location in a first input tensor to the depth concatenation layer and a second input tensor to the depth concatenation layer: multiplying, using the matrix computation unit, a second depth vector for the spatial location by a shift weight matrix for the depth concatenation layer to generate a shifted second depth vector; and adding the shifted second depth vector and a first input depth vector for the spatial location to generate a concatenated depth vector.
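A sketch of the shift-and-add step for one spatial location, assuming both depth vectors are zero-padded to the concatenated depth so that a single shift weight matrix and a vector add produce the concatenation. The padding convention and the specific shift matrix layout are assumptions.

import numpy as np

def shift_weight_matrix(length, shift):
    # Moves the first (length - shift) entries of a depth vector down by
    # `shift` positions, zero-filling the vacated top entries.
    s = np.zeros((length, length))
    for i in range(length - shift):
        s[shift + i, i] = 1.0
    return s

d1, d2 = 3, 2                                   # depths of the two input tensors (assumed)
first = np.array([1.0, 2.0, 3.0, 0.0, 0.0])     # first input depth vector, zero-padded
second = np.array([4.0, 5.0, 0.0, 0.0, 0.0])    # second depth vector, zero-padded

shifted_second = shift_weight_matrix(d1 + d2, d1) @ second   # matrix unit performs the shift
concatenated = first + shifted_second                        # vector add completes the concat
print(concatenated)   # [1. 2. 3. 4. 5.]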
-
Publication No.: US20200334536A1
Publication Date: 2020-10-22
Application No.: US16921436
Filing Date: 2020-07-06
Applicant: Google LLC
Inventor: Reginald Clifford Young, William John Gulland
Abstract: Methods for receiving a request to process, on a hardware circuit, a neural network comprising a first convolutional neural network layer having a stride greater than one, and in response, generating instructions that cause the hardware circuit to, during processing of an input tensor, generate a layer output tensor equivalent to an output of the first convolutional neural network layer by processing the input tensor using a second convolutional neural network layer having a stride equal to one but that is otherwise equivalent to the first convolutional neural network layer to generate a first tensor, zeroing out elements of the first tensor that would not have been generated if the second convolutional neural network layer had the stride of the first convolutional neural network layer to generate a second tensor, and performing max pooling on the second tensor to generate the layer output tensor.
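A single-channel, no-padding sketch of the stride emulation described above: run the stride-one convolution, zero out the off-stride elements, then max pool. It assumes non-negative convolution outputs (e.g. after a ReLU), so max pooling the zeroed tensor recovers the kept values exactly; how the patent handles negative values is not covered here.

import numpy as np

def conv2d(x, w, stride=1):
    # Plain 'valid' 2-D cross-correlation, used only as a reference here.
    kh, kw = w.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * w)
    return out

def max_pool(x, size):
    oh = -(-x.shape[0] // size)          # ceiling division; edge windows may be partial
    ow = -(-x.shape[1] // size)
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return out

def strided_conv_via_pooling(x, w, stride):
    full = conv2d(x, w, stride=1)                            # stride-one but otherwise equivalent layer
    zeroed = np.zeros_like(full)
    zeroed[::stride, ::stride] = full[::stride, ::stride]    # zero out off-stride elements
    return max_pool(zeroed, stride)                          # max pooling yields the layer output tensor

x = np.random.rand(7, 7)   # non-negative inputs keep the max-pool step exact here
w = np.random.rand(3, 3)
assert np.allclose(strided_conv_via_pooling(x, w, 2), conv2d(x, w, stride=2))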