Systems and methods for efficient scaling of quantized integers

    Publication Number: US11023240B1

    Publication Date: 2021-06-01

    Application Number: US16692899

    Application Date: 2019-11-22

    Applicant: Facebook, Inc.

    Abstract: The disclosed computer-implemented method may include receiving an input value and a floating-point scaling factor and determining (1) an integer scaling factor based on the floating-point scaling factor, (2) a pre-scaling adjustment value representative of a number of places by which to shift a binary representation of the input value prior to a scaling operation, and (3) a post-scaling adjustment value representative of a number of places by which to shift the binary representation of the input value following the scaling operation. The method may further include calculating a scaled result value by (1) shifting rightwards the binary representation of the input value by the pre-scaling adjustment value, (2) scaling the shifted binary representation of the input value by the integer scaling factor, and (3) shifting rightwards the shifted and scaled binary value by the post-scaling adjustment value. Various other methods, systems, and computer-readable media are also disclosed.
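
    To make the shift-multiply-shift scheme this abstract describes concrete, here is a minimal sketch that decomposes a floating-point scaling factor into an integer multiplier plus a right shift and applies it with integer arithmetic only. The function names, the fixed multiplier width, and the treatment of the pre-scaling shift as a caller-supplied headroom parameter are assumptions made for illustration, not details taken from the patent.

        import math

        def decompose_scale(scale_fp, mult_bits=8):
            # Approximate scale_fp as multiplier / 2**post_shift, with the multiplier
            # held to mult_bits bits (the width is an assumption, not from the patent).
            if scale_fp <= 0:
                raise ValueError("scaling factor must be positive")
            post_shift = mult_bits - 1 - math.floor(math.log2(scale_fp))
            multiplier = round(scale_fp * (1 << post_shift))
            return multiplier, post_shift

        def scale_quantized(x, scale_fp, pre_shift=0, mult_bits=8):
            # Scale integer x by scale_fp using only shifts and one integer multiply.
            multiplier, post_shift = decompose_scale(scale_fp, mult_bits)
            y = x >> pre_shift            # pre-scaling right shift
            y = y * multiplier            # integer scaling
            return y >> post_shift        # post-scaling right shift

        print(scale_quantized(1000, 0.05))   # 50, close to 1000 * 0.05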

    Systems and methods for efficiently updating neural networks

    Publication Number: US10699190B1

    Publication Date: 2020-06-30

    Application Number: US15911120

    Application Date: 2018-03-04

    Applicant: Facebook, Inc.

    Abstract: The disclosed computer-implemented method for efficiently updating neural networks may include (i) identifying a neural network that comprises sets of interconnected nodes represented at least in part by a plurality of matrices and that is trained on a training computing device and executes on at least one endpoint device, (ii) constraining a training session for the neural network to reduce the size in memory of the difference between the previous values of the matrices prior to the training session and the new values of the matrices after the training session, (iii) creating a delta update for the neural network that describes the difference between the previous values and the new values, and (iv) updating the neural network on the endpoint device to the new state by sending the delta update from the training computing device to the endpoint computing device. Various other methods, systems, and computer-readable media are also disclosed.
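
    As a rough sketch of the delta-update idea, the snippet below diffs a model's matrices against their previous values, keeps only entries that changed by more than a small threshold, and applies that sparse delta on the endpoint. The dictionary-of-arrays representation, the threshold, and the function names are illustrative assumptions; the patent's contribution is constraining training so that this difference stays small.

        import numpy as np

        def make_delta_update(prev_weights, new_weights, threshold=1e-3):
            # Keep only entries that changed by more than `threshold`; constrained
            # training is assumed to leave most entries effectively unchanged.
            delta = {}
            for name, prev in prev_weights.items():
                diff = new_weights[name] - prev
                changed = np.abs(diff) > threshold
                delta[name] = (np.flatnonzero(changed), diff[changed])
            return delta

        def apply_delta_update(weights, delta):
            # Apply the sparse delta on the endpoint device, in place.
            for name, (indices, values) in delta.items():
                flat = weights[name].reshape(-1)   # flat view into the stored matrix
                flat[indices] += values
            return weights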

    Systems and methods for efficient scaling of quantized integers

    Publication Number: US10579383B1

    Publication Date: 2020-03-03

    Application Number: US15992793

    Application Date: 2018-05-30

    Applicant: Facebook, Inc.

    Abstract: The disclosed computer-implemented method may include receiving an input value and a floating-point scaling factor and determining (1) an integer scaling factor based on the floating-point scaling factor, (2) a pre-scaling adjustment value representative of a number of places by which to shift a binary representation of the input value prior to a scaling operation, and (3) a post-scaling adjustment value representative of a number of places by which to shift the binary representation of the input value following the scaling operation. The method may further include calculating a scaled result value by (1) shifting rightwards the binary representation of the input value by the pre-scaling adjustment value, (2) scaling the shifted binary representation of the input value by the integer scaling factor, and (3) shifting rightwards the shifted and scaled binary value by the post-scaling adjustment value. Various other methods, systems, and computer-readable media are also disclosed.
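
    The abstract above is identical to that of US11023240B1 listed earlier; as a small worked example of the integer-only scaling it describes (the numbers are illustrative, not taken from the patent):

        # Approximate the floating-point scale 0.05 as 205 / 2**12,
        # then scale with an integer multiply and a right shift only.
        x, multiplier, post_shift = 1000, 205, 12
        scaled = (x * multiplier) >> post_shift
        print(scaled)   # 50, matching round(1000 * 0.05)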

    Systems and methods for employing predication in computational models

    Publication Number: US10553207B2

    Publication Date: 2020-02-04

    Application Number: US15857990

    Application Date: 2017-12-29

    Applicant: Facebook, Inc.

    Abstract: The disclosed method may include (1) determining whether a next operation of a plurality of operations of a computational model is dependent upon a Boolean predication value, (2) based on the next operation not being dependent on the Boolean predication value, performing the next operation, where a state of the computational model is updated as a result of performing the next operation, and (3) based on the next operation being dependent on the Boolean predication value, performing at least one of (a) allowing, based on the Boolean predication value being a first value, the next operation to update the state of the computational model, and (b) preventing, based on the Boolean predication value being a second value different from the first value, the next operation from updating the state of the computational model. Various other methods and systems are also disclosed.
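
    A minimal software sketch of the predication scheme described above follows; representing each operation as a dictionary with a run callable and an optional predicate key is an assumption made for illustration.

        def run_model(operations, state):
            # Execute a sequence of model operations, honoring Boolean predication.
            for op in operations:
                pred_name = op.get("predicate")   # assumed field naming a Boolean in `state`
                if pred_name is None:
                    state = op["run"](state)      # unpredicated: always update the state
                elif state[pred_name]:
                    state = op["run"](state)      # predicate is true: allow the update
                # predicate is false: skip the operation, leaving the state unchanged
            return state

        # Example: the second operation updates the state only when `use_bias` is true.
        ops = [
            {"run": lambda s: {**s, "y": s["x"] * 2}},
            {"run": lambda s: {**s, "y": s["y"] + 1}, "predicate": "use_bias"},
        ]
        print(run_model(ops, {"x": 3, "use_bias": False}))   # y stays 6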

    Sparsity-aware hardware accelerators

    Publication Number: US10482156B2

    Publication Date: 2019-11-19

    Application Number: US15857918

    Application Date: 2017-12-29

    Applicant: Facebook, Inc.

    Abstract: A special-purpose, hardware-based accelerator may include an input subsystem configured to receive first and second vectors as operands of a full dot-product operation. The accelerator may also include a sparsity-aware dot-product engine communicatively coupled to the input subsystem and configured to perform adaptive dot-product processing by: (1) identifying, within the first and second vectors, at least one zero-value element and (2) executing, in response to identifying the zero-value element, a reduced dot-product operation that excludes, relative to the full dot-product operation, at least one mathematical operation in which the zero-value element is an operand. The accelerator may also include an output subsystem that is communicatively coupled to the sparsity-aware dot-product engine and configured to send a result of the reduced dot-product operation to a storage subsystem. Various other accelerators, computing systems, and methods are also disclosed.
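
    The behavior of the sparsity-aware engine can be sketched in software as below; the hardware performs the equivalent work in dedicated functional units, and the function name here is chosen only for illustration.

        def sparsity_aware_dot(a, b):
            # Reduced dot product: multiply-accumulate steps in which either
            # operand is a zero-value element are skipped rather than executed.
            acc = 0
            for x, y in zip(a, b):
                if x == 0 or y == 0:
                    continue               # no work issued for this element pair
                acc += x * y
            return acc

        print(sparsity_aware_dot([1, 0, 3, 0], [4, 5, 0, 7]))   # 4: only one product is computed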

    Dynamic power management for artificial intelligence hardware accelerators

    Publication Number: US20190187775A1

    Publication Date: 2019-06-20

    Application Number: US15846117

    Application Date: 2017-12-18

    Applicant: Facebook, Inc.

    Abstract: A computer-implemented method for dynamically managing the power usage and/or performance of an artificial intelligence (AI) hardware accelerator may include (1) receiving an instruction stream that includes one or more instructions for performing at least one AI-specific computing task, (2) identifying a plurality of special-purpose, hardware-based functional units configured to perform AI-specific computing tasks, (3) predicting, based on an analysis of at least a portion of the instruction stream, a power-usage requirement for at least one of the functional units when executing the instruction stream, and then (4) modifying, based on the power-usage requirement, the power supplied to at least one of the functional units. Various other methods and systems are also disclosed.
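
    One way the prediction step could look in software is sketched below: a window of upcoming instructions is scanned, a per-unit power estimate is formed, and supply levels are scaled to a budget. The instruction format, the per-instruction power estimates, and the proportional scaling are assumptions for illustration rather than the patent's mechanism.

        def plan_power(instruction_window, unit_power_per_instr, power_budget):
            # Count how often each functional unit is referenced in the window.
            usage = {name: 0 for name in unit_power_per_instr}
            for instr in instruction_window:
                unit = instr["unit"]               # assumed field naming the target unit
                if unit in usage:
                    usage[unit] += 1

            # Predict each unit's power need, then scale supplies to fit the budget.
            predicted = {name: usage[name] * unit_power_per_instr[name] for name in usage}
            total = sum(predicted.values())
            scale = 1.0 if total <= power_budget else power_budget / total
            return {name: power * scale for name, power in predicted.items()}

        window = [{"unit": "matmul"}, {"unit": "matmul"}, {"unit": "vector"}]
        print(plan_power(window, {"matmul": 2.0, "vector": 1.0, "dma": 0.5}, power_budget=4.0))
        # {'matmul': 3.2, 'vector': 0.8, 'dma': 0.0}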

    Hardware accelerator pre-configured with coefficients for matrix-transform operations

    Publication Number: US20190179869A1

    Publication Date: 2019-06-13

    Application Number: US15839229

    Application Date: 2017-12-12

    Applicant: Facebook, Inc.

    Abstract: A special-purpose hardware accelerator may include a cache configured to store an input matrix related to performing a convolution operation and a matrix-multiplication subsystem pre-configured with matrix-transform coefficients for performing matrix-transform operations. The matrix-multiplication subsystem may perform the convolution operation by (1) reading the input matrix from the cache, (2) transforming the input matrix via matrix multiplication, (3) transforming, via matrix multiplication, a parameter matrix that includes convolution parameters for performing the convolution operation, (4) applying the transformed parameter matrix to the transformed input matrix via an element-wise multiplication operation, and then (5) performing an inverse-transformation operation on the results of the element-wise multiplication operation to create an output matrix for the convolution operation. Various other systems and methods are also disclosed.
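
    The pre-configured transform coefficients described above are reminiscent of Winograd-style convolution; the sketch below works under that assumption, using the well-known F(2x2, 3x3) transform matrices. The matrix names B, G, and A and the single-tile scope are choices made for illustration, not details from the patent.

        import numpy as np

        # Standard Winograd F(2x2, 3x3) transform coefficients (widely published values).
        B = np.array([[ 1,  0,  0,  0],
                      [ 0,  1, -1,  1],
                      [-1,  1,  1,  0],
                      [ 0,  0,  0, -1]], dtype=float)
        G = np.array([[1.0,  0.0, 0.0],
                      [0.5,  0.5, 0.5],
                      [0.5, -0.5, 0.5],
                      [0.0,  0.0, 1.0]])
        A = np.array([[ 1,  0],
                      [ 1,  1],
                      [ 1, -1],
                      [ 0, -1]], dtype=float)

        def transform_convolve_tile(input_tile, kernel):
            # Convolve one 4x4 input tile with a 3x3 kernel via matrix transforms.
            V = B.T @ input_tile @ B     # transform the input matrix
            U = G @ kernel @ G.T         # transform the parameter (kernel) matrix
            M = U * V                    # element-wise multiplication
            return A.T @ M @ A           # inverse transform -> 2x2 output tile

        tile = np.arange(16, dtype=float).reshape(4, 4)
        print(transform_convolve_tile(tile, np.ones((3, 3))))
        # Equals the direct 3x3 correlation of the tile: [[45, 54], [81, 90]]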
