Analytic And Empirical Correction Of Biased Error Introduced By Approximation Methods

    Publication Number: US20200302298A1

    Publication Date: 2020-09-24

    Application Number: US16826472

    Application Date: 2020-03-23

    Abstract: Various embodiments include methods, and neural network computing devices implementing the methods, for generating an approximation neural network that corrects for errors due to approximation operations. Various embodiments may include performing approximation operations on a weights tensor associated with a layer of a neural network to generate an approximation weights tensor, determining an expected output error of the layer in the neural network due to the approximation weights tensor, subtracting the expected output error from a bias parameter of the layer to determine an adjusted bias parameter, and substituting the adjusted bias parameter for the bias parameter in the layer. Such operations may be performed for all layers in a neural network to produce an approximation version of the neural network for execution on a resource-limited processor.
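    For a single fully connected layer, the bias correction described above can be sketched in a few lines: the expected output error introduced by approximating the weights is (W_approx - W) @ E[x], and subtracting it from the bias restores the layer's expected output. The quantize_weights helper and the use of a calibration-estimated mean input E[x] are assumptions of this sketch, not details taken from the abstract.

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Symmetric uniform quantization, standing in for any weight
    approximation operation (hypothetical helper)."""
    scale = np.max(np.abs(w)) / (2 ** (num_bits - 1) - 1)
    return np.round(w / scale) * scale

def correct_layer_bias(weights, bias, expected_input):
    """Approximate the weights and adjust the bias so the layer's expected
    output matches that of the original layer."""
    approx_weights = quantize_weights(weights)
    # Expected output error: E[(W_approx - W) @ x] = (W_approx - W) @ E[x]
    expected_error = (approx_weights - weights) @ expected_input
    adjusted_bias = bias - expected_error
    return approx_weights, adjusted_bias

# Toy usage: one layer with a mean input estimated from calibration data.
rng = np.random.default_rng(0)
W, b, mean_x = rng.normal(size=(4, 8)), rng.normal(size=4), rng.normal(size=8)
W_q, b_adj = correct_layer_bias(W, b, mean_x)
print(W @ mean_x + b)        # expected output of the original layer
print(W_q @ mean_x + b_adj)  # expected output of the corrected layer
```

    Applied layer by layer, this yields an approximation network whose per-layer expected outputs match those of the original network, which can then be executed on a resource-limited processor.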

    PER-EMBEDDING-GROUP ACTIVATION QUANTIZATION

    Publication Number: US20230139347A1

    Publication Date: 2023-05-04

    Application Number: US17976683

    Application Date: 2022-10-28

    Abstract: A processor-implemented method for providing per-embedding-group activation quantization includes receiving sequential data at a first layer of a transformer neural network. The sequential data is processed via the first layer of the transformer neural network to generate an activation tensor. The activation tensor is split into multiple groups of embeddings. Each of the embedding groups has a different set of quantization parameters. Each of the embedding groups is quantized separately based on its corresponding set of quantization parameters. The quantized embedding groups are multiplied with a set of weights to generate an output.
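    A minimal sketch of the per-group quantization step, assuming an even split along the embedding dimension and symmetric per-group scales as the quantization parameters (both assumptions of this sketch, not details from the abstract):

```python
import numpy as np

def quantize_per_group(activations, num_groups=4, num_bits=8):
    """Quantize an activation tensor of shape (seq_len, embed_dim) with a
    separate scale for each group of embeddings."""
    qmax = 2 ** (num_bits - 1) - 1
    groups = np.split(activations, num_groups, axis=-1)  # split along the embedding dimension
    quantized = []
    for g in groups:
        scale = np.max(np.abs(g)) / qmax                 # per-group quantization parameter
        q = np.clip(np.round(g / scale), -qmax, qmax) * scale
        quantized.append(q)
    return np.concatenate(quantized, axis=-1)

# Toy usage: quantize a transformer layer's activations per embedding group,
# then multiply with a set of weights to generate an output.
rng = np.random.default_rng(0)
acts = rng.normal(size=(16, 64))      # activation tensor from the first layer
weights = rng.normal(size=(64, 64))
output = quantize_per_group(acts) @ weights
```

    Giving each embedding group its own quantization parameters limits the precision loss that an outlier embedding can impose on the rest of the tensor, compared with one set of parameters shared across all embeddings.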

    Neural Network Pruning With Cyclical Sparsity

    Publication Number: US20220245457A1

    Publication Date: 2022-08-04

    Application Number: US17456318

    Application Date: 2021-11-23

    Abstract: Various embodiments include methods and devices for neural network pruning. Embodiments may include receiving as an input a weight tensor for a neural network, increasing a level of sparsity of the weight tensor to generate a sparse weight tensor, updating the neural network using the sparse weight tensor to generate an updated weight tensor, decreasing a level of sparsity of the updated weight tensor to generate a dense weight tensor, increasing the level of sparsity of the dense weight tensor to generate a final sparse weight tensor, and using the neural network with the final sparse weight tensor to generate inferences. Some embodiments may include increasing a level of sparsity of a first sparse weight tensor to generate a second sparse weight tensor, updating the neural network using the second sparse weight tensor to generate a second updated weight tensor, and decreasing the level of sparsity of the second updated weight tensor.
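    The cyclical prune-update-regrow loop can be sketched as follows; magnitude-based masking as the pruning criterion and the placeholder update function standing in for a few fine-tuning steps are assumptions of this sketch:

```python
import numpy as np

def sparsity_mask(weights, sparsity):
    """Mask that zeroes roughly a `sparsity` fraction of the smallest-magnitude
    entries (magnitude-based selection is an assumption of this sketch)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return np.abs(weights) > threshold

def cyclical_sparsity(weights, update_fn, high=0.8, low=0.3, cycles=3):
    """Alternate between high- and low-sparsity masks, updating the weights
    after each masking step, and end on a final sparse weight tensor."""
    w = weights.copy()
    for _ in range(cycles):
        sparse_w = w * sparsity_mask(w, high)  # increase sparsity
        w = update_fn(sparse_w)                # update the network; pruned weights may regrow
        dense_w = w * sparsity_mask(w, low)    # decrease sparsity
        w = update_fn(dense_w)
    return w * sparsity_mask(w, high)          # final sparse weight tensor for inference

# Toy usage with a dummy update standing in for training steps.
rng = np.random.default_rng(0)
w0 = rng.normal(size=(32, 32))
dummy_update = lambda w: w + 0.01 * rng.normal(size=w.shape)
w_final = cyclical_sparsity(w0, dummy_update)
print("final sparsity:", float(np.mean(w_final == 0.0)))
```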

    Systems and Methods of Cross Layer Rescaling for Improved Quantization Performance

    Publication Number: US20200302299A1

    Publication Date: 2020-09-24

    Application Number: US16826524

    Application Date: 2020-03-23

    Abstract: Various embodiments include methods, and neural network computing devices implementing the methods, for performing quantization in neural networks. Various embodiments may include equalizing ranges of weight tensors or output channel weights within a first layer of the neural network by scaling each of the output channel weights of the first layer by a corresponding scaling factor, and scaling each of a second adjacent layer's corresponding input channel weights by applying an inverse of the corresponding scaling factor to those input channel weights. The corresponding scaling factor may be determined based on heuristics, equalization of dynamic ranges, equalization of range extrema (minima or maxima), differential learning using straight-through estimator (STE) methods with a local or global loss, or by using a black-box optimizer that minimizes a quantization error metric with respect to the scaling.
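    For two adjacent fully connected layers separated by a ReLU, this rescaling preserves the network's output because relu(s * x) = s * relu(x) for s > 0. The sketch below uses the range-equalization heuristic s_i = sqrt(r2_i / r1_i), one of the scaling-factor choices named in the abstract; the function name and the two-layer setup are assumptions of this sketch:

```python
import numpy as np

def cross_layer_rescale(w1, b1, w2):
    """Scale each output channel of layer 1 by s_i and the matching input
    channel of layer 2 by 1/s_i so per-channel weight ranges are equalized."""
    r1 = np.max(np.abs(w1), axis=1)   # range of each output channel of layer 1
    r2 = np.max(np.abs(w2), axis=0)   # range of each input channel of layer 2
    s = np.sqrt(r2 / r1)              # per-channel scaling factors
    return w1 * s[:, None], b1 * s, w2 / s[None, :]

# Toy usage: the rescaled pair is functionally equivalent under ReLU.
rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=(16, 8)), rng.normal(size=16), rng.normal(size=(4, 16))
x = rng.normal(size=8)
W1e, b1e, W2e = cross_layer_rescale(W1, b1, W2)
y_orig = W2 @ np.maximum(W1 @ x + b1, 0.0)
y_eq = W2e @ np.maximum(W1e @ x + b1e, 0.0)
print("max difference:", float(np.max(np.abs(y_orig - y_eq))))
```

    Equalizing per-channel ranges in this way lets a shared quantization grid represent every channel more accurately, which is the intended quantization-performance benefit.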
