Counter based multiply-and-accumulate circuit for neural network

    Publication No.: US11385864B2

    Publication Date: 2022-07-12

    Application No.: US16460719

    Filing Date: 2019-07-02

    IPC Classification: G06F7/544 G06N3/08

    Abstract: Disclosed herein includes a system, a method, and a device for improving computation efficiency of a neural network. In one aspect, adder circuitry is configured to add input data from processing of the neural network and a first number of bits of accumulated data for the neural network to generate summation data. In one aspect, according to a carry value of the adding from the adder circuitry, a multiplexer is configured to select between i) a second number of bits of the accumulated data and ii) incremented data comprising the second number of bits of the accumulated data incremented by a predetermined value. The summation data appended with the selected one of the second number of bits of the accumulated data or the incremented data may form appended data.
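
    For illustration only, a minimal Python sketch of the carry-select accumulation described above: a narrow adder handles the low field of the accumulator, and a multiplexer picks either the stored or the pre-incremented high field based on the adder's carry-out. The field widths, the function name, and the assumption that the input fits within the adder width are illustrative, not taken from the patent.

```python
def counter_based_accumulate(acc, x, low_bits=8, total_bits=32):
    """Carry-select accumulation sketch; field widths are assumptions."""
    low_mask = (1 << low_bits) - 1
    acc_low = acc & low_mask                # first number of bits (fed to the adder)
    acc_high = acc >> low_bits              # second number of bits

    s = acc_low + (x & low_mask)            # add input data and the low field
    carry = s >> low_bits                   # carry value of the adding
    summation = s & low_mask                # summation data

    incremented = acc_high + 1              # pre-incremented copy of the high field
    selected = incremented if carry else acc_high   # multiplexer driven by the carry

    # appended data: the selected high field appended to the summation data
    return ((selected << low_bits) | summation) & ((1 << total_bits) - 1)
```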

    EFFICIENT MULTIPLY-ACCUMULATION BASED ON SPARSE MATRIX

    Publication No.: US20220058026A1

    Publication Date: 2022-02-24

    Application No.: US16997460

    Filing Date: 2020-08-19

    Abstract: Disclosed herein includes improving computational efficiency of a multiply-accumulate (MAC) operation. In one aspect, a computing device identifies a first vector including non-zero elements of a base matrix, and a second vector indicating a location of each of the non-zero elements of the base matrix. In one aspect, the device determines a first element and a second element of the first vector. In one aspect, the device determines a third element and a fourth element of the second vector. In one aspect, the device determines i) a fifth element of an input vector according to the third element of the second vector, and ii) a sixth element of the input vector according to the fourth element of the second vector. In one aspect, the device causes MAC circuitry to perform a dot product according to the first element, the second element, the fifth element, and the sixth element.
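
    As a rough illustration, the following Python sketch pairs up the non-zero values with their recorded positions, gathers the matching input elements, and accumulates their products. The two-at-a-time pairing mirrors the first/second and fifth/sixth operands in the abstract; the function and argument names are hypothetical.

```python
def sparse_dot(values, indices, x):
    """Paired sparse dot product sketch.

    values  -- non-zero elements of the base matrix (the "first vector")
    indices -- location of each non-zero element     (the "second vector")
    x       -- dense input vector
    """
    acc = 0
    for k in range(0, len(values) - 1, 2):
        # gather the two input elements addressed by the index vector, then MAC
        acc += values[k] * x[indices[k]] + values[k + 1] * x[indices[k + 1]]
    if len(values) % 2:                     # odd leftover element, if any
        acc += values[-1] * x[indices[-1]]
    return acc

# e.g. sparse_dot([3, 5], [1, 4], [0, 2, 0, 0, 7]) == 3*2 + 5*7 == 41
```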

    SYSTEMS AND METHODS FOR ASYMMETRICAL SCALING FACTOR SUPPORT FOR NEGATIVE AND POSITIVE VALUES

    Publication No.: US20210012202A1

    Publication Date: 2021-01-14

    Application No.: US16510616

    Filing Date: 2019-07-12

    Abstract: Disclosed herein includes a system, a method, and a device for asymmetrical scaling factor support for negative and positive values. A device can include a circuit having shift circuitry and multiply circuitry. The circuit can be configured to perform computation for a neural network, including: multiplying, via the multiply circuitry, a first value and a second value; shifting, via the shift circuitry, a result of the multiplying by a determined number of bits; and outputting the result of the multiplying when a sign bit of the first value is negative, and a result of the shifting when the sign bit of the first value is positive.
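
    A small Python sketch of the sign-dependent scaling described above; `shift_bits` stands in for the "determined number of bits", whose selection rule is not spelled out here, and the function name is illustrative.

```python
def asymmetric_scale_multiply(a, b, shift_bits):
    """Output the raw product for negative `a`, the shifted product otherwise."""
    product = a * b                    # multiply circuitry
    shifted = product << shift_bits    # shift circuitry
    # the sign bit of the first value selects the output, so positive first
    # values receive a larger effective scaling factor than negative ones
    return product if a < 0 else shifted
```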

    SYSTEM AND METHOD FOR SUPPORTING ALTERNATE NUMBER FORMAT FOR EFFICIENT MULTIPLICATION

    Publication No.: US20210019115A1

    Publication Date: 2021-01-21

    Application No.: US16511085

    Filing Date: 2019-07-15

    IPC Classification: G06F7/544 G06F7/487 G06F7/57

    Abstract: Disclosed herein includes a system, a method, and a device including shift circuitry and add circuitry for performing multiplication of a first value and a second value for a neural network. The first value has a predetermined format including a first bit, and two or more second bits to represent a value of zero or 2^n where n is an integer greater than or equal to 0. The device shifts, when the two or more second bits represent the value of 2^n, the second value by (n+1) bits via the shift circuitry to provide a first result, selectively outputs zero or the second value, based on a value of the first bit of the first value, to provide a second result, and adds the first result and the second result via the add circuitry to provide a result of the multiplication of the first and second values.
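
    For illustration, a Python sketch of the shift-and-add multiply described above. It assumes the first value decodes to `first_bit` when the second bits encode zero, and to `first_bit + 2**(n + 1)` when they encode 2^n; that decoding is inferred from the shift-by-(n+1) and the conditional add in the abstract, and the parameter names are hypothetical.

```python
def alt_format_multiply(first_bit, exp_is_zero, n, second_value):
    """Shift-and-add multiplication sketch for the assumed number format."""
    # shift circuitry: first result is the second value shifted by (n + 1) bits,
    # or zero when the second bits encode zero
    first_result = 0 if exp_is_zero else (second_value << (n + 1))
    # selection on the first bit: second result is zero or the second value
    second_result = second_value if first_bit else 0
    # add circuitry combines the two partial results
    return first_result + second_result

# e.g. a first value decoding to 1 + 2**(2 + 1) = 9 gives
# alt_format_multiply(1, False, 2, 5) == 45 == 9 * 5
```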

    SYSTEMS, METHODS, AND DEVICES FOR EARLY-EXIT FROM CONVOLUTION

    Publication No.: US20210012178A1

    Publication Date: 2021-01-14

    Application No.: US16509098

    Filing Date: 2019-07-11

    Abstract: Disclosed herein includes a system, a method, and a device for early-exit from convolution. In some embodiments, at least one processing element (PE) circuit is configured to perform, for a node of a neural network corresponding to a dot-product operation with a set of operands, computation using a subset of the set of operands to generate a dot-product value of the subset of the set of operands. The at least one PE circuit can compare the dot-product value of the subset of the set of operands, to a threshold value. The at least one PE circuit can determine whether to activate the node of the neural network, based at least on a result of the comparing.
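
    A short Python sketch of the early-exit decision described above; the choice of subset (the first `subset_size` operand pairs), the comparison rule, and the ReLU-style activation are illustrative assumptions.

```python
def early_exit_node(weights, inputs, subset_size, threshold):
    """Compute a partial dot product and decide whether to finish the node."""
    # dot product over only a subset of the operands
    partial = sum(w * x for w, x in zip(weights[:subset_size], inputs[:subset_size]))
    if partial < threshold:
        return 0.0                     # early exit: node not activated, rest skipped
    # otherwise complete the dot product over the remaining operands
    full = partial + sum(w * x for w, x in zip(weights[subset_size:], inputs[subset_size:]))
    return max(full, 0.0)              # ReLU shown as an example activation
```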

    SYSTEMS AND METHODS FOR DISTRIBUTING A NEURAL NETWORK ACROSS MULTIPLE COMPUTING DEVICES

    Publication No.: US20210011288A1

    Publication Date: 2021-01-14

    Application No.: US16506479

    Filing Date: 2019-07-09

    IPC Classification: G02B27/01 G06N3/04 H04N13/106

    Abstract: Disclosed herein is a method for using a neural network across multiple devices. The method can include receiving, by a first device configured with a first one or more layers of a neural network, input data for processing via the neural network implemented across the first device and a second device. The method can include outputting, by the first one or more layers of the neural network implemented on the first device, a data set that is reduced in size relative to the input data while identifying one or more features of the input data for processing by a second one or more layers of the neural network. The method can include communicating, by the first device, the data set to the second device for processing via the second one or more layers of the neural network implemented on the second device.
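
    A minimal Python sketch of the two-device flow described above; `first_layers`, `second_layers`, and `send_to_second_device` are placeholder callables for the two network partitions and the link between devices, not APIs from the patent.

```python
def split_inference(x, first_layers, second_layers, send_to_second_device):
    """Run the first partition locally, ship the reduced features, finish remotely."""
    features = first_layers(x)            # reduced-size data set identifying features
    send_to_second_device(features)       # communicate the data set to the second device
    return second_layers(features)        # remaining layers run on the second device
```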

    System and method for supporting alternate number format for efficient multiplication

    Publication No.: US10977002B2

    Publication Date: 2021-04-13

    Application No.: US16511085

    Filing Date: 2019-07-15

    Abstract: Disclosed herein includes a system, a method, and a device including shift circuitry and add circuitry for performing multiplication of a first value and a second value for a neural network. The first value has a predetermined format including a first bit, and two or more second bits to represent a value of zero or 2^n where n is an integer greater than or equal to 0. The device shifts, when the two or more second bits represent the value of 2^n, the second value by (n+1) bits via the shift circuitry to provide a first result, selectively outputs zero or the second value, based on a value of the first bit of the first value, to provide a second result, and adds the first result and the second result via the add circuitry to provide a result of the multiplication of the first and second values.

    SYSTEM AND METHOD FOR PERFORMING SMALL CHANNEL COUNT CONVOLUTIONS IN ENERGY-EFFICIENT INPUT OPERAND STATIONARY ACCELERATOR

    Publication No.: US20210019591A1

    Publication Date: 2021-01-21

    Application No.: US16511544

    Filing Date: 2019-07-15

    IPC Classification: G06N3/04 G06N3/063

    Abstract: Disclosed herein includes a system, a method, and a device for receiving input data to generate a plurality of outputs for a layer of a neural network. The plurality of outputs are arranged in a first array. Dimensions of the first array may be compared with dimensions of a processing element (PE) array including a plurality of PEs. According to a result of the comparing, the first array is partitioned into subarrays by the processor. Each of the subarrays has dimensions less than or equal to the dimensions of the PE array. A first group of PEs in the PE array is assigned to a first one of the subarrays. A corresponding output of the plurality of outputs is generated using a portion of the input data by each PE of the first group of PEs assigned to the first one of the subarrays.
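
    A small Python sketch of the partitioning step described above: the output array is tiled into subarrays no larger than the PE array so that a group of PEs can be assigned to each tile. The tiling order and parameter names are illustrative assumptions.

```python
def partition_output_array(out_rows, out_cols, pe_rows, pe_cols):
    """Tile an out_rows x out_cols output array onto a pe_rows x pe_cols PE array."""
    tiles = []
    for r in range(0, out_rows, pe_rows):
        for c in range(0, out_cols, pe_cols):
            # each tile is clipped so its dimensions never exceed the PE array
            tiles.append((r, min(r + pe_rows, out_rows),
                          c, min(c + pe_cols, out_cols)))
    return tiles

# e.g. a 5x7 output array on a 4x4 PE array yields four tiles, each at most 4x4
```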

    SYSTEMS AND METHODS FOR READING AND WRITING SPARSE DATA IN A NEURAL NETWORK ACCELERATOR

    Publication No.: US20210011846A1

    Publication Date: 2021-01-14

    Application No.: US16509138

    Filing Date: 2019-07-11

    IPC Classification: G06F12/04 G06N3/08

    Abstract: Disclosed herein includes a system, a method, and a device for reading and writing sparse data in a neural network accelerator. A plurality of slices can be established to access a memory having an access size of a data word. A first slice can be configured to access a first side of the data word in memory. Circuitry can access a mask identifying byte positions within the data word having non-zero values. The circuitry can modify the data word to have non-zero byte values stored starting at an end of the first side, and any zero byte values stored in a remainder of the data word. A determination can be made whether a number of non-zero byte values is less than or equal to a first access size of the first slice. The circuitry can write the modified data word to the memory via at least the first slice.
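
    A rough Python sketch of the mask-and-pack write path described above; `word_bytes` models the data word as a list of byte values and `first_slice_bytes` is the access size of the first slice, both illustrative names.

```python
def pack_sparse_word(word_bytes, first_slice_bytes):
    """Build the non-zero mask, pack non-zero bytes to one side, check the slice fit."""
    mask = [1 if b else 0 for b in word_bytes]        # byte positions with non-zero values
    nonzero = [b for b in word_bytes if b]            # non-zero bytes packed to the first side
    packed = nonzero + [0] * (len(word_bytes) - len(nonzero))
    single_slice_write = len(nonzero) <= first_slice_bytes
    return packed, mask, single_slice_write

# e.g. pack_sparse_word([0, 7, 0, 3], 2) -> ([7, 3, 0, 0], [0, 1, 0, 1], True)
```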