1. QUANTIZED NEURAL NETWORK ARCHITECTURE
    Invention Publication

    Publication Number: US20240104356A1

    Publication Date: 2024-03-28

    Application Number: US17934476

    Application Date: 2022-09-22

    CPC classification number: G06N3/0481

    Abstract: Certain aspects of the present disclosure provide techniques and apparatus for quantized machine learning. A quantized input matrix is accessed at a layer of a neural network, and a first interim value is generated in an accumulator by performing matrix multiplication, using the accumulator, of the quantized input matrix and a quantized weight matrix associated with the layer of the neural network. The first interim value is normalized based at least in part on one or more leading sign bits of the first interim value, and the normalized first interim value is dequantized. A second interim value is generated by applying a rounded right-shift operation to the dequantized normalized first interim value, and activation data is generated by applying an activation function to the second interim value.
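
    The pipeline in this abstract can be modeled in a few lines of Python/NumPy: an int8 matrix multiply accumulates into 32 bits, the accumulator is normalized by its count of redundant leading sign bits, dequantized with a fixed-point multiplier, reduced with a rounded right-shift, and passed through an activation. This is a minimal sketch under assumed details (int8 operands, a multiplier/shift pair for dequantization, ReLU as the activation), not the patent's specified implementation.

```python
import numpy as np

def leading_sign_bits(acc: int, width: int = 32) -> int:
    """Count redundant sign bits of a `width`-bit two's-complement
    value, i.e. how far it can be left-shifted without overflow."""
    v = acc & ((1 << width) - 1)
    sign = (v >> (width - 1)) & 1
    n = 0
    for bit in range(width - 2, -1, -1):
        if ((v >> bit) & 1) != sign:
            break
        n += 1
    return n

def rounded_right_shift(v: int, s: int) -> int:
    """Arithmetic right-shift by `s` with round-half-up on the
    discarded bits."""
    return (v + (1 << (s - 1))) >> s if s > 0 else v

def quantized_layer(x_q: np.ndarray, w_q: np.ndarray,
                    multiplier: int, shift: int) -> np.ndarray:
    """Per-element sketch of the claimed pipeline (assumed details:
    int8 operands, fixed-point multiplier/shift, ReLU activation)."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)  # first interim value
    out = np.zeros(acc.shape, dtype=np.int64)
    for idx in np.ndindex(acc.shape):
        a = int(acc[idx])
        n = leading_sign_bits(a)          # normalize on leading sign bits
        dequant = (a << n) * multiplier   # dequantize the normalized value
        # second interim value: the shift undoes both the normalization
        # and the scale exponent before the activation is applied
        out[idx] = rounded_right_shift(dequant, shift + n)
    return np.maximum(out, 0)             # activation (ReLU)
```

    Normalizing first keeps the accumulator's significant bits high before the fixed-point multiply, so the subsequent rounded right-shift discards proportionally less information for small accumulator values.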

2. PERMUTATION INSTRUCTION
    Invention Application

    Publication Number: US20230102564A1

    Publication Date: 2023-03-30

    Application Number: US17448816

    Application Date: 2021-09-24

    Abstract: A device includes a vector register file, a memory, and a processor. The vector register file includes a plurality of vector registers. The memory is configured to store a permutation instruction. The processor is configured to access a periodicity parameter of the permutation instruction. The periodicity parameter indicates a count of a plurality of data sources that contain source data for the permutation instruction. The processor is also configured to execute the permutation instruction to, for each particular element of multiple elements of a first permutation result register of the plurality of vector registers, select a data source of the plurality of data sources based at least in part on the count of the plurality of data sources and populate the particular element based on a value in a corresponding element of the selected data source.
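
    A software model may make the claim concrete. The periodicity parameter is the count of data sources; the round-robin source selection (i % periodicity) and the element mapping (i // periodicity) below are illustrative assumptions, since the abstract only says selection is based at least in part on that count.

```python
import numpy as np

def permute_interleave(sources: list[np.ndarray]) -> np.ndarray:
    """Software model of the permutation instruction. The periodicity
    parameter is the count of data sources, as in the claim; the
    round-robin selection and element mapping are assumptions."""
    periodicity = len(sources)            # the periodicity parameter
    n = sum(len(s) for s in sources)      # width of the result register
    result = np.empty(n, dtype=sources[0].dtype)
    for i in range(n):
        src = sources[i % periodicity]    # select a source from the count
        result[i] = src[i // periodicity] # corresponding source element
    return result

# Example: interleaving two 4-element vector registers.
a = np.array([0, 2, 4, 6], dtype=np.int16)
b = np.array([1, 3, 5, 7], dtype=np.int16)
print(permute_interleave([a, b]))   # [0 1 2 3 4 5 6 7]
```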

3. MEMORY ORGANIZATION AND ACCESS FOR EFFICIENT MATRIX OPERATIONS

    Publication Number: US20240330212A1

    Publication Date: 2024-10-03

    Application Number: US18192033

    Application Date: 2023-03-29

    CPC classification number: G06F13/1657 G06F13/1668 G06F15/17375

    Abstract: Certain aspects of the present disclosure provide techniques and apparatus for efficiently accessing memory in a computing system. An example method includes organizing a plurality of physical memory banks having a base size into a plurality of logical memory banks. A request to execute operations on the plurality of physical memory banks is received. The request to execute the operations comprises a request to interact with data having a sample width based on the base size. Responsive to receiving the request to execute the operations, the operations are executed on one or more logical memory banks of the plurality of logical memory banks via a memory crossbar shared across the plurality of logical memory banks. An amount of the data on which the operations are executed is a multiple of the sample width, and each logical memory bank has a size based on the base size and a multiplier value.
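
    One way to read the banking scheme is as an address-mapping function: physical banks of a base size are grouped into logical banks of size base_size * multiplier, and accesses whose width is a multiple of the sample width fan out across the grouped physical banks through the shared crossbar. The contiguous grouping and the address split below are assumptions for illustration.

```python
def map_logical_access(addr: int, base_size: int, multiplier: int,
                       num_physical: int) -> tuple[int, int, int]:
    """Map a byte address to (logical bank, physical bank, offset).
    Logical bank size = base_size * multiplier; physical banks are
    grouped contiguously (an assumed, illustrative layout)."""
    num_logical = num_physical // multiplier
    logical_size = base_size * multiplier
    logical_bank = (addr // logical_size) % num_logical
    within = addr % logical_size           # offset inside the logical bank
    physical_bank = logical_bank * multiplier + within // base_size
    return logical_bank, physical_bank, within % base_size

# Example: 8 physical banks of 64 bytes grouped into 4 logical banks
# of 128 bytes each (multiplier = 2).
print(map_logical_access(addr=200, base_size=64, multiplier=2, num_physical=8))
# -> (1, 3, 8): byte 200 lives in logical bank 1, physical bank 3, offset 8
```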

4. Instruction Set Architecture for Neural Network Quantization and Packing

    Publication Number: US20230350678A1

    Publication Date: 2023-11-02

    Application Number: US17732361

    Application Date: 2022-04-28

    CPC classification number: G06F9/30101 G06N3/04

    Abstract: This application is directed to using a single instruction to initiate a sequence of computational operations related to a neural network. An electronic device receives a single instruction to apply a neural network operation to a set of M-bit elements stored in one or more input vector registers. In response to the single instruction, the electronic device implements the neural network operation on the set of M-bit elements to generate a set of P-bit elements by obtaining the set of M-bit elements from the one or more input vector registers, quantizing each of the set of M-bit elements from M bits to P bits, and packing the set of P-bit elements into an output vector register. P is smaller than M. In some embodiments, the neural network operation is a quantization operation including at least a multiplication with a quantization factor and an addition with a zero point.
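
    The fused operation that the single instruction replaces can be sketched directly, since the abstract names its parts: multiply by a quantization factor, add a zero point, narrow from M bits to P bits, and pack the results. The sketch below assumes M = 32, P = 8, signed saturating outputs, and illustrative parameter names.

```python
import numpy as np

def quantize_and_pack(x: np.ndarray, factor: float, zero_point: int,
                      p_bits: int = 8) -> np.ndarray:
    """Software model of the single quantize-and-pack instruction:
    M-bit inputs (here int32) are scaled by a quantization factor,
    offset by a zero point, saturated to P bits, and packed into a
    contiguous output 'register'. Saturation behavior is assumed."""
    lo, hi = -(1 << (p_bits - 1)), (1 << (p_bits - 1)) - 1
    q = np.rint(x.astype(np.float64) * factor) + zero_point  # scale + zero point
    q = np.clip(q, lo, hi)                                   # saturate to P bits
    return q.astype(np.int8)                                 # packed P-bit elements

# Example: quantize four 32-bit values into packed 8-bit elements.
acc = np.array([-1000, 0, 500, 3000], dtype=np.int32)
print(quantize_and_pack(acc, factor=0.05, zero_point=-1))    # [-51 -1 24 127]
```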
