GENERATIVE GRAPH MODELING FRAMEWORK
    Invention Publication

    Publication No.: US20240152799A1

    Publication Date: 2024-05-09

    Application No.: US18051364

    Filing Date: 2022-10-31

    Applicant: ADOBE INC.

    IPC Classes: G06N20/00 G06F7/78

    CPC Classes: G06N20/00 G06F7/78

    Abstract: Systems and methods for data augmentation are described. Embodiments of the present disclosure receive a dataset that includes a plurality of nodes and a plurality of edges, wherein each of the plurality of edges connects two of the plurality of nodes; compute a first nonnegative matrix representing a homophilous cluster affinity; compute a second nonnegative matrix representing a heterophilous cluster affinity; compute a probability of an additional edge based on the dataset using a machine learning model that represents a homophilous cluster and a heterophilous cluster based on the first nonnegative matrix and the second nonnegative matrix; and generate an augmented dataset including the plurality of nodes, the plurality of edges, and the additional edge.
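    The abstract describes scoring a candidate edge from two nonnegative cluster-affinity matrices. Below is a minimal sketch of one plausible reading, in which homophilous affinity raises the edge logit and heterophilous affinity lowers it; the names `F`, `H`, and `edge_probability`, and the sigmoid link, are illustrative assumptions, not the claimed model:

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, k = 6, 2
# Hypothetical nonnegative affinity matrices (names are illustrative):
# F captures homophilous cluster membership, H heterophilous membership.
F = rng.random((n_nodes, k))          # first nonnegative matrix
H = rng.random((n_nodes, k))          # second nonnegative matrix

def edge_probability(i, j):
    """Score a candidate edge: homophilous affinity raises the
    probability, heterophilous affinity lowers it (one plausible
    reading of the abstract, not the patented model)."""
    logit = F[i] @ F[j] - H[i] @ H[j]
    return 1.0 / (1.0 + np.exp(-logit))

edges = {(0, 1), (1, 2), (3, 4)}      # toy dataset
candidate = (0, 2)
p = edge_probability(*candidate)

# Augment the dataset with the candidate edge if it is likely enough.
augmented = edges | {candidate} if p > 0.5 else set(edges)
```

    The augmented graph always contains the original edges, matching the abstract's requirement that the augmented dataset include the original nodes and edges plus the additional edge.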

    METHOD FOR APPROXIMATIVELY DETERMINING A SCALAR PRODUCT USING A MATRIX CIRCUIT

    Publication No.: US20240152332A1

    Publication Date: 2024-05-09

    Application No.: US18500210

    Filing Date: 2023-11-02

    Applicant: Robert Bosch GmbH

    IPC Classes: G06F7/78 G06F7/499

    CPC Classes: G06F7/78 G06F7/49942

    Abstract: A method for approximatively determining at least one scalar product of at least one input vector with a weight vector. Input components of the input vector and weight components of the weight vector are present in binary form. At least one matrix circuit is used, wherein the memory cells are programmed according to bits of the weight components. Bits with the same significance of at least a portion of the weight components are respectively programmed in memory cells of the same column. For each of one or more subsets of the input components, a bit sum determination is carried out. To a corresponding subset of the row lines, voltages are applied according to bits with the same significance of the respective subset of the input components, and a limited bit sum is determined as the output value of the respective analog-to-digital converter.
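    The scheme amounts to a bit-sliced dot product: for each pair of weight-bit and input-bit significances, the circuit counts coinciding 1-bits in a column (the bit sum), the ADC saturates that count, and the counts are recombined with the appropriate power-of-two weighting. A toy digital model under those assumptions (`ADC_MAX`, `approx_dot`, and the 4-bit widths are illustrative, not from the patent):

```python
W_BITS = X_BITS = 4
ADC_MAX = 7  # illustrative ADC saturation: the "limited bit sum"

def bits(v, n):
    """Unsigned binary decomposition, LSB first."""
    return [(v >> b) & 1 for b in range(n)]

def approx_dot(x, w):
    """Approximate unsigned dot product x.w by summing, per pair of
    bit significances, a clipped count of coinciding 1-bits -- a toy
    digital model of the analog column currents in the abstract."""
    total = 0
    for bw in range(W_BITS):                 # one column per weight bit
        col = [bits(wi, W_BITS)[bw] for wi in w]
        for bx in range(X_BITS):             # voltages per input bit
            row = [bits(xi, X_BITS)[bx] for xi in x]
            bit_sum = sum(r & c for r, c in zip(row, col))
            bit_sum = min(bit_sum, ADC_MAX)  # ADC limits the bit sum
            total += bit_sum << (bw + bx)    # restore significance
    return total

x = [3, 5, 2, 7]
w = [1, 4, 6, 2]
exact = sum(a * b for a, b in zip(x, w))
approx = approx_dot(x, w)
```

    With only four vector components the bit sums never exceed the illustrative ADC limit, so the result is exact here; for longer vectors the clipping is what makes the scalar product approximate.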

    MATRIX COMPUTING DEVICE AND OPERATION METHOD THEREOF

    Publication No.: US20240134931A1

    Publication Date: 2024-04-25

    Application No.: US18076407

    Filing Date: 2022-12-07

    Abstract: A matrix computing device and an operation method for the matrix computing device are provided. The matrix computing device includes a storage unit, a control circuit, and a computing circuit. The storage unit includes a weight matrix. The control circuit re-orders an arrangement order of weights in the weight matrix according to a shape of an output matrix to determine a weight readout order of the weights. The computing circuit receives the weights based on the weight readout order, and performs a matrix computation on the weights and an input matrix to generate a computing matrix. The control circuit performs a reshape transformation on the computing matrix to generate the output matrix, and writes the output matrix to the storage unit.
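    The flow in the abstract is: pick a weight readout order from the desired output shape, run the matrix computation in that order, then reshape the result. A toy sketch under stated assumptions (the specific readout rule of reversing the row order for a "tall" output, and the names `compute` and `order`, are purely illustrative; the patent does not specify this rule):

```python
import numpy as np

weights = np.arange(12).reshape(4, 3)          # stored weight matrix
inputs = np.eye(3, dtype=int)                  # toy input matrix

def compute(out_shape):
    """Re-order the weight readout from the output shape, multiply,
    then reshape -- mirroring the control/computing circuit split."""
    order = np.arange(weights.shape[0])
    if out_shape[0] > out_shape[1]:            # hypothetical rule
        order = order[::-1]                    # weight readout order
    computing = weights[order] @ inputs        # matrix computation
    return computing.reshape(out_shape)        # reshape transformation

out = compute((6, 2))
```

    The point of the sketch is the pipeline shape, not the rule: readout order and reshape are decided by the control logic, while the multiply itself is unchanged.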

    METHODS, SYSTEMS, AND MEDIA FOR LOW-BIT NEURAL NETWORKS USING BIT SHIFT OPERATIONS

    Publication No.: US20240104342A1

    Publication Date: 2024-03-28

    Application No.: US18521425

    Filing Date: 2023-11-28

    IPC Classes: G06N3/04 G06F7/499 G06F7/78

    CPC Classes: G06N3/04 G06F7/49942 G06F7/78

    Abstract: Methods, systems and computer readable media using hardware-efficient bit-shift operations for computing the output of a low-bit neural network layer. A dense shift inner product operator (or dense shift IPO) using bit shifting in place of multiplication replaces the inner product operator that is conventionally used to compute the output of a neural network layer. Dense shift neural networks may have weights encoded using a low-bit dense shift encoding. A dedicated neural network accelerator is designed to compute the output of a dense shift neural network layer using dense shift IPOs. A Sign-Sparse-Shift (S3) training technique trains a low-bit neural network using dense shift IPOs or other bit shift operations in computing its outputs.
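    The core idea of a shift-based inner product is that when every weight has the form ±2^k, each multiply becomes a bit shift of the integer activation. A minimal sketch assuming a `(sign, shift)` pair per weight (this unpacked representation is an assumption for readability; the patent's low-bit dense shift encoding packs it into a few bits):

```python
def dense_shift_ipo(x, w_encoded):
    """Inner product where every weight is sign * 2**shift, so each
    multiply is replaced by a left shift of the integer activation."""
    acc = 0
    for xi, (sign, shift) in zip(x, w_encoded):
        acc += sign * (xi << shift)   # shift replaces multiplication
    return acc

x = [3, -2, 5, 1]
# (sign, shift) pairs encoding the weights 1, -4, 2, -8
w_encoded = [(+1, 0), (-1, 2), (+1, 1), (-1, 3)]

ref = sum(xi * wi for xi, wi in zip(x, [1, -4, 2, -8]))
out = dense_shift_ipo(x, w_encoded)
```

    "Dense" here distinguishes the encoding from sparse shift schemes that also allow zero weights; every weight in the example is a nonzero signed power of two.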

    Techniques for transposing a matrix using a memory block

    Publication No.: US11928443B2

    Publication Date: 2024-03-12

    Application No.: US16721458

    Filing Date: 2019-12-19

    Applicant: Intel Corporation

    Inventor: Hong Shan Neoh

    IPC Classes: G06F7/78

    CPC Classes: G06F7/78

    Abstract: A circuit system includes a memory block and first and second processing circuits. The first and second processing circuits store a matrix in the memory block by concurrently writing elements in first and second rows or columns of the matrix to first and second regions of storage in the memory block, respectively. The first and second processing circuits transpose the matrix to generate a transposed matrix by concurrently reading elements in first and second rows or columns of the transposed matrix from third and fourth regions of storage in the memory block, respectively.
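    Functionally, the circuit writes the matrix row-wise into regions of one memory block and then reads the transposed matrix back out directly, with two circuits working in parallel. A toy sequential model of that access pattern (the even/odd region split is illustrative; the patent's contribution is the concurrent hardware access, which a Python loop cannot show):

```python
import numpy as np

A = np.arange(12).reshape(3, 4)

# Write phase: each row lands in one of two storage regions,
# standing in for the two processing circuits writing concurrently.
memory = {}
for r in range(A.shape[0]):
    region = r % 2                       # circuit 0 / circuit 1
    for c in range(A.shape[1]):
        memory[(region, r, c)] = A[r, c]

# Read phase: elements of the transposed matrix are read straight
# out of the block, so no separate in-memory transpose pass is needed.
T = np.empty((A.shape[1], A.shape[0]), dtype=A.dtype)
for r in range(T.shape[0]):
    for c in range(T.shape[1]):
        T[r, c] = memory[(c % 2, c, r)]
```

    The transpose cost is absorbed into the addressing of the reads rather than paid as a separate data-movement step.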

    TRANSPOSED CONVOLUTION USING SYSTOLIC ARRAY
    Invention Publication

    Publication No.: US20230306249A1

    Publication Date: 2023-09-28

    Application No.: US18134726

    Filing Date: 2023-04-14

    Abstract: In one example, a neural network accelerator can execute a set of instructions to: load a first weight data element from a memory into a systolic array, the first weight data element having first coordinates; extract, from the instructions, information indicating a first subset of input data elements to be obtained from the memory, the first subset being based on a stride of a transposed convolution operation and second coordinates of the first weight data element in a rotated array of weight data elements; based on the information, obtain the first subset of input data elements from the memory; load the first subset of input data elements into the systolic array; and control the systolic array to perform first computations based on the first weight data element and the first subset of input data elements to generate output data elements of an array of output data elements.
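    The property the accelerator exploits is that, for a fixed weight position in the (rotated) kernel, both the set of contributing inputs and the output offsets they land on are fully determined by the stride, so they can be precomputed into the instruction stream. A 1-D sketch of a transposed convolution evaluated weight-by-weight in that style (shapes, names, and the absence of padding/cropping are illustrative simplifications):

```python
import numpy as np

def transposed_conv1d(x, w, stride):
    """Transposed convolution as a scatter: each input x[i] weighted
    by w[k] contributes to output position i*stride + k."""
    out = np.zeros(stride * (len(x) - 1) + len(w))
    for k, wk in enumerate(w):           # load one weight element
        # For this weight position, the contributing input subset and
        # target offsets are fixed by the stride -- here every input
        # contributes, since the toy example has no padding/cropping.
        for i, xi in enumerate(x):
            out[i * stride + k] += wk * xi
    return out

x = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 0.5])
y = transposed_conv1d(x, w, stride=2)
```

    Iterating over weight elements in the outer loop matches the abstract's flow of loading one weight data element into the systolic array and then fetching only the input subset that element needs.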