OPERATION ACCELERATOR
    1. Invention Application

    Publication No.: US20220327181A1

    Publication Date: 2022-10-13

    Application No.: US17726410

    Filing Date: 2022-04-21

    Abstract: The present invention relates to the field of data calculation technologies, and discloses an operation accelerator, to reduce time for performing a multiplication operation on two N*N matrices. The operation accelerator includes: a first memory, a second memory, an operation circuit, and a controller. The operation circuit may perform data communication with the first memory and the second memory by using a bus. The operation circuit is configured to: extract matrix data from the first memory and the second memory, and perform a multiplication operation. The controller is configured to control, according to a preset program or instruction, the operation circuit to complete the multiplication operation. The operation accelerator may be configured to perform a multiplication operation on two matrices.
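
    As a point of reference, a minimal software model of the computation this accelerator targets (multiplying two N*N matrices held in two separate buffers) might look like the C sketch below; the fixed size N and the array names are illustrative assumptions rather than details from the patent, and the hardware performs the inner multiply-accumulate loop in the operation circuit rather than in software.

        #include <stddef.h>

        #define N 4  /* illustrative matrix dimension */

        /* Multiply two N*N matrices: c = a * b.
         * 'a' and 'b' stand in for data held in the first and second memories;
         * the operation circuit described above performs the equivalent of the
         * inner multiply-accumulate loop in hardware. */
        static void matmul_nxn(const int a[N][N], const int b[N][N], int c[N][N])
        {
            for (size_t i = 0; i < N; i++) {
                for (size_t j = 0; j < N; j++) {
                    int acc = 0;
                    for (size_t k = 0; k < N; k++) {
                        acc += a[i][k] * b[k][j];
                    }
                    c[i][j] = acc;
                }
            }
        }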

    MATRIX MULTIPLIER
    2. Invention Application

    Publication No.: US20220245218A1

    Publication Date: 2022-08-04

    Application No.: US17725492

    Filing Date: 2022-04-20

    Abstract: Embodiments of the present invention disclose a matrix multiplier, and relate to the field of data computing technologies, so as to divide two matrices into blocks for computation. The matrix multiplier includes: a first memory, a second memory, an operation circuit, and a controller, where the operation circuit, the first memory, and the second memory may perform data communication by using a bus; and the controller is configured to control, according to a preset program or instruction, a first matrix and a second matrix to be divided into blocks, and control the operation circuit to perform a multiplication operation on corresponding blocks in the first memory and the second memory based on block division results of the controller. The matrix multiplier may be configured to perform a multiplication operation on two matrices.
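
    For intuition, the block-wise computation described above can be modeled in software as a tiled matrix multiplication, as in the C sketch below; the dimensions DIM and BLK, the function name, and the assumption that the output is zero-initialized are illustrative choices, not the patent's block-division scheme.

        #include <stddef.h>

        #define DIM 8   /* illustrative full-matrix dimension */
        #define BLK 4   /* illustrative block size; DIM must be a multiple of BLK */

        /* Blocked matrix multiply: c += a * b, processed tile by tile.
         * The caller is expected to zero-initialize c. Each (bi, bj, bk)
         * iteration multiplies one block of 'a' by the corresponding block
         * of 'b', loosely mirroring how a controller could dispatch
         * corresponding blocks from the two memories to the operation circuit. */
        static void matmul_blocked(const int a[DIM][DIM], const int b[DIM][DIM],
                                   int c[DIM][DIM])
        {
            for (size_t bi = 0; bi < DIM; bi += BLK)
                for (size_t bj = 0; bj < DIM; bj += BLK)
                    for (size_t bk = 0; bk < DIM; bk += BLK)
                        for (size_t i = bi; i < bi + BLK; i++)
                            for (size_t j = bj; j < bj + BLK; j++)
                                for (size_t k = bk; k < bk + BLK; k++)
                                    c[i][j] += a[i][k] * b[k][j];
        }

    Working block by block keeps each operand tile small enough to stage in on-chip memory before it is multiplied, which is the usual motivation for dividing large matrices into blocks.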

    Operation accelerator
    3. Invention Grant

    Publication No.: US11720646B2

    Publication Date: 2023-08-08

    Application No.: US17726410

    Filing Date: 2022-04-21

    CPC classification number: G06F17/16 G06F7/50 G06F7/523

    Abstract: The present invention, in the field of data calculation technologies, discloses an operation accelerator to reduce the time for performing a multiplication operation on two N*N matrices. The operation accelerator includes a first memory, a second memory, an operation circuit, and a controller. The operation circuit performs data communication with the first memory and the second memory by using a bus. The operation circuit is configured to extract matrix data from the first memory and the second memory and perform a multiplication operation. The controller is configured to control, according to a preset program or instruction, the operation circuit to complete the multiplication operation. The operation accelerator is configured to perform a multiplication operation on two matrices.

    Matrix processing method and apparatus, and logic circuit

    Publication No.: US11250108B2

    Publication Date: 2022-02-15

    Application No.: US16869837

    Filing Date: 2020-05-08

    Abstract: A matrix processing method includes: determining a quantity of non-zero elements in a to-be-processed matrix, where the to-be-processed matrix is a one-dimensional matrix; generating a distribution matrix of the to-be-processed matrix, where the distribution matrix is used to indicate a position of a non-zero element in the to-be-processed matrix; combining the quantity of non-zero elements, values of all non-zero elements in the to-be-processed matrix arranged sequentially, and the distribution matrix, to obtain a compressed matrix of the to-be-processed matrix.
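
    To make the compression scheme concrete, the C sketch below packs a short one-dimensional matrix into a non-zero count, a bitmap recording non-zero positions (the distribution matrix), and the non-zero values in their original order; the struct layout, field names, and fixed length are assumptions for illustration, not the format defined in the patent.

        #include <stddef.h>
        #include <stdint.h>

        #define VEC_LEN 8  /* illustrative length of the one-dimensional matrix */

        /* Hypothetical compressed representation:
         *  - count:  number of non-zero elements
         *  - bitmap: bit i is set if element i of the original matrix is non-zero
         *  - values: the non-zero values, kept in their original order */
        struct compressed_vec {
            size_t  count;
            uint8_t bitmap;            /* distribution matrix, one bit per element */
            int32_t values[VEC_LEN];
        };

        static void compress_vec(const int32_t in[VEC_LEN], struct compressed_vec *out)
        {
            out->count = 0;
            out->bitmap = 0;
            for (size_t i = 0; i < VEC_LEN; i++) {
                if (in[i] != 0) {
                    out->bitmap |= (uint8_t)(1u << i);
                    out->values[out->count++] = in[i];
                }
            }
        }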

    MATRIX MULTIPLIER
    5. Invention Application (Under Examination, Published)

    Publication No.: US20200334322A1

    Publication Date: 2020-10-22

    Application No.: US16915915

    Filing Date: 2020-06-29

    Abstract: Embodiments of the present invention disclose a matrix multiplier, and relate to the field of data computing technologies, so as to divide two matrices into blocks for computation. The matrix multiplier includes: a first memory, a second memory, an operation circuit, and a controller, where the operation circuit, the first memory, and the second memory may perform data communication by using a bus; and the controller is configured to control, according to a preset program or instruction, a first matrix and a second matrix to be divided into blocks, and control the operation circuit to perform a multiplication operation on corresponding blocks in the first memory and the second memory based on block division results of the controller. The matrix multiplier may be configured to perform a multiplication operation on two matrices.

    MATRIX PROCESSING METHOD AND APPARATUS, AND LOGIC CIRCUIT

    Publication No.: US20200265108A1

    Publication Date: 2020-08-20

    Application No.: US16869837

    Filing Date: 2020-05-08

    Abstract: A matrix processing method includes: determining a quantity of non-zero elements in a to-be-processed matrix, where the to-be-processed matrix is a one-dimensional matrix; generating a distribution matrix of the to-be-processed matrix, where the distribution matrix is used to indicate a position of a non-zero element in the to-be-processed matrix; combining the quantity of non-zero elements, values of all non-zero elements in the to-be-processed matrix arranged sequentially, and the distribution matrix, to obtain a compressed matrix of the to-be-processed matrix.

    SCHEDULING APPARATUS AND METHOD, AND RELATED DEVICE

    Publication No.: US20250094218A1

    Publication Date: 2025-03-20

    Application No.: US18963706

    Filing Date: 2024-11-28

    Abstract: This disclosure provides a scheduling apparatus and method, and a related device. The scheduling apparatus includes a dispatcher coupled to an execution apparatus. The dispatcher includes a plurality of first buffers, each of the plurality of first buffers is configured to cache target tasks of one task type, the target tasks include a thread subtask and a cache management operation task, and the cache management operation task indicates to perform a cache management operation on input data or output data of the thread subtask. The dispatcher is configured to: receive a plurality of first target tasks, and cache the plurality of first target tasks in the plurality of first buffers based on task types; and dispatch a plurality of second target tasks to the execution apparatus.
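
    One rough software picture of the per-task-type buffering is sketched below in C; the two task types, the queue depth, and the function names are assumptions chosen for illustration rather than the dispatcher's actual interface.

        #include <stddef.h>

        #define QUEUE_DEPTH 16  /* illustrative capacity of each first buffer */

        /* Hypothetical task types: a thread subtask, or a cache management
         * operation on that subtask's input or output data. */
        enum task_type { TASK_THREAD_SUBTASK = 0, TASK_CACHE_MGMT, TASK_TYPE_COUNT };

        struct task {
            enum task_type type;
            int payload;               /* stand-in for the task descriptor */
        };

        /* One buffer per task type, mirroring the plurality of first buffers.
         * The struct is expected to be zero-initialized before use. */
        struct dispatcher {
            struct task queue[TASK_TYPE_COUNT][QUEUE_DEPTH];
            size_t      len[TASK_TYPE_COUNT];
        };

        /* Cache an incoming target task in the buffer matching its task type.
         * Returns 0 on success, -1 if that buffer is full. */
        static int dispatcher_cache(struct dispatcher *d, struct task t)
        {
            if (d->len[t.type] >= QUEUE_DEPTH)
                return -1;
            d->queue[t.type][d->len[t.type]++] = t;
            return 0;
        }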

    Method and Bus for Accessing Dynamic Random Access Memory

    Publication No.: US20170262404A1

    Publication Date: 2017-09-14

    Application No.: US15454014

    Filing Date: 2017-03-09

    CPC classification number: G06F15/167 G06F13/16 Y02D10/14

    Abstract: Embodiments of the present disclosure provide a method and a bus for accessing a dynamic random access memory (DRAM). The embodiments include: receiving an access instruction, where the access instruction includes an access address comprising a physical address and two additionally set fields, a first field indicating an interleaving mode, which specifies a manner of selecting an access channel, and a second field indicating an interleaving granularity, which specifies a capacity of the address space corresponding to the access channel; determining, according to the first field and the second field, the access channel and an address corresponding to the access channel; and accessing the DRAM according to the access channel and the address corresponding to the access channel.
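
    The sketch below shows one hypothetical way such an access address could be decoded into a channel and an in-channel address; the bit positions, field widths, modulo channel selection, and channel count are illustrative assumptions, not the encoding defined in the patent.

        #include <stdint.h>

        /* Hypothetical layout: the low 40 bits carry the physical address,
         * bits 40..41 the interleaving mode, and bits 42..45 the log2 of the
         * interleaving granularity in bytes. */
        #define PHYS_ADDR_MASK  ((UINT64_C(1) << 40) - 1)
        #define MODE_SHIFT      40
        #define MODE_MASK       UINT64_C(0x3)
        #define GRAN_SHIFT      42
        #define GRAN_MASK       UINT64_C(0xF)

        #define NUM_CHANNELS    4   /* illustrative number of DRAM channels */

        struct dram_target {
            unsigned channel;        /* selected access channel */
            uint64_t channel_addr;   /* address within that channel */
        };

        /* Decode an access address into a channel and an in-channel address.
         * Mode 0 is treated here as "no interleaving" (always channel 0);
         * any other mode interleaves blocks of the given granularity
         * round-robin across the channels. */
        static struct dram_target decode_access(uint64_t access_addr)
        {
            uint64_t phys = access_addr & PHYS_ADDR_MASK;
            unsigned mode = (unsigned)((access_addr >> MODE_SHIFT) & MODE_MASK);
            unsigned gran_log2 = (unsigned)((access_addr >> GRAN_SHIFT) & GRAN_MASK);
            struct dram_target t;

            if (mode == 0) {
                t.channel = 0;
                t.channel_addr = phys;
            } else {
                uint64_t block = phys >> gran_log2;
                t.channel = (unsigned)(block % NUM_CHANNELS);
                t.channel_addr = ((block / NUM_CHANNELS) << gran_log2)
                                 | (phys & ((UINT64_C(1) << gran_log2) - 1));
            }
            return t;
        }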

    Method, apparatus, and system for operating shared resource in asynchronous multiprocessing system
    10. Invention Grant (In Force)

    Publication No.: US09519652B2

    Publication Date: 2016-12-13

    Application No.: US13870586

    Filing Date: 2013-04-25

    CPC classification number: G06F17/30171 G06F13/1663

    Abstract: The technical effects of the method, apparatus, and system for operating a shared resource in an asynchronous multiprocessing system provided in the present invention are as follows: a processor in the asynchronous multiprocessing system operates on a shared resource by locking a hardware resource lock, and the hardware resource lock is implemented by a register. In this way, the bus in the asynchronous multiprocessing system does not need to support a synchronization operation, and the processor does not need to support a synchronization operation either; the processor can operate on the shared resource simply by accessing the register. This simplifies the operation on the shared resource by a processor in the asynchronous multiprocessing system, enlarges the range of processors that can be selected for the asynchronous multiprocessing system, and further improves the flexibility of the asynchronous multiprocessing system.
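
    A simplified illustration of acquiring such a register-based lock from software follows; the register address and the read-to-acquire, write-to-release semantics are assumptions made for the sketch, not the patent's actual interface.

        #include <stdint.h>

        /* Hypothetical memory-mapped hardware resource-lock register.
         * Assumed semantics: reading it returns 1 if this read acquired the
         * lock and 0 if the lock is already held; writing 0 releases it. */
        #define HW_LOCK_REG ((volatile uint32_t *)0x40001000u)

        /* Spin until the hardware grants the lock. No bus-level atomic
         * instruction is required: plain register reads and writes are enough,
         * which is the point made in the abstract above. */
        static void hw_lock_acquire(void)
        {
            while (*HW_LOCK_REG != 1u) {
                /* busy-wait; a real driver might back off or yield here */
            }
        }

        static void hw_lock_release(void)
        {
            *HW_LOCK_REG = 0u;
        }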

