MEMORY CONTROLLER AND NEAR-MEMORY SUPPORT FOR SPARSE ACCESSES

    Publication Number: US20240078017A1

    Publication Date: 2024-03-07

    Application Number: US18226932

    Application Date: 2023-07-27

    CPC classification number: G06F3/0613 G06F3/0659 G06F3/0673

    Abstract: A data processing system includes a data processor and a memory controller that receives memory access requests from the data processor and generates at least one memory access cycle to a memory system in response to receiving them. The memory controller includes a command queue and a sparse element processor. The command queue receives and stores the memory access requests, including a first memory access request that contains a small element request. In response to the first memory access request, the sparse element processor causes the memory controller to issue a second memory access request to the memory system at a density greater than the density indicated by the first memory access request.
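    The sketch below illustrates one way the described densification could behave: a small-element request is widened to the memory system's native burst before it is issued. The element and burst sizes, class names, and queueing policy are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch: a memory controller that widens sparse (small-element)
# requests into denser bursts before issuing them to the memory system.
from dataclasses import dataclass

SMALL_ELEMENT_BYTES = 8   # size of a sparse element request (assumed)
BURST_BYTES = 64          # native burst size of the memory system (assumed)

@dataclass
class MemRequest:
    address: int
    size: int
    is_small_element: bool

class SparseElementProcessor:
    """Promotes small-element requests to full-burst requests."""
    def densify(self, req: MemRequest) -> MemRequest:
        if not req.is_small_element:
            return req
        # Issue a second request covering the whole burst that contains the
        # small element, i.e. at a higher density than originally requested.
        burst_base = req.address - (req.address % BURST_BYTES)
        return MemRequest(address=burst_base, size=BURST_BYTES, is_small_element=False)

class MemoryController:
    def __init__(self):
        self.command_queue: list = []
        self.sparse_processor = SparseElementProcessor()

    def receive(self, req: MemRequest) -> None:
        self.command_queue.append(req)

    def issue_next(self) -> MemRequest:
        first = self.command_queue.pop(0)
        # The densified request is what actually reaches the memory system.
        return self.sparse_processor.densify(first)

if __name__ == "__main__":
    mc = MemoryController()
    mc.receive(MemRequest(address=0x1008, size=SMALL_ELEMENT_BYTES, is_small_element=True))
    print(mc.issue_next())   # full 64-byte burst starting at 0x1000
```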

    Adaptive Scheduling of Memory and Processing-in-Memory Requests

    Publication Number: US20240220107A1

    Publication Date: 2024-07-04

    Application Number: US18090916

    Application Date: 2022-12-29

    CPC classification number: G06F3/061 G06F3/0659 G06F3/0673

    Abstract: Adaptive scheduling of memory requests and processing-in-memory requests is described. In accordance with the described techniques, a memory controller receives a plurality of processing-in-memory requests and a plurality of non-processing-in-memory requests from a host. The memory controller schedules an order of execution for the plurality of processing-in-memory requests and the plurality of non-processing-in-memory requests based at least in part on a processing-in-memory request stall threshold and a non-processing-in-memory request stall threshold. In response to a system switching (e.g., from executing processing-in-memory requests to executing non-processing-in-memory requests or from executing non-processing-in-memory requests to executing processing-in-memory requests), the memory controller modifies the processing-in-memory request stall threshold and the non-processing-in-memory request stall threshold. The memory controller continues scheduling an order of execution for subsequent requests received from the host using the modified stall thresholds.
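    A minimal sketch of the stall-threshold scheduling policy described above follows, assuming concrete initial threshold values and a simple adjustment rule on each mode switch; the actual thresholds and adaptation policy are not specified in the abstract.

```python
# Illustrative threshold-based scheduling between PIM and non-PIM requests.
from collections import deque

class AdaptiveScheduler:
    def __init__(self, pim_stall_threshold=4, non_pim_stall_threshold=4):
        self.pim_queue = deque()
        self.non_pim_queue = deque()
        self.pim_stall_threshold = pim_stall_threshold
        self.non_pim_stall_threshold = non_pim_stall_threshold
        self.mode = "pim"          # current operating mode
        self.stalled_cycles = 0    # cycles the waiting (other-mode) queue has stalled

    def enqueue(self, request, is_pim):
        (self.pim_queue if is_pim else self.non_pim_queue).append(request)

    def _switch_mode(self):
        # On a mode switch, both stall thresholds are modified; the adjustment
        # shown here is purely illustrative.
        if self.mode == "pim":
            self.mode = "non_pim"
            self.non_pim_stall_threshold += 1
            self.pim_stall_threshold = max(1, self.pim_stall_threshold - 1)
        else:
            self.mode = "pim"
            self.pim_stall_threshold += 1
            self.non_pim_stall_threshold = max(1, self.non_pim_stall_threshold - 1)
        self.stalled_cycles = 0

    def schedule_next(self):
        active, waiting = (
            (self.pim_queue, self.non_pim_queue) if self.mode == "pim"
            else (self.non_pim_queue, self.pim_queue)
        )
        threshold = (self.non_pim_stall_threshold if self.mode == "pim"
                     else self.pim_stall_threshold)
        if waiting:
            self.stalled_cycles += 1
        # Switch when the other request class has stalled past its threshold
        # or the active queue has drained.
        if (waiting and self.stalled_cycles > threshold) or not active:
            self._switch_mode()
            active, waiting = waiting, active
        return active.popleft() if active else None
```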

    DRAM Row Management for Processing in Memory

    Publication Number: US20240004584A1

    Publication Date: 2024-01-04

    Application Number: US17855109

    Application Date: 2022-06-30

    CPC classification number: G06F3/0659 G06F3/0653 G06F3/0679 G06F3/0604

    Abstract: In accordance with described techniques for DRAM row management for processing in memory, a plurality of instructions are obtained for execution by a processing in memory component embedded in a dynamic random access memory. An instruction is identified that last accesses a row of the dynamic random access memory, and a subsequent instruction is identified that first accesses an additional row of the dynamic random access memory. After the row is last accessed by the instruction, a first command is issued to close the row and a second command is issued to open the additional row.
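    The following sketch shows the close/open command placement the abstract describes, using conventional DRAM command names (ACT to open a row, PRE to close it) and assuming each instruction is annotated with the row it touches; these conventions are assumptions for the example.

```python
# Illustrative row-management pass over a PIM instruction stream.
def schedule_row_commands(instructions):
    """instructions: list of (instr_id, row) pairs in program order."""
    commands = []
    for idx, (instr_id, row) in enumerate(instructions):
        prev_row = instructions[idx - 1][1] if idx > 0 else None
        next_row = instructions[idx + 1][1] if idx + 1 < len(instructions) else None
        if row != prev_row:
            commands.append(("ACT", row))   # first access to this row: open it
        commands.append(("EXEC", instr_id))
        if row != next_row:
            commands.append(("PRE", row))   # last access to this row: close it
    return commands

if __name__ == "__main__":
    stream = [("i0", 3), ("i1", 3), ("i2", 7), ("i3", 7)]
    for cmd in schedule_row_commands(stream):
        print(cmd)
```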

    Adaptive scheduling of memory and processing-in-memory requests

    Publication Number: US12131026B2

    Publication Date: 2024-10-29

    Application Number: US18090916

    Application Date: 2022-12-29

    CPC classification number: G06F3/061 G06F3/0659 G06F3/0673

    Abstract: Adaptive scheduling of memory requests and processing-in-memory requests is described. In accordance with the described techniques, a memory controller receives a plurality of processing-in-memory requests and a plurality of non-processing-in-memory requests from a host. The memory controller schedules an order of execution for the plurality of processing-in-memory requests and the plurality of non-processing-in-memory requests based at least in part on a processing-in-memory request stall threshold and a non-processing-in-memory request stall threshold. In response to a system switching (e.g., from executing processing-in-memory requests to executing non-processing-in-memory requests or from executing non-processing-in-memory requests to executing processing-in-memory requests), the memory controller modifies the processing-in-memory request stall threshold and the non-processing-in-memory request stall threshold. The memory controller continues scheduling an order of execution for subsequent requests received from the host using the modified stall thresholds.

    Memory latency-aware GPU architecture

    Publication Number: US12067642B2

    Publication Date: 2024-08-20

    Application Number: US17030024

    Application Date: 2020-09-23

    Abstract: One or more processing units, such as a graphics processing unit (GPU), execute an application. A resource manager selectively allocates a first memory portion or a second memory portion to the processing units based on memory access characteristics. The first memory portion has a first latency that is lower than a second latency of the second memory portion. In some cases, the memory access characteristics indicate a latency sensitivity. In some cases, hints included in corresponding program code are used to determine the memory access characteristics. The memory access characteristics can also be determined by monitoring memory access requests, measuring a cache miss rate or a row buffer miss rate for the monitored memory access requests, and determining the memory access characteristics based on the cache miss rate or the row buffer miss rate.
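    As a rough illustration, the sketch below places an allocation in a lower-latency or higher-latency memory portion based on a measured miss rate or an explicit program hint; the pool names, hint format, and the 0.5 miss-rate cutoff are assumptions made for the example.

```python
# Hedged sketch of latency-aware placement driven by observed miss rates.
class ResourceManager:
    MISS_RATE_THRESHOLD = 0.5   # assumed cutoff for "latency sensitive"

    def __init__(self):
        self.counters = {}   # buffer_id -> {"accesses": n, "misses": m}

    def record_access(self, buffer_id, miss):
        c = self.counters.setdefault(buffer_id, {"accesses": 0, "misses": 0})
        c["accesses"] += 1
        c["misses"] += int(miss)

    def miss_rate(self, buffer_id):
        c = self.counters.get(buffer_id)
        return c["misses"] / c["accesses"] if c and c["accesses"] else 0.0

    def choose_pool(self, buffer_id, hint=None):
        # A program-code hint, when present, overrides the measured characteristics.
        if hint is not None:
            return "low_latency" if hint == "latency_sensitive" else "high_latency"
        # A high cache / row-buffer miss rate means many requests reach memory,
        # so the allocation benefits from the lower-latency portion.
        if self.miss_rate(buffer_id) > self.MISS_RATE_THRESHOLD:
            return "low_latency"
        return "high_latency"

if __name__ == "__main__":
    rm = ResourceManager()
    for miss in [True, True, False, True]:
        rm.record_access("framebuffer", miss)
    print(rm.choose_pool("framebuffer"))   # low_latency (miss rate 0.75)
```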

    APPROACH FOR MANAGING NEAR-MEMORY PROCESSING COMMANDS AND NON-NEAR-MEMORY PROCESSING COMMANDS IN A MEMORY CONTROLLER

    Publication Number: US20230205706A1

    Publication Date: 2023-06-29

    Application Number: US17561454

    Application Date: 2021-12-23

    CPC classification number: G06F12/1009 G06F12/0882 G06F12/0207

    Abstract: An approach is provided for managing PIM commands and non-PIM commands at a memory controller. A memory controller enqueues PIM commands and non-PIM commands and selects the next command to process based upon various selection criteria. The memory controller maintains and uses a page table to properly configure memory elements, such as banks in a memory module, for the next memory command, whether a PIM command or a non-PIM command. The page table tracks the status of memory elements as of the most recent memory command that was issued. The page table includes an "All Banks" entry that indicates the status of banks after processing the most recent PIM command. For example, the All Banks entry indicates whether all the banks have a row open and, if so, specifies the open row for all the banks.
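    The sketch below models the described tracking with per-bank open-row entries plus an All Banks entry that is set after a PIM command and invalidated by a subsequent per-bank command; the data layout and update rules are assumptions made for illustration.

```python
# Simplified open-row state table with an "All Banks" entry.
ALL_BANKS = "all_banks"

class RowStateTable:
    def __init__(self, num_banks):
        self.num_banks = num_banks
        # None means no row open; otherwise the open row index.
        self.entries = {bank: None for bank in range(num_banks)}
        self.entries[ALL_BANKS] = None

    def on_non_pim_command(self, bank, row):
        # A normal command touches one bank, invalidating any all-bank guarantee.
        self.entries[bank] = row
        self.entries[ALL_BANKS] = None

    def on_pim_command(self, row):
        # A PIM command operates across banks, leaving the same row open everywhere.
        for bank in range(self.num_banks):
            self.entries[bank] = row
        self.entries[ALL_BANKS] = row

    def row_hit(self, bank, row):
        if self.entries[ALL_BANKS] is not None:
            return self.entries[ALL_BANKS] == row
        return self.entries[bank] == row

if __name__ == "__main__":
    table = RowStateTable(num_banks=4)
    table.on_pim_command(row=12)
    print(table.row_hit(bank=2, row=12))   # True: All Banks entry records row 12
    table.on_non_pim_command(bank=0, row=5)
    print(table.row_hit(bank=0, row=12))   # False: bank 0 now has row 5 open
```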

    REUSING REMOTE REGISTERS IN PROCESSING IN MEMORY

    Publication Number: US20220206685A1

    Publication Date: 2022-06-30

    Application Number: US17139496

    Application Date: 2020-12-31

    Abstract: Systems, apparatuses, and methods for reusing remote registers in processing in memory (PIM) are disclosed. A system includes at least a host processor, a memory controller, and a PIM device. When the memory controller receives, from the host processor, an operation targeting the PIM device, the memory controller determines whether an optimization can be applied to the operation. The memory controller converts the operation into N PIM commands if the optimization is not applicable. Otherwise, the memory controller converts the operation into N−1 PIM commands if the optimization is applicable. For example, if the operation involves reusing a constant value, a copy command can be omitted, resulting in memory bandwidth reduction and power consumption savings. In one scenario, the memory controller includes a constant-value cache, and the memory controller performs a lookup of the constant-value cache to determine if the optimization is applicable for a given operation.
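    A simplified sketch of the constant-reuse optimization follows: a constant-value cache records which constants already reside in a PIM register, and the copy command is omitted on a hit, reducing an operation from N to N−1 PIM commands. The command names and operation format are placeholders, not the real command set.

```python
# Illustrative constant-value cache lookup during operation-to-PIM-command conversion.
class ConstantValueCache:
    def __init__(self):
        self.resident = {}   # constant value -> PIM register holding it

    def lookup(self, value):
        return self.resident.get(value)

    def insert(self, value, register):
        self.resident[value] = register

class PimCommandGenerator:
    def __init__(self):
        self.const_cache = ConstantValueCache()

    def convert(self, operation):
        """operation: dict with 'constant' and 'dst' fields (illustrative)."""
        commands = []
        reg = self.const_cache.lookup(operation["constant"])
        if reg is None:
            # Optimization not applicable: N commands, including the constant copy.
            reg = "r_const"
            commands.append(("COPY_CONST", operation["constant"], reg))
            self.const_cache.insert(operation["constant"], reg)
        # Remaining commands reuse the already-resident constant register.
        commands.append(("COMPUTE", operation["dst"], reg))
        return commands

if __name__ == "__main__":
    gen = PimCommandGenerator()
    print(gen.convert({"constant": 3, "dst": "row0"}))   # 2 commands (copy + compute)
    print(gen.convert({"constant": 3, "dst": "row1"}))   # 1 command (constant reused)
```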

    Scheduling Multiple Processing-in-Memory (PIM) Threads and Non-PIM Threads

    Publication Number: US20250004826A1

    Publication Date: 2025-01-02

    Application Number: US18214733

    Application Date: 2023-06-27

    Abstract: Scheduling requests of multiple processing-in-memory threads and requests of multiple non-processing-in-memory threads is described. In accordance with the described techniques, a memory controller receives a plurality of processing-in-memory threads and a plurality of non-processing-in-memory threads from a host. The memory controller schedules an order of execution for requests of the plurality of processing-in-memory threads and requests of the plurality of non-processing-in-memory threads based on a priority associated with each of the requests and a current operating mode of the system. Requests are maintained in queues at the memory controller and are individually assigned a priority level based on time enqueued at the memory controller. Requests of a different mode than a current operating mode of the system are delayed for scheduling until at least one different mode request is escalated to a maximum priority value, at which point the memory controller initiates a system mode switch.
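    The sketch below captures the age-based priority and mode-switch behavior in simplified form, assuming a fixed maximum priority value and a one-step-per-scheduling-call aging rule; both are illustrative rather than taken from the patent.

```python
# Illustrative age-based priority scheduling with a triggered mode switch.
MAX_PRIORITY = 7   # assumed priority ceiling

class ModeAwareScheduler:
    def __init__(self, mode="pim"):
        self.mode = mode
        self.queues = {"pim": [], "non_pim": []}   # each entry: [priority, request]

    def enqueue(self, request, mode):
        self.queues[mode].append([0, request])     # priority grows with time enqueued

    def _age(self):
        for queue in self.queues.values():
            for entry in queue:
                entry[0] = min(MAX_PRIORITY, entry[0] + 1)

    def schedule_next(self):
        self._age()
        other = "non_pim" if self.mode == "pim" else "pim"
        # A different-mode request reaching maximum priority forces a mode switch.
        if any(priority == MAX_PRIORITY for priority, _ in self.queues[other]):
            self.mode = other
        active = self.queues[self.mode]
        if not active:
            return None
        # Serve the highest-priority (longest-enqueued) request of the current mode.
        active.sort(key=lambda entry: entry[0], reverse=True)
        return active.pop(0)[1]
```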
