METHOD FOR EMBEDDING ROWS PREFETCHING IN RECOMMENDATION MODELS

    Publication No.: US20230401154A1

    Publication Date: 2023-12-14

    Application No.: US17835810

    Application Date: 2022-06-08

    CPC classification number: G06F12/0862 G06F2212/602

    Abstract: A system and method for efficiently accessing sparse data for a workload are described. In various implementations, a computing system includes an integrated circuit and a memory for storing tasks of a workload that includes sparse accesses of data items stored in one or more tables. The integrated circuit receives a user query, and generates a result based on multiple data items targeted by the user query. To reduce the latency of processing the workload even with sparse lookup operations performed on the one or more tables, a prefetch engine of the integrated circuit stores a subset of data items in prefetch data storage. The prefetch engine also determines which data items to store in the prefetch data storage based on one or more of a frequency of reuse, a distance or latency of access of a corresponding table of the one or more tables, or other criteria.
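
    The prefetch decision described in this abstract can be pictured with a short sketch. The C++ below is an illustrative sketch only, not the patented implementation: it scores each embedding row by its reuse count weighted by the access latency of its source table, and marks rows whose score clears a threshold as worth keeping in the prefetch data storage. The class name, the scoring formula, and the latency table are all hypothetical.

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Hypothetical sketch: score embedding rows by reuse frequency weighted by the
// access latency of their source table; high-scoring rows are candidates for
// the prefetch data storage.
class PrefetchPolicy {
public:
    explicit PrefetchPolicy(std::vector<uint32_t> tableLatencyCycles)
        : tableLatency_(std::move(tableLatencyCycles)) {}

    // Record one lookup of row `index` in table `table`.
    void recordAccess(uint32_t table, uint32_t index) {
        score_[{table, index}] += tableLatency_.at(table);
    }

    // Rows from slow or remote tables that are reused often score highest and
    // are the ones worth holding in the prefetch data storage.
    bool worthPrefetching(uint32_t table, uint32_t index, uint64_t threshold) const {
        auto it = score_.find({table, index});
        return it != score_.end() && it->second >= threshold;
    }

private:
    std::vector<uint32_t> tableLatency_;                      // per-table access latency
    std::map<std::pair<uint32_t, uint32_t>, uint64_t> score_; // (table, row) -> score
};
```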

    Cache management based on access type priority

    Publication No.: US11768779B2

    Publication Date: 2023-09-26

    Application No.: US16716194

    Application Date: 2019-12-16

    Abstract: Systems, apparatuses, and methods for cache management based on access type priority are disclosed. A system includes at least a processor and a cache. During a program execution phase, certain access types are more likely to cause demand hits in the cache than others. Demand hits are load and store hits to the cache. A run-time profiling mechanism is employed to find which access types are more likely to cause demand hits. Based on the profiling results, the cache lines that will likely be accessed in the future are retained based on their most recent access type. The goal is to increase demand hits and thereby improve system performance. An efficient cache replacement policy can potentially reduce redundant data movement, thereby improving system performance and reducing energy consumption.
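
    As an illustration of the run-time profiling idea (not the claimed implementation), the sketch below keeps per-access-type counters of how often a line last touched by that type later receives a demand hit, and turns the hit rate into a retention priority. The access-type set and the priority rule are hypothetical.

```cpp
#include <array>
#include <cstdint>

// Hypothetical access types; the cache tags each line with the type of its
// most recent access.
enum class AccessType : uint8_t { DemandLoad, DemandStore, Prefetch, Writeback, Count };

struct AccessTypeProfiler {
    std::array<uint64_t, static_cast<size_t>(AccessType::Count)> linesTouched{};
    std::array<uint64_t, static_cast<size_t>(AccessType::Count)> demandHits{};

    // Called when a line is filled or re-referenced with access type `t`.
    void onTouch(AccessType t) { linesTouched[static_cast<size_t>(t)]++; }

    // Called when a demand load/store hits a line; `t` is the most recent
    // access type recorded in that line's tag.
    void onDemandHit(AccessType t) { demandHits[static_cast<size_t>(t)]++; }

    // Access types with a higher demand-hit rate get a higher retention
    // priority, so the replacement policy prefers to keep their lines.
    uint32_t retentionPriority(AccessType t) const {
        size_t i = static_cast<size_t>(t);
        if (linesTouched[i] == 0) return 0;
        return static_cast<uint32_t>((100 * demandHits[i]) / linesTouched[i]);
    }
};
```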

    Chiplet-Level Performance Information for Configuring Chiplets in a Processor

    Publication No.: US20230153218A1

    Publication Date: 2023-05-18

    Application No.: US17526218

    Application Date: 2021-11-15

    CPC classification number: G06F11/3051 G06F15/80 G06F11/3024

    Abstract: A processor includes a controller and a plurality of chiplets, each chiplet including a plurality of processor cores. The controller provides chiplet-level performance information for the chiplets that identifies a performance of each chiplet at each of a plurality of performance levels for specified sets of processor cores on that chiplet. The controller receives an identification of one or more selected chiplets from among the plurality of chiplets for which a specified number of processor cores are to be configured at a given performance level, the one or more selected chiplets having been selected based on the chiplet-level performance information and performance requirements. The controller configures the specified number of processor cores of the one or more selected chiplets at the given performance level. A task is then run on the specified number of processor cores of the one or more selected chiplets at the given performance level.
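
    A minimal sketch of the selection step, under the assumption that the controller's chiplet-level performance information can be read as a per-chiplet table indexed by performance level and core count. The table layout, names, and selection rule below are hypothetical simplifications, not the claimed method.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical view of the chiplet-level performance information.
struct ChipletPerfInfo {
    uint32_t chipletId;
    // perf[level][coreCount - 1] = reported performance of this chiplet when
    // `coreCount` of its cores run at the given performance level.
    std::vector<std::vector<double>> perf;
};

// Return the id of the chiplet that the performance information says will run
// `coresNeeded` cores fastest at `level`, or -1 if no chiplet can supply them.
int selectChiplet(const std::vector<ChipletPerfInfo>& chiplets,
                  uint32_t level, uint32_t coresNeeded) {
    if (coresNeeded == 0) return -1;
    int best = -1;
    double bestPerf = 0.0;
    for (const auto& c : chiplets) {
        if (level >= c.perf.size() || coresNeeded > c.perf[level].size()) continue;
        double p = c.perf[level][coresNeeded - 1];
        if (p > bestPerf) {
            bestPerf = p;
            best = static_cast<int>(c.chipletId);
        }
    }
    return best;
}
```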

    CACHE MANAGEMENT BASED ON ACCESS TYPE PRIORITY

    Publication No.: US20210182216A1

    Publication Date: 2021-06-17

    Application No.: US16716194

    Application Date: 2019-12-16

    Abstract: Systems, apparatuses, and methods for cache management based on access type priority are disclosed. A system includes at least a processor and a cache. During a program execution phase, certain access types are more likely to cause demand hits in the cache than others. Demand hits are load and store hits to the cache. A run-time profiling mechanism is employed to find which access types are more likely to cause demand hits. Based on the profiling results, the cache lines that will likely be accessed in the future are retained based on their most recent access type. The goal is to increase demand hits and thereby improve system performance. An efficient cache replacement policy can potentially reduce redundant data movement, thereby improving system performance and reducing energy consumption.
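
    This entry is the pre-grant publication of the granted patent listed above (same application number and abstract). As a complementary illustrative sketch, the code below shows how a replacement policy might pick a victim using each line's most recent access type together with profiled retention priorities like those in the earlier sketch; all structures and names are hypothetical.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-line metadata kept by the cache.
struct CacheLineMeta {
    bool valid = false;
    uint8_t lastAccessType = 0;  // index into the profiled access types
    uint64_t lastUseTick = 0;    // for breaking ties, oldest first
};

// Evict the line whose last access type has the lowest profiled retention
// priority; among equals, evict the least recently used line.
size_t chooseVictim(const std::vector<CacheLineMeta>& set,
                    const std::vector<uint32_t>& priorityOf) {
    size_t victim = 0;
    for (size_t i = 0; i < set.size(); ++i) {
        if (!set[i].valid) return i;  // a free way is always preferred
        uint32_t pi = priorityOf[set[i].lastAccessType];
        uint32_t pv = priorityOf[set[victim].lastAccessType];
        if (pi < pv || (pi == pv && set[i].lastUseTick < set[victim].lastUseTick))
            victim = i;
    }
    return victim;
}
```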

    MEMORY REQUEST PRIORITY ASSIGNMENT TECHNIQUES FOR PARALLEL PROCESSORS

    Publication No.: US20210173796A1

    Publication Date: 2021-06-10

    Application No.: US16706421

    Application Date: 2019-12-06

    Abstract: Systems, apparatuses, and methods for implementing memory request priority assignment techniques for parallel processors are disclosed. A system includes at least a parallel processor coupled to a memory subsystem, where the parallel processor includes at least a plurality of compute units for executing wavefronts in lock-step. The parallel processor assigns priorities to memory requests of wavefronts on a per-work-item basis by indexing into a first priority vector, with the index generated based on lane-specific information. If a given event is detected, a second priority vector is generated by applying a given priority promotion vector to the first priority vector. Then, for subsequent wavefronts, memory requests are assigned priorities by indexing into the second priority vector with lane-specific information. The use of priority vectors to assign priorities to memory requests helps to reduce the memory divergence problem experienced by different work-items of a wavefront.
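
    The priority-vector indexing can be illustrated with a short sketch, shown below under hypothetical assumptions: the vector length, the lane-index function, and the promotion rule are all invented for illustration and are not the claimed design.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

constexpr size_t kLanes = 64;           // work-items (lanes) per wavefront (assumed)
constexpr size_t kPriorityLevels = 8;   // number of distinct priorities (assumed)

using PriorityVector = std::array<uint8_t, kLanes>;

// Lane-specific index: here simply the lane id folded into the vector length.
inline size_t laneIndex(uint32_t laneId) { return laneId % kLanes; }

// Priority assigned to one work-item's memory request.
inline uint8_t requestPriority(const PriorityVector& v, uint32_t laneId) {
    return v[laneIndex(laneId)];
}

// On a detected event (e.g. excessive memory divergence), derive a second
// priority vector by applying a promotion vector to the first one; subsequent
// wavefronts index into this promoted vector instead.
inline PriorityVector promote(const PriorityVector& base,
                              const PriorityVector& promotion) {
    PriorityVector out{};
    for (size_t i = 0; i < kLanes; ++i) {
        uint32_t p = base[i] + promotion[i];
        out[i] = static_cast<uint8_t>(p < kPriorityLevels ? p : kPriorityLevels - 1);
    }
    return out;
}
```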

    Hierarchical register file at a graphics processing unit

    Publication No.: US10853904B2

    Publication Date: 2020-12-01

    Application No.: US15079543

    Application Date: 2016-03-24

    Abstract: A processor employs a hierarchical register file for a graphics processing unit (GPU). A top level of the hierarchical register file is stored at a local memory of the GPU (e.g., a memory on the same integrated circuit die as the GPU). Lower levels of the hierarchical register file are stored at a different, larger memory, such as a remote memory located on a different die than the GPU. A register file control module monitors the status of in-flight wavefronts at the GPU, and in particular whether each in-flight wavefront is active, predicted to become active, or inactive. The register file control module places execution data for active and predicted-active wavefronts in the top level of the hierarchical register file and places execution data for inactive wavefronts at lower levels of the hierarchical register file.
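
    A minimal sketch of the placement decision, assuming a simple two-level register file (an on-die "local" level and a larger "remote" level) and a hypothetical controller class; the state names and the move function are illustrative only.

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical wavefront states tracked by the register file control module.
enum class WavefrontState : uint8_t { Active, PredictedActive, Inactive };

enum class RegFileLevel : uint8_t { Local, Remote };

class RegisterFileController {
public:
    // Active and predicted-active wavefronts keep their execution data in the
    // local (top) level; inactive wavefronts are demoted to the remote level.
    void onStateChange(uint32_t wavefrontId, WavefrontState s) {
        RegFileLevel target =
            (s == WavefrontState::Inactive) ? RegFileLevel::Remote : RegFileLevel::Local;
        auto it = placement_.find(wavefrontId);
        if (it != placement_.end() && it->second == target) return;  // already placed
        moveRegisters(wavefrontId, target);  // copy the wavefront's registers
        placement_[wavefrontId] = target;
    }

private:
    // Placeholder for the data movement between register file levels.
    void moveRegisters(uint32_t /*wavefrontId*/, RegFileLevel /*to*/) {}

    std::unordered_map<uint32_t, RegFileLevel> placement_;
};
```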
