Performance By Retaining High Locality Data In Higher Level Cache Memory

    Publication No.: US20190087345A1

    Publication Date: 2019-03-21

    Application No.: US15709884

    Filing Date: 2017-09-20

    Abstract: Various aspects include methods for retaining high-locality data in a higher-level cache memory on a computing device. Various aspects may include receiving a cache access request for a first cache line in the higher-level cache memory indicating a locality of the first cache line, determining whether the access request indicates high locality, and setting a high-locality indicator of the first cache line in response to determining that the cache access request indicates high locality. Various aspects may include determining whether a lower-level cache memory hit counter of a first cache line of a first cache exceeds a lower-level cache locality threshold, setting a high-locality indicator of the first cache line in response to determining that the hit counter exceeds the threshold, and resetting the lower-level cache memory hit counter of the first cache.
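
The policy described in the abstract can be sketched roughly as follows. All names (`CacheLine`, the threshold value, the two entry points) are illustrative assumptions for exposition, not taken from the patent.

```python
# Hypothetical sketch of the high-locality retention policy: a lower-level
# hit counter promotes a line to "high locality" once it crosses an assumed
# threshold, and an access request can also signal high locality directly.
from dataclasses import dataclass

LOWER_LEVEL_LOCALITY_THRESHOLD = 4  # assumed threshold value


@dataclass
class CacheLine:
    tag: int
    high_locality: bool = False
    lower_level_hits: int = 0


def on_lower_level_hit(line: CacheLine) -> None:
    """Count a lower-level cache hit; promote the line and reset the
    counter once the hit count exceeds the locality threshold."""
    line.lower_level_hits += 1
    if line.lower_level_hits > LOWER_LEVEL_LOCALITY_THRESHOLD:
        line.high_locality = True   # retain this line preferentially
        line.lower_level_hits = 0   # reset counter, per the abstract


def on_access_request(line: CacheLine, indicates_high_locality: bool) -> None:
    """Set the high-locality indicator when the request itself signals it."""
    if indicates_high_locality:
        line.high_locality = True
```

A line marked `high_locality` would then be favored by the replacement policy in the higher-level cache; the abstract leaves that eviction interaction unspecified.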

    DATA RE-ENCODING FOR ENERGY-EFFICIENT DATA TRANSFER IN A COMPUTING DEVICE

    Publication No.: US20230031310A1

    Publication Date: 2023-02-02

    Application No.: US17390215

    Filing Date: 2021-07-30

    Abstract: The energy consumed by data transfer in a computing device may be reduced by transferring data that has been encoded in a manner that reduces the number of one “1” data values, the number of signal level transitions, or both. A data destination component of the computing device may receive data encoded in such a manner from a data source component of the computing device over a data communication interconnect, such as an off-chip interconnect. The data may be encoded using minimum Hamming weight encoding, which reduces the number of one “1” data values. The received data may be decoded using minimum Hamming weight decoding. For other computing devices, the data may be encoded using maximum Hamming weight encoding, which increases the number of one “1” data values while reducing the number of zero “0” values, if reducing the number of zero values reduces energy consumption.
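
A minimal sketch of minimum-Hamming-weight encoding as described above: if a word contains more ones than zeros, transmit its complement together with a one-bit inversion flag, so the interconnect always carries the smaller number of "1" values. The word width, helper names, and flag convention are assumptions.

```python
# Illustrative minimum-Hamming-weight codec: complement words whose
# popcount exceeds half the width, and record the inversion in a flag
# bit that the data destination uses to decode.
WIDTH = 8  # assumed word width in bits
MASK = (1 << WIDTH) - 1


def encode_min_weight(word: int) -> tuple[int, int]:
    """Return (encoded_word, inverted_flag) with minimal Hamming weight."""
    ones = bin(word).count("1")
    if ones > WIDTH // 2:
        return (~word & MASK, 1)  # send the complement instead
    return (word, 0)


def decode_min_weight(encoded: int, inverted: int) -> int:
    """Undo the encoding at the data destination component."""
    return (~encoded & MASK) if inverted else encoded
```

The maximum-Hamming-weight variant mentioned for other devices would simply flip the comparison, complementing words with more zeros than ones.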

    ERROR HANDLING FOR REPROJECTION TIMELINE

    Publication No.: US20240386980A1

    Publication Date: 2024-11-21

    Application No.: US18522098

    Filing Date: 2023-11-28

    Abstract: Aspects presented herein relate to methods and devices for data or graphics processing including an apparatus, e.g., a graphics processing unit (GPU). The apparatus may obtain an indication of a data write for data associated with data processing. The apparatus may write, based on the indication, the data associated with the data processing to a memory address. The apparatus may receive a read request for the data including the memory address, where the read request is associated with a read with invalidate process. The apparatus may retrieve, based on the read request, the data from at least one of a first cache or at least one second memory, where the retrieval of the data is based on a timing of the indication of the data write. The apparatus may output the retrieved data from at least one of the first cache or the at least one second memory.
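
One loose reading of the retrieval step above can be sketched as follows: if the indicated write has already landed in the first cache, the read-with-invalidate serves the data from there and drops the cached copy; otherwise it falls back to the second memory. The two-level structure and all names are assumptions; the patent does not specify this organization.

```python
# Hedged sketch of a read-with-invalidate decision keyed on whether the
# write indication has reached the first cache yet.
def read_with_invalidate(address: int, cache: dict, memory: dict) -> int:
    """Serve the read from the first cache when the write already
    reached it (invalidating the entry), else from the second memory."""
    if address in cache:             # write indication already applied here
        return cache.pop(address)    # read, then invalidate the cached copy
    return memory[address]           # write went (or drained) to memory
```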

    BANDWIDTH/RESOURCE MANAGEMENT FOR MULTITHREADED PROCESSORS
    9.
    Invention Application
    Status: Pending (Published)

    Publication No.: US20160350152A1

    Publication Date: 2016-12-01

    Application No.: US14866012

    Filing Date: 2015-09-25

    Abstract: Systems and methods relate to managing shared resources in a multithreaded processor comprising two or more processing threads. Danger levels for the two or more threads are determined, wherein the danger level of a thread is based on a potential failure of the thread to meet a deadline due to unavailability of a shared resource. Priority levels associated with the two or more threads are also determined, wherein the priority level is higher for a thread whose failure to meet a deadline is unacceptable and the priority level is lower for a thread whose failure to meet a deadline is acceptable. The two or more threads are scheduled based at least on the determined danger levels for the two or more threads and priority levels associated with the two or more threads.
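
The scheduling rule above can be sketched as an ordering over (priority, danger) pairs: among threads whose deadline misses are unacceptable, the most endangered runs first. The field names and the exact combining rule are illustrative assumptions.

```python
# Hypothetical sketch of danger/priority scheduling for a multithreaded
# processor: dispatch order favors high priority, then high danger.
from dataclasses import dataclass


@dataclass
class Thread:
    name: str
    danger: int    # higher = closer to missing a deadline for lack of a resource
    priority: int  # higher = missing the deadline is unacceptable


def schedule(threads: list[Thread]) -> list[Thread]:
    """Return threads in dispatch order: high-priority, high-danger first."""
    return sorted(threads, key=lambda t: (t.priority, t.danger), reverse=True)
```

A real implementation would recompute danger levels continuously as shared-resource availability changes; this sketch only shows the ordering step.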


    Cache Bank Spreading For Compression Algorithms
    10.
    Invention Application
    Status: Granted

    Publication No.: US20160077973A1

    Publication Date: 2016-03-17

    Application No.: US14483902

    Filing Date: 2014-09-11

    Abstract: Aspects include computing devices, systems, and methods for implementing cache memory access requests for compressed data using cache bank spreading. In an aspect, cache bank spreading may include determining whether the compressed data of the cache memory access fits on a single cache bank. In response to determining that the compressed data fits on a single cache bank, a cache bank spreading value may be calculated to replace/reinstate bank selection bits of the physical address for a cache memory of the cache memory access request that may be cleared during data compression. A cache bank spreading address in the physical space of the cache memory may include the physical address of the cache memory access request plus the reinstated bank selection bits. The cache bank spreading address may be used to read compressed data from or write compressed data to the cache memory device.
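
The reinstatement step can be sketched as follows: when compression has cleared the bank-selection bits, a spreading value derived from higher-order address bits is written back into the bank-selection field, so consecutive compressed lines land on different banks. The bit positions and the hash used here are assumptions, not the patent's method.

```python
# Illustrative cache-bank-spreading address computation: reinstate
# cleared bank-selection bits with a value hashed from upper address bits.
BANK_BITS = 2    # assumed number of bank-selection bits
BANK_SHIFT = 6   # assumed bit position of the bank-selection field


def bank_spreading_address(physical_addr: int) -> int:
    """Return the physical address with bank-selection bits reinstated
    from a spreading value derived from higher-order address bits."""
    mask = ((1 << BANK_BITS) - 1) << BANK_SHIFT
    upper = physical_addr >> (BANK_SHIFT + BANK_BITS)
    spread = (upper ^ (upper >> BANK_BITS)) & ((1 << BANK_BITS) - 1)
    return (physical_addr & ~mask) | (spread << BANK_SHIFT)
```

With this scheme, two compressed lines whose cleared addresses differ only in upper bits map to different banks, restoring bank-level parallelism.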

