Efficient fill-buffer data forwarding supporting high frequencies
    1.
    Invention Grant
    Efficient fill-buffer data forwarding supporting high frequencies (In Force)

    Publication No.: US09418018B2

    Publication Date: 2016-08-16

    Application No.: US14337211

    Filing Date: 2014-07-21

    Abstract: A Fill Buffer (FB) based data forwarding scheme stores in the FB a combination of the Virtual Address (VA), the TLB (Translation Look-aside Buffer) entry# or an indication of the location of a Page Table Entry (PTE) in the TLB, and the TLB page size, and uses these values to expedite FB forwarding. Load (Ld) operations send their non-translated VA for an early comparison against the VA entries in the FB and are then further qualified with the TLB entry# to determine a "hit." This hit determination is fast and enables FB forwarding at higher frequencies without waiting for a comparison of Physical Addresses (PA) to conclude in the FB. A safety mechanism may detect a false hit in the FB and generate a late load-cancel indication to cancel the earlier-started FB forwarding by ignoring the data obtained from the Ld execution. The Ld is then re-executed later and attempts to complete successfully with the correct data.

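    Illustrative sketch: the early-hit check in the abstract can be expressed as a short routine. The C++ below is not from the patent; the names (FillBufferEntry, early_fb_hit, late_cancel_needed) and the assumed 64-byte cache-line granularity are illustrative. A load's untranslated VA and TLB entry# are compared against each FB entry so forwarding can start early, and a later PA compare acts as the safety mechanism that flags a false hit for cancellation and re-execution.

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <vector>

    struct FillBufferEntry {
        uint64_t va;         // virtual address of the line being filled
        unsigned tlb_entry;  // TLB entry# (location of the PTE) used for the fill
        uint64_t pa;         // physical address, known only after translation
        // fill data omitted
    };

    constexpr uint64_t kLineMask = ~uint64_t{63};  // assumed 64-byte cache lines

    // Early hit: compare the load's untranslated VA and its TLB entry# against
    // each FB entry, so forwarding can begin without waiting for the PA compare.
    std::optional<std::size_t> early_fb_hit(const std::vector<FillBufferEntry>& fb,
                                            uint64_t load_va, unsigned load_tlb_entry) {
        for (std::size_t i = 0; i < fb.size(); ++i) {
            if ((fb[i].va & kLineMask) == (load_va & kLineMask) &&
                fb[i].tlb_entry == load_tlb_entry) {
                return i;  // start forwarding from this FB entry
            }
        }
        return std::nullopt;
    }

    // Safety mechanism: once the load's PA is available, a mismatch means the
    // early hit was false; the forwarded data is ignored and the load re-executes.
    bool late_cancel_needed(const FillBufferEntry& entry, uint64_t load_pa) {
        return (entry.pa & kLineMask) != (load_pa & kLineMask);
    }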

    Cache replacement policy methods and systems
    2.
    Invention Grant
    Cache replacement policy methods and systems (In Force)

    Publication No.: US09418019B2

    Publication Date: 2016-08-16

    Application No.: US14269032

    Filing Date: 2014-05-02

    CPC classification number: G06F12/121

    Abstract: An embodiment includes a system comprising: a cache configured to store a plurality of cache lines, each cache line associated with a priority state from among N priority states; and a controller coupled to the cache and configured to: search the cache lines for a cache line with the lowest of the priority states to use as a victim cache line; if no cache line with the lowest priority state is found, reduce the priority state of at least one of the cache lines; and select a random cache line as the victim cache line if, after performing the search and the priority-state reduction K times, no cache line with the lowest priority state is found. N is an integer greater than or equal to 3, and K is an integer greater than or equal to 1 and less than or equal to N−2.

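    Illustrative sketch: the victim-selection flow reads as a search-and-reduce loop with a random fallback. The C++ below is an assumed rendering, not code from the patent; select_victim and the representation of the N priority states as small integers (0 = lowest, preferred victim) are illustrative choices.

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // priority[i] is the priority state of cache line i, a value in [0, N-1]
    // with 0 the lowest state; K is the number of search-and-reduce passes
    // (1 <= K <= N-2), per the abstract.
    std::size_t select_victim(std::vector<int>& priority, int K) {
        for (int pass = 0; pass < K; ++pass) {
            // Search for a line already in the lowest priority state.
            for (std::size_t i = 0; i < priority.size(); ++i) {
                if (priority[i] == 0) return i;  // use as the victim cache line
            }
            // None found: reduce the priority state of the lines by one step.
            for (int& p : priority) {
                if (p > 0) --p;
            }
        }
        // Still no line at the lowest state after K passes: random victim.
        return static_cast<std::size_t>(std::rand()) % priority.size();
    }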

    Pre-fetch chaining
    4.
    Invention Grant
    Pre-fetch chaining (In Force)

    Publication No.: US09569361B2

    Publication Date: 2017-02-14

    Application No.: US14325343

    Filing Date: 2014-07-07

    CPC classification number: G06F12/0862 G06F12/10 G06F2212/6022

    Abstract: According to one general aspect, an apparatus may include a cache pre-fetcher and a pre-fetch scheduler. The cache pre-fetcher may be configured to predict, based at least in part upon a virtual address, data to be retrieved from a memory system. The pre-fetch scheduler may be configured to convert the virtual address of the data to a physical address of the data, and request the data from one of a plurality of levels of the memory system. The memory system may include a plurality of levels, each configured to store data.

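    Illustrative sketch: the abstract splits the work between a pre-fetcher that predicts on virtual addresses and a scheduler that translates the address and issues the request to a memory level. The C++ below is an assumed rendering; the next-line prediction heuristic, the translate callback, and all names are illustrative, not details from the patent.

    #include <cstdint>
    #include <functional>

    struct PrefetchRequest {
        uint64_t pa;   // physical address to fetch
        int level;     // which level of the memory system to request from
    };

    // Cache pre-fetcher: predicts, from a virtual address, data to be retrieved.
    uint64_t predict_next_va(uint64_t demand_va) {
        return demand_va + 64;  // assumed simple next-line prediction, 64-byte lines
    }

    // Pre-fetch scheduler: converts the predicted VA to a PA and requests the
    // data from one of the levels of the memory system.
    PrefetchRequest schedule_prefetch(uint64_t predicted_va,
                                      const std::function<uint64_t(uint64_t)>& translate,
                                      int target_level) {
        return PrefetchRequest{translate(predicted_va), target_level};
    }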
