61. STRIDE-BASED TRANSLATION LOOKASIDE BUFFER (TLB) PREFETCHING WITH ADAPTIVE OFFSET
    Invention Application (In force)

    Publication No.: US20140281351A1

    Publication Date: 2014-09-18

    Application No.: US13799582

    Application Date: 2013-03-13

    IPC Class: G06F12/10

    Abstract: A processing device implementing stride-based translation lookaside buffer (TLB) prefetching with adaptive offset is disclosed. A processing device of the disclosure includes a data prefetcher to generate a data prefetch address based on a linear address, a stride, or a prefetch distance, the data prefetch address associated with a data prefetch request, and a TLB prefetch address computation component to generate a TLB prefetch address based on the linear address, the stride, the prefetch distance, or an adaptive offset. The processing device also includes a cross page detection component to determine that the data prefetch address or the TLB prefetch address crosses a page boundary associated with the linear address, and cause a TLB prefetch request to be written to a TLB request queue, the TLB prefetch request for translation of an address of a linear page number (LPN) based on the data prefetch address or the TLB prefetch address.

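    As a reading aid, the address arithmetic the abstract describes can be sketched in C. This is a minimal illustration, not the claimed design: the names, the 4 KiB page size, and the queueing decision are assumptions.

/* Minimal sketch of the address arithmetic described in the abstract.
 * All names, the 4 KiB page size, and the queueing model are illustrative
 * assumptions, not the patented implementation. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12u                  /* assume 4 KiB pages */

typedef struct {
    uint64_t linear_addr;       /* demand access address */
    int64_t  stride;            /* detected stride in bytes */
    uint64_t prefetch_distance; /* how many strides ahead data prefetches run */
    uint64_t adaptive_offset;   /* extra strides of lookahead for TLB prefetches */
} prefetch_state_t;

/* Data prefetch address: run 'prefetch_distance' strides ahead of demand. */
static uint64_t data_prefetch_addr(const prefetch_state_t *s)
{
    return s->linear_addr + (uint64_t)(s->stride * (int64_t)s->prefetch_distance);
}

/* TLB prefetch address: run further ahead by the adaptive offset so the
 * translation is ready before the data prefetcher reaches the next page. */
static uint64_t tlb_prefetch_addr(const prefetch_state_t *s)
{
    return s->linear_addr +
           (uint64_t)(s->stride * (int64_t)(s->prefetch_distance + s->adaptive_offset));
}

/* Cross-page detection: if either prefetch address leaves the demand page,
 * queue a TLB prefetch request for that page's linear page number (LPN). */
static int maybe_queue_tlb_prefetch(const prefetch_state_t *s, uint64_t *lpn_out)
{
    uint64_t demand_lpn = s->linear_addr >> PAGE_SHIFT;
    uint64_t data_lpn   = data_prefetch_addr(s) >> PAGE_SHIFT;
    uint64_t tlb_lpn    = tlb_prefetch_addr(s)  >> PAGE_SHIFT;

    if (tlb_lpn != demand_lpn)  { *lpn_out = tlb_lpn;  return 1; }
    if (data_lpn != demand_lpn) { *lpn_out = data_lpn; return 1; }
    return 0;   /* both prefetches stay on the current page */
}

int main(void)
{
    prefetch_state_t s = { 0x7f0000000f00ull, 256, 4, 8 };
    uint64_t lpn;
    if (maybe_queue_tlb_prefetch(&s, &lpn))
        printf("queue TLB prefetch for LPN 0x%llx\n", (unsigned long long)lpn);
    return 0;
}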

62. TRACKING AND ELIMINATING BAD PREFETCHES GENERATED BY A STRIDE PREFETCHER
    Invention Application (In force)

    Publication No.: US20140237212A1

    Publication Date: 2014-08-21

    Application No.: US13773166

    Application Date: 2013-02-21

    IPC Class: G06F12/02

    Abstract: A method, an apparatus, and a non-transitory computer readable medium for tracking prefetches generated by a stride prefetcher are presented. Responsive to a prefetcher table entry for an address stream locking on a stride, prefetch suppression logic is updated and prefetches from the prefetcher table entry are suppressed when suppression is enabled for that prefetcher table entry. A stride is a difference between consecutive addresses in the address stream. A prefetch request is issued from the prefetcher table entry when suppression is not enabled for that prefetcher table entry.

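    A minimal sketch of the per-entry suppression idea follows, assuming a useless-prefetch counter and threshold that the abstract does not specify.

/* Minimal sketch of per-table-entry prefetch suppression. The useless-prefetch
 * counter and its threshold are illustrative assumptions; the abstract only
 * says suppression state is updated when an entry locks onto a stride and that
 * suppressed entries issue no prefetch requests. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SUPPRESS_THRESHOLD 4u   /* assumed: bad prefetches tolerated per entry */

typedef struct {
    uint64_t last_addr;      /* last address seen in this stream */
    int64_t  stride;         /* difference between consecutive addresses */
    bool     locked;         /* entry has locked onto the stride */
    unsigned bad_prefetches; /* prefetches issued but never used */
    bool     suppressed;     /* when set, issue no prefetches from this entry */
} stride_entry_t;

/* Called when the entry locks on a stride: refresh suppression state. */
static void update_suppression(stride_entry_t *e)
{
    e->suppressed = (e->bad_prefetches >= SUPPRESS_THRESHOLD);
}

/* Called on each access that hits this entry; returns true and fills
 * *prefetch_addr when a prefetch request should be issued. */
static bool on_access(stride_entry_t *e, uint64_t addr, uint64_t *prefetch_addr)
{
    int64_t delta = (int64_t)(addr - e->last_addr);
    e->locked = (delta == e->stride);   /* simplistic lock condition */
    e->stride = delta;
    e->last_addr = addr;

    if (!e->locked)
        return false;

    update_suppression(e);
    if (e->suppressed)
        return false;                   /* entry generates bad prefetches */

    *prefetch_addr = addr + (uint64_t)e->stride;
    return true;
}

int main(void)
{
    stride_entry_t e = {0};
    uint64_t pf;
    for (uint64_t a = 0x1000; a < 0x1400; a += 0x40)
        if (on_access(&e, a, &pf))
            printf("prefetch 0x%llx\n", (unsigned long long)pf);
    return 0;
}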

64. Bounding box prefetcher
    Invention Grant (In force)

    Publication No.: US08762649B2

    Publication Date: 2014-06-24

    Application No.: US13033765

    Application Date: 2011-02-24

    IPC Class: G06F12/08

    Abstract: A data prefetcher in a microprocessor having a cache memory receives memory accesses each to an address within a memory block. The access addresses are non-monotonically increasing or decreasing as a function of time. As the accesses are received, the prefetcher maintains a largest address and a smallest address of the accesses and counts of changes to the largest and smallest addresses and maintains a history of recently accessed cache lines implicated by the access addresses within the memory block. The prefetcher also determines a predominant access direction based on the counts and determines a predominant access pattern based on the history. The prefetcher also prefetches into the cache memory, in the predominant access direction according to the predominant access pattern, cache lines of the memory block which the history indicates have not been recently accessed.

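    A minimal sketch of the per-block bookkeeping the abstract describes, with assumed 64-byte lines, 4 KiB blocks, and a prefetch budget, and with the access-pattern matching reduced to "prefetch lines the history has not seen."

/* Minimal sketch of one memory block's bookkeeping: min/max access addresses,
 * change counters that give the predominant direction, and a bitmap of
 * recently accessed cache lines. The pattern matching in the real design is
 * richer; here a line is simply prefetched if the history shows it has not
 * been accessed. Block size, line size, and budget are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define LINE_SHIFT  6u                   /* assume 64-byte cache lines */
#define BLOCK_LINES 64u                  /* assume 4 KiB memory blocks */

typedef struct {
    uint64_t min_addr, max_addr;         /* smallest / largest access */
    unsigned min_changes, max_changes;   /* counts of changes to each */
    uint64_t history;                    /* one bit per line in the block */
} bbox_t;

static void bbox_access(bbox_t *b, uint64_t addr)
{
    if (b->history == 0) {               /* first access to this block */
        b->min_addr = b->max_addr = addr;
    } else if (addr > b->max_addr) {
        b->max_addr = addr; b->max_changes++;
    } else if (addr < b->min_addr) {
        b->min_addr = addr; b->min_changes++;
    }
    b->history |= 1ull << ((addr >> LINE_SHIFT) % BLOCK_LINES);
}

/* Prefetch, in the predominant direction, lines the history says were not
 * recently accessed. 'block_base' is the block's first line address. */
static void bbox_prefetch(const bbox_t *b, uint64_t block_base)
{
    int upward = b->max_changes >= b->min_changes;   /* predominant direction */
    unsigned budget = 4;                             /* assumed prefetch budget */
    for (unsigned i = 0; i < BLOCK_LINES && budget; i++) {
        unsigned line = upward ? i : BLOCK_LINES - 1 - i;
        if (!(b->history & (1ull << line))) {
            printf("prefetch line 0x%llx\n",
                   (unsigned long long)(block_base + ((uint64_t)line << LINE_SHIFT)));
            budget--;
        }
    }
}

int main(void)
{
    bbox_t b = {0};
    uint64_t base = 0x40000;
    uint64_t offs[] = {0x080, 0x000, 0x140, 0x0c0, 0x200};  /* non-monotonic accesses */
    for (unsigned i = 0; i < 5; i++)
        bbox_access(&b, base + offs[i]);
    bbox_prefetch(&b, base);
    return 0;
}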

65. DYNAMIC EVALUATION AND RECONFIGURATION OF A DATA PREFETCHER
    Invention Application (In force)

    Publication No.: US20140129780A1

    Publication Date: 2014-05-08

    Application No.: US13671801

    Application Date: 2012-11-08

    IPC Class: G06F12/08

    Abstract: Methods and systems for prefetching data for a processor are provided. A system is configured for, and a method includes, selecting one of a first prefetching control logic and a second prefetching control logic of the processor as a candidate feature, capturing a performance metric of the processor over an inactive sample period when the candidate feature is inactive, capturing the performance metric of the processor over an active sample period when the candidate feature is active, comparing the performance metric of the processor for the active and inactive sample periods, and setting a status of the candidate feature as enabled when the performance metric in the active period indicates improvement over the performance metric in the inactive period, and as disabled when the performance metric in the inactive period indicates improvement over the performance metric in the active period.

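    A minimal sketch of the sample-and-compare loop; the metric (IPC), the sampling hooks, and the fake workload are all illustrative stand-ins, not the disclosed hardware interface.

/* Minimal sketch of evaluating a candidate prefetcher feature by comparing a
 * performance metric over inactive and active sample periods. The metric,
 * hooks, and fake workload are assumptions for illustration. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { PREFETCH_LOGIC_A, PREFETCH_LOGIC_B, FEATURE_COUNT } feature_t;

static bool feature_state[FEATURE_COUNT];

/* Stand-in for programming the prefetcher's control registers. */
static void set_feature_active(feature_t f, bool active)
{
    feature_state[f] = active;
}

/* Stand-in for running a sample period and reading a performance counter.
 * The fake workload pretends logic A helps and logic B hurts. */
static double run_sample_period(void)
{
    double ipc = 1.00;
    if (feature_state[PREFETCH_LOGIC_A]) ipc += 0.05;
    if (feature_state[PREFETCH_LOGIC_B]) ipc -= 0.02;
    return ipc;
}

/* Capture the metric with the candidate inactive, then active, compare the
 * two sample periods, and leave the feature enabled only if it helped. */
static bool evaluate_feature(feature_t candidate)
{
    set_feature_active(candidate, false);
    double metric_inactive = run_sample_period();

    set_feature_active(candidate, true);
    double metric_active = run_sample_period();

    bool enabled = metric_active > metric_inactive;
    set_feature_active(candidate, enabled);
    return enabled;
}

int main(void)
{
    printf("prefetch logic A enabled: %d\n", evaluate_feature(PREFETCH_LOGIC_A));
    printf("prefetch logic B enabled: %d\n", evaluate_feature(PREFETCH_LOGIC_B));
    return 0;
}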

66. MOBILE MEMORY CACHE READ OPTIMIZATION
    Invention Application (In force)

    Publication No.: US20140006719A1

    Publication Date: 2014-01-02

    Application No.: US14020527

    Application Date: 2013-09-06

    IPC Class: G06F12/08

    Abstract: Examples of enabling cache read optimization for mobile memory devices are described. One or more access commands may be received, from a host, at a memory device. The one or more access commands may instruct the memory device to access at least two data blocks. The memory device may generate pre-fetch information for the at least two data blocks based at least in part on an order of accessing the at least two data blocks.

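    A minimal sketch follows, assuming the "pre-fetch information" takes the form of a next-block table keyed by the previously accessed block; the abstract does not specify the structure, only that it is derived from the access order.

/* Minimal sketch of deriving pre-fetch information from block access order.
 * The next-block table is an illustrative assumption. */
#include <stdint.h>
#include <stdio.h>

#define MAX_BLOCKS 256          /* assumed number of tracked data blocks */

typedef struct {
    int32_t next_block[MAX_BLOCKS]; /* block observed to follow each block */
    int32_t last_block;             /* most recently accessed block, -1 if none */
} prefetch_info_t;

static void prefetch_info_init(prefetch_info_t *p)
{
    p->last_block = -1;
    for (int i = 0; i < MAX_BLOCKS; i++)
        p->next_block[i] = -1;
}

/* Host access command for 'block': update order-based pre-fetch information
 * and return the block the device could read ahead, or -1 if unknown. */
static int32_t on_access_command(prefetch_info_t *p, int32_t block)
{
    if (p->last_block >= 0)
        p->next_block[p->last_block] = block;   /* remember observed order */
    p->last_block = block;
    return p->next_block[block];                /* hint for the next read */
}

int main(void)
{
    prefetch_info_t p;
    prefetch_info_init(&p);
    int32_t pattern[] = {3, 7, 3, 7, 3};        /* host accesses blocks 3, 7, ... */
    for (int i = 0; i < 5; i++) {
        int32_t hint = on_access_command(&p, pattern[i]);
        if (hint >= 0)
            printf("after block %d, pre-fetch block %d\n", pattern[i], hint);
    }
    return 0;
}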

68. Methods and systems for read ahead of remote data
    Invention Grant (In force)

    Publication No.: US08612374B1

    Publication Date: 2013-12-17

    Application No.: US12623579

    Application Date: 2009-11-23

    IPC Class: G06F1/00 G06F15/18

    Abstract: A method, computer readable medium, and apparatus for read-ahead prediction of subsequent requests to send data between a client coupled to a server via a network includes receiving, at a traffic management device, a request for a part of at least one of a data file and metadata. The traffic management device selects from two or more of a sequential prediction engine, an expert prediction engine and a learning prediction engine to predict a read-ahead of the at least one of the data file and metadata. One or more additional read-ahead parts of the at least one of the data file and metadata are determined with the traffic management device based on the selecting.

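    A minimal sketch of selecting among read-ahead prediction engines; the engines are stubs and the selection criterion (best observed hit ratio) is an assumption, since the abstract does not say how the traffic management device chooses.

/* Minimal sketch of choosing among read-ahead prediction engines. The engines
 * and the accuracy-based selection are illustrative assumptions. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t (*predict_fn)(uint64_t requested_offset, uint64_t part_size);

/* Sequential engine: the next part directly follows the requested one. */
static uint64_t sequential_predict(uint64_t off, uint64_t size) { return off + size; }
/* Placeholder "expert" engine: illustrative fixed jump of four parts. */
static uint64_t expert_predict(uint64_t off, uint64_t size)     { return off + 4 * size; }
/* Placeholder "learning" engine: would use observed history; stubbed here. */
static uint64_t learning_predict(uint64_t off, uint64_t size)   { return off + 2 * size; }

typedef struct {
    const char *name;
    predict_fn  predict;
    unsigned    hits, predictions;   /* per-engine accuracy tracking (assumed) */
} engine_t;

static engine_t engines[] = {
    { "sequential", sequential_predict, 0, 0 },
    { "expert",     expert_predict,     0, 0 },
    { "learning",   learning_predict,   0, 0 },
};

/* Pick the engine with the best observed hit ratio so far. */
static engine_t *select_engine(void)
{
    engine_t *best = &engines[0];
    for (size_t i = 1; i < sizeof engines / sizeof engines[0]; i++) {
        double br = best->predictions ? (double)best->hits / best->predictions : 0.0;
        double cr = engines[i].predictions ?
                    (double)engines[i].hits / engines[i].predictions : 0.0;
        if (cr > br)
            best = &engines[i];
    }
    return best;
}

int main(void)
{
    uint64_t part_size = 64 * 1024;              /* assumed 64 KiB parts */
    engine_t *e = select_engine();
    uint64_t read_ahead = e->predict(128 * 1024, part_size);
    e->predictions++;
    printf("%s engine: read ahead part at offset %llu\n",
           e->name, (unsigned long long)read_ahead);
    return 0;
}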

70. FILTERING PRE-FETCH REQUESTS TO REDUCE PRE-FETCHING OVERHEAD
    Invention Application (In force)

    Publication No.: US20130246708A1

    Publication Date: 2013-09-19

    Application No.: US13421014

    Application Date: 2012-03-15

    IPC Class: G06F12/08

    Abstract: The disclosed embodiments provide a system that filters pre-fetch requests to reduce pre-fetching overhead. During operation, the system executes an instruction that involves a memory reference that is directed to a cache line in a cache. Upon determining that the memory reference will miss in the cache, the system determines whether the instruction frequently leads to cache misses. If so, the system issues a pre-fetch request for one or more additional cache lines. Otherwise, no pre-fetch request is sent. Filtering pre-fetch requests based on instructions' likelihood to miss reduces pre-fetching overhead while preserving the performance benefits of pre-fetching.

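    A minimal sketch of the per-instruction miss filter; the table size, hash, and threshold are assumptions chosen only to illustrate the idea of prefetching just for instructions that frequently miss.

/* Minimal sketch of filtering pre-fetch requests by per-instruction miss
 * frequency. Table size, hash, and threshold are illustrative assumptions. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FILTER_ENTRIES 1024u
#define MISS_THRESHOLD 8u        /* assumed: misses before prefetching kicks in */

static uint8_t miss_count[FILTER_ENTRIES];   /* saturating per-PC miss counters */

static unsigned pc_index(uint64_t pc)
{
    return (unsigned)((pc >> 2) % FILTER_ENTRIES);
}

/* Called when a memory reference at 'pc' misses in the cache. Returns true if
 * a pre-fetch request for one or more additional cache lines should be issued. */
static bool should_prefetch_on_miss(uint64_t pc)
{
    unsigned i = pc_index(pc);
    if (miss_count[i] < 255)
        miss_count[i]++;
    return miss_count[i] >= MISS_THRESHOLD;  /* instruction misses frequently */
}

int main(void)
{
    uint64_t hot_pc = 0x401000;
    for (int n = 1; n <= 10; n++)
        printf("miss %2d from 0x%llx -> prefetch: %s\n", n,
               (unsigned long long)hot_pc,
               should_prefetch_on_miss(hot_pc) ? "yes" : "no");
    return 0;
}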