PRE-FETCH CHAINING
    1.
    Invention application
    PRE-FETCH CHAINING (in force)

    Publication No.: US20150199275A1

    Publication date: 2015-07-16

    Application No.: US14325343

    Filing date: 2014-07-07

    CPC classification number: G06F12/0862 G06F12/10 G06F2212/6022

    Abstract: According to one general aspect, an apparatus may include a cache pre-fetcher, and a pre-fetch scheduler. The cache pre-fetcher may be configured to predict, based at least in part upon a virtual address, data to be retrieved from a memory system. The pre-fetch scheduler may be configured to convert the virtual address of the data to a physical address of the data, and request the data from one of a plurality of levels of the memory system. The memory system may include a plurality of levels, each level of the memory system configured to store data.
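The scheduler's two steps — translating the predicted virtual address to a physical address, then requesting the data from one of several memory levels — can be sketched in software. The page table, page size, and level contents below are illustrative assumptions, not details from the application:

```python
PAGE_SIZE = 4096

# Hypothetical page table: virtual page number -> physical page number.
PAGE_TABLE = {0x10: 0x2A, 0x11: 0x2B}

def translate(virtual_addr):
    """Pre-fetch scheduler step 1: convert a virtual address to a
    physical address via the page table."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    return PAGE_TABLE[vpn] * PAGE_SIZE + offset

def schedule_prefetch(virtual_addr, levels):
    """Pre-fetch scheduler step 2: request the predicted data from the
    first memory level that holds it. `levels` is an ordered list of
    dicts standing in for L1 / L2 / DRAM contents."""
    phys = translate(virtual_addr)
    for depth, level in enumerate(levels):
        if phys in level:
            return depth, level[phys]
    raise KeyError(phys)

# A predicted address misses the first level and hits the second here.
levels = [{}, {0x2A000 + 8: b"cacheline"}]
print(schedule_prefetch(0x10008, levels))   # (1, b'cacheline')
```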


    MEMORY LOAD TO LOAD FUSING
    2.
    Invention application

    Publication No.: US20190278603A1

    Publication date: 2019-09-12

    Application No.: US16421463

    Filing date: 2019-05-23

    Abstract: A system and a method to cascade execution of instructions in a load-store unit (LSU) of a central processing unit (CPU) to reduce latency associated with the instructions. First data stored in a cache is read by the LSU in response to a first memory load instruction of two immediately consecutive memory load instructions. Alignment, sign extension and/or endian operations are performed on the first data read from the cache in response to the first memory load instruction, and, in parallel, a memory-load address-forwarded result is selected based on a corrected alignment of the first data read in response to the first memory load instruction to provide a next address for the second of the two immediately consecutive memory load instructions. Second data stored in the cache is read by the LSU in response to the second memory load instruction based on the selected memory-load address-forwarded result.
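A minimal software model of the cascade, assuming a byte-addressed dict stands in for the cache and the first load's value is a little-endian pointer (both assumptions are mine, not the application's):

```python
def load(cache, addr, size=8):
    # Read `size` bytes from a dict-modeled, byte-addressed cache.
    return bytes(cache[addr + i] for i in range(size))

def fused_load_load(cache, addr1):
    """Cascade two dependent loads: the first load's data, after the
    alignment/endian fix-up, is selected directly as the forwarded
    address for the second load, instead of taking a full
    issue/writeback round trip through the pipeline."""
    raw = load(cache, addr1)
    forwarded_addr = int.from_bytes(raw, "little")  # aligned first result
    return load(cache, forwarded_addr)

# Store a pointer value (0x40) at address 0x10 and the pointee at 0x40.
cache = {}
for i, b in enumerate((0x40).to_bytes(8, "little")):
    cache[0x10 + i] = b
for i, b in enumerate(b"payload!"):
    cache[0x40 + i] = b

print(fused_load_load(cache, 0x10))   # b'payload!'
```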

    MEMORY LOAD TO LOAD FUSING
    4.
    Invention application

    Publication No.: US20180267800A1

    Publication date: 2018-09-20

    Application No.: US15615811

    Filing date: 2017-06-06

    Abstract: A system and a method to cascade execution of instructions in a load-store unit (LSU) of a central processing unit (CPU) to reduce latency associated with the instructions. First data stored in a cache is read by the LSU in response to a first memory load instruction of two immediately consecutive memory load instructions. Alignment, sign extension and/or endian operations are performed on the first data read from the cache in response to the first memory load instruction, and, in parallel, a memory-load address-forwarded result is selected based on a corrected alignment of the first data read in response to the first memory load instruction to provide a next address for the second of the two immediately consecutive memory load instructions. Second data stored in the cache is read by the LSU in response to the second memory load instruction based on the selected memory-load address-forwarded result.

    EFFICIENT FILL-BUFFER DATA FORWARDING SUPPORTING HIGH FREQUENCIES
    5.
    Invention application
    EFFICIENT FILL-BUFFER DATA FORWARDING SUPPORTING HIGH FREQUENCIES (in force)

    Publication No.: US20150186292A1

    Publication date: 2015-07-02

    Application No.: US14337211

    Filing date: 2014-07-21

    Abstract: A Fill Buffer (FB) based data forwarding scheme stores a combination of the Virtual Address (VA), the TLB (Translation Look-aside Buffer) entry# or an indication of the location of a Page Table Entry (PTE) in the TLB, and TLB page-size information in the FB, and uses these values to expedite FB forwarding. Load (Ld) operations send their non-translated VA for an early comparison against the VA entries in the FB, and are then further qualified with the TLB entry# to determine a “hit.” This hit determination is fast and enables FB forwarding at higher frequencies without waiting for a comparison of Physical Addresses (PA) to conclude in the FB. A safety mechanism may detect a false hit in the FB and generate a late load-cancel indication to cancel the earlier-started FB forwarding by ignoring the data obtained as a result of the Ld execution. The Ld is then re-executed later and tries to complete successfully with the correct data.
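The early-match and late-cancel flow might be modeled as below; the entry fields and the `pa_match` flag are simplifications of the hardware comparisons, not structures named in the application:

```python
class FBEntry:
    """One fill-buffer entry: untranslated VA, TLB entry#, and line data."""
    def __init__(self, va, tlb_idx, data):
        self.va, self.tlb_idx, self.data = va, tlb_idx, data

def fb_forward(fill_buffer, load_va, load_tlb_idx):
    # Early hit check: compare the load's untranslated VA and TLB entry#
    # against FB entries, without waiting on the slower PA comparison.
    for e in fill_buffer:
        if e.va == load_va and e.tlb_idx == load_tlb_idx:
            return e.data
    return None

def safe_load(fill_buffer, load_va, load_tlb_idx, pa_match):
    """Late-cancel safety net: if the trailing PA comparison disagrees
    with the early hit, the forwarded data is ignored and the load
    is replayed later with the correct data."""
    data = fb_forward(fill_buffer, load_va, load_tlb_idx)
    if data is not None and not pa_match:
        return None, "replay"    # false hit detected: cancel forwarding
    return data, "ok"

fb = [FBEntry(va=0x7F001000, tlb_idx=3, data=b"fill-line")]
print(safe_load(fb, 0x7F001000, 3, pa_match=True))    # (b'fill-line', 'ok')
print(safe_load(fb, 0x7F001000, 3, pa_match=False))   # (None, 'replay')
```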


    MEMORY LOAD AND ARITHMETIC LOAD UNIT (ALU) FUSING

    Publication No.: US20180267775A1

    Publication date: 2018-09-20

    Application No.: US15612963

    Filing date: 2017-06-02

    CPC classification number: G06F7/485 G06F7/38 G06F7/50

    Abstract: According to one general aspect, a load unit may include a load circuit configured to load at least one piece of data from a memory. The load unit may include an alignment circuit configured to align the data to generate aligned data. The load unit may also include a mathematical operation execution circuit configured to generate a resultant of a predetermined mathematical operation with the at least one piece of data as an operand. If an active instruction is associated with the predetermined mathematical operation, the load unit is configured to bypass the alignment circuit and input the piece of data directly to the mathematical operation execution circuit.
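A toy model of that load unit, assuming a little-endian value and using zero-extension to stand in for the alignment circuit (both choices are illustrative, not from the application):

```python
def load_unit(line, offset, size, math_op=None):
    """Model of the described load unit: the load circuit fetches the
    bytes, then either the alignment circuit or, for a fused
    instruction, the mathematical-operation circuit consumes them."""
    raw = line[offset:offset + size]                 # load circuit
    if math_op is not None:
        # Fused path: bypass alignment, feed the data to the ALU directly.
        return math_op(int.from_bytes(raw, "little"))
    return raw.ljust(8, b"\x00")                     # alignment (zero-extend)

line = bytes(range(16))
print(load_unit(line, 4, 2))                           # aligned bytes
print(load_unit(line, 4, 2, math_op=lambda x: x + 1))  # 1285
```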

    PRE-FETCH CONFIRMATION QUEUE
    7.
    Invention application
    PRE-FETCH CONFIRMATION QUEUE (pending, published)

    Publication No.: US20150199276A1

    Publication date: 2015-07-16

    Application No.: US14451375

    Filing date: 2014-08-04

    CPC classification number: G06F12/0862 G06F2212/6026 G06F2212/6028

    Abstract: According to one general aspect, a method may include receiving, by a pre-fetch unit, a demand to access data stored at a memory address. The method may include determining if a first portion of the memory address matches a prior defined region of memory. The method may further include determining if a second portion of the memory address matches a previously detected pre-fetched address portion. The method may also include, if the first portion of the memory address matches the prior defined region of memory, and the second portion of the memory address matches the previously detected pre-fetched address portion, confirming that a pre-fetch pattern is associated with the memory address.
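The two-part match the abstract describes can be sketched as a pair of set lookups; the 12-bit split between the two address portions and the set representation are assumptions for illustration:

```python
REGION_BITS = 12  # hypothetical split point between the two address portions

def confirm_prefetch(addr, known_regions, seen_offsets):
    """Confirm a pre-fetch pattern only when both address portions match:
    the upper bits against a previously defined region of memory, and
    the lower bits against a previously detected pre-fetched address
    portion."""
    region = addr >> REGION_BITS
    offset = addr & ((1 << REGION_BITS) - 1)
    return region in known_regions and offset in seen_offsets

print(confirm_prefetch(0x2A040, known_regions={0x2A}, seen_offsets={0x040}))  # True
print(confirm_prefetch(0x2B040, known_regions={0x2A}, seen_offsets={0x040}))  # False
```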

