    1. Combining write buffer with dynamically adjustable flush metrics
    Granted patent, in force

    Publication number: US08352685B2

    Publication date: 2013-01-08

    Application number: US12860505

    Filing date: 2010-08-20

    CPC classification number: G06F12/0891 G06F12/0804

    Abstract: In an embodiment, a combining write buffer is configured to maintain one or more flush metrics to determine when to transmit write operations from buffer entries. The combining write buffer may be configured to dynamically modify the flush metrics in response to activity in the write buffer, modifying the conditions under which write operations are transmitted from the write buffer to the next lower level of memory. For example, in one implementation, the flush metrics may include categorizing write buffer entries as “collapsed.” A collapsed write buffer entry, and the collapsed write operations therein, may include at least one write operation that has overwritten data that was written by a previous write operation in the buffer entry. In another implementation, the combining write buffer may maintain the threshold of buffer fullness as a flush metric and may adjust it over time based on the actual buffer fullness.

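    To make the "collapsed" categorization concrete, the following sketch models a combining write buffer in which an entry is marked collapsed once a later write overlaps bytes already written by an earlier write in the same entry, and collapsed entries become flush candidates. All class and method names are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of a combining write buffer that tracks "collapsed" entries.
# All names (WriteBufferEntry, CombiningWriteBuffer, flush_candidates) are
# illustrative, not taken from the patent.

class WriteBufferEntry:
    def __init__(self, block_address):
        self.block_address = block_address
        self.written_bytes = set()   # byte offsets already written in this entry
        self.collapsed = False       # set when a write overwrites earlier data

    def merge_write(self, offset, size):
        new_bytes = set(range(offset, offset + size))
        if new_bytes & self.written_bytes:
            # At least one byte is overwritten: categorize the entry as collapsed.
            self.collapsed = True
        self.written_bytes |= new_bytes


class CombiningWriteBuffer:
    def __init__(self):
        self.entries = {}            # block_address -> WriteBufferEntry

    def write(self, block_address, offset, size):
        entry = self.entries.setdefault(block_address, WriteBufferEntry(block_address))
        entry.merge_write(offset, size)

    def flush_candidates(self):
        # One possible flush metric: collapsed entries are flushed first,
        # since further combining is less likely to help them.
        return [e for e in self.entries.values() if e.collapsed]


buf = CombiningWriteBuffer()
buf.write(0x1000, offset=0, size=8)
buf.write(0x1000, offset=4, size=8)   # overlaps bytes 4..7 -> entry becomes collapsed
print([hex(e.block_address) for e in buf.flush_candidates()])  # ['0x1000']
```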

    2. Latency reduction for cache coherent bus-based cache
    Granted patent, in force

    Publication number: US08347040B2

    Publication date: 2013-01-01

    Application number: US13089050

    Filing date: 2011-04-18

    CPC classification number: G06F12/0831 G06F12/084

    Abstract: In one embodiment, a system comprises a plurality of agents coupled to an interconnect and a cache coupled to the interconnect. The plurality of agents are configured to cache data. A first agent of the plurality of agents is configured to initiate a transaction on the interconnect by transmitting a memory request, and other agents of the plurality of agents are configured to snoop the memory request from the interconnect. The other agents provide a response in a response phase of the transaction on the interconnect. The cache is configured to detect a hit for the memory request and to provide data for the transaction to the first agent prior to the response phase and independent of the response.

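    The timing relationship described here can be pictured as a transaction in which data delivery from the hitting cache does not wait for the snoop response phase. The sketch below is a rough behavioral model with invented agent and class names; it only illustrates that ordering, not the actual bus protocol.

```python
# Sketch of a bus transaction where a shared cache that hits supplies data
# before the snoop response phase completes. Names are illustrative.

class SharedCache:
    def __init__(self, lines):
        self.lines = dict(lines)     # address -> data

    def snoop(self, address):
        # On a hit, data is returned immediately, independent of the
        # response phase outcome.
        return self.lines.get(address)


class SnoopingAgent:
    def __init__(self, name, cached):
        self.name = name
        self.cached = set(cached)

    def respond(self, address):
        # Response phase: each agent reports whether it holds the block.
        return "shared" if address in self.cached else "invalid"


def run_transaction(address, cache, agents):
    events = []
    data = cache.snoop(address)
    if data is not None:
        events.append(("data_to_requestor", data))        # delivered early on a hit
    responses = [(a.name, a.respond(address)) for a in agents]
    events.append(("response_phase", responses))           # arrives later
    return events


cache = SharedCache({0x40: b"cached block"})
agents = [SnoopingAgent("agent1", []), SnoopingAgent("agent2", [0x40])]
for event in run_transaction(0x40, cache, agents):
    print(event)
```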

    3. Data cache block zero implementation
    Granted patent, in force

    Publication number: US08301843B2

    Publication date: 2012-10-30

    Application number: US12650075

    Filing date: 2009-12-30

    Abstract: In one embodiment, a processor comprises a core configured to execute a data cache block write instruction and an interface unit coupled to the core and to an interconnect on which the processor is configured to communicate. The core is configured to transmit a request to the interface unit in response to the data cache block write instruction. If the request is speculative, the interface unit is configured to issue a first transaction on the interconnect. On the other hand, if the request is non-speculative, the interface unit is configured to issue a second transaction on the interconnect. The second transaction is different from the first transaction. For example, the second transaction may be an invalidate transaction and the first transaction may be a probe transaction. In some embodiments, the processor may be in a system including the interconnect and one or more caching agents.

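    The speculative versus non-speculative split maps naturally to a small dispatch in the interface unit: a speculative request issues the probe-style transaction, a non-speculative request issues the invalidate-style transaction. The sketch below uses hypothetical names (InterfaceUnit, issue) purely for illustration.

```python
# Illustrative dispatch for a data cache block write (e.g. block-zero) request.
# A speculative request maps to a probe transaction, a non-speculative one to
# an invalidate transaction, as in the abstract's example. Names are invented.

PROBE = "probe"            # first transaction type in the abstract's example
INVALIDATE = "invalidate"  # second transaction type

class InterfaceUnit:
    def __init__(self):
        self.issued = []

    def issue(self, address, speculative):
        kind = PROBE if speculative else INVALIDATE
        self.issued.append((kind, address))
        return kind


iface = InterfaceUnit()
print(iface.issue(0x2000, speculative=True))    # probe
print(iface.issue(0x2000, speculative=False))   # invalidate
```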

    4. Combining Write Buffer with Dynamically Adjustable Flush Metrics
    Patent application, in force

    Publication number: US20120047332A1

    Publication date: 2012-02-23

    Application number: US12860505

    Filing date: 2010-08-20

    CPC classification number: G06F12/0891 G06F12/0804

    Abstract: In an embodiment, a combining write buffer is configured to maintain one or more flush metrics to determine when to transmit write operations from buffer entries. The combining write buffer may be configured to dynamically modify the flush metrics in response to activity in the write buffer, modifying the conditions under which write operations are transmitted from the write buffer to the next lower level of memory. For example, in one implementation, the flush metrics may include categorizing write buffer entries as “collapsed.” A collapsed write buffer entry, and the collapsed write operations therein, may include at least one write operation that has overwritten data that was written by a previous write operation in the buffer entry. In another implementation, the combining write buffer may maintain the threshold of buffer fullness as a flush metric and may adjust it over time based on the actual buffer fullness.

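    This publication also mentions a second flush metric, a buffer-fullness threshold that is adjusted over time based on actual fullness. A minimal sketch of such an adaptive threshold is shown below; the moving-average adjustment rule is an assumption made for illustration, since the abstract does not specify the policy.

```python
# Sketch of a fullness-based flush metric whose threshold adapts over time.
# The moving-average adjustment rule is an assumed example, not the patented one.

class AdaptiveFlushThreshold:
    def __init__(self, capacity, initial_threshold):
        self.capacity = capacity
        self.threshold = initial_threshold        # occupancy at which flushing starts
        self.avg_fullness = float(initial_threshold)

    def observe(self, occupied):
        # Track a running average of actual buffer fullness ...
        self.avg_fullness = 0.9 * self.avg_fullness + 0.1 * occupied
        # ... and keep the threshold a little above it (assumed policy),
        # clamped to the buffer capacity.
        self.threshold = min(self.capacity, max(1, int(self.avg_fullness) + 2))

    def should_flush(self, occupied):
        return occupied >= self.threshold


metric = AdaptiveFlushThreshold(capacity=16, initial_threshold=12)
for occupied in [4, 5, 6, 14, 15, 15]:
    metric.observe(occupied)
    print(occupied, metric.threshold, metric.should_flush(occupied))
```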

    5. Retry mechanism
    Granted patent, in force

    Publication number: US07991928B2

    Publication date: 2011-08-02

    Application number: US12408410

    Filing date: 2009-03-20

    Abstract: An interface unit may comprise a buffer configured to store requests that are to be transmitted on an interconnect and a control unit coupled to the buffer. In one embodiment, the control unit is coupled to receive a retry response from the interconnect during a response phase of a first transaction for a first request stored in the buffer. The control unit is configured to record an identifier supplied on the interconnect with the retry response that identifies a second transaction that is in progress on the interconnect. The control unit is configured to inhibit reinitiation of the first transaction at least until detecting a second transmission of the identifier. In another embodiment, the control unit is configured to assert a retry response during a response phase of a first transaction responsive to a snoop hit of the first transaction on a first request stored in the buffer for which a second transaction is in progress on the interconnect. The control unit is further configured to provide an identifier of the second transaction with the retry response.

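    The identifier bookkeeping can be modeled as a small state machine: a retry response records the conflicting transaction's identifier, and the request stays blocked until that identifier is observed on the interconnect again. The sketch below uses invented names and is only meant to show that ordering.

```python
# Sketch of the requesting-side retry bookkeeping: on a retry response the
# conflicting transaction's identifier is recorded, and the request is not
# reinitiated until that identifier is observed on the interconnect again.
# Class and method names are illustrative.

class RetryTracker:
    def __init__(self):
        self.blocked_on = {}         # request id -> identifier we are waiting for

    def retry_received(self, request_id, conflicting_id):
        # Response phase returned "retry" plus the in-progress transaction's id.
        self.blocked_on[request_id] = conflicting_id

    def observe_interconnect(self, transaction_id):
        # Second transmission of the identifier unblocks the waiting request(s).
        unblocked = [req for req, ident in self.blocked_on.items()
                     if ident == transaction_id]
        for req in unblocked:
            del self.blocked_on[req]
        return unblocked

    def may_reinitiate(self, request_id):
        return request_id not in self.blocked_on


tracker = RetryTracker()
tracker.retry_received(request_id="req7", conflicting_id="txn42")
print(tracker.may_reinitiate("req7"))          # False: still blocked
print(tracker.observe_interconnect("txn42"))   # ['req7'] unblocked
print(tracker.may_reinitiate("req7"))          # True
```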

    6. Non-blocking Address Switch with Shallow Per Agent Queues
    Patent application, in force

    Publication number: US20100235675A1

    Publication date: 2010-09-16

    Application number: US12787865

    Filing date: 2010-05-26

    CPC classification number: G06F13/362 G06F13/4022

    Abstract: In one embodiment, a switch is configured to be coupled to an interconnect. The switch comprises a plurality of storage locations and an arbiter control circuit coupled to the plurality of storage locations. The plurality of storage locations are configured to store a plurality of requests transmitted by a plurality of agents. The arbiter control circuit is configured to arbitrate among the plurality of requests stored in the plurality of storage locations. A selected request is the winner of the arbitration, and the switch is configured to transmit the selected request from one of the plurality of storage locations onto the interconnect. In another embodiment, a system comprises a plurality of agents, an interconnect, and the switch coupled to the plurality of agents and the interconnect. In another embodiment, a method is contemplated.

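    One way to read this abstract is that the switch keeps a shallow storage location per agent and an arbiter selects one stored request per cycle to drive onto the interconnect. The sketch below assumes a round-robin policy, which the abstract does not specify, and all identifiers are illustrative.

```python
# Sketch of an address switch that buffers one request per agent and arbitrates
# among the stored requests each cycle. Round-robin is an assumed policy; the
# abstract does not name one. All identifiers are illustrative.

class AddressSwitch:
    def __init__(self, agent_names):
        self.slots = {name: None for name in agent_names}   # shallow per-agent storage
        self.order = list(agent_names)
        self.last_winner = -1

    def accept(self, agent, request):
        if self.slots[agent] is None:
            self.slots[agent] = request
            return True
        return False                 # per-agent slot full; agent must hold the request

    def arbitrate(self):
        # Round-robin among occupied slots; the winner is driven onto the interconnect.
        n = len(self.order)
        for i in range(1, n + 1):
            idx = (self.last_winner + i) % n
            agent = self.order[idx]
            if self.slots[agent] is not None:
                self.last_winner = idx
                request, self.slots[agent] = self.slots[agent], None
                return agent, request
        return None


switch = AddressSwitch(["cpu0", "cpu1", "dma"])
switch.accept("cpu0", "read 0x100")
switch.accept("dma", "write 0x200")
print(switch.arbitrate())   # ('cpu0', 'read 0x100')
print(switch.arbitrate())   # ('dma', 'write 0x200')
print(switch.arbitrate())   # None
```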

    7. Data cache block zero implementation
    Granted patent, in force

    Publication number: US07707361B2

    Publication date: 2010-04-27

    Application number: US11281840

    Filing date: 2005-11-17

    Abstract: In one embodiment, a processor comprises a core configured to execute a data cache block write instruction and an interface unit coupled to the core and to an interconnect on which the processor is configured to communicate. The core is configured to transmit a request to the interface unit in response to the data cache block write instruction. If the request is speculative, the interface unit is configured to issue a first transaction on the interconnect. On the other hand, if the request is non-speculative, the interface unit is configured to issue a second transaction on the interconnect. The second transaction is different from the first transaction. For example, the second transaction may be an invalidate transaction and the first transaction may be a probe transaction. In some embodiments, the processor may be in a system including the interconnect and one or more caching agents.

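    The requester-side dispatch for this patent is sketched after its first listing above (entry 3). As a complementary, equally hypothetical view, the fragment below shows how a caching agent might react differently to the two transaction kinds; this is generic coherence behavior used for illustration, not a detail from the patent text.

```python
# Hypothetical caching-agent reaction to the two transaction kinds named in the
# abstract's example: a probe is answered with the line's current contents, while
# an invalidate simply drops the line. Generic illustration only.

class CachingAgent:
    def __init__(self, lines):
        self.lines = dict(lines)     # address -> data held by this agent

    def snoop(self, kind, address):
        if kind == "probe":
            # Report (and keep) whatever is cached, if anything.
            return self.lines.get(address)
        if kind == "invalidate":
            # Give up the block so the requester can own it.
            self.lines.pop(address, None)
            return None
        raise ValueError(kind)


agent = CachingAgent({0x2000: b"stale data"})
print(agent.snoop("probe", 0x2000))        # b'stale data'
agent.snoop("invalidate", 0x2000)
print(0x2000 in agent.lines)               # False
```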

    8. Latency Reduction for Cache Coherent Bus-Based Cache
    Patent application, in force

    Publication number: US20080307168A1

    Publication date: 2008-12-11

    Application number: US11758219

    Filing date: 2007-06-05

    CPC classification number: G06F12/0831 G06F12/084

    Abstract: In one embodiment, a system comprises a plurality of agents coupled to an interconnect and a cache coupled to the interconnect. The plurality of agents are configured to cache data. A first agent of the plurality of agents is configured to initiate a transaction on the interconnect by transmitting a memory request, and other agents of the plurality of agents are configured to snoop the memory request from the interconnect. The other agents provide a response in a response phase of the transaction on the interconnect. The cache is configured to detect a hit for the memory request and to provide data for the transaction to the first agent prior to the response phase and independent of the response.

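    A companion view to the timing sketch after entry 2: the fragment below (again with invented names) models the requesting agent, which consumes the early data fill as soon as it arrives and processes the combined snoop response later, independently of the data.

```python
# Requester-side view (illustrative only): the data fill from the hitting cache
# is used as soon as it arrives; the combined snoop response is handled afterwards.

def combine_responses(responses):
    # A simple combined view of the response phase: shared if anyone holds the block.
    return "shared" if any(r == "shared" for r in responses) else "invalid"


class Requestor:
    def __init__(self):
        self.fill_data = None
        self.final_state = None

    def on_data(self, data):
        self.fill_data = data        # arrives before the response phase on a cache hit

    def on_response_phase(self, responses):
        self.final_state = combine_responses(responses)


req = Requestor()
req.on_data(b"cached block")                     # early fill
print(req.fill_data, req.final_state)            # b'cached block' None
req.on_response_phase(["invalid", "shared"])     # response phase completes later
print(req.final_state)                           # shared
```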

    9. Store Handling in a Processor
    Patent application, in force

    Publication number: US20080307166A1

    Publication date: 2008-12-11

    Application number: US11758303

    Filing date: 2007-06-05

    Abstract: In one embodiment, a processor may be configured to write ECC granular stores into the data cache, while non-ECC granular stores may be merged with cache data in a memory request buffer. In one embodiment, a processor may be configured to detect that a victim block writeback hits one or more stores in a memory request buffer (or vice versa) and may convert the victim block writeback to a fill. In one embodiment, a processor may speculatively issue stores that are subsequent to a load from a load/store queue, but prevent the update for the stores in response to a snoop hit on the load.

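    The first point, writing ECC-granular stores directly to the data cache while merging narrower stores in a memory request buffer, can be pictured with the sketch below. The 8-byte ECC granule and all names are assumptions made for illustration only.

```python
# Sketch of store routing by ECC granularity. The 8-byte granule size and the
# class/function names are assumptions made for illustration only.

ECC_GRANULE = 8   # assumed ECC granule size in bytes

class StoreHandler:
    def __init__(self):
        self.data_cache_writes = []          # ECC-granular stores go straight to the cache
        self.memory_request_buffer = {}      # granule address -> merged byte offsets

    def store(self, address, size):
        if address % ECC_GRANULE == 0 and size % ECC_GRANULE == 0:
            self.data_cache_writes.append((address, size))
        else:
            # Non-ECC-granular: merge with other bytes of the same granule
            # in the memory request buffer before it is written out.
            granule = address - (address % ECC_GRANULE)
            merged = self.memory_request_buffer.setdefault(granule, set())
            merged.update(range(address, address + size))


h = StoreHandler()
h.store(0x100, 8)    # ECC granular -> written to the data cache
h.store(0x109, 2)    # partial granule -> merged in the memory request buffer
print(h.data_cache_writes)                       # [(256, 8)]
print(sorted(h.memory_request_buffer[0x108]))    # [265, 266]
```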

    10. Non-blocking address switch with shallow per agent queues
    Granted patent, in force

    Publication number: US07461190B2

    Publication date: 2008-12-02

    Application number: US11201581

    Filing date: 2005-08-11

    CPC classification number: G06F13/362 G06F13/4022

    Abstract: In one embodiment, a switch is configured to be coupled to an interconnect. The switch comprises a plurality of storage locations and an arbiter control circuit coupled to the plurality of storage locations. The plurality of storage locations are configured to store a plurality of requests transmitted by a plurality of agents. The arbiter control circuit is configured to arbitrate among the plurality of requests stored in the plurality of storage locations. A selected request is the winner of the arbitration, and the switch is configured to transmit the selected request from one of the plurality of storage locations onto the interconnect. In another embodiment, a system comprises a plurality of agents, an interconnect, and the switch coupled to the plurality of agents and the interconnect. In another embodiment, a method is contemplated.

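    The arbiter side of this patent is sketched after entry 6; the complementary fragment below (hypothetical names) shows the agent side of a shallow per-agent queue: when its single slot in the switch is occupied, the agent holds the request locally and retries later.

```python
# Agent-side view of a shallow per-agent queue (illustrative): if the switch's
# single slot for this agent is busy, the request is held and retried.

class OneSlotSwitchPort:
    """Stand-in for an agent's single storage location inside the switch."""
    def __init__(self):
        self.slot = None

    def accept(self, request):
        if self.slot is None:
            self.slot = request
            return True
        return False

    def drain(self):
        request, self.slot = self.slot, None
        return request


class Agent:
    def __init__(self, port):
        self.port = port
        self.pending = []            # requests not yet handed over to the switch

    def try_send(self, request=None):
        if request is not None:
            self.pending.append(request)
        # Hand over at most one request per attempt; hold the rest locally.
        if self.pending and self.port.accept(self.pending[0]):
            self.pending.pop(0)


port = OneSlotSwitchPort()
agent = Agent(port)
agent.try_send("read 0x100")
agent.try_send("read 0x140")        # slot busy: stays in the agent's pending list
print(port.slot, agent.pending)     # read 0x100 ['read 0x140']
port.drain()                        # arbiter drains the slot onto the interconnect
agent.try_send()                    # retry succeeds now
print(port.slot, agent.pending)     # read 0x140 []
```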
