Converting Victim Writeback to a Fill
    11.
    Invention Application
    Converting Victim Writeback to a Fill (In Force)

    Publication Number: US20110047336A1

    Publication Date: 2011-02-24

    Application Number: US12908535

    Filing Date: 2010-10-20

    Abstract: In one embodiment, a processor may be configured to write ECC granular stores into the data cache, while non-ECC granular stores may be merged with cache data in a memory request buffer. In one embodiment, a processor may be configured to detect that a victim block writeback hits one or more stores in a memory request buffer (or vice versa) and may convert the victim block writeback to a fill. In one embodiment, a processor may speculatively issue stores that are subsequent to a load from a load/store queue, but prevent the update for the stores in response to a snoop hit on the load.

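    As a rough illustration of the writeback-to-fill conversion described above, the C++ sketch below models a memory request buffer of pending non-ECC-granular stores and a victim-block writeback: when the writeback address hits a buffered store, the store bytes are merged into the victim line and the request is re-tagged as a fill into the data cache rather than a write to memory. All type and function names are invented for this example and are not taken from the patent.

```cpp
// Hypothetical sketch (not the patented logic): a victim-block writeback
// that hits a pending store in the memory request buffer absorbs the store
// data and is converted into a fill of the data cache.
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr std::size_t kLineBytes = 64;

struct PendingStore {            // non-ECC-granular store waiting in the buffer
    uint64_t line_addr;          // cache-line-aligned address
    std::size_t offset;          // byte offset of the store within the line
    std::vector<uint8_t> data;   // store payload, assumed to fit in the line
};

struct VictimWriteback {
    uint64_t line_addr;
    std::array<uint8_t, kLineBytes> data;
    bool is_fill = false;        // set once converted from writeback to fill
};

// Merge the newer store bytes into the victim line and mark the request as a
// fill; if nothing hits, the writeback proceeds to memory as usual.
bool convert_to_fill(VictimWriteback& wb, std::vector<PendingStore>& buffer) {
    for (auto it = buffer.begin(); it != buffer.end(); ++it) {
        if (it->line_addr != wb.line_addr) continue;
        for (std::size_t i = 0; i < it->data.size(); ++i)
            wb.data[it->offset + i] = it->data[i];
        buffer.erase(it);        // the buffered store is satisfied by the merge
        wb.is_fill = true;
        return true;
    }
    return false;
}
```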

Data cache block zero implementation
    12.
    Invention Application
    Data cache block zero implementation (In Force)

    Publication Number: US20070113020A1

    Publication Date: 2007-05-17

    Application Number: US11281840

    Filing Date: 2005-11-17

    Abstract: In one embodiment, a processor comprises a core configured to execute a data cache block write instruction and an interface unit coupled to the core and to an interconnect on which the processor is configured to communicate. The core is configured to transmit a request to the interface unit in response to the data cache block write instruction. If the request is speculative, the interface unit is configured to issue a first transaction on the interconnect. On the other hand, if the request is non-speculative, the interface unit is configured to issue a second transaction on the interconnect. The second transaction is different from the first transaction. For example, the second transaction may be an invalidate transaction and the first transaction may be a probe transaction. In some embodiments, the processor may be in a system including the interconnect and one or more caching agents.

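    A minimal sketch of the speculative-versus-non-speculative transaction selection described in this abstract, assuming the probe/invalidate pairing the abstract gives as its example; the enum and struct names are hypothetical.

```cpp
// Illustrative only: the interface unit chooses a different interconnect
// transaction for a data cache block write request depending on whether the
// request is still speculative.
#include <iostream>

enum class Transaction { Probe, Invalidate };

struct BlockWriteRequest {
    unsigned long long block_addr;
    bool speculative;
};

// Speculative requests issue the first (probe) transaction; non-speculative
// requests issue the second (invalidate) transaction.
Transaction select_transaction(const BlockWriteRequest& req) {
    return req.speculative ? Transaction::Probe : Transaction::Invalidate;
}

int main() {
    BlockWriteRequest spec{0x1000, true};
    BlockWriteRequest committed{0x1000, false};
    std::cout << (select_transaction(spec) == Transaction::Probe) << '\n';           // prints 1
    std::cout << (select_transaction(committed) == Transaction::Invalidate) << '\n'; // prints 1
}
```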

Converting victim writeback to a fill
    13.
    Invention Grant
    Converting victim writeback to a fill (In Force)

    Publication Number: US07836262B2

    Publication Date: 2010-11-16

    Application Number: US11758275

    Filing Date: 2007-06-05

    Abstract: In one embodiment, a processor may be configured to write ECC granular stores into the data cache, while non-ECC granular stores may be merged with cache data in a memory request buffer. In one embodiment, a processor may be configured to detect that a victim block writeback hits one or more stores in a memory request buffer (or vice versa) and may convert the victim block writeback to a fill. In one embodiment, a processor may speculatively issue stores that are subsequent to a load from a load/store queue, but prevent the update for the stores in response to a snoop hit on the load.

Converting Victim Writeback to a Fill
    14.
    Invention Application
    Converting Victim Writeback to a Fill (In Force)

    Publication Number: US20080307167A1

    Publication Date: 2008-12-11

    Application Number: US11758275

    Filing Date: 2007-06-05

    Abstract: In one embodiment, a processor may be configured to write ECC granular stores into the data cache, while non-ECC granular stores may be merged with cache data in a memory request buffer. In one embodiment, a processor may be configured to detect that a victim block writeback hits one or more stores in a memory request buffer (or vice versa) and may convert the victim block writeback to a fill. In one embodiment, a processor may speculatively issue stores that are subsequent to a load from a load/store queue, but prevent the update for the stores in response to a snoop hit on the load.

Retry mechanism
    15.
    Invention Grant
    Retry mechanism (In Force)

    Publication Number: US08359414B2

    Publication Date: 2013-01-22

    Application Number: US13165235

    Filing Date: 2011-06-21

    Abstract: An interface unit may comprise a buffer configured to store requests that are to be transmitted on an interconnect and a control unit coupled to the buffer. In one embodiment, the control unit is coupled to receive a retry response from the interconnect during a response phase of a first transaction for a first request stored in the buffer. The control unit is configured to record an identifier supplied on the interconnect with the retry response that identifies a second transaction that is in progress on the interconnect. The control unit is configured to inhibit reinitiation of the first transaction at least until detecting a second transmission of the identifier. In another embodiment, the control unit is configured to assert a retry response during a response phase of a first transaction responsive to a snoop hit of the first transaction on a first request stored in the buffer for which a second transaction is in progress on the interconnect. The control unit is further configured to provide an identifier of the second transaction with the retry response.

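    The sketch below is a hedged reading of the retry handshake in this abstract: a retried request records the identifier of the in-progress transaction supplied with the retry response and stays inhibited until that identifier is observed on the interconnect again. Field and function names are assumptions made for illustration.

```cpp
// Illustrative model of retry inhibition keyed on a transaction identifier;
// names are invented, not taken from the patent.
#include <cstdint>
#include <optional>
#include <vector>

struct BufferedRequest {
    uint64_t addr;
    bool inhibited = false;               // retried and waiting
    std::optional<uint32_t> wait_for_id;  // transaction that must complete first
};

// Response phase: the retry response carries the ID of the conflicting
// in-progress transaction; record it and hold off re-initiation.
void on_retry_response(BufferedRequest& req, uint32_t in_progress_id) {
    req.inhibited = true;
    req.wait_for_id = in_progress_id;
}

// Seeing the identifier transmitted again (the conflicting transaction
// completing) lifts the inhibit so the request may be re-initiated.
void on_identifier_observed(std::vector<BufferedRequest>& buffer, uint32_t id) {
    for (auto& req : buffer) {
        if (req.inhibited && req.wait_for_id == id) {
            req.inhibited = false;
            req.wait_for_id.reset();
        }
    }
}

bool may_reinitiate(const BufferedRequest& req) { return !req.inhibited; }
```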

Retry mechanism in cache coherent communication among agents
    17.
    Invention Grant
    Retry mechanism in cache coherent communication among agents (In Force)

    Publication Number: US07529866B2

    Publication Date: 2009-05-05

    Application Number: US11282037

    Filing Date: 2005-11-17

    Abstract: An interface unit may comprise a buffer configured to store requests that are to be transmitted on an interconnect and a control unit coupled to the buffer. In one embodiment, the control unit is coupled to receive a retry response from the interconnect during a response phase of a first transaction for a first request stored in the buffer. The control unit is configured to record an identifier supplied on the interconnect with the retry response that identifies a second transaction that is in progress on the interconnect. The control unit is configured to inhibit reinitiation of the first transaction at least until detecting a second transmission of the identifier. In another embodiment, the control unit is configured to assert a retry response during a response phase of a first transaction responsive to a snoop hit of the first transaction on a first request stored in the buffer for which a second transaction is in progress on the interconnect. The control unit is further configured to provide an identifier of the second transaction with the retry response.

Replay reduction for power saving
    18.
    Invention Application
    Replay reduction for power saving (In Force)

    Publication Number: US20080086622A1

    Publication Date: 2008-04-10

    Application Number: US11546223

    Filing Date: 2006-10-10

    CPC classification number: G06F9/3842

    Abstract: In one embodiment, a processor comprises a scheduler configured to issue a first instruction operation to be executed, and an execution core coupled to the scheduler. The execution core is configured to execute the first instruction operation and comprises a plurality of replay sources configured to cause a replay of the first instruction operation responsive to detecting at least one of a plurality of replay cases. The scheduler is configured to inhibit issuance of the first instruction operation subsequent to the replay for a subset of the plurality of replay cases. The scheduler is coupled to receive an acknowledgement indication corresponding to each of the plurality of replay cases in the subset, and is configured to inhibit issuance of the first instruction operation until the acknowledgement indication corresponding to an identified replay case of the subset is asserted.

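    As a minimal sketch of the acknowledgement-gated replay idea, assuming a small fixed set of replay cases of which only a subset is gated; all identifiers here are hypothetical.

```cpp
// Sketch: an operation replayed for an acknowledgement-gated case is held in
// the scheduler (saving re-issue power) until the matching acknowledgement
// indication is asserted; other replay cases may re-issue immediately.
#include <bitset>
#include <cstddef>

constexpr std::size_t kNumReplayCases = 4;
// Subset of replay cases that must wait for an acknowledgement (assumed here
// to be cases 1 and 2).
const std::bitset<kNumReplayCases> kAckGated{0b0110};

struct SchedulerEntry {
    bool replayed = false;
    std::size_t replay_case = 0;
    bool ack_received = false;
};

bool may_issue(const SchedulerEntry& e) {
    if (!e.replayed) return true;                     // never replayed: free to issue
    if (!kAckGated.test(e.replay_case)) return true;  // ungated case: re-issue now
    return e.ack_received;                            // gated case: wait for the ack
}

void on_acknowledge(SchedulerEntry& e, std::size_t acked_case) {
    if (e.replayed && e.replay_case == acked_case) e.ack_received = true;
}
```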

Uncacheable load merging
    19.
    Invention Application
    Uncacheable load merging (Under Examination, Published)

    Publication Number: US20080086594A1

    Publication Date: 2008-04-10

    Application Number: US11545825

    Filing Date: 2006-10-10

    Abstract: In one embodiment, a processor comprises a buffer and a control unit coupled to the buffer. The buffer is configured to store requests to be transmitted on an interconnect on which the processor is configured to communicate. The buffer is coupled to receive a first uncacheable load request having a first address. The control unit is configured to merge the first uncacheable load request with a second uncacheable load request that is stored in the buffer responsive to a second address of the second load request matching the first address within a granularity. A single transaction on the interconnect is used for both the first and second uncacheable load requests, if merged. Separate transactions on the interconnect are used for each of the first and second uncacheable load requests if not merged.

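    The fragment below sketches the merge test described in this abstract under the assumption of a 64-byte merge granularity; the structure and function names are invented for the example.

```cpp
// Two uncacheable loads whose addresses fall within the same granule share a
// single interconnect transaction; otherwise each gets its own transaction.
#include <cstdint>
#include <vector>

constexpr uint64_t kGranuleBytes = 64;    // assumed merge granularity

struct UncacheableLoad {
    uint64_t addr;                        // address of the first load
    std::vector<uint64_t> merged_addrs;   // later loads satisfied by the same transaction
};

// Returns true if the new load was merged into an existing buffered request.
bool try_merge(std::vector<UncacheableLoad>& buffer, uint64_t new_addr) {
    for (auto& req : buffer) {
        if (req.addr / kGranuleBytes == new_addr / kGranuleBytes) {
            req.merged_addrs.push_back(new_addr);  // piggyback on the pending request
            return true;
        }
    }
    buffer.push_back({new_addr, {}});              // no match: separate transaction
    return false;
}
```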

Combined buffer for snoop, store merging, load miss, and writeback operations
    20.
    Invention Application
    Combined buffer for snoop, store merging, load miss, and writeback operations (In Force)

    Publication Number: US20070050564A1

    Publication Date: 2007-03-01

    Application Number: US11215604

    Filing Date: 2005-08-30

    CPC classification number: G06F12/0831

    Abstract: In one embodiment, an interface unit comprises an address buffer and a control unit coupled to the address buffer. The address buffer is configured to store addresses of processor core requests generated by a processor core and addresses of snoop requests received from an interconnect. The control unit is configured to maintain a plurality of queues, wherein at least a first queue of the plurality of queues is dedicated to snoop requests and at least a second queue of the plurality of queues is dedicated to processor core requests. Responsive to a first snoop request received by the interface unit from the interconnect, the control unit is configured to allocate a first address buffer entry of the address buffer to store the first snoop request and to store a first pointer to the first address buffer entry in the first queue. Responsive to a first processor core request received by the interface unit from the processor core, the control unit is configured to allocate a second address buffer entry of the address buffer to store the first processor core request and to store a second pointer to the second address buffer entry in the second queue.

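    A short sketch of the shared address buffer with per-source queues of pointers that this abstract describes, using assumed container types and names; it is illustrative rather than the patented design.

```cpp
// One address buffer shared by snoop and core requests; each source has a
// dedicated queue holding indices (pointers) into that buffer.
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

enum class Source { Snoop, Core };

struct AddressEntry {
    uint64_t addr;
    Source source;
};

struct InterfaceUnit {
    std::vector<AddressEntry> address_buffer;  // shared entries
    std::deque<std::size_t> snoop_queue;       // indices of snoop entries only
    std::deque<std::size_t> core_queue;        // indices of core-request entries only

    // Allocate an address-buffer entry and enqueue a pointer to it in the
    // queue dedicated to the request's source.
    std::size_t allocate(uint64_t addr, Source src) {
        address_buffer.push_back({addr, src});
        std::size_t index = address_buffer.size() - 1;
        (src == Source::Snoop ? snoop_queue : core_queue).push_back(index);
        return index;
    }
};
```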
