CACHE SNOOPING MODE EXTENDING COHERENCE PROTECTION FOR CERTAIN REQUESTS

    Publication Number: US20210182197A1

    Publication Date: 2021-06-17

    Application Number: US16717835

    Filing Date: 2019-12-17

    Abstract: A cache memory includes a data array, a directory of contents of the data array that specifies coherence state information, and snoop logic that processes operations snooped from a system fabric by reference to the data array and the directory. The snoop logic, responsive to snooping on the system fabric a request of a flush or clean memory access operation of an initiating coherence participant, determines whether the directory indicates the cache memory has coherence ownership of a target address of the request. Based on determining that the directory indicates the cache memory has coherence ownership of the target address, the snoop logic provides a coherence response to the request that causes coherence ownership of the target address to be transferred to the initiating coherence participant, such that the initiating coherence participant can protect the target address against conflicting requests.
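
    The core of the snooping behavior described above can be pictured as a small decision procedure. The C++ sketch below is illustrative only: it assumes simple Owned/Modified ownership states and a directory keyed by address, and the class and enum names (SnoopingCache, SnoopResponse, and so on) are not taken from the patent.

        // Illustrative sketch (not the patented hardware): a snooper that, on a
        // snooped flush or clean request, consults its directory for coherence
        // ownership of the target address and, if it is the owner, answers with a
        // response that hands ownership to the initiating participant.
        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        enum class CoherenceState { Invalid, Shared, Owned, Modified };
        enum class SnoopedOp { Flush, Clean };
        enum class SnoopResponse { Null, TransferOwnership };

        class SnoopingCache {
        public:
            void install(uint64_t addr, CoherenceState s) { directory_[addr] = s; }

            // Snoop logic: look up the target address of a snooped flush/clean
            // request in the directory and decide on a coherence response.
            SnoopResponse snoop(SnoopedOp op, uint64_t targetAddr) {
                auto it = directory_.find(targetAddr);
                bool hasOwnership = it != directory_.end() &&
                                    (it->second == CoherenceState::Owned ||
                                     it->second == CoherenceState::Modified);
                if (!hasOwnership)
                    return SnoopResponse::Null;
                // Give up ownership so the initiating participant can assume
                // protection of the target address (a flush also invalidates the
                // local copy; a clean leaves a shared copy behind in this sketch).
                directory_[targetAddr] = (op == SnoopedOp::Flush)
                                             ? CoherenceState::Invalid
                                             : CoherenceState::Shared;
                return SnoopResponse::TransferOwnership;
            }

        private:
            std::unordered_map<uint64_t, CoherenceState> directory_;
        };

        int main() {
            SnoopingCache cache;
            cache.install(0x1000, CoherenceState::Modified);
            SnoopResponse resp = cache.snoop(SnoopedOp::Flush, 0x1000);
            std::cout << (resp == SnoopResponse::TransferOwnership
                              ? "ownership transferred to initiator\n"
                              : "no ownership at this snooper\n");
        }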

    SYNCHRONIZED ACCESS TO DATA IN SHARED MEMORY BY PROTECTING THE LOAD TARGET ADDRESS OF A FRONTING LOAD

    Publication Number: US20200183696A1

    Publication Date: 2020-06-11

    Application Number: US16216659

    Filing Date: 2018-12-11

    Abstract: A data processing system includes multiple processing units all having access to a shared memory. A processing unit of the data processing system includes a processor core including an upper level cache, core reservation logic that records addresses in the shared memory for which the processor core has obtained reservations, and an execution unit that executes memory access instructions including a fronting load instruction. Execution of the fronting load instruction generates a load request that specifies a load target address. The processing unit further includes a lower level cache that, responsive to receipt of the load request and based on the load request indicating an address match for the load target address in the core reservation logic, protects the load target address against access by any conflicting memory access request during a protection interval following servicing of the load request.
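
    A rough software model of the fronting-load protection follows. It is a sketch under simplifying assumptions, not the patented design: the lower level cache queries the reservation logic directly rather than relying on an indication carried by the load request, the protection interval is counted in abstract ticks, and all names (CoreReservationLogic, LowerLevelCache, intervalTicks) are illustrative.

        // Sketch: reservation logic records reserved addresses; when a fronting
        // load hits a reserved address, the lower level cache protects that
        // address for a protection interval after servicing the load.
        #include <cstdint>
        #include <iostream>
        #include <unordered_map>
        #include <unordered_set>

        struct CoreReservationLogic {
            std::unordered_set<uint64_t> reserved;   // addresses with reservations
            bool matches(uint64_t addr) const { return reserved.count(addr) != 0; }
        };

        class LowerLevelCache {
        public:
            explicit LowerLevelCache(const CoreReservationLogic& r) : resv_(r) {}

            // Service a fronting load: if the load target address matches a
            // recorded reservation, start a protection interval for it.
            void serviceLoad(uint64_t loadTargetAddr, unsigned intervalTicks) {
                if (resv_.matches(loadTargetAddr))
                    protectedUntil_[loadTargetAddr] = now_ + intervalTicks;
            }

            // A conflicting request (e.g., a snooped store) is refused while the
            // target address is inside a protection interval.
            bool allowConflicting(uint64_t addr) const {
                auto it = protectedUntil_.find(addr);
                return it == protectedUntil_.end() || now_ >= it->second;
            }

            void tick() { ++now_; }

        private:
            const CoreReservationLogic& resv_;
            std::unordered_map<uint64_t, uint64_t> protectedUntil_;
            uint64_t now_ = 0;
        };

        int main() {
            CoreReservationLogic resv;
            resv.reserved.insert(0x2000);                 // core holds a reservation
            LowerLevelCache l2(resv);
            l2.serviceLoad(0x2000, /*intervalTicks=*/3);  // fronting load
            std::cout << "conflict allowed now?   " << l2.allowConflicting(0x2000) << "\n";  // 0
            for (int i = 0; i < 3; ++i) l2.tick();
            std::cout << "conflict allowed later? " << l2.allowConflicting(0x2000) << "\n";  // 1
        }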

    SYNCHRONIZED ACCESS TO DATA IN SHARED MEMORY BY PROTECTING THE LOAD TARGET ADDRESS OF A FRONTING LOAD

    Publication Number: US20200034146A1

    Publication Date: 2020-01-30

    Application Number: US16048884

    Filing Date: 2018-07-30

    IPC Classification: G06F9/30

    Abstract: A data processing system includes multiple processing units all having access to a shared memory. A processing unit includes a processor core that executes memory access instructions including a fronting load instruction, wherein execution of the fronting load instruction generates a load request that specifies a load target address. The processing unit also includes reservation logic that records addresses in the shared memory for which the processor core has obtained reservations. In addition, the processing unit includes a read-claim state machine that, responsive to receipt of the load request and based on an address match for the load target address in the reservation logic, protects the load target address against access by any conflicting memory access request during a protection interval following servicing of the load request.
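
    This family member puts the protection in a read-claim state machine, which can be sketched as a three-state lifecycle (Idle, Busy, Protect). The states, the tick-based protection interval, and the retry policy below are assumptions made for illustration, not the machine claimed in the patent.

        // Sketch of a read-claim (RC) machine: dispatched on a load whose target
        // address matched a reservation, it enters a Protect state after the load
        // is serviced and retries conflicting snoops until the interval expires.
        #include <cstdint>
        #include <iostream>

        class ReadClaimMachine {
        public:
            enum class State { Idle, Busy, Protect };

            // Dispatch on a load whose target address matched the reservation logic.
            void dispatch(uint64_t loadTargetAddr, unsigned protectTicks) {
                state_ = State::Busy;
                addr_ = loadTargetAddr;
                remaining_ = protectTicks;
            }

            // Load data has been returned; enter the protection interval.
            void loadServiced() { if (state_ == State::Busy) state_ = State::Protect; }

            // Snooped request: retry it if it conflicts with the protected address.
            bool snoopRetried(uint64_t snoopAddr) const {
                return state_ == State::Protect && snoopAddr == addr_;
            }

            void tick() {
                if (state_ == State::Protect && remaining_ > 0 && --remaining_ == 0)
                    state_ = State::Idle;    // protection interval over, release
            }

        private:
            State state_ = State::Idle;
            uint64_t addr_ = 0;
            unsigned remaining_ = 0;
        };

        int main() {
            ReadClaimMachine rc;
            rc.dispatch(0x3000, /*protectTicks=*/2);
            rc.loadServiced();
            std::cout << "retry conflicting snoop? " << rc.snoopRetried(0x3000) << "\n";  // 1
            rc.tick(); rc.tick();
            std::cout << "retry after interval?    " << rc.snoopRetried(0x3000) << "\n";  // 0
        }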

    COHERENCE PROTOCOL PROVIDING SPECULATIVE COHERENCE RESPONSE TO DIRECTORY PROBE

    Publication Number: US20190188138A1

    Publication Date: 2019-06-20

    Application Number: US15846392

    Filing Date: 2017-12-19

    IPC Classification: G06F12/0831 G06F13/16

    Abstract: A data processing system includes first and second processing nodes and response logic coupled by an interconnect fabric. A first coherence participant in the first processing node is configured to issue a memory access request specifying a target memory block, and a second coherence participant in the second processing node is configured to issue a probe request regarding a memory region tracked in a memory coherence directory. The first coherence participant is configured to, responsive to receiving the probe request after the memory access request and before receiving a systemwide coherence response for the memory access request, detect an address collision between the probe request and the memory access request and, responsive thereto, transmit a speculative coherence response. The response logic is configured to, responsive to the speculative coherence response, provide a systemwide coherence response for the probe request that prevents the probe request from succeeding.
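
    The collision case reads naturally as a small response-combining exercise. The sketch below assumes a 4 KiB probe region, a per-participant partial response, and response logic that folds any speculative collision response into a systemwide retry; the enum names and the region granularity are illustrative, not taken from the patent.

        // Sketch: a participant with an in-flight memory access request answers a
        // colliding directory probe with a speculative response, and the response
        // logic turns that into a systemwide response that defeats the probe.
        #include <cstdint>
        #include <initializer_list>
        #include <iostream>
        #include <optional>

        enum class PartialResp { Null, SpeculativeCollision };
        enum class SystemResp { Success, Retry };

        constexpr uint64_t kRegionBytes = 4096;
        constexpr uint64_t region(uint64_t addr) { return addr / kRegionBytes; }

        struct CoherenceParticipant {
            // Target of a memory access request that has been issued but has not
            // yet received its systemwide coherence response.
            std::optional<uint64_t> inFlightTarget;

            PartialResp onProbe(uint64_t probeRegionBase) const {
                bool collision = inFlightTarget &&
                                 region(*inFlightTarget) == region(probeRegionBase);
                return collision ? PartialResp::SpeculativeCollision : PartialResp::Null;
            }
        };

        // Response logic: a speculative collision response from any participant
        // prevents the probe request from succeeding.
        SystemResp combine(std::initializer_list<PartialResp> partials) {
            for (PartialResp p : partials)
                if (p == PartialResp::SpeculativeCollision) return SystemResp::Retry;
            return SystemResp::Success;
        }

        int main() {
            CoherenceParticipant first;             // in the first processing node
            first.inFlightTarget = 0x7000;          // memory access request in flight
            PartialResp p = first.onProbe(0x7000);  // probe from the second node collides
            SystemResp s = combine({p, PartialResp::Null});
            std::cout << (s == SystemResp::Retry ? "probe prevented from succeeding\n"
                                                 : "probe succeeds\n");
        }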

    MULTICOPY ATOMIC STORE OPERATION IN A DATA PROCESSING SYSTEM

    Publication Number: US20180349138A1

    Publication Date: 2018-12-06

    Application Number: US15825387

    Filing Date: 2017-11-29

    IPC Classification: G06F9/30 G06F12/0875

    Abstract: A data processing system implementing a weak memory model includes a plurality of processing units coupled to an interconnect fabric. In response to execution of a multicopy atomic store instruction, an initiating processing unit broadcasts a store request on the interconnect fabric to obtain coherence ownership of a target cache line. The initiating processing unit posts a kill request to at least one of the plurality of processing units to request invalidation of a copy of the target cache line. In response to successful posting of the kill request, the initiating processing unit broadcasts a store complete request on the interconnect fabric to enforce completion of the invalidation of the copy of the target cache line. In response to the store complete request receiving a coherence response indicating success, the initiating processing unit permits an update to the target cache line requested by the multicopy atomic store instruction to be atomically visible.
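
    The ordering the abstract describes, obtain ownership, post kill requests, enforce their completion, and only then make the store visible, can be sketched as below. The toy fabric that answers broadcasts synchronously and all of the type and method names are assumptions made for illustration.

        // Sketch of the multicopy atomic store sequence: kills are posted to peers
        // holding copies, a store-complete request is retried until every peer has
        // finished invalidating, and only then does the update become visible.
        #include <cstdint>
        #include <iostream>
        #include <vector>

        enum class Resp { Success, Retry };

        struct PeerUnit {
            bool hasCopy = false;
            Resp onKill() { hasCopy = false; return Resp::Success; }   // invalidate copy
            Resp onStoreComplete() const {                             // invalidation done?
                return hasCopy ? Resp::Retry : Resp::Success;
            }
        };

        struct InitiatingUnit {
            uint64_t line = 0;
            bool updateVisible = false;

            void multicopyAtomicStore(uint64_t newValue, std::vector<PeerUnit>& peers) {
                // 1. Broadcast a store request to obtain coherence ownership of the
                //    target cache line (assumed to succeed in this sketch).
                // 2. Post kill requests to peers holding a copy of the line.
                for (PeerUnit& p : peers)
                    if (p.hasCopy) p.onKill();
                // 3. Broadcast a store-complete request to enforce completion of the
                //    invalidations; retry until every peer reports success.
                bool done = false;
                while (!done) {
                    done = true;
                    for (const PeerUnit& p : peers)
                        if (p.onStoreComplete() == Resp::Retry) done = false;
                }
                // 4. Only now is the update permitted to become atomically visible.
                line = newValue;
                updateVisible = true;
            }
        };

        int main() {
            std::vector<PeerUnit> peers(2);
            peers[0].hasCopy = true;
            InitiatingUnit pu;
            pu.multicopyAtomicStore(42, peers);
            std::cout << "visible=" << pu.updateVisible << " value=" << pu.line << "\n";
        }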

    MEMORY MOVE SUPPORTING SPECULATIVE ACQUISITION OF SOURCE AND DESTINATION DATA GRANULES

    Publication Number: US20180052788A1

    Publication Date: 2018-02-22

    Application Number: US15243601

    Filing Date: 2016-08-22

    Abstract: In a data processing system implementing a weak memory model, a lower level cache receives, from a processor core, a plurality of copy-type requests and a plurality of paste-type requests that together indicate a memory move to be performed. The lower level cache also receives, from the processor core, a barrier request that requests enforcement of ordering of memory access requests prior to the barrier request with respect to memory access requests after the barrier request. Prior to completion of processing of the barrier request by the lower level cache, the lower level cache speculatively issues a request on the interconnect fabric to obtain a copy of a data granule specified by a memory access request among the pluralities of requests that follows the barrier request in program order.
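
    The speculation described above can be modeled as a walk over the queued memory-move requests: copy requests sitting behind a not-yet-completed barrier already get their source granules requested from the fabric. The queue, the request types, and the method names in this sketch are illustrative assumptions, not the patented implementation.

        // Sketch: a lower level cache queues copy/paste requests around a barrier
        // and, before barrier processing has completed, speculatively issues fabric
        // reads for source granules of copy requests that follow the barrier.
        #include <cstdint>
        #include <deque>
        #include <iostream>

        enum class Kind { Copy, Paste, Barrier };

        struct Request {
            Kind kind;
            uint64_t addr = 0;  // source granule (Copy) or destination granule (Paste)
        };

        struct LowerLevelCache {
            std::deque<Request> queue;              // program order from the core
            std::deque<uint64_t> speculativeReads;  // fabric reads issued early

            void receive(const Request& r) { queue.push_back(r); }

            // Barrier processing has not completed yet, but copy requests behind the
            // barrier can already have their data granules fetched speculatively.
            void speculateBehindBarrier() {
                bool pastBarrier = false;
                for (const Request& r : queue) {
                    if (r.kind == Kind::Barrier) { pastBarrier = true; continue; }
                    if (pastBarrier && r.kind == Kind::Copy)
                        speculativeReads.push_back(r.addr);  // issue read on the fabric
                }
            }
        };

        int main() {
            LowerLevelCache l2;
            l2.receive({Kind::Copy,  0x1000});
            l2.receive({Kind::Paste, 0x9000});
            l2.receive({Kind::Barrier});
            l2.receive({Kind::Copy,  0x2000});  // follows the barrier in program order
            l2.speculateBehindBarrier();
            std::cout << "speculative reads issued: " << l2.speculativeReads.size()
                      << " (granule 0x" << std::hex << l2.speculativeReads.front() << ")\n";
        }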