Broadcast scope selection in a data processing system utilizing a memory topology data structure

    Publication Number: US11748280B2

    Publication Date: 2023-09-05

    Application Number: US17394117

    Filing Date: 2021-08-04

    CPC classification number: G06F13/1668 G06F13/4027

    Abstract: A coherent data processing system includes a system fabric communicatively coupling a plurality of nodes arranged in a plurality of groups. A plurality of coherence agents are distributed among the nodes and are assigned responsibility for certain addresses. A topology data structure indicates by group and node differing physical locations within the data processing system of the plurality of coherence agents. A master accesses the topology data structure utilizing a request address to obtain a particular group and node of a particular coherence agent uniquely assigned the request address. The master initially issues, on the system fabric, a memory access request specifying the request address and utilizing a remote scope of broadcast that includes the particular node and excludes at least one other node in the particular group, where the particular node is a different one of the plurality of nodes than a home node containing the master.
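
    As a rough illustration of the mechanism this abstract describes, the sketch below models the topology data structure as a map from an address class to the (group, node) location of the coherence agent assigned that address, which a master consults to build a remote broadcast scope. All names (TopologyTable, selectScope) and the address-hashing granularity are assumptions for illustration, not the patented design.

    ```cpp
    #include <cstdint>
    #include <iostream>
    #include <unordered_map>

    // Hypothetical model of the topology data structure: maps an address class
    // to the (group, node) of the coherence agent assigned that address.
    struct AgentLocation {
        unsigned group;  // group containing the responsible coherence agent
        unsigned node;   // node within that group
    };

    struct BroadcastScope {
        unsigned targetGroup;
        unsigned targetNode;
        bool remote;     // remote scope: target node only, other nodes excluded
    };

    class TopologyTable {
    public:
        void assign(uint64_t addrClass, AgentLocation loc) { table_[addrClass] = loc; }

        // Look up the coherence agent uniquely assigned the request address and
        // build a remote scope that includes its node while excluding the other
        // nodes of its group and the master's home node.
        BroadcastScope selectScope(uint64_t requestAddr, unsigned homeNode) const {
            uint64_t addrClass = requestAddr >> 12;   // illustrative hashing granularity
            const AgentLocation& loc = table_.at(addrClass);
            bool remote = (loc.node != homeNode);     // remote only if target differs from home
            return BroadcastScope{loc.group, loc.node, remote};
        }

    private:
        std::unordered_map<uint64_t, AgentLocation> table_;
    };

    int main() {
        TopologyTable topo;
        topo.assign(0x1234, AgentLocation{2, 5});     // address class 0x1234 owned by group 2, node 5

        BroadcastScope s = topo.selectScope(0x1234000, /*homeNode=*/0);
        std::cout << "broadcast to group " << s.targetGroup << ", node " << s.targetNode
                  << (s.remote ? " (remote scope)" : " (node scope)") << "\n";
        return 0;
    }
    ```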

    Targeting of lateral castouts in a data processing system

    Publication Number: US11561900B1

    Publication Date: 2023-01-24

    Application Number: US17394153

    Filing Date: 2021-08-04

    Abstract: A data processing system includes system memory and a plurality of processor cores each supported by a respective one of a plurality of vertical cache hierarchies. A first vertical cache hierarchy records information indicating communication of cache lines between the first vertical cache hierarchy and others of the plurality of vertical cache hierarchies. Based on selection of a victim cache line for eviction, the first vertical cache hierarchy determines, based on the recorded information, whether to perform a lateral castout of the victim cache line to another of the plurality of vertical cache hierarchies rather than to system memory and selects, based on the recorded information, a second vertical cache hierarchy among the plurality of vertical cache hierarchies as a recipient of the victim cache line via a lateral castout. Based on the determination, the first vertical cache hierarchy performs a castout of the victim cache line.
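
    A minimal sketch of the selection logic described above, assuming the recorded information is a per-peer counter of cache-line transfers between this vertical cache hierarchy and each other hierarchy. The class name, counter representation, and threshold are assumptions, not the patented implementation.

    ```cpp
    #include <cstddef>
    #include <iostream>
    #include <optional>
    #include <vector>

    // Illustrative record of cache-line communication with peer hierarchies.
    class LateralCastoutSelector {
    public:
        explicit LateralCastoutSelector(std::size_t numPeers) : transfers_(numPeers, 0) {}

        void recordTransfer(std::size_t peer) { ++transfers_[peer]; }

        // On eviction of a victim line: return the peer hierarchy to receive a
        // lateral castout, or std::nullopt to cast the line out to system memory.
        std::optional<std::size_t> chooseRecipient(unsigned minTransfers) const {
            std::size_t best = 0;
            unsigned bestCount = 0;
            for (std::size_t p = 0; p < transfers_.size(); ++p) {
                if (transfers_[p] > bestCount) { bestCount = transfers_[p]; best = p; }
            }
            if (bestCount < minTransfers) return std::nullopt;  // no peer active enough: evict to memory
            return best;
        }

    private:
        std::vector<unsigned> transfers_;
    };

    int main() {
        LateralCastoutSelector sel(4);   // four peer vertical cache hierarchies
        sel.recordTransfer(2);
        sel.recordTransfer(2);
        sel.recordTransfer(1);

        if (auto peer = sel.chooseRecipient(/*minTransfers=*/2))
            std::cout << "lateral castout to hierarchy " << *peer << "\n";
        else
            std::cout << "castout to system memory\n";
        return 0;
    }
    ```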

    Completion logic performing early commitment of a store-conditional access based on a flag

    Publication Number: US11281582B2

    Publication Date: 2022-03-22

    Application Number: US16742380

    Filing Date: 2020-01-14

    Abstract: A data processing system includes multiple processing units all having access to a shared memory system. A processing unit includes a lower level cache configured to serve as a point of systemwide coherency and a processor core coupled to the lower level cache. The processor core includes an upper level cache, an execution unit that executes a store-conditional instruction to generate a store-conditional request that specifies a store target address and store data, and a flag that, when set, indicates the store-conditional request can be completed early in the processor core. The processor core also includes completion logic configured to commit an update of the shared memory system with the store data specified by the store-conditional request based on whether the flag is set.
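
    The sketch below shows the completion decision in the simplest possible form: when the flag is set, the store-conditional request is committed early in the processor core; otherwise the commit waits on the lower level cache, the point of systemwide coherency. The struct and method names are invented for illustration.

    ```cpp
    #include <iostream>

    struct StoreConditionalRequest {
        unsigned long long targetAddr;
        unsigned long long storeData;
    };

    class CompletionLogic {
    public:
        explicit CompletionLogic(bool earlyCommitFlag) : earlyCommitFlag_(earlyCommitFlag) {}

        // Decide whether the store-conditional can be committed early in the core
        // (flag set) or must wait for the lower level cache to report pass/fail.
        bool tryCommit(const StoreConditionalRequest& req) const {
            if (earlyCommitFlag_) {
                std::cout << "early commit of 0x" << std::hex << req.targetAddr << std::dec
                          << " in the processor core\n";
                return true;   // committed without waiting for the lower level cache
            }
            std::cout << "deferring commit until the lower level cache resolves the request\n";
            return false;
        }

    private:
        bool earlyCommitFlag_;
    };

    int main() {
        CompletionLogic early(true), deferred(false);
        StoreConditionalRequest req{0x8000, 42};
        early.tryCommit(req);
        deferred.tryCommit(req);
        return 0;
    }
    ```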

    Cache snooping mode extending coherence protection for certain requests

    Publication Number: US11157409B2

    Publication Date: 2021-10-26

    Application Number: US16717868

    Filing Date: 2019-12-17

    Abstract: A cache memory includes a data array, a directory of contents of the data array that specifies coherence state information, and snoop logic that processes operations snooped from a system fabric by reference to the data array and the directory. The snoop logic, responsive to snooping on the system fabric a request of a first flush/clean memory access operation that specifies a target address, determines whether or not the cache memory has coherence ownership of the target address. Based on determining the cache memory has coherence ownership of the target address, the snoop logic services the request and thereafter enters a referee mode. While in the referee mode, the snoop logic protects a memory block identified by the target address against conflicting memory access requests by the plurality of processor cores until conclusion of a second flush/clean memory access operation that specifies the target address.
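
    A hypothetical state machine modeling the referee-mode behavior in this abstract: the first flush/clean request that the owning cache services puts the snoop logic into referee mode, conflicting requests to the protected address are retried while that mode is active, and a second flush/clean to the same address ends the protection. Enum and method names are illustrative only.

    ```cpp
    #include <iostream>

    enum class SnoopResponse { Serviced, Retried, Ignored };

    class SnoopLogic {
    public:
        SnoopResponse snoopFlushClean(unsigned long long addr, bool haveOwnership) {
            if (!haveOwnership) return SnoopResponse::Ignored;
            if (refereeMode_ && addr == protectedAddr_) {
                refereeMode_ = false;          // second flush/clean concludes protection
                return SnoopResponse::Serviced;
            }
            protectedAddr_ = addr;             // service the first flush/clean, then referee
            refereeMode_ = true;
            return SnoopResponse::Serviced;
        }

        // Any other request snooped while in referee mode that conflicts with the
        // protected address is forced to retry.
        SnoopResponse snoopOther(unsigned long long addr) {
            if (refereeMode_ && addr == protectedAddr_) return SnoopResponse::Retried;
            return SnoopResponse::Ignored;
        }

    private:
        bool refereeMode_ = false;
        unsigned long long protectedAddr_ = 0;
    };

    int main() {
        SnoopLogic snooper;
        snooper.snoopFlushClean(0x1000, /*haveOwnership=*/true);   // enter referee mode
        bool retried = snooper.snoopOther(0x1000) == SnoopResponse::Retried;
        std::cout << (retried ? "conflicting request retried\n" : "request allowed\n");
        snooper.snoopFlushClean(0x1000, true);                     // second flush/clean ends protection
        return 0;
    }
    ```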

    Synchronizing access to shared memory by extending protection for a target address of a store-conditional request

    Publication Number: US11106608B1

    Publication Date: 2021-08-31

    Application Number: US16908272

    Filing Date: 2020-06-22

    Abstract: A processing unit includes a processor core that executes a store-conditional instruction that generates a store-conditional request specifying a store target address. The processing unit further includes a reservation register that records shared memory addresses for which the processor core has obtained reservations and a cache that services the store-conditional request by conditionally updating the shared memory with the store data based on the reservation register indicating a reservation for the store target address. The cache includes a blocking state machine configured to protect the store target address against access by any conflicting memory access request snooped on a system interconnect during a protection window extension following servicing of the store-conditional request. The cache is configured to vary a duration of the protection window extension for different snooped memory access requests based on one of broadcast scopes and the relative locations of masters of the snooped memory access requests.
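
    The sketch below illustrates only the idea of varying the protection window extension with the relative location of a snooped request's master. The location categories and cycle counts are invented for illustration; the patent does not disclose concrete durations here.

    ```cpp
    #include <iostream>

    enum class MasterLocation { SameNode, SameGroup, RemoteGroup };

    class BlockingStateMachine {
    public:
        void startProtection(unsigned long long addr) { addr_ = addr; active_ = true; }

        // Choose how long to keep protecting the store target address against a
        // snooped request, based on where that request's master is located.
        unsigned extensionCycles(MasterLocation loc) const {
            switch (loc) {
                case MasterLocation::SameNode:    return 4;    // nearby master: short extension
                case MasterLocation::SameGroup:   return 16;
                case MasterLocation::RemoteGroup: return 64;   // distant master: longer extension
            }
            return 0;
        }

        bool blocks(unsigned long long snoopAddr) const { return active_ && snoopAddr == addr_; }

    private:
        unsigned long long addr_ = 0;
        bool active_ = false;
    };

    int main() {
        BlockingStateMachine bsm;
        bsm.startProtection(0x2000);
        std::cout << "remote-group snoop protected for "
                  << bsm.extensionCycles(MasterLocation::RemoteGroup) << " cycles\n";
        std::cout << (bsm.blocks(0x2000) ? "conflicting snoop blocked\n" : "snoop allowed\n");
        return 0;
    }
    ```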

    Ordering execution of an interrupt handler

    Publication Number: US10831660B1

    Publication Date: 2020-11-10

    Application Number: US16455340

    Filing Date: 2019-06-27

    Abstract: A processing unit for a multiprocessor data processing system includes a processor core having an upper level cache and a lower level cache coupled to the processor core. The lower level cache includes one or more state machines for handling requests snooped from the system interconnect. The processing unit includes an interrupt unit configured to, based on receipt of an interrupt request while the processor core is in a powered up state, record which of the one or more state machines are active processing a prior snooped request that can invalidate a cache line in the upper level cache and present an interrupt to the processor core based on determining that each state machine that was active processing a prior snooped request that can invalidate a cache line in the upper level cache has completed processing of its respective prior snooped request.
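
    A small model of the interrupt-ordering idea, assuming the interrupt unit snapshots which snoop state machines were busy with invalidating requests when the interrupt arrived and presents the interrupt only after that set drains. The bitset width and method names are assumptions.

    ```cpp
    #include <bitset>
    #include <iostream>

    class InterruptUnit {
    public:
        void snoopStarted(unsigned sm, bool invalidatesUpperCache) {
            if (invalidatesUpperCache) active_.set(sm);
        }
        void snoopFinished(unsigned sm) { active_.reset(sm); pending_.reset(sm); }

        // On receipt of an interrupt request, record the state machines currently
        // processing invalidating snoops; present the interrupt once they finish.
        void interruptRequested() { pending_ = active_; }
        bool canPresentInterrupt() const { return pending_.none(); }

    private:
        std::bitset<8> active_;   // state machines processing invalidating snoops
        std::bitset<8> pending_;  // snapshot taken when the interrupt arrived
    };

    int main() {
        InterruptUnit iu;
        iu.snoopStarted(0, true);                  // state machine 0 may invalidate an upper-level line
        iu.interruptRequested();
        std::cout << "present now? " << iu.canPresentInterrupt() << "\n";   // 0: must wait
        iu.snoopFinished(0);
        std::cout << "present now? " << iu.canPresentInterrupt() << "\n";   // 1: safe to present
        return 0;
    }
    ```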

    Selectively updating a coherence state in response to a storage update

    Publication Number: US10691599B1

    Publication Date: 2020-06-23

    Application Number: US16226018

    Filing Date: 2018-12-19

    Abstract: A data processing system includes a processor core and a cache memory storing a cache line associated with a coherence state field set to a first of multiple modified coherence states. The processor core executes a store instruction including a field having a setting that indicates a coherence state update policy and, based on the store instruction, generates a corresponding store request including the setting, store data, and a target address. Responsive to the store request, the cache memory updates data of the cache line utilizing the store data. The cache memory refrains from updating the coherence state field based on the setting indicating a first coherence state update policy and updates the coherence state field from the first modified coherence state to a second modified coherence state based on the setting indicating a second coherence state update policy.
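
    A minimal sketch of the selective update, assuming two modified coherence states (named M1 and M2 here purely for illustration) and a per-store policy setting carried from the store instruction into the store request. The data update always occurs; the coherence state field changes only under the second policy.

    ```cpp
    #include <iostream>

    enum class CoherenceState { M1, M2 };
    enum class UpdatePolicy { KeepState, PromoteState };

    struct CacheLine {
        unsigned long long data = 0;
        CoherenceState state = CoherenceState::M1;
    };

    struct StoreRequest {
        unsigned long long targetAddr;
        unsigned long long storeData;
        UpdatePolicy policy;   // setting copied from the store instruction's field
    };

    void serviceStore(CacheLine& line, const StoreRequest& req) {
        line.data = req.storeData;                        // the data update always happens
        if (req.policy == UpdatePolicy::PromoteState)
            line.state = CoherenceState::M2;              // second policy: move M1 -> M2
        // first policy: refrain from touching the coherence state field
    }

    int main() {
        CacheLine line;
        serviceStore(line, {0x3000, 7, UpdatePolicy::KeepState});
        std::cout << "state after keep-policy store: "
                  << (line.state == CoherenceState::M1 ? "M1" : "M2") << "\n";
        serviceStore(line, {0x3000, 9, UpdatePolicy::PromoteState});
        std::cout << "state after promote-policy store: "
                  << (line.state == CoherenceState::M1 ? "M1" : "M2") << "\n";
        return 0;
    }
    ```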
