Abstract:
Maintaining store order with high throughput in a distributed shared memory system. A request is received for a first ordered data store and a coherency check is initiated. A signal is sent indicating that pipelining of a second ordered data store can be initiated. If a delay condition is encountered during the coherency check for the first ordered data store, rejection of the first ordered data store is signaled. If a delay condition is not encountered during the coherency check for the first ordered data store, a signal is sent indicating a readiness to continue pipelining of the second ordered data store.
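As a minimal illustration of this flow, the Python sketch below models the coherency check as a callable that reports whether a delay condition was encountered; the signal names and the notify callback are hypothetical stand-ins, not taken from the abstract.

```python
from enum import Enum

class Signal(Enum):
    BEGIN_PIPELINE = 1  # pipelining of the second ordered store may start
    REJECT = 2          # delay condition hit; first store is rejected
    CONTINUE = 3        # check clean; ready to continue pipelining

def process_ordered_store(store, coherency_check, notify):
    """Handle one ordered data store per the described flow.

    coherency_check(store) returns True if a delay condition is
    encountered; notify(signal) models the signal sent to the logic
    pipelining the next ordered store.
    """
    # Pipelining of the second ordered store may begin as soon as the
    # coherency check for the first one is initiated.
    notify(Signal.BEGIN_PIPELINE)
    if coherency_check(store):
        # Delay condition: signal rejection of the first ordered store.
        notify(Signal.REJECT)
        return False
    # No delay: signal readiness to continue pipelining the second store.
    notify(Signal.CONTINUE)
    return True

# Usage: a check that never delays lets both signals flow through.
log = []
process_ordered_store("store-1", lambda s: False, log.append)
print(log)  # [Signal.BEGIN_PIPELINE, Signal.CONTINUE]
```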
Abstract:
A computing device is provided and includes a plurality of nodes. Each node includes multiple chips and a node controller at which the multiple chips are assignable to logical partitions. Each of the multiple chips includes processors and a memory unit configured to handle local memory operations originating from the processors. The node controller includes a dynamic memory relocation (DMR) mechanism configured to move data having a DMR storage increment address relative to a local one of the memory units without interrupting a processing of the data by at least one of the logical partitions. During movement of the data by the DMR mechanism, the memory units are disabled from handling the local memory operations matching the DMR storage increment address and the node controller handles the local memory operations matching the DMR storage increment address.
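A minimal sketch of the bypass behavior, assuming a simple dictionary-backed memory model and a single in-flight DMR increment; the class and method names are illustrative, not the patented design.

```python
class MemoryUnit:
    """Chip-local memory unit; handles ordinary local operations."""
    def __init__(self):
        self.store = {}

    def access(self, op, address, value=None):
        if op == "write":
            self.store[address] = value
        return self.store.get(address)

class NodeController:
    """While a DMR storage increment is in flight, the memory units are
    bypassed for that address and the controller services matching
    operations itself, so the logical partition is never interrupted."""
    def __init__(self, memory_units):
        self.memory_units = memory_units
        self.dmr_address = None   # increment currently being relocated
        self.staging = {}         # controller-side copy during the move

    def begin_dmr_move(self, address):
        self.dmr_address = address   # disable local handling for it

    def end_dmr_move(self):
        self.dmr_address = None      # restore local handling

    def handle(self, unit_id, op, address, value=None):
        if address == self.dmr_address:
            # Controller handles operations matching the DMR increment.
            if op == "write":
                self.staging[address] = value
            return self.staging.get(address)
        # Otherwise the chip-local memory unit handles the operation.
        return self.memory_units[unit_id].access(op, address, value)

# Usage: operations to the moving increment are absorbed by the controller.
nc = NodeController([MemoryUnit(), MemoryUnit()])
nc.begin_dmr_move(0x1000)
nc.handle(0, "write", 0x1000, "live-data")
print(nc.handle(1, "read", 0x1000))  # live-data, served by the controller
nc.end_dmr_move()
```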
Abstract:
Embodiments relate to a non-data inclusive coherent (NIC) directory for a symmetric multiprocessor (SMP) of a computer. An aspect includes determining a first eviction entry of a highest-level cache in a multilevel caching structure of a first processor node of the SMP. Another aspect includes determining that the NIC directory is not full. Another aspect includes determining that the first eviction entry of the highest-level cache is owned by a lower-level cache in the multilevel caching structure. Another aspect includes, based on the NIC directory not being full and based on the first eviction entry of the highest-level cache being owned by the lower-level cache, installing an address of the first eviction entry of the highest-level cache in a first new entry in the NIC directory. Another aspect includes invalidating the first eviction entry in the highest-level cache.
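A minimal sketch of this eviction path, with the NIC directory modeled as a set of addresses and lower-level ownership as a membership test; these structures are assumptions for illustration only.

```python
def evict_from_highest_level(address, highest_level_cache,
                             lower_level_owned, nic_directory,
                             nic_capacity):
    """On eviction from the highest-level cache: if the NIC directory
    is not full and the line is still owned by a lower-level cache,
    install its address in the NIC directory (which holds no data),
    then invalidate the entry in the highest-level cache."""
    if len(nic_directory) < nic_capacity and address in lower_level_owned:
        nic_directory.add(address)       # track the line, hold no data
    highest_level_cache.discard(address) # invalidate the eviction entry

# Usage: the evicted line stays tracked, data-free, in the NIC directory.
l4 = {0xA0, 0xB0}
nic = set()
evict_from_highest_level(0xA0, l4, lower_level_owned={0xA0},
                         nic_directory=nic, nic_capacity=4)
print(nic, l4)  # {160} {176}
```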
Abstract:
Embodiments relate to matrix and compression-based error detection. An aspect includes summing, by each of a first plurality of summing modules of a first compressor, a respective row of a matrix, the matrix comprising a plurality of rows and a plurality of columns of output bits of a circuit under test, wherein each output bit of the circuit under test comprises an element of the matrix and is a member of a row and of a column that is orthogonal to the row. Another aspect includes summing, by each of a second plurality of summing modules of a second compressor, a respective column of output bits of the matrix. Yet another aspect includes determining a presence of an error in the circuit under test based on at least one of an output of the first compressor and an output of the second compressor.
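The scheme can be sketched compactly in Python. XOR is used here as the summing operation, which is an assumption for illustration; the abstract does not fix the operation the summing modules perform.

```python
from functools import reduce
from operator import xor

def compressed_signatures(matrix):
    """First compressor: one sum per row. Second compressor: one sum
    per column of the same output-bit matrix."""
    rows = [reduce(xor, row) for row in matrix]
    cols = [reduce(xor, col) for col in zip(*matrix)]
    return rows, cols

def has_error(matrix, expected_rows, expected_cols):
    # An error is flagged from either compressor's output mismatching.
    rows, cols = compressed_signatures(matrix)
    return rows != expected_rows or cols != expected_cols

# Usage: flip one output bit; its row sum and its column sum both change.
good = [[0, 1, 0], [1, 1, 0], [0, 0, 1]]
exp_rows, exp_cols = compressed_signatures(good)
bad = [row[:] for row in good]
bad[1][2] ^= 1
print(has_error(good, exp_rows, exp_cols))  # False
print(has_error(bad, exp_rows, exp_cols))   # True
```

Because each output bit lies at the intersection of one row and one orthogonal column, a single flipped bit perturbs exactly one row sum and one column sum, so either compressor alone suffices to detect it.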
Abstract:
A technique for cache coherency is provided. A cache controller selects a first set from multiple sets in a congruence class based on a cache miss for a first transaction, and places a lock on the entire congruence class in which the lock prevents other transactions from accessing the congruence class. The cache controller designates in a cache directory the first set with a marked bit indicating that the first transaction is working on the first set, and the marked bit for the first set prevents the other transactions from accessing the first set within the congruence class. The cache controller removes the lock on the congruence class based on the marked bit being designated for the first set, and resets the marked bit for the first set to an unmarked bit based on the first transaction completing work on the first set in the congruence class.
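A minimal sketch of the lock-then-mark protocol, assuming one lock bit per congruence class and one marked bit per set; the class and method names are hypothetical.

```python
class CongruenceClass:
    """On a miss, a transaction locks the whole class, marks its chosen
    set in the directory, then drops the lock; the marked bit alone
    keeps other transactions off that set until completion."""
    def __init__(self, num_sets):
        self.locked = False
        self.marked = [False] * num_sets

    def begin_miss(self, set_index):
        if self.locked or self.marked[set_index]:
            return False                 # class or set busy; retry later
        self.locked = True               # lock the entire congruence class
        self.marked[set_index] = True    # directory marks this set
        self.locked = False              # lock removed once the set is marked
        return True

    def complete(self, set_index):
        self.marked[set_index] = False   # reset to unmarked on completion

# Usage: a second transaction is blocked only from the marked set.
cc = CongruenceClass(num_sets=4)
print(cc.begin_miss(2))   # True: set 2 marked, class lock already released
print(cc.begin_miss(2))   # False: set 2 is still marked
print(cc.begin_miss(0))   # True: the other sets remain available
cc.complete(2)
```

The design point the abstract describes is that the class-wide lock is short-lived: once the marked bit is set, other transactions regain access to the rest of the congruence class.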
Abstract:
A cache coherency management facility to reduce latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the node and the another node are local to one another as defined by predetermined criteria. The node requesting exclusive access checks a locality cache coherency state of the another node, the locality cache coherency state being specific to the another node and indicating whether the another node has access to the cache line. Based on the checking indicating that the another node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line. The determination is independent of transmission of information relating to the cache line from one or more other nodes of one or more other regions of nodes.
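The latency saving can be sketched as a decision function: if the per-node locality state shows that the local peer has access to the line, exclusivity is granted without waiting for information from other regions. The state encoding and names below are hypothetical.

```python
def exclusive_access_decision(requesting_node, cache_line, local_peer,
                              locality_state):
    """locality_state maps (node, line) -> bool and models the locality
    cache coherency state specific to the peer node in the same region
    (an assumed encoding for illustration)."""
    if locality_state.get((local_peer, cache_line), False):
        # Peer's access implies the grant can be decided purely from
        # local-region state, with no traffic from other regions.
        return "grant"
    # Otherwise fall back to the full cross-region coherency protocol.
    return "consult-remote"

# Usage
state = {("node-B", 0x40): True}
print(exclusive_access_decision("node-A", 0x40, "node-B", state))  # grant
print(exclusive_access_decision("node-A", 0x80, "node-B", state))  # consult-remote
```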
Abstract:
In an approach for backing up designated data located in a cache, data stored within an index of the cache is identified, wherein the data has an associated designation indicating that the data is applicable to be backed up to a higher level memory. It is determined that the data stored to the cache has been updated. A status associated with the data is adjusted, such that the adjusted status indicates that the data stored to the cache has not been changed. A copy of the data is created. The copy of the data is stored to the higher level memory.
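A minimal sketch of this approach, assuming each cache entry carries a "designated" flag (eligible for backup) and a "changed" flag (updated since the last copy); both flag names are assumptions about the bookkeeping, not the claimed format.

```python
def backup_designated(cache, higher_level_memory):
    """Copy designated, updated cache entries to higher level memory,
    resetting each entry's status before the copy is stored."""
    for key, entry in cache.items():
        if entry["designated"] and entry["changed"]:
            entry["changed"] = False                  # adjust the status
            higher_level_memory[key] = entry["data"]  # store the copy

# Usage: only designated, updated data is backed up.
cache = {"x": {"designated": True, "changed": True, "data": 7},
         "y": {"designated": False, "changed": True, "data": 9}}
memory = {}
backup_designated(cache, memory)
print(memory)                 # {'x': 7}
print(cache["x"]["changed"])  # False: status now reads "not changed"
```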
Abstract:
A multi-boundary address protection range is provided to prevent key operations from interfering with a data move performed by a dynamic memory relocation (DMR) move operation. Any key operation address that is within the move boundary address range is rejected back to the hypervisor. Further, logic exists across a set of parallel slices to synchronize the DMR move operation as it crosses a protected boundary address range.
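A minimal sketch of the rejection check, with the protected move boundaries modeled as inclusive (low, high) address pairs; the range representation and return values are assumptions, and the cross-slice synchronization logic is omitted for brevity.

```python
def handle_key_operation(address, move_ranges):
    """A key operation whose address falls inside any active DMR move
    boundary range is rejected back to the hypervisor (modeled here as
    a return value); otherwise it proceeds normally."""
    for lo, hi in move_ranges:
        if lo <= address <= hi:
            return "reject-to-hypervisor"  # hypervisor retries later
    return "proceed"

# Usage
ranges = [(0x1000, 0x1FFF)]
print(handle_key_operation(0x1800, ranges))  # reject-to-hypervisor
print(handle_key_operation(0x2400, ranges))  # proceed
```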
Abstract:
In one embodiment, a computer-implemented method includes instructing two or more processors that are operating in a normal state of a symmetric multiprocessing (SMP) network to transition from the normal state to a slow state. The two or more processors reduce their frequencies to respective target frequencies in a transitional state when transitioning from the normal state to the slow state. It is determined that the two or more processors have achieved their respective target frequencies for the slow state. The slow state is entered, responsive to this determination. Responsive to entering the slow state, a first processor of the two or more processors is instructed to send empty packets across an interconnect to compensate for a first greatest potential rate differential between the first processor and a remainder of the two or more processors during the slow state.
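A minimal sketch of the transition sequence, with frequencies as plain integers and the empty-packet count as a stand-in model of the greatest potential rate differential; the data shapes and names below are assumptions for illustration.

```python
def enter_slow_state(procs, send_empty_packets):
    """procs maps name -> {"freq": current, "target": slow-state
    frequency}. After every processor reaches its target, the fastest
    one pads the interconnect with empty packets sized to the greatest
    rate differential against the remainder."""
    for p in procs.values():
        p["freq"] = p["target"]   # transitional state: reduce frequency
    # Slow state is entered only once all targets are achieved.
    assert all(p["freq"] == p["target"] for p in procs.values())
    # Compensate for the greatest rate differential with empty packets.
    fastest = max(procs, key=lambda n: procs[n]["freq"])
    slowest_freq = min(p["freq"] for p in procs.values())
    differential = procs[fastest]["freq"] - slowest_freq
    send_empty_packets(fastest, differential)

# Usage
procs = {"p0": {"freq": 100, "target": 60},
         "p1": {"freq": 100, "target": 50}}
enter_slow_state(procs,
                 lambda node, n: print(node, "sends", n, "empty packets"))
# p0 sends 10 empty packets
```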