Abstract:
Embodiments of the present invention are directed to hot cache line arbitration. An example of a computer-implemented method for hot cache line arbitration includes detecting, by a processing device, a hot cache line scenario. The computer-implemented method further includes tracking, by the processing device, hot cache line requests from requesters to determine subsequent satisfaction of the requests. The computer-implemented method further includes facilitating, by the processing device, servicing of the requests according to a hierarchy of the requesters.
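
The abstract above describes the mechanism only at a high level, so the following is a minimal C++ sketch of how a hot cache line arbiter might detect a hot-line scenario, track pending requests, and service them according to a requester hierarchy. The class name HotLineArbiter, the hierarchyLevel field, and the threshold of three concurrent requests are illustrative assumptions, not the patented implementation.

    // Minimal sketch (not the patented implementation) of hot cache line
    // arbitration: a line is flagged "hot" when concurrent requests exceed a
    // threshold, pending requesters are tracked, and service order follows a
    // requester hierarchy. All names and the threshold are illustrative.
    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <vector>

    struct Requester {
        int id;
        int hierarchyLevel;   // lower value = higher position in the hierarchy (assumption)
    };

    class HotLineArbiter {
    public:
        // Record a request for a cache line and detect a hot-line scenario.
        void request(uint64_t lineAddr, const Requester& r) {
            auto& pending = pending_[lineAddr];
            pending.push_back(r);
            if (pending.size() >= kHotThreshold) {
                hot_[lineAddr] = true;   // hot cache line scenario detected
            }
        }

        // Service all tracked requests for the line according to the
        // requester hierarchy, so each request is eventually satisfied.
        void service(uint64_t lineAddr) {
            auto& pending = pending_[lineAddr];
            std::stable_sort(pending.begin(), pending.end(),
                             [](const Requester& a, const Requester& b) {
                                 return a.hierarchyLevel < b.hierarchyLevel;
                             });
            for (const auto& r : pending) {
                std::cout << "granting line 0x" << std::hex << lineAddr
                          << std::dec << " to requester " << r.id << "\n";
            }
            pending.clear();
            hot_[lineAddr] = false;
        }

    private:
        static constexpr size_t kHotThreshold = 3;   // illustrative value
        std::map<uint64_t, std::vector<Requester>> pending_;
        std::map<uint64_t, bool> hot_;
    };

    int main() {
        HotLineArbiter arbiter;
        arbiter.request(0x1000, {1, 2});
        arbiter.request(0x1000, {2, 0});
        arbiter.request(0x1000, {3, 1});
        arbiter.service(0x1000);   // grants in hierarchy order: 2, 3, 1
    }

In this sketch the hierarchy is modeled as a single integer level per requester; the embodiments may arbitrate on other criteria.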
Abstract:
Embodiments relate to storing data to a system memory. An aspect includes accessing successive entries of a cache directory having a plurality of directory entries by a stepper engine, where access to the cache directory is given a lower priority than other cache operations. It is determined that a specific directory entry in the cache directory has a change line state that indicates it is modified. A store operation is performed to send a copy of the cache entry corresponding to the specific directory entry to the system memory as part of a cache management function. The specific directory entry is updated to indicate that the change line state is unmodified.
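
As a rough illustration of the stepper-engine behavior, the following C++ sketch walks a cache directory one entry at a time, writes modified entries back to system memory, and clears their change line state. The caller invokes step() only when no higher-priority cache operation is pending, which stands in for the lower priority given to directory stepping; all structure and function names are assumptions.

    // Minimal sketch, under assumed data structures, of a low-priority "stepper"
    // that walks a cache directory, writes modified lines back to system memory,
    // and clears their change-line state. Names are illustrative only.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    struct DirectoryEntry {
        uint64_t lineAddr;
        bool modified;        // change line state
    };

    struct SystemMemory {
        void store(uint64_t lineAddr) {
            std::cout << "wrote back line 0x" << std::hex << lineAddr << std::dec << "\n";
        }
    };

    class StepperEngine {
    public:
        StepperEngine(std::vector<DirectoryEntry>& dir, SystemMemory& mem)
            : dir_(dir), mem_(mem) {}

        // Advance one directory entry per invocation; the caller only invokes
        // this when no higher-priority cache operation is pending, modeling the
        // lower priority given to directory stepping.
        void step() {
            if (dir_.empty()) return;
            DirectoryEntry& e = dir_[next_];
            if (e.modified) {
                mem_.store(e.lineAddr);   // copy the cache entry to system memory
                e.modified = false;       // mark change line state unmodified
            }
            next_ = (next_ + 1) % dir_.size();
        }

    private:
        std::vector<DirectoryEntry>& dir_;
        SystemMemory& mem_;
        size_t next_ = 0;
    };

    int main() {
        std::vector<DirectoryEntry> directory = {
            {0x1000, true}, {0x1040, false}, {0x1080, true}};
        SystemMemory memory;
        StepperEngine stepper(directory, memory);
        for (int i = 0; i < 3; ++i) stepper.step();   // idle cycles only
    }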
Abstract:
Embodiments relate to accessing a cache line in a multi-level cache system having a plurality of nodes and a system memory. Based on a request for exclusive ownership of a specific cache line at a local node, the local node concurrently sends requests for the specific cache line to the system memory and to remote nodes of the plurality of nodes. The specific cache line is found in a specific remote node, which is one of the remote nodes. The specific cache line is removed from the specific remote node for exclusive ownership by another node, leaving the specific cache line in a ghost state at the specific remote node. Based on the specific remote node having the specific cache line in the ghost state, any subsequent fetch request initiated for the specific cache line from the specific remote node encounters the ghost state. When the ghost state is encountered, the subsequent fetch request is directed only to nodes of the plurality of nodes.
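
The ghost-state behavior can be illustrated with a small C++ sketch: removing a line from a remote node for exclusive ownership elsewhere leaves a ghost marker, and a later fetch that hits the ghost state skips the system memory and queries only the other nodes. The LineState enum, the fetch_exclusive function, and the node layout are illustrative assumptions rather than the patented design.

    // Minimal sketch, under assumed structures, of the "ghost state" idea: when a
    // line is removed from a remote node for exclusive ownership elsewhere, that
    // node keeps a ghost marker, and a later fetch that hits the ghost state is
    // sent only to the other nodes, not to system memory. Names are illustrative.
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <vector>

    enum class LineState { Invalid, Owned, Ghost };

    struct Node {
        int id;
        std::map<uint64_t, LineState> lines;   // per-node directory of cache lines
    };

    // Fetch a line exclusively for `local`: normally query memory and remote nodes
    // concurrently; if the local directory shows a ghost state, skip memory and
    // query only the remote nodes.
    void fetch_exclusive(Node& local, std::vector<Node*>& remotes, uint64_t addr) {
        bool ghost = local.lines.count(addr) && local.lines[addr] == LineState::Ghost;
        if (!ghost) {
            std::cout << "node " << local.id << ": request sent to system memory\n";
        }
        for (Node* r : remotes) {
            std::cout << "node " << local.id << ": request sent to node " << r->id << "\n";
            if (r->lines.count(addr) && r->lines[addr] == LineState::Owned) {
                r->lines[addr] = LineState::Ghost;   // line removed; ghost left behind
                local.lines[addr] = LineState::Owned;
                std::cout << "line moved from node " << r->id << " to node " << local.id << "\n";
            }
        }
    }

    int main() {
        Node n0{0}, n1{1};
        n1.lines[0x2000] = LineState::Owned;
        std::vector<Node*> remotesOf0{&n1}, remotesOf1{&n0};
        fetch_exclusive(n0, remotesOf0, 0x2000);   // line leaves n1, ghost remains
        fetch_exclusive(n1, remotesOf1, 0x2000);   // ghost hit: memory is skipped
    }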
Abstract:
A computing device is provided and includes a first physical memory device, a second physical memory device and a hypervisor configured to assign resources of the first and second physical memory devices to a logical partition. The hypervisor configures a dynamic memory relocation (DMR) mechanism to move entire storage increments currently processed by the logical partition between the first and second physical memory devices in a manner that is substantially transparent to the logical partition.
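
A minimal C++ sketch of the hypervisor-driven dynamic memory relocation idea follows: an entire storage increment is copied from one physical memory device to another and the partition-facing mapping is switched, so accesses by the logical partition continue unchanged. The Hypervisor and PhysicalMemoryDevice types and the 4 KiB increment size are assumptions for illustration only.

    // Minimal sketch, under assumed structures, of dynamic memory relocation (DMR)
    // driven by a hypervisor: an entire storage increment used by a logical
    // partition is copied from one physical memory device to another and the
    // partition's mapping is switched, so the partition keeps running unaware of
    // the move. All names and sizes are illustrative.
    #include <cstdint>
    #include <cstring>
    #include <iostream>
    #include <vector>

    constexpr size_t kIncrementBytes = 4096;   // illustrative storage-increment size

    struct PhysicalMemoryDevice {
        std::vector<uint8_t> storage;
        explicit PhysicalMemoryDevice(size_t bytes) : storage(bytes, 0) {}
    };

    struct Mapping {
        PhysicalMemoryDevice* device;
        size_t offset;
    };

    class Hypervisor {
    public:
        // The logical partition reads through the hypervisor's mapping, so it is
        // unaware of which physical device currently backs the increment.
        uint8_t read(const Mapping& m, size_t index) const {
            return m.device->storage[m.offset + index];
        }

        // Relocate one storage increment between devices; the mapping update is
        // the only visible change, which models transparency to the partition.
        void relocate(Mapping& m, PhysicalMemoryDevice& target, size_t targetOffset) {
            std::memcpy(&target.storage[targetOffset],
                        &m.device->storage[m.offset], kIncrementBytes);
            m = {&target, targetOffset};
        }
    };

    int main() {
        PhysicalMemoryDevice devA(1 << 16), devB(1 << 16);
        Hypervisor hyp;
        Mapping increment{&devA, 0};
        devA.storage[5] = 42;                       // data written by the partition
        hyp.relocate(increment, devB, 8192);        // move the whole increment
        std::cout << int(hyp.read(increment, 5)) << "\n";   // still reads 42
    }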
Abstract:
A computing device is provided and includes a plurality of nodes. Each node includes multiple chips and a node controller at which the multiple chips are assignable to logical partitions. Each of the multiple chips includes processors and a memory unit configured to handle local memory operations originating from the processors. The node controller includes a dynamic memory relocation (DMR) mechanism configured to move data having a DMR storage increment address relative to a local one of the memory units without interrupting processing of the data by at least one of the logical partitions. During movement of the data by the DMR mechanism, the memory units are disabled from handling the local memory operations matching the DMR storage increment address, and the node controller instead handles the local memory operations matching that address.
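
The routing behavior during relocation might be sketched as follows in C++: while a storage increment is being moved, the node controller intercepts local memory operations whose addresses match the DMR storage increment address and lets the memory units handle everything else. The NodeController interface and the 4 KiB increment mask are illustrative assumptions.

    // Minimal sketch, under assumed structures, of how a node controller might
    // take over memory operations that match a DMR storage-increment address
    // while that increment is being moved, leaving all other local operations
    // with the memory units. Names and address arithmetic are illustrative.
    #include <cstdint>
    #include <iostream>
    #include <optional>

    constexpr uint64_t kIncrementMask = ~uint64_t(0xFFF);   // 4 KiB increments (assumption)

    struct MemoryUnit {
        void handle(uint64_t addr) {
            std::cout << "memory unit handles 0x" << std::hex << addr << std::dec << "\n";
        }
    };

    class NodeController {
    public:
        void begin_dmr(uint64_t incrementAddr) { dmrIncrement_ = incrementAddr & kIncrementMask; }
        void end_dmr() { dmrIncrement_.reset(); }

        // Route a local memory operation: during DMR, operations whose address
        // falls in the increment being moved are handled by the node controller.
        void route(uint64_t addr, MemoryUnit& local) {
            if (dmrIncrement_ && (addr & kIncrementMask) == *dmrIncrement_) {
                std::cout << "node controller handles 0x" << std::hex << addr << std::dec
                          << " during relocation\n";
            } else {
                local.handle(addr);
            }
        }

    private:
        std::optional<uint64_t> dmrIncrement_;
    };

    int main() {
        MemoryUnit mem;
        NodeController nc;
        nc.route(0x5010, mem);        // normal path: memory unit
        nc.begin_dmr(0x5000);         // increment 0x5000-0x5FFF is being moved
        nc.route(0x5010, mem);        // intercepted by the node controller
        nc.route(0x9000, mem);        // unrelated address still goes to the memory unit
        nc.end_dmr();
    }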
Abstract:
Embodiments relate to a non-data inclusive coherent (NIC) directory for a symmetric multiprocessor (SMP) of a computer. An aspect includes determining a first eviction entry of a highest-level cache in a multilevel caching structure of the first processor node of the SMP. Another aspect includes determining that the NIC directory is not full. Another aspect includes determining that the first eviction entry of the highest-level cache is owned by a lower-level cache in the multilevel caching structure. Another aspect includes, based on the NIC directory not being full and based on the first eviction entry of the highest-level cache being owned by the lower-level cache, installing an address of the first eviction entry of the highest-level cache in a first new entry in the NIC directory. Another aspect includes invalidating the first eviction entry in the highest-level cache.
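
The eviction path described above can be sketched in C++ as a simple check: if the NIC directory is not full and the evicted highest-level cache entry is still owned by a lower-level cache, its address is installed in the NIC directory and the entry is invalidated in the highest-level cache. The NicDirectory class, its capacity, and handle_eviction are illustrative assumptions rather than the patented structure.

    // Minimal sketch, under assumed structures, of the eviction path described
    // above: when the highest-level cache evicts an entry that a lower-level
    // cache still owns and the NIC directory has room, the address is recorded
    // in the NIC directory and the entry is invalidated in the highest-level
    // cache. All names and capacities are illustrative.
    #include <cstdint>
    #include <iostream>
    #include <unordered_set>

    struct CacheEntry {
        uint64_t addr;
        bool ownedByLowerLevel;   // a lower-level cache holds this line
    };

    class NicDirectory {
    public:
        explicit NicDirectory(size_t capacity) : capacity_(capacity) {}
        bool full() const { return entries_.size() >= capacity_; }
        void install(uint64_t addr) { entries_.insert(addr); }
    private:
        size_t capacity_;
        std::unordered_set<uint64_t> entries_;
    };

    // Handle an eviction from the highest-level cache. Returns true if the
    // address was installed in the NIC directory (address only, no data).
    bool handle_eviction(const CacheEntry& victim, NicDirectory& nic) {
        if (!nic.full() && victim.ownedByLowerLevel) {
            nic.install(victim.addr);
            std::cout << "installed 0x" << std::hex << victim.addr << std::dec
                      << " in NIC directory; invalidating in highest-level cache\n";
            return true;
        }
        return false;
    }

    int main() {
        NicDirectory nic(1024);
        CacheEntry victim{0x7000, true};
        handle_eviction(victim, nic);
    }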