Abstract:
Embodiments relate to accessing a cache line in a multi-level cache system having a plurality of nodes and a system memory. Based on a request for exclusive ownership of a specific cache line at a local node, requests for the specific cache line are concurrently sent by the local node to the system memory and to remote nodes of the plurality of nodes. The specific cache line is found in a specific remote node, which is one of the remote nodes. The specific cache line is removed from the specific remote node for exclusive ownership by another node. Based on the specific remote node holding the specific cache line in a ghost state, any subsequent fetch request initiated for the specific cache line from that node encounters the ghost state. When the ghost state is encountered, the subsequent fetch request is directed only to nodes of the plurality of nodes.
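A minimal C++ sketch of the fetch-routing decision described above, assuming a simple per-line directory state; the LineState enum, the routeFetch helper, and the node identifiers are illustrative names, not the patented implementation. On a ghost-state hit, the request skips system memory and goes only to the remote nodes.

```cpp
#include <iostream>
#include <vector>

// Hypothetical line states; GHOST marks a line that was removed from this
// node for exclusive ownership by another node.
enum class LineState { INVALID, SHARED, EXCLUSIVE, GHOST };

struct FetchTargets {
    bool queryMemory;               // send a request to system memory
    std::vector<int> queryNodes;    // remote node ids to query
};

// Decide where to send a fetch for a line, based on the local directory state.
// On a GHOST hit the line is known to live in another node's cache, so the
// request is broadcast only to the remote nodes and system memory is skipped.
FetchTargets routeFetch(LineState localState, const std::vector<int>& remoteNodes) {
    FetchTargets t;
    if (localState == LineState::GHOST) {
        t.queryMemory = false;        // memory access suppressed
        t.queryNodes = remoteNodes;   // ask the other nodes only
    } else {
        t.queryMemory = true;         // normal miss: memory and nodes in parallel
        t.queryNodes = remoteNodes;
    }
    return t;
}

int main() {
    std::vector<int> nodes{1, 2, 3};
    FetchTargets t = routeFetch(LineState::GHOST, nodes);
    std::cout << "query memory: " << std::boolalpha << t.queryMemory
              << ", nodes queried: " << t.queryNodes.size() << '\n';
}
```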
Abstract:
System refresh in a cache memory includes generating a refresh time period (RTIM) pulse at a centralized refresh controller of the cache memory and activating a refresh request at the centralized refresh controller based on generating the RTIM pulse. The refresh request is associated with a single cache memory bank of the cache memory. A refresh grant is received and transmitted to a bank controller. The bank controller is associated with and localized at the single cache memory bank of the cache memory.
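The refresh flow above can be sketched as a centralized controller driving localized per-bank controllers. The class names, the round-robin bank selection, and the RTIM period value below are assumptions made for illustration.

```cpp
#include <array>
#include <iostream>

constexpr int kBanks = 4;   // hypothetical number of cache memory banks

// Localized per-bank controller: performs the refresh when granted.
struct BankController {
    int bankId;
    void refresh() { std::cout << "refreshing bank " << bankId << '\n'; }
};

// Centralized refresh controller: generates RTIM pulses, raises one refresh
// request per pulse, and forwards the grant to the targeted bank controller.
class CentralRefreshController {
public:
    explicit CentralRefreshController(std::array<BankController, kBanks>& banks)
        : banks_(banks) {}

    // Called every cycle; when the RTIM counter expires a pulse is generated
    // and a refresh request is activated for the next bank in round-robin order.
    void tick() {
        if (++cycles_ >= kRtimPeriod) {
            cycles_ = 0;
            requestPending_ = true;               // RTIM pulse -> refresh request
            targetBank_ = (targetBank_ + 1) % kBanks;
        }
    }

    // Called when a refresh grant is received; the grant is transmitted to the
    // bank controller localized at the selected bank.
    void grant() {
        if (requestPending_) {
            banks_[targetBank_].refresh();
            requestPending_ = false;
        }
    }

    bool requestPending() const { return requestPending_; }

private:
    static constexpr int kRtimPeriod = 1000;      // assumed cycles between pulses
    std::array<BankController, kBanks>& banks_;
    int cycles_ = 0;
    int targetBank_ = kBanks - 1;
    bool requestPending_ = false;
};

int main() {
    std::array<BankController, kBanks> banks{{{0}, {1}, {2}, {3}}};
    CentralRefreshController ctrl(banks);
    for (int c = 0; c < 2000; ++c) {
        ctrl.tick();
        if (ctrl.requestPending()) ctrl.grant();  // arbiter grants immediately here
    }
}
```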
Abstract:
Embodiments are provided for implementing efficient resource allocation while maintaining fairness between operation types. Embodiments include receiving an operation, and determining a number of resources that are required for the received operation. Embodiments also include determining a number of available resources, comparing the number of required resources and the number of available resources, and allocating the available resources in a first mode or a second mode based at least in part on the comparison and a number of queued operations.
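A rough sketch of the mode-selection step, assuming two hypothetical modes and an illustrative queue-depth threshold; the actual comparison and mode semantics in the embodiments may differ.

```cpp
#include <iostream>

// Hypothetical allocation modes: FULL grants all required resources at once,
// THROTTLED grants them gradually so queued operations of other types are not
// starved.
enum class AllocMode { FULL, THROTTLED };

// Choose a mode from the comparison of required vs. available resources and
// from the current queue depth; the threshold here is illustrative only.
AllocMode chooseMode(int required, int available, int queuedOps) {
    const int kQueueThreshold = 8;   // assumed fairness threshold
    if (available >= required && queuedOps < kQueueThreshold)
        return AllocMode::FULL;      // plenty of resources and a short queue
    return AllocMode::THROTTLED;     // scarce resources or a long queue
}

int main() {
    std::cout << (chooseMode(4, 16, 2) == AllocMode::FULL ? "full" : "throttled") << '\n';
    std::cout << (chooseMode(4, 2, 12) == AllocMode::FULL ? "full" : "throttled") << '\n';
}
```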
Abstract:
Embodiments relate to processing simultaneous data requests in the same addressable index of a cache, regardless of an active request. In response to a cache miss in a given congruence class, if the number of other compartments in the given congruence class that have an active operation is less than a predetermined threshold, a Do Not Cast Out (DNCO) pending indication is set for each of the compartments that have an active operation, in order to block access to each of the other compartments that have active operations. If the number of other compartments in the given congruence class that have an active operation is not less than the predetermined threshold, another cache miss is blocked from occurring in the compartments of the given congruence class by setting a congruence class block pending indication for the given congruence class, in order to block access to each of the other compartments of the given congruence class.
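The threshold test can be illustrated with per-compartment bit vectors. The compartment count, threshold value, and field names below are assumptions made for the sketch.

```cpp
#include <bitset>
#include <cstddef>
#include <iostream>

constexpr int kCompartments          = 12;   // hypothetical associativity
constexpr std::size_t kThreshold     = 8;    // assumed threshold of active compartments

struct CongruenceClass {
    std::bitset<kCompartments> active;       // compartments with an active operation
    std::bitset<kCompartments> dncoPending;  // Do Not Cast Out pending per compartment
    bool classBlockPending = false;          // block pending for the whole class
};

// Handle a cache miss in the class: either pin the individual busy compartments
// with DNCO, or block the entire congruence class when too many are busy.
void onCacheMiss(CongruenceClass& cc) {
    const std::size_t busy = cc.active.count();
    if (busy < kThreshold) {
        cc.dncoPending |= cc.active;         // protect each busy compartment
    } else {
        cc.classBlockPending = true;         // block further misses in this class
    }
}

int main() {
    CongruenceClass cc;
    cc.active.set(0); cc.active.set(3); cc.active.set(5);
    onCacheMiss(cc);
    std::cout << "DNCO mask: " << cc.dncoPending
              << " class blocked: " << std::boolalpha << cc.classBlockPending << '\n';
}
```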
Abstract:
Embodiments include a computer-implemented method, a system, and a computer program product for modifying central serialization of requests in multiprocessor systems. Some embodiments include receiving an operation requiring resources from a pool of resources, determining an availability of the pool of resources required by the operation, and selecting a queue of a plurality of queues to queue the operation based at least in part on the availability of the pool of resources. Some embodiments also include setting a resource needs register and a needs register for the selected queue, and setting a take-two bit for the selected queue.
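A small sketch of the queue-selection and bookkeeping steps, assuming a two-queue arrangement and bitmask needs registers; the selection rule shown is illustrative only, not the claimed logic.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical per-queue state: a needs register recording which resources the
// queued operation is waiting for, and a take-two bit allowing it to claim two
// resources when it finally wins arbitration.
struct QueueState {
    uint32_t needs = 0;       // bitmask of needed resource types
    bool takeTwo  = false;    // permission to grab two resources at once
};

struct Operation {
    uint32_t requiredMask;    // resource types the operation requires
    int count;                // how many resources it needs
};

// Select a queue for the operation based on pool availability, then record its
// needs and set the take-two bit; queue 0 is assumed to hold starved operations.
int enqueue(Operation op, int availableInPool, std::vector<QueueState>& queues) {
    int q = (availableInPool >= op.count) ? 1 : 0;   // illustrative selection rule
    queues[q].needs |= op.requiredMask;              // resource needs register
    queues[q].takeTwo = true;                        // take-two bit for this queue
    return q;
}

int main() {
    std::vector<QueueState> queues(2);
    int q = enqueue({0b0011, 2}, /*availableInPool=*/1, queues);
    std::cout << "queued on " << q << ", needs=" << queues[q].needs
              << ", takeTwo=" << std::boolalpha << queues[q].takeTwo << '\n';
}
```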
Abstract:
Embodiments of the present invention are directed to a computer-implemented method for ownership tracking updates across multiple simultaneous operations. A non-limiting example of the computer-implemented method includes receiving, by a cache directory control circuit, a message to update a cache directory entry. The method further includes, in response, updating, by the cache directory control circuit, the cache directory entry, and generating a reverse compare signal including an updated ownership vector of a memory line corresponding to the cache directory entry. The method further includes sending the reverse compare signal to a cache controller associated with the cache directory entry.
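The update-and-notify flow can be sketched as follows, assuming a bit-vector ownership representation and a callback standing in for the path from the directory control circuit to the cache controller; all names are illustrative.

```cpp
#include <bitset>
#include <functional>
#include <iostream>

constexpr int kControllers = 16;   // hypothetical number of cache controllers

struct DirectoryEntry {
    std::bitset<kControllers> ownershipVector;   // which controllers own/share the line
};

// Reverse compare signal carrying the updated ownership vector of the memory
// line back to the cache controller tracking it.
struct ReverseCompare {
    unsigned lineAddress;
    std::bitset<kControllers> updatedOwnership;
};

// Directory control: apply the update, then send the new ownership vector via
// a reverse compare to the associated cache controller.
void updateDirectory(DirectoryEntry& entry, unsigned lineAddress, int newOwner,
                     const std::function<void(const ReverseCompare&)>& sendToController) {
    entry.ownershipVector.reset();
    entry.ownershipVector.set(newOwner);                      // update the entry
    sendToController({lineAddress, entry.ownershipVector});   // reverse compare signal
}

int main() {
    DirectoryEntry entry;
    updateDirectory(entry, 0x80, /*newOwner=*/3, [](const ReverseCompare& rc) {
        std::cout << "reverse compare for line 0x" << std::hex << rc.lineAddress
                  << " ownership=" << rc.updatedOwnership << '\n';
    });
}
```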
Abstract:
An aspect includes interlocking operations in an address-sliced cache system. A computer-implemented method includes determining whether a dynamic memory relocation operation is in process in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is in process, a key operation is serialized to maintain a sequenced order of completion of the key operation across a plurality of slices and pipes in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is not in process, a plurality of key operation requests is allowed to launch across two or more of the slices and pipes in parallel in the address-sliced cache system while ensuring that only one instance of the key operation is in process across all of the slices and pipes at a same time.
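A minimal sketch of the interlock, assuming a single global in-process flag and a mutex/condition-variable pair standing in for the hardware serialization; when no relocation is in process, requests may race in parallel but only one instance launches, and while a dynamic memory relocation is in process the serialized path is used instead.

```cpp
#include <atomic>
#include <condition_variable>
#include <iostream>
#include <mutex>

// Hypothetical interlock: at most one key operation is in process across all
// slices and pipes; while dynamic memory relocation (DMR) is in process, key
// operations block and launch in arrival order rather than racing in parallel.
class KeyOpInterlock {
public:
    void setDmrInProcess(bool v) { dmrInProcess_ = v; }

    // Launch a key operation from any slice/pipe. Returns true if it launched.
    bool launch() {
        if (dmrInProcess_) {
            // Serialized path: wait until the previous key operation completes.
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !keyOpActive_.load(); });
            keyOpActive_ = true;
            return true;
        }
        // Parallel path: requests race, but only one wins the in-process flag.
        bool expected = false;
        return keyOpActive_.compare_exchange_strong(expected, true);
    }

    void complete() {
        {
            std::lock_guard<std::mutex> g(m_);
            keyOpActive_ = false;
        }
        cv_.notify_one();
    }

private:
    std::atomic<bool> keyOpActive_{false};
    std::atomic<bool> dmrInProcess_{false};
    std::mutex m_;
    std::condition_variable cv_;
};

int main() {
    KeyOpInterlock il;
    std::cout << std::boolalpha << il.launch() << ' '   // parallel path: first wins
              << il.launch() << '\n';                   // concurrent attempt refused
    il.complete();
    il.setDmrInProcess(true);
    std::cout << il.launch() << '\n';                   // serialized path under DMR
    il.complete();
}
```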
Abstract:
In an approach for taking corrupt portions of cache offline during runtime, a notification of a section of a cache to be taken offline is received, wherein the section includes one or more sets in one or more indexes of the cache. An indication is associated with each set of the one or more sets in a first index of the one or more indexes, wherein the indication marks the respective set as unusable for future operations. Data is purged from the one or more sets in the first index of the cache. Each set of the one or more sets in the first index is marked as invalid.
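The offline sequence can be sketched per index, assuming simple per-set valid/offline/dirty flags; the purge step here is a placeholder for the actual cast-out of data.

```cpp
#include <iostream>
#include <vector>

// Hypothetical per-set directory state for one congruence class (index).
struct SetEntry {
    bool valid   = true;
    bool offline = false;   // "unusable for future operations" indication
    bool dirty   = false;
};

using Index = std::vector<SetEntry>;   // all sets at one cache index

// Take the listed sets of one index offline during runtime: mark them unusable,
// purge any data they hold, then mark them invalid.
void takeSetsOffline(Index& index, const std::vector<int>& setsToOffline) {
    for (int s : setsToOffline) {
        index[s].offline = true;                 // block future allocation to the set
    }
    for (int s : setsToOffline) {
        if (index[s].valid && index[s].dirty) {
            // purge: a real implementation would cast the line out to memory here
            index[s].dirty = false;
        }
        index[s].valid = false;                  // mark the set invalid
    }
}

int main() {
    Index idx(8);
    idx[2].dirty = true;
    takeSetsOffline(idx, {2, 5});
    std::cout << std::boolalpha << "set 2 offline=" << idx[2].offline
              << " valid=" << idx[2].valid << '\n';
}
```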