-
1.
Publication No.: US20190018775A1
Publication Date: 2019-01-17
Application No.: US15651543
Filing Date: 2017-07-17
Inventors: Ekaterina M. Ambroladze, Timothy C. Bronson, Matthias Klein, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III, Lahiruka S. Winter
IPC Classes: G06F12/0831, G06F13/16
CPC Classes: G06F12/0831, G06F13/1615, G06F2212/60, G06F2212/621
Abstract: Embodiments include methods, systems, and computer program products for maintaining ordered memory access with parallel access data streams associated with a distributed shared memory system. The computer-implemented method includes performing, by a first cache, a key check associated with a first ordered data store. A first memory node signals to an input/output (I/O) controller that the first memory node is ready to begin pipelining of a second ordered data store into the first memory node. A second cache returns a key response to the first cache indicating that the pipelining of the second ordered data store can proceed. The first memory node sends a ready signal to the I/O controller indicating that the first memory node is ready to continue pipelining of the second ordered data store into the first memory node, wherein the ready signal is triggered by receipt of the key response.
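The handshake in this abstract can be sketched as an ordered event sequence. This is a minimal illustration only; the caches, memory node, and I/O controller are modeled as log entries, and all names are assumptions, not taken from the patent:

```python
# Minimal sketch of the ordered-pipelining handshake described in the
# abstract. All names and the event encoding are illustrative.

class Log:
    """Records the order in which handshake events occur."""
    def __init__(self):
        self.events = []
    def record(self, event):
        self.events.append(event)

def run_handshake(log):
    # 1. The first cache performs a key check for the first ordered store.
    log.record("cache1: key check (first ordered store)")
    # 2. The first memory node tells the I/O controller it is ready to
    #    begin pipelining the second ordered store.
    log.record("node1 -> io: ready to begin pipelining (second store)")
    # 3. The second cache returns a key response to the first cache,
    #    permitting the pipelining to proceed.
    log.record("cache2 -> cache1: key response (proceed)")
    # 4. Receipt of the key response triggers the ready-to-continue
    #    signal from the memory node to the I/O controller.
    log.record("node1 -> io: ready to continue pipelining")
    return log.events

events = run_handshake(Log())
```

The point of the sketch is the ordering constraint: the ready-to-continue signal never precedes the key response that triggers it.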
-
2.
Publication No.: US20180336134A1
Publication Date: 2018-11-22
Application No.: US15598837
Filing Date: 2017-05-18
Inventors: Michael A. Blake, Timothy C. Bronson, Ashraf ElSharif, Kenneth D. Klapproth, Vesselina K. Papazova, Guy G. Tracy
IPC Classes: G06F12/0817, G06F12/0891
CPC Classes: G06F12/0817, G06F12/0891, G06F2212/1024, G06F2212/60, G06F2212/621
Abstract: Embodiments of the present invention are directed to a computer-implemented method for ownership tracking updates across multiple simultaneous operations. A non-limiting example of the computer-implemented method includes receiving, by a cache directory control circuit, a message to update a cache directory entry. The method further includes, in response, updating, by the cache directory control circuit, the cache directory entry, and generating a reverse compare signal including an updated ownership vector of a memory line corresponding to the cache directory entry. The method further includes sending the reverse compare signal to a cache controller associated with the cache directory entry.
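The update-then-signal flow can be sketched with an ownership vector encoded as a bitmask, one bit per cache. The class name, the bitmask encoding, and the message shape are all assumptions for illustration, not details from the patent:

```python
# Sketch of a cache-directory update that emits a reverse compare
# signal carrying the updated ownership vector. The bitmask encoding
# (one bit per cache) is an illustrative assumption.

class CacheDirectory:
    def __init__(self, num_caches):
        self.num_caches = num_caches
        self.entries = {}  # memory line address -> ownership bitmask

    def update_entry(self, line_addr, cache_id, acquire=True):
        """Update one cache's ownership bit and return a reverse
        compare message containing the full updated ownership vector."""
        vec = self.entries.get(line_addr, 0)
        if acquire:
            vec |= (1 << cache_id)
        else:
            vec &= ~(1 << cache_id)
        self.entries[line_addr] = vec
        # The reverse compare signal goes to the cache controller
        # associated with this directory entry.
        return {"line": line_addr, "ownership_vector": vec, "to": cache_id}

directory = CacheDirectory(num_caches=4)
msg1 = directory.update_entry(0x1000, cache_id=1)                  # cache 1 acquires
msg2 = directory.update_entry(0x1000, cache_id=3)                  # cache 3 acquires
msg3 = directory.update_entry(0x1000, cache_id=1, acquire=False)   # cache 1 drops
```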
-
3.
Publication No.: US20160110288A1
Publication Date: 2016-04-21
Application No.: US14846875
Filing Date: 2015-09-07
IPC Classes: G06F12/08
CPC Classes: G06F12/0815, G06F12/0813, G06F12/0835, G06F12/1416, G06F2212/314, G06F2212/50, G06F2212/621, G06F2212/622
Abstract: A cache coherency management facility to reduce latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the node and the another node are local to one another as defined by predetermined criteria. The node requesting exclusive access checks a locality cache coherency state of the another node, the locality cache coherency state being specific to the another node and indicating whether the another node has access to the cache line. Based on the checking indicating that the another node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line. The determination is independent of transmission of information relating to the cache line from one or more other nodes of the one or more other regions of nodes.
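The latency-saving shortcut is that a per-node locality state can prove the line is held inside the requester's own region, so no remote region needs to be consulted. A minimal sketch, with all names and the state encoding assumed for illustration:

```python
# Sketch of the locality-based exclusive-access shortcut: a requesting
# node checks another local node's locality cache coherency state and,
# if that node has access to the line, grants exclusive access using
# only local-region information. Names are illustrative assumptions.

def request_exclusive(line_addr, local_peer, locality_state):
    """locality_state maps (node, line_addr) -> True when that node has
    access to the line. Returns True when exclusive access can be
    granted without information from other regions."""
    if locality_state.get((local_peer, line_addr), False):
        # The peer's locality state proves the line is held within this
        # region, so the grant needs no traffic to remote regions.
        return True
    # Otherwise the (unmodeled) full coherency protocol would run.
    return False

state = {("nodeB", 0x40): True}
granted_fast = request_exclusive(0x40, "nodeB", state)  # local shortcut applies
granted_slow = request_exclusive(0x80, "nodeB", state)  # falls back to full protocol
```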
-
4.
Publication No.: US20160139831A1
Publication Date: 2016-05-19
Application No.: US14547663
Filing Date: 2014-11-19
Inventors: Timothy C. Bronson, Garrett M. Drapala, Mark S. Farrell, Hieu T. Huynh, William J. Lewis, Pak-Kin Mak, Craig R. Walters
CPC Classes: G06F3/0617, G06F3/065, G06F3/0664, G06F3/0683, G06F12/0802, G06F12/0811, G06F12/0837, G06F12/0842, G06F2212/1041, G06F2212/283, G06F2212/608
Abstract: A computing device is provided and includes a first physical memory device, a second physical memory device, and a hypervisor configured to assign resources of the first and second physical memory devices to a logical partition. The hypervisor configures a dynamic memory relocation (DMR) mechanism to move entire storage increments currently processed by the logical partition between the first and second physical memory devices in a manner that is substantially transparent to the logical partition.
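One common way such relocation stays transparent is an indirection table: the partition addresses increments logically, and the hypervisor retargets the mapping after copying. This is a sketch under that assumption; the patent does not specify this mechanism, and all names are illustrative:

```python
# Sketch of dynamic memory relocation (DMR) via an indirection table.
# The table-based design is an assumption for illustration; the
# abstract only states that the move is transparent to the partition.

class Hypervisor:
    def __init__(self):
        self.devices = {"dev0": {}, "dev1": {}}  # device -> {increment: data}
        self.mapping = {}                        # logical increment -> device

    def assign(self, increment, device, data):
        self.devices[device][increment] = data
        self.mapping[increment] = device

    def read(self, increment):
        # The partition always reads through the mapping, so a
        # relocation changes the mapping, not the partition's view.
        return self.devices[self.mapping[increment]][increment]

    def relocate(self, increment, target_device):
        source = self.mapping[increment]
        # Copy the entire storage increment, then retarget the mapping.
        self.devices[target_device][increment] = self.devices[source].pop(increment)
        self.mapping[increment] = target_device

hv = Hypervisor()
hv.assign(7, "dev0", b"partition data")
before = hv.read(7)
hv.relocate(7, "dev1")
after = hv.read(7)
```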
-
5.
Publication No.: US20140258621A1
Publication Date: 2014-09-11
Application No.: US13784958
Filing Date: 2013-03-05
Inventors: Timothy C. Bronson, Garrett M. Drapala, Rebecca M. Gott, Pak-Kin Mak, Vijayalakshmi Srinivasan, Craig R. Walters
IPC Classes: G06F12/08
CPC Classes: G06F12/0833, G06F12/0811, G06F12/0831, G06F2212/283, G06F2212/621
Abstract: Embodiments relate to a non-data inclusive coherent (NIC) directory for a symmetric multiprocessor (SMP) of a computer. An aspect includes determining a first eviction entry of a highest-level cache in a multilevel caching structure of the first processor node of the SMP. Another aspect includes determining that the NIC directory is not full. Another aspect includes determining that the first eviction entry of the highest-level cache is owned by a lower-level cache in the multilevel caching structure. Another aspect includes, based on the NIC directory not being full and based on the first eviction entry of the highest-level cache being owned by the lower-level cache, installing an address of the first eviction entry of the highest-level cache in a first new entry in the NIC directory. Another aspect includes invalidating the first eviction entry in the highest-level cache.
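The aspects above compose into a single eviction path: install the address in the NIC directory only when the directory has room and a lower-level cache still owns the line, then invalidate the highest-level entry. A minimal sketch, with all structures and names assumed for illustration:

```python
# Sketch of the NIC-directory install path. Sets stand in for the
# directory and caches; the real hardware structures are not modeled.

def handle_eviction(addr, nic_directory, nic_capacity, lower_level_owned,
                    highest_level_cache):
    """Returns True when the evicted address was installed in the NIC
    directory (address only; the NIC directory holds no data)."""
    if len(nic_directory) < nic_capacity and addr in lower_level_owned:
        nic_directory.add(addr)             # install the address
        highest_level_cache.discard(addr)   # invalidate the evicted entry
        return True
    highest_level_cache.discard(addr)       # plain eviction otherwise
    return False

nic = set()
highest = {0x100, 0x200}        # lines present in the highest-level cache
lower_owned = {0x100}           # lines owned by a lower-level cache
installed = handle_eviction(0x100, nic, nic_capacity=8,
                            lower_level_owned=lower_owned,
                            highest_level_cache=highest)
skipped = handle_eviction(0x200, nic, nic_capacity=8,
                          lower_level_owned=lower_owned,
                          highest_level_cache=highest)
```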
-
6.
Publication No.: US20130042144A1
Publication Date: 2013-02-14
Application No.: US13655088
Filing Date: 2012-10-18
CPC Classes: G11C29/08, G06F11/1666, G06F11/20, G11C11/401, G11C29/883, G11C2029/0401
Abstract: A computer-implemented method of embedded dynamic random access memory (EDRAM) macro disablement. The method includes isolating an EDRAM macro of a cache memory bank, the cache memory bank being divided into at least three rows of a plurality of EDRAM macros, the EDRAM macro being associated with one of the at least three rows. Each line of the EDRAM macro is iteratively tested, the testing including attempting at least one write operation at each line of the EDRAM macro. It is determined that an error occurred during the testing. Write operations for an entire row of EDRAM macros associated with the EDRAM macro are disabled based on the determining.
-
7.
Publication No.: US20180341587A1
Publication Date: 2018-11-29
Application No.: US15800369
Filing Date: 2017-11-01
Inventors: Michael A. Blake, Timothy C. Bronson, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III
IPC Classes: G06F12/084, G06F12/0862
Abstract: Embodiments of the present invention are directed to managing a shared high-level cache for dual clusters of fully connected integrated circuit multiprocessors. An example of a computer-implemented method includes: providing a drawer comprising a plurality of clusters, each of the plurality of clusters comprising a plurality of processors; providing a shared cache integrated circuit to manage a shared cache memory among the plurality of clusters; receiving, by the shared cache integrated circuit, an operation of one of a plurality of operation types from one of the plurality of processors; and processing, by the shared cache integrated circuit, the operation based at least in part on the operation type of the operation according to a set of rules for processing the operation type.
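The receive-and-dispatch step can be sketched as a rule table keyed by operation type. The operation types, rule bodies, and message fields below are assumptions for illustration; the patent does not enumerate them:

```python
# Sketch of per-operation-type dispatch in a shared-cache controller.
# Operation types and their rules here are illustrative assumptions.

def make_rule_table():
    return {
        "fetch": lambda op: f"fetch line {op['addr']:#x} for cluster {op['cluster']}",
        "store": lambda op: f"store line {op['addr']:#x} from cluster {op['cluster']}",
        "invalidate": lambda op: f"invalidate line {op['addr']:#x}",
    }

def process(op, rules):
    """Process one operation according to the rule for its type."""
    try:
        return rules[op["type"]](op)
    except KeyError:
        raise ValueError(f"no rule for operation type {op['type']!r}")

rules = make_rule_table()
result = process({"type": "fetch", "addr": 0x2000, "cluster": 1}, rules)
```

Keeping the rules in a table rather than a chain of conditionals mirrors the abstract's framing: the behavior is selected by operation type, and adding a type means adding a rule.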
-
8.
Publication No.: US20180341586A1
Publication Date: 2018-11-29
Application No.: US15606055
Filing Date: 2017-05-26
Inventors: Michael A. Blake, Timothy C. Bronson, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III
IPC Classes: G06F12/084, G06F12/0862
Abstract: Embodiments of the present invention are directed to managing a shared high-level cache for dual clusters of fully connected integrated circuit multiprocessors. An example of a computer-implemented method includes: providing a drawer comprising a plurality of clusters, each of the plurality of clusters comprising a plurality of processors; providing a shared cache integrated circuit to manage a shared cache memory among the plurality of clusters; receiving, by the shared cache integrated circuit, an operation of one of a plurality of operation types from one of the plurality of processors; and processing, by the shared cache integrated circuit, the operation based at least in part on the operation type of the operation according to a set of rules for processing the operation type.
-
9.
Publication No.: US20180285277A1
Publication Date: 2018-10-04
Application No.: US15472610
Filing Date: 2017-03-29
IPC Classes: G06F12/0877
CPC Classes: G06F12/0877, G06F12/084, G06F13/18, G06F2212/1016, G06F2212/502
Abstract: Embodiments of the present invention are directed to hot cache line arbitration. An example of a computer-implemented method for hot cache line arbitration includes detecting, by a processing device, a hot cache line scenario. The computer-implemented method further includes tracking, by the processing device, hot cache line requests from requesters to determine subsequent satisfaction of the requests. The computer-implemented method further includes facilitating, by the processing device, servicing of the requests according to a hierarchy of the requesters.
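The three steps (detect, track, service by hierarchy) can be sketched as follows. The request-count threshold and the numeric rank scheme are assumptions for illustration; the patent does not specify how the hierarchy is encoded:

```python
# Sketch of hot-cache-line arbitration: detect contested lines, then
# order pending requesters by an assumed hierarchy rank (lower value =
# higher priority) so each request's satisfaction can be tracked.

def detect_hot_lines(request_counts, threshold):
    """A line is treated as 'hot' when its concurrent request count
    reaches the threshold. The threshold is an illustrative knob."""
    return {addr for addr, n in request_counts.items() if n >= threshold}

def service_order(pending, hierarchy):
    """Sort pending requesters for a hot line by their hierarchy rank,
    giving the order in which their requests will be serviced."""
    return sorted(pending, key=lambda req: hierarchy[req])

counts = {0x900: 3, 0xA00: 1}
hot = detect_hot_lines(counts, threshold=2)
order = service_order(["core2", "io0", "core7"],
                      {"io0": 0, "core2": 1, "core7": 2})
```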
-
10.
Publication No.: US09798663B2
Publication Date: 2017-10-24
Application No.: US14846875
Filing Date: 2015-09-07
IPC Classes: G06F12/0815, G06F12/14, G06F12/0813, G06F12/0831
CPC Classes: G06F12/0815, G06F12/0813, G06F12/0835, G06F12/1416, G06F2212/314, G06F2212/50, G06F2212/621, G06F2212/622
Abstract: A cache coherency management facility to reduce latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the node and the another node are local to one another as defined by predetermined criteria. The node requesting exclusive access checks a locality cache coherency state of the another node, the locality cache coherency state being specific to the another node and indicating whether the another node has access to the cache line. Based on the checking indicating that the another node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line. The determination is independent of transmission of information relating to the cache line from one or more other nodes of the one or more other regions of nodes.