-
Publication Number: US20180052605A1
Publication Date: 2018-02-22
Application Number: US15243385
Application Date: 2016-08-22
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: DEREK E. WILLIAMS , GUY L. GUTHRIE , SANJEEV GHAI , WILLIAM J. STARKE
IPC: G06F3/06 , G06F12/0897 , G06F12/10 , G06F13/40
CPC classification number: G06F3/065 , G06F3/061 , G06F3/0656 , G06F3/0673 , G06F12/0897 , G06F12/10 , G06F13/4068 , G06F2212/60
Abstract: A data processing system includes a processor core having a store-through upper level cache and a store-in lower level cache. In response to a first instruction, the processor core generates a copy-type request and transmits the copy-type request to the lower level cache, where the copy-type request specifies a source real address. In response to a second instruction, the processor core generates a paste-type request and transmits the paste-type request to the lower level cache, where the paste-type request specifies a destination real address. In response to receipt of the copy-type request, the lower level cache copies a data granule from a storage location specified by the source real address into a non-architected buffer, and in response to receipt of the paste-type request, the lower level cache writes the data granule from the non-architected buffer to a storage location specified by the destination real address.
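A minimal Python sketch of the copy-then-paste flow the abstract describes: a copy-type request stages a data granule from the source real address into a non-architected buffer, and a paste-type request writes that granule to the destination real address. Class and method names (LowerLevelCache, copy_request, paste_request) and the granule size are illustrative assumptions, not the patent's terminology.

```python
# Illustrative sketch only; names and granule size are assumptions.

GRANULE_SIZE = 128  # assumed data granule size in bytes


class LowerLevelCache:
    def __init__(self, memory: dict[int, bytes]):
        self.memory = memory   # real address -> data granule
        self._buffer = None    # non-architected buffer, not visible to software

    def copy_request(self, source_real_addr: int) -> None:
        # Copy-type request: stage the granule at the source real address
        # into the non-architected buffer.
        self._buffer = self.memory.get(source_real_addr, bytes(GRANULE_SIZE))

    def paste_request(self, destination_real_addr: int) -> None:
        # Paste-type request: write the buffered granule to the destination
        # real address.
        if self._buffer is None:
            raise RuntimeError("paste-type request with no preceding copy-type request")
        self.memory[destination_real_addr] = self._buffer
        self._buffer = None


if __name__ == "__main__":
    mem = {0x1000: b"A" * GRANULE_SIZE}
    l2 = LowerLevelCache(mem)
    l2.copy_request(0x1000)   # generated in response to the first instruction
    l2.paste_request(0x2000)  # generated in response to the second instruction
    assert mem[0x2000] == b"A" * GRANULE_SIZE
```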
-
Publication Number: US20180032439A1
Publication Date: 2018-02-01
Application Number: US15222769
Application Date: 2016-07-28
Applicant: Dell Products L.P.
Inventor: John E. Jenne , Stuart Allen Berke , Vadhiraj Sankaranarayanan
IPC: G06F12/0891 , G06F12/0897
CPC classification number: G06F12/0891 , G06F12/0804 , G06F12/0893 , G06F12/0897 , G06F2212/1044 , G06F2212/60
Abstract: An information handling system may implement a method for controlling cache flush size by limiting the amount of modified cached data in a data cache at any given time. The method may include keeping a count of the number of modified cache lines (or modified cache lines targeted to persistent memory) in the cache, determining that a threshold value for modified cache lines is exceeded and, in response, flushing some or all modified cache lines to persistent memory. The threshold value may represent a maximum number or percentage of modified cache lines. The cache controller may include a field for each cache line indicating whether it targets persistent memory. Limiting the amount of modified cached data at any given time may reduce the number of cache lines to be flushed in response to a power loss event to a number that can be flushed using the available hold-up energy.
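A minimal Python sketch of the described threshold-based limiting of modified cached data: the controller counts modified lines that target persistent memory and flushes them once the count exceeds a threshold. Class names, the per-line flag, and the flush-everything policy are assumptions for illustration.

```python
# Illustrative sketch only; names and flush policy are assumptions.


class CacheLine:
    def __init__(self, addr: int, data: bytes, targets_pmem: bool):
        self.addr = addr
        self.data = data
        self.targets_pmem = targets_pmem  # per-line field: backed by persistent memory?
        self.modified = False


class WriteLimitedCache:
    def __init__(self, dirty_threshold: int):
        self.dirty_threshold = dirty_threshold  # max modified lines targeting pmem
        self.lines: dict[int, CacheLine] = {}
        self.dirty_pmem_count = 0
        self.persistent_memory: dict[int, bytes] = {}

    def write(self, addr: int, data: bytes, targets_pmem: bool = True) -> None:
        line = self.lines.get(addr)
        if line is None:
            line = CacheLine(addr, data, targets_pmem)
            self.lines[addr] = line
        if line.targets_pmem and not line.modified:
            self.dirty_pmem_count += 1
        line.data = data
        line.modified = True
        # Exceeding the threshold triggers a flush, bounding the worst-case
        # amount of data that must be written out on a power loss event.
        if self.dirty_pmem_count > self.dirty_threshold:
            self.flush_modified()

    def flush_modified(self) -> None:
        for line in self.lines.values():
            if line.modified and line.targets_pmem:
                self.persistent_memory[line.addr] = line.data
                line.modified = False
        self.dirty_pmem_count = 0
```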
-
Publication Number: US20180032429A1
Publication Date: 2018-02-01
Application Number: US15224134
Application Date: 2016-07-29
Applicant: Intel Corporation
Inventor: Min LIU , Zhenlin LUO , George VERGIS , Murugasamy K. NACHIMUTHU , Mohan J. KUMAR , Ross E. ZWISLER
IPC: G06F12/02 , G06F12/0873 , G06F12/0871 , G06F12/084 , G06F12/0842
CPC classification number: G06F12/023 , G06F12/084 , G06F12/0842 , G06F12/0871 , G06F12/0873 , G06F12/0897 , G06F2212/1016 , G06F2212/202 , G06F2212/205 , G06F2212/222 , G06F2212/225 , G06F2212/271 , G06F2212/305 , G06F2212/604
Abstract: A method is described. The method includes recognizing different latencies and/or bandwidths between different levels of a system memory and different memory access requestors of a computing system. The system memory includes the different levels and different technologies. The method also includes allocating each of the memory access requestors with a respective region of the system memory having an appropriate latency and/or bandwidth.
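A minimal Python sketch of the allocation idea: each memory access requestor is placed in a system-memory level whose latency and bandwidth fit its needs. The tier names, the numeric figures, and the "slowest level that still qualifies" policy are assumptions for illustration.

```python
# Illustrative sketch only; tiers, numbers, and placement policy are assumptions.

from dataclasses import dataclass


@dataclass
class MemoryLevel:
    name: str
    latency_ns: float
    bandwidth_gbs: float
    free_bytes: int


@dataclass
class Requestor:
    name: str
    max_latency_ns: float
    min_bandwidth_gbs: float
    needed_bytes: int


def allocate(requestors: list[Requestor], levels: list[MemoryLevel]) -> dict[str, str]:
    """Give each requestor a region of the slowest level that still meets its
    latency and bandwidth requirements, keeping faster levels free for
    requestors that genuinely need them."""
    placement = {}
    for req in sorted(requestors, key=lambda r: r.max_latency_ns):
        for level in sorted(levels, key=lambda l: l.latency_ns, reverse=True):
            if (level.latency_ns <= req.max_latency_ns
                    and level.bandwidth_gbs >= req.min_bandwidth_gbs
                    and level.free_bytes >= req.needed_bytes):
                level.free_bytes -= req.needed_bytes
                placement[req.name] = level.name
                break
    return placement


if __name__ == "__main__":
    levels = [MemoryLevel("near (DRAM)", 80, 100, 8 << 30),
              MemoryLevel("far (NVM)", 300, 20, 64 << 30)]
    reqs = [Requestor("GPU", 100, 80, 2 << 30),
            Requestor("NIC", 500, 10, 1 << 30)]
    print(allocate(reqs, levels))  # {'GPU': 'near (DRAM)', 'NIC': 'far (NVM)'}
```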
-
Publication Number: US20180004668A1
Publication Date: 2018-01-04
Application Number: US15199587
Application Date: 2016-06-30
Applicant: Intel Corporation
Inventor: Omid J. AZIZI , Alexandre Y. SOLOMATNIKOV , Amin FIROOZSHAHIAN , John P. STEVENSON , Mahesh MADDURY
IPC: G06F12/0846 , G06F12/0804
CPC classification number: G06F12/0804 , G06F3/0608 , G06F3/0641 , G06F3/0689 , G06F12/0864 , G06F12/0868 , G06F12/0871 , G06F12/0897 , G06F2212/1024 , G06F2212/311 , G06F2212/502
Abstract: A searchable hot content cache stores frequently accessed data values in accordance with embodiments. In one embodiment, a circuit includes interface circuitry to receive memory requests from a processor. The circuit includes hardware logic to determine that a number of the memory requests that is to access a value meets or exceeds a threshold. The circuit includes a storage array to store the value in an entry based on a determination that the number meets or exceeds the threshold. In response to receipt of a memory request from the processor to access the same value at a memory address, the hardware logic is to map the memory address to the entry of the storage array.
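A minimal Python sketch of the hot-content idea: requests for a value are counted, and once the count meets or exceeds a threshold the value is placed in a storage array entry and later requests for that value at a memory address are mapped to the entry. The class name, counting scheme, and entry-indexing function are assumptions, not the patent's hardware design.

```python
# Illustrative software model only; names and indexing are assumptions.

from collections import Counter


class HotContentCache:
    def __init__(self, threshold: int, num_entries: int):
        self.threshold = threshold
        self.entries = [None] * num_entries           # storage array of hot values
        self.addr_to_entry: dict[int, int] = {}       # memory address -> entry index
        self.value_counts: Counter[int] = Counter()   # requests observed per value

    def lookup(self, addr: int, backing_memory: dict[int, int]) -> int:
        # Serve from the storage array when the address already maps to an entry.
        if addr in self.addr_to_entry:
            return self.entries[self.addr_to_entry[addr]]

        value = backing_memory[addr]
        self.value_counts[value] += 1
        # Promote the value once the number of requests for it meets the
        # threshold, and map this address to the entry holding it.
        if self.value_counts[value] >= self.threshold:
            index = hash(value) % len(self.entries)
            self.entries[index] = value
            self.addr_to_entry[addr] = index
        return value
```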
-
Publication Number: US20170371784A1
Publication Date: 2017-12-28
Application Number: US15192542
Application Date: 2016-06-24
Applicant: Advanced Micro Devices, Inc.
Inventor: Johnathan R. Alsop , Bradford Beckmann
IPC: G06F12/0804 , G06F12/0811 , G06F12/084 , G06F12/0808 , G06F12/0842 , G06F12/0891 , G06F12/0897
CPC classification number: G06F12/0897 , G06F12/0808 , G06F12/0811 , G06F12/0842 , G06F12/0891 , G06F2212/6042
Abstract: A processing system includes one or more first caches and one or more first lock tables associated with the one or more first caches. The processing system also includes one or more processing units that each include a plurality of compute units for concurrently executing work-groups of work items, a plurality of second caches associated with the plurality of compute units and configured in a hierarchy with the one or more first caches, and a plurality of second lock tables associated with the plurality of second caches. The first and second lock tables indicate locking states of addresses of cache lines in the corresponding first and second caches on a per-line basis.
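A minimal Python sketch of per-line lock tables attached to caches at two levels of a hierarchy, with per-compute-unit tables under a shared one. The class names and the acquire/release ordering are assumptions used only to illustrate the per-line locking state the abstract describes.

```python
# Illustrative sketch only; names and locking order are assumptions.


class LockTable:
    """Tracks, per cache-line address, whether the line is locked."""

    def __init__(self):
        self._locked: set[int] = set()

    def try_lock(self, line_addr: int) -> bool:
        if line_addr in self._locked:
            return False
        self._locked.add(line_addr)
        return True

    def unlock(self, line_addr: int) -> None:
        self._locked.discard(line_addr)


class CacheHierarchy:
    def __init__(self, num_compute_units: int):
        self.shared_locks = LockTable()   # lock table of the shared (first) cache
        self.cu_locks = [LockTable() for _ in range(num_compute_units)]  # per-CU tables

    def acquire(self, cu: int, line_addr: int) -> bool:
        # Lock the line in the compute unit's own table first, then in the
        # shared table; back off if either level reports contention.
        if not self.cu_locks[cu].try_lock(line_addr):
            return False
        if not self.shared_locks.try_lock(line_addr):
            self.cu_locks[cu].unlock(line_addr)
            return False
        return True

    def release(self, cu: int, line_addr: int) -> None:
        self.shared_locks.unlock(line_addr)
        self.cu_locks[cu].unlock(line_addr)
```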
-
Publication Number: US09846646B1
Publication Date: 2017-12-19
Application Number: US15239141
Application Date: 2016-08-17
Applicant: C-Memory, LLC
Inventor: Yu-Hang Liu , Xian-He Sun
IPC: G06F12/00 , G06F12/0815 , G06F12/0811 , G06F11/14 , G06F12/08
CPC classification number: G06F12/0811 , G06F11/1425 , G06F11/34 , G06F12/08 , G06F12/0897 , G06F2212/1016 , G06F2212/601
Abstract: In one embodiment, the present disclosure describes a method of optimizing memory access in a hierarchical memory system. The method includes determining a request rate from an ith layer of the hierarchical memory system for each of n layers in the hierarchical memory system. The method also includes determining a supply rate from an (i+1)th layer of the hierarchical memory system for each of the n layers in the hierarchical memory system. The supply rate from the (i+1)th layer of the hierarchical memory system corresponds to the request rate from the ith layer of the hierarchical memory system. The method further includes adjusting a set of computer architecture parameters of the hierarchical memory system or a schedule associated with an instruction set to utilize heterogeneous computing resources within the hierarchical memory system to match a performance of each adjacent layer of the hierarchical memory system.
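A minimal Python sketch of comparing each layer's incoming request rate with that layer's supply capability as you walk down the hierarchy. The rate model here (miss traffic of layer i becomes the request rate seen by layer i+1, supply bounded by a per-layer figure) is my own simplifying assumption, not the patent's method.

```python
# Illustrative rate model only; the formulas and numbers are assumptions.

from dataclasses import dataclass


@dataclass
class Layer:
    name: str
    hit_ratio: float     # fraction of incoming requests satisfied at this layer
    supply_rate: float   # requests/second this layer can supply upward


def find_mismatches(layers: list[Layer], core_request_rate: float) -> list[str]:
    """Walk the hierarchy outward from the core and flag any layer whose
    supply rate cannot keep up with the request rate directed at it."""
    mismatched = []
    request_rate = core_request_rate
    for layer in layers:
        if request_rate > layer.supply_rate:
            mismatched.append(layer.name)
        # Only the misses of this layer become requests to the next layer.
        request_rate *= (1.0 - layer.hit_ratio)
    return mismatched


if __name__ == "__main__":
    hierarchy = [Layer("L1", 0.95, 4e9), Layer("L2", 0.80, 1e9), Layer("DRAM", 1.0, 2e8)]
    print(find_mismatches(hierarchy, core_request_rate=5e9))  # ['L1']
```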
-
Publication Number: US09824009B2
Publication Date: 2017-11-21
Application Number: US13725881
Application Date: 2012-12-21
Applicant: NVIDIA Corporation
Inventor: Anurag Chaudhary , Guillermo Juan Rozas
IPC: G06F12/08 , G06F12/0811 , G06F12/0815 , G06F12/0817 , G06F12/0897
CPC classification number: G06F12/0811 , G06F12/0815 , G06F12/0817 , G06F12/0897
Abstract: Systems and methods for coherency maintenance are presented. The systems and methods include utilization of multiple information state tracking approaches or protocols at different memory or storage levels. In one embodiment, a first coherency maintenance approach (e.g., similar to a MESI protocol, etc.) can be implemented at one storage level while a second coherency maintenance approach (e.g., similar to a MOESI protocol, etc.) can be implemented at another storage level. Information at a particular storage level or tier can be tracked by a set of local state indications and a set of essence state indications. The essence state indication can be tracked “externally” from a storage layer or tier directory (e.g., in a directory of another cache level, in a hub between cache levels, etc.). One storage level can control operations based upon the local state indications and another storage level can control operations based at least in part upon an essence state indication.
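A minimal Python sketch of the idea of mixing state-tracking schemes across levels: MESI-like local states kept at one storage level, and a MOESI-like "essence" state tracked externally in a hub directory. The class names and the single transition shown are illustrative assumptions, not the patent's protocol definition.

```python
# Illustrative sketch only; names and transitions are assumptions.

from enum import Enum, auto


class LocalState(Enum):       # MESI-like local state kept at one storage level
    MODIFIED = auto()
    EXCLUSIVE = auto()
    SHARED = auto()
    INVALID = auto()


class EssenceState(Enum):     # MOESI-like state tracked outside that storage level
    MODIFIED = auto()
    OWNED = auto()
    EXCLUSIVE = auto()
    SHARED = auto()
    INVALID = auto()


class HubDirectory:
    """Tracks the essence state of lines 'externally' to the caches,
    e.g. in a hub between cache levels."""

    def __init__(self):
        self.essence: dict[int, EssenceState] = {}

    def on_read_shared(self, addr: int) -> None:
        # MOESI-style behaviour: a dirty line read by another cache becomes
        # OWNED instead of forcing an immediate writeback.
        if self.essence.get(addr) == EssenceState.MODIFIED:
            self.essence[addr] = EssenceState.OWNED
        else:
            self.essence.setdefault(addr, EssenceState.SHARED)


class Level1Cache:
    """Controls its own operations purely from the MESI-like local states."""

    def __init__(self):
        self.local: dict[int, LocalState] = {}

    def write(self, addr: int, hub: HubDirectory) -> None:
        self.local[addr] = LocalState.MODIFIED
        hub.essence[addr] = EssenceState.MODIFIED
```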
-
Publication Number: US09823854B2
Publication Date: 2017-11-21
Application Number: US15074444
Application Date: 2016-03-18
Applicant: QUALCOMM Incorporated
Inventor: Andres Alejandro Oportus Valenzuela , Amin Ansari , Richard Senior , Nieyan Geng , Anand Janakiraman , Gurvinder Singh Chhabra
IPC: G06F3/06 , G06F12/0897
CPC classification number: G06F3/0611 , G06F3/0626 , G06F3/0659 , G06F3/0661 , G06F3/0665 , G06F3/0673 , G06F12/023 , G06F12/0284 , G06F12/0615 , G06F12/0897 , G06F2212/1016 , G06F2212/1024 , G06F2212/1041 , G06F2212/1044 , G06F2212/1056 , G06F2212/152 , G06F2212/401 , G06F2212/65
Abstract: Aspects disclosed relate to a priority-based access of compressed memory lines in a processor-based system. In an aspect, a memory access device in the processor-based system receives a read access request for memory. If the read access request is higher priority, the memory access device uses the logical memory address of the read access request as the physical memory address to access the compressed memory line. However, if the read access request is lower priority, the memory access device translates the logical memory address of the read access request into one or more physical memory addresses in memory space left by the compression of higher priority lines. In this manner, the efficiency of higher priority compressed memory accesses is improved by removing a level of indirection otherwise required to find and access compressed memory lines.
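A minimal Python sketch of the priority-based read path: a higher-priority request uses the logical address directly as the physical address, while a lower-priority request is translated into one or more physical addresses carved out of the space left by compressing higher-priority lines. The class and field names are assumptions for illustration.

```python
# Illustrative sketch only; names and the mapping structure are assumptions.


class CompressedMemoryAccess:
    def __init__(self, physical_memory: dict[int, bytes],
                 low_priority_map: dict[int, list[int]]):
        self.physical_memory = physical_memory
        # logical address -> physical addresses in the gaps that compression
        # of higher-priority lines left behind
        self.low_priority_map = low_priority_map

    def read(self, logical_addr: int, high_priority: bool) -> bytes:
        if high_priority:
            # No level of indirection: the logical address is used directly
            # as the physical address of the compressed memory line.
            return self.physical_memory[logical_addr]
        # Lower priority: translate first, then gather the (possibly split) line.
        fragments = [self.physical_memory[pa]
                     for pa in self.low_priority_map[logical_addr]]
        return b"".join(fragments)
```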
-
Publication Number: US20170315923A1
Publication Date: 2017-11-02
Application Number: US15259074
Application Date: 2016-09-08
Applicant: VMWARE, INC.
Inventor: SANKARAN SIVATHANU , SAI INABATTINI
IPC: G06F12/0897 , G06F12/0888 , G06F9/455
CPC classification number: G06F12/0897 , G06F9/45533 , G06F12/0888 , G06F12/128 , G06F2212/604
Abstract: Exemplary methods, apparatuses, and systems receive, from a client, a request to access data. Whether metadata for the data is stored in a first caching layer is determined. In response to the metadata for the data not being stored in the first caching layer, it is determined whether the data is stored in the second caching layer. In response to determining that the data is stored in the second caching layer, the data is retrieved from the second caching layer. In response to determining that the data is not stored in the second caching layer, writing of the data to the second caching layer is bypassed. The retrieved data is sent to the client.
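A minimal Python sketch of the lookup path the abstract walks through: a metadata miss in the first caching layer leads to a check of the second layer, and a miss there causes the data to be read from backing storage while bypassing the write into the second layer. Names and the backing-store representation are assumptions.

```python
# Illustrative sketch only; names and data structures are assumptions.


class TwoLayerCache:
    def __init__(self, backing_store: dict[str, bytes]):
        self.first_layer_metadata: set[str] = set()   # metadata kept in the first layer
        self.second_layer: dict[str, bytes] = {}      # data kept in the second layer
        self.backing_store = backing_store

    def handle_request(self, key: str) -> bytes:
        if key in self.first_layer_metadata:
            # Metadata hit in the first layer: serve through the cache.
            return self.second_layer.get(key, self.backing_store[key])

        # Metadata miss in the first layer: consult the second caching layer.
        if key in self.second_layer:
            return self.second_layer[key]

        # Miss in the second layer as well: read from backing storage and
        # bypass writing the data into the second caching layer.
        return self.backing_store[key]
```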
-
Publication Number: US20170315919A1
Publication Date: 2017-11-02
Application Number: US15141030
Application Date: 2016-04-28
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: GUY L. GUTHRIE , HUGH SHEN , DEREK E. WILLIAMS
IPC: G06F12/0893 , G06F12/0891
CPC classification number: G06F12/0893 , G06F9/30 , G06F9/30043 , G06F9/30087 , G06F9/3009 , G06F9/3836 , G06F9/3838 , G06F12/0891 , G06F12/0897 , G06F2212/1024 , G06F2212/60
Abstract: A technique for operating a lower level cache memory of a data processing system includes receiving an operation that is associated with a first thread. Logical partition (LPAR) information for the operation is used to limit dependencies in a dependency data structure of a store queue of the lower level cache memory that are set and to remove dependencies that are otherwise unnecessary.
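A minimal Python sketch of using logical-partition (LPAR) information to limit which dependencies are recorded in a store-queue dependency structure: a newly queued store only depends on older entries from the same LPAR that touch the same cache line, so cross-partition dependencies are never set. The field names and the same-line condition are assumptions for illustration.

```python
# Illustrative sketch only; field names and matching rules are assumptions.

from dataclasses import dataclass, field


@dataclass
class StoreQueueEntry:
    thread_id: int
    lpar_id: int
    line_addr: int
    depends_on: set[int] = field(default_factory=set)  # indices of older entries


class StoreQueue:
    def __init__(self):
        self.entries: list[StoreQueueEntry] = []

    def enqueue(self, thread_id: int, lpar_id: int, line_addr: int) -> StoreQueueEntry:
        entry = StoreQueueEntry(thread_id, lpar_id, line_addr)
        for idx, older in enumerate(self.entries):
            same_line = older.line_addr == line_addr
            same_lpar = older.lpar_id == lpar_id
            # LPAR information limits the dependencies that are set: stores
            # from different logical partitions cannot interact, so no
            # dependency is recorded against them.
            if same_line and same_lpar:
                entry.depends_on.add(idx)
        self.entries.append(entry)
        return entry
```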