SYSTEM AND METHOD FOR CONTROLLING CACHE FLUSH SIZE

    Publication Number: US20180032439A1

    Publication Date: 2018-02-01

    Application Number: US15222769

    Application Date: 2016-07-28

    Abstract: An information handling system may implement a method for controlling cache flush size by limiting the amount of modified cached data in a data cache at any given time. The method may include keeping a count of the number of modified cache lines (or modified cache lines targeted to persistent memory) in the cache, determining that a threshold value for modified cache lines is exceeded and, in response, flushing some or all modified cache lines to persistent memory. The threshold value may represent a maximum number or percentage of modified cache lines. The cache controller may include a field for each cache line indicating whether it targets persistent memory. Limiting the amount of modified cached data at any given time may reduce the number of cache lines to be flushed in response to a power loss event to a number that can be flushed using the available hold-up energy.
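
    As a rough illustration of the bookkeeping the abstract describes (a per-line flag for persistent-memory targets, a running count of modified lines, and a threshold that triggers flushing), the C sketch below may help. The structure names, the DIRTY_PM_LIMIT threshold, and the flush_line_to_pm() primitive are assumptions for illustration only, not the patent's implementation.

    /*
     * Illustrative sketch only -- not the patented implementation.
     */
    #include <stdbool.h>
    #include <stddef.h>

    #define CACHE_LINES    4096
    #define DIRTY_PM_LIMIT 512   /* assumed threshold: max modified lines targeting persistent memory */

    typedef struct {
        bool dirty;        /* line holds modified data                        */
        bool targets_pm;   /* line's backing address is in persistent memory  */
    } cache_line_t;

    typedef struct {
        cache_line_t lines[CACHE_LINES];
        size_t       dirty_pm_count;   /* running count of modified PM-targeted lines */
    } cache_ctrl_t;

    extern void flush_line_to_pm(cache_ctrl_t *c, size_t idx);  /* platform-specific flush primitive */

    /* Called when a store modifies a cache line. */
    static void on_line_modified(cache_ctrl_t *c, size_t idx)
    {
        cache_line_t *l = &c->lines[idx];

        if (!l->dirty) {
            l->dirty = true;
            if (l->targets_pm)
                c->dirty_pm_count++;
        }

        /* If the threshold is exceeded, flush modified PM-targeted lines until
         * the count drops back under the limit. */
        if (c->dirty_pm_count > DIRTY_PM_LIMIT) {
            for (size_t i = 0; i < CACHE_LINES && c->dirty_pm_count > DIRTY_PM_LIMIT; i++) {
                cache_line_t *v = &c->lines[i];
                if (v->dirty && v->targets_pm) {
                    flush_line_to_pm(c, i);
                    v->dirty = false;
                    c->dirty_pm_count--;
                }
            }
        }
    }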

    Methods and devices for layered performance matching in memory systems using C-AMAT ratios

    Publication Number: US09846646B1

    Publication Date: 2017-12-19

    Application Number: US15239141

    Application Date: 2016-08-17

    Applicant: C-Memory, LLC

    Abstract: In one embodiment, the present disclosure describes a method of optimizing memory access in a hierarchical memory system. The method includes determining a request rate from an ith layer of the hierarchical memory system for each of n layers in the hierarchical memory system. The method also includes determining a supply rate from an (i+1)th layer of the hierarchical memory system for each of the n layers in the hierarchical memory system. The supply rate from the (i+1)th layer of the hierarchical memory system corresponds to the request rate from the ith layer of the hierarchical memory system. The method further includes adjusting a set of computer architecture parameters of the hierarchical memory system or a schedule associated with an instruction set to utilize heterogeneous computing resources within the hierarchical memory system to match a performance of each adjacent layer of the hierarchical memory system.
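
    The C sketch below is only a toy model of the matching idea: each layer is characterized by a request rate and a supply rate, and an assumed tuning knob is nudged until the supply of layer i+1 meets the demand of layer i within a tolerance. It does not compute C-AMAT itself, and the rates, step size, and adjust_layer() hook are illustrative assumptions rather than the patented method.

    /* Illustrative request/supply matching loop; not the patented method. */
    #include <stdio.h>

    #define N_LAYERS  3      /* e.g. L1, L2, DRAM */
    #define TOLERANCE 0.05   /* acceptable request/supply mismatch */

    typedef struct {
        double request_rate;  /* accesses/cycle requested from the layer below        */
        double supply_rate;   /* accesses/cycle this layer can supply to the one above */
    } mem_layer_t;

    /* Hypothetical tuning knob: nudge the lower layer's supply toward the upper
     * layer's demand, standing in for real parameter or schedule changes. */
    static void adjust_layer(mem_layer_t *lower, double target_rate)
    {
        lower->supply_rate += 0.5 * (target_rate - lower->supply_rate);
    }

    int main(void)
    {
        mem_layer_t layers[N_LAYERS + 1] = {
            { .request_rate = 2.0, .supply_rate = 4.00 },  /* layer 1 */
            { .request_rate = 0.4, .supply_rate = 1.00 },  /* layer 2 */
            { .request_rate = 0.1, .supply_rate = 0.20 },  /* layer 3 */
            { .request_rate = 0.0, .supply_rate = 0.05 },  /* backing store */
        };

        /* Match each layer i's request rate against layer i+1's supply rate. */
        for (int i = 0; i < N_LAYERS; i++) {
            while (layers[i].request_rate > layers[i + 1].supply_rate * (1.0 + TOLERANCE))
                adjust_layer(&layers[i + 1], layers[i].request_rate);
            printf("layer %d -> %d: request %.2f, supply %.2f\n",
                   i + 1, i + 2, layers[i].request_rate, layers[i + 1].supply_rate);
        }
        return 0;
    }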

    Information coherency maintenance systems and methods

    Publication Number: US09824009B2

    Publication Date: 2017-11-21

    Application Number: US13725881

    Application Date: 2012-12-21

    CPC classification number: G06F12/0811 G06F12/0815 G06F12/0817 G06F12/0897

    Abstract: Systems and methods for coherency maintenance are presented. The systems and methods include utilization of multiple information state tracking approaches or protocols at different memory or storage levels. In one embodiment, a first coherency maintenance approach (e.g., similar to a MESI protocol, etc.) can be implemented at one storage level while a second coherency maintenance approach (e.g., similar to a MOESI protocol, etc.) can be implemented at another storage level. Information at a particular storage level or tier can be tracked by a set of local state indications and a set of essence state indications. The essence state indication can be tracked “externally” from a storage layer or tier directory (e.g., in a directory of another cache level, in a hub between cache levels, etc.). One storage level can control operations based upon the local state indications and another storage level can control operations based at least in part upon an essence state indication.
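
    A minimal C sketch of the two-tier state idea follows: a detailed local state per line at one storage level, and a coarser "essence" state tracked externally for another level to act on. The enum names and the mapping are illustrative assumptions, not the patented protocol.

    /* Illustrative sketch only -- names and mapping are assumptions. */

    /* Local, per-line state kept inside one storage level (MESI-like). */
    typedef enum { L_MODIFIED, L_EXCLUSIVE, L_SHARED, L_INVALID } local_state_t;

    /* Essence state tracked externally, e.g. in a directory of another cache
     * level or in a hub between levels: only what outer levels need to know. */
    typedef enum { E_DIRTY, E_CLEAN, E_NOT_PRESENT } essence_state_t;

    /* Derive the externally tracked essence state from the local state. */
    static essence_state_t essence_of(local_state_t s)
    {
        switch (s) {
        case L_MODIFIED:  return E_DIRTY;       /* outer level must expect a writeback */
        case L_EXCLUSIVE:
        case L_SHARED:    return E_CLEAN;       /* memory copy is up to date           */
        case L_INVALID:
        default:          return E_NOT_PRESENT; /* line not held at this level         */
        }
    }

    /* A directory entry at another storage level can control its operations
     * using only the essence state, without the inner level's full protocol. */
    typedef struct {
        unsigned long   tag;
        essence_state_t essence;
    } directory_entry_t;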

    CACHING USING AN ADMISSION CONTROL CACHE LAYER

    Publication Number: US20170315923A1

    Publication Date: 2017-11-02

    Application Number: US15259074

    Application Date: 2016-09-08

    Applicant: VMWARE, INC.

    Abstract: Exemplary methods, apparatuses, and systems receive from a client a request to access data. Whether metadata for the data is stored in a first caching layer is determined. In response to the metadata for the data not being stored in the first caching layer, it is determined whether the data is stored in a second caching layer. In response to determining that the data is stored in the second caching layer, the data is retrieved from the second caching layer. In response to determining that the data is not stored in the second caching layer, writing of the data to the second caching layer is bypassed. The retrieved data is sent to the client.
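
    The C sketch below walks through the read path the abstract outlines: on a metadata miss in the first (admission-control) caching layer, the second caching layer is probed; a hit is served from it, while a miss falls through to the backing store and skips writing the data into the second layer. The helper functions are hypothetical placeholders, not VMware's implementation.

    /* Illustrative sketch of the read path; helpers are hypothetical. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef unsigned long block_id_t;

    /* Hypothetical helpers standing in for the two caching layers and the
     * backing store; not a real API. */
    extern bool first_layer_has_metadata(block_id_t id);
    extern bool second_layer_lookup(block_id_t id, void *buf, size_t len);
    extern void backing_store_read(block_id_t id, void *buf, size_t len);

    /* Fill buf with the requested block; the caller then sends it to the client. */
    void handle_read(block_id_t id, void *buf, size_t len)
    {
        if (!first_layer_has_metadata(id)) {
            /* Metadata miss in the first caching layer: determine whether the
             * data is stored in the second caching layer. */
            if (second_layer_lookup(id, buf, len))
                return;  /* data retrieved from the second caching layer */

            /* Data not in the second layer: read it from the backing store and
             * bypass writing it into the second caching layer. */
            backing_store_read(id, buf, len);
            return;
        }

        /* Metadata hit: the abstract does not detail this path; here the data
         * is simply served from the second layer or, failing that, the
         * backing store. */
        if (!second_layer_lookup(id, buf, len))
            backing_store_read(id, buf, len);
    }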
