BYPASS PREDICTOR FOR AN EXCLUSIVE LAST-LEVEL CACHE

    Publication Number: US20210374064A1

    Publication Date: 2021-12-02

    Application Number: US17402492

    Application Date: 2021-08-13

    Abstract: A system and a method to allocate data to a first cache increments a first counter if a reuse indicator for the data indicates that the data is likely to be reused, and decrements the first counter if the reuse indicator indicates that the data is not likely to be reused. A second counter is incremented upon eviction of the data from a second cache, which is a higher-level cache than the first cache. The data is allocated to the first cache if the value of the first counter is equal to or greater than a first predetermined threshold or the value of the second counter equals zero, and the data bypasses the first cache if the value of the first counter is less than the first predetermined threshold and the value of the second counter is not equal to zero.
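
    The allocation rule above reduces to two saturating counters and a threshold test. The following Python sketch models that decision logic; the class name, the threshold value, and the saturation limits are illustrative assumptions rather than values taken from the patent.

        # Counter-based bypass decision sketch (assumed names and constants).
        REUSE_THRESHOLD = 8   # stands in for the "first predetermined threshold"
        COUNTER_MAX = 15      # assumed saturation limit for both counters

        class BypassPredictor:
            def __init__(self):
                self.reuse_counter = 0     # first counter: reuse-indicator history
                self.eviction_counter = 0  # second counter: higher-level-cache evictions

            def on_reuse_hint(self, likely_reused):
                """Update the first counter from the block's reuse indicator."""
                if likely_reused:
                    self.reuse_counter = min(self.reuse_counter + 1, COUNTER_MAX)
                else:
                    self.reuse_counter = max(self.reuse_counter - 1, 0)

            def on_higher_level_eviction(self):
                """Count an eviction of the data from the higher-level (second) cache."""
                self.eviction_counter = min(self.eviction_counter + 1, COUNTER_MAX)

            def should_allocate(self):
                """Allocate to the first cache, or bypass it, per the stated rule."""
                return (self.reuse_counter >= REUSE_THRESHOLD
                        or self.eviction_counter == 0)

        predictor = BypassPredictor()
        predictor.on_reuse_hint(likely_reused=False)
        print(predictor.should_allocate())  # True: the second counter is still zero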

    SOLID STATE DRIVE MULTI-CARD ADAPTER WITH INTEGRATED PROCESSING

    Publication Number: US20210055889A1

    Publication Date: 2021-02-25

    Application Number: US17088571

    Application Date: 2020-11-03

    Abstract: Embodiments of the inventive concept include solid state drive (SSD) multi-card adapters that can include multiple solid state drive cards, which can be incorporated into existing enterprise servers without major architectural changes, thereby enabling the server industry ecosystem to easily integrate evolving solid state drive technologies into servers. The SSD multi-card adapters can include an interface section between various solid state drive cards and drive connector types. The interface section can perform protocol translation, packet switching and routing, data encryption, data compression, management information aggregation, virtualization, and other functions.
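
    The interface section described above sits between the host-side drive connector and the individual SSD cards. The Python sketch below is a conceptual model of a subset of its duties, namely routing, data compression, and management-information aggregation; the class names, the round-robin routing policy, and the use of zlib are illustrative assumptions rather than details from the patent, and protocol translation, encryption, and virtualization are omitted.

        # Conceptual model of the interface section (assumed names and policies).
        import zlib
        from itertools import cycle

        class SSDCard:
            """Stand-in for one solid state drive card behind the adapter."""
            def __init__(self, name):
                self.name = name
                self.blocks = {}

            def write(self, lba, payload):
                self.blocks[lba] = payload

        class InterfaceSection:
            """Routes host writes to the attached cards and applies data-path transforms."""
            def __init__(self, cards):
                self.cards = cards
                self._next_card = cycle(cards)  # assumed round-robin routing

            def host_write(self, lba, data):
                compressed = zlib.compress(data)  # data compression
                card = next(self._next_card)      # packet switching / routing
                card.write(lba, compressed)
                return card.name

            def management_summary(self):
                # Management information aggregation across all attached cards
                return {card.name: len(card.blocks) for card in self.cards}

        adapter = InterfaceSection([SSDCard("ssd0"), SSDCard("ssd1")])
        adapter.host_write(0, b"example payload " * 8)
        print(adapter.management_summary())  # {'ssd0': 1, 'ssd1': 0}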

    SPECULATIVE DRAM READ, IN PARALLEL WITH CACHE LEVEL SEARCH, LEVERAGING INTERCONNECT DIRECTORY

    Publication Number: US20200301838A1

    Publication Date: 2020-09-24

    Application Number: US16424452

    Application Date: 2019-05-28

    Abstract: According to one general aspect, an apparatus may include a processor configured to issue a first request for a piece of data from a cache memory and a second request for the same piece of data from a system memory. The apparatus may include the cache memory, configured to temporarily store a subset of data, and a memory interconnect. The memory interconnect may be configured to receive the second request for the piece of data from the system memory, determine whether the piece of data is stored in the cache memory, and, if the piece of data is determined to be stored in the cache memory, cancel the second request for the piece of data from the system memory.
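
    The flow in the abstract is: the processor launches a cache lookup and a speculative system-memory read for the same line in parallel, and the memory interconnect consults its directory to cancel the speculative read when the line is already cached. A minimal Python sketch of that cancellation path follows; the class and method names and the set-based directory are illustrative assumptions.

        # Speculative-read cancellation sketch (assumed names; directory modeled as a set).
        class MemoryInterconnect:
            def __init__(self, directory):
                self.directory = directory     # addresses known to be resident in the cache
                self.dram_reads_issued = 0

            def speculative_read(self, address):
                """Second request: cancelled when the directory shows the line is cached."""
                if address in self.directory:
                    return None                # cancel the speculative DRAM read
                self.dram_reads_issued += 1
                return f"dram_data@{hex(address)}"

        class Processor:
            def __init__(self, cache, interconnect):
                self.cache = cache
                self.interconnect = interconnect

            def load(self, address):
                # The first request (cache search) and the second request (DRAM read)
                # are launched together in hardware; here they are modeled sequentially.
                dram_result = self.interconnect.speculative_read(address)
                cache_result = self.cache.get(address)
                return cache_result if cache_result is not None else dram_result

        cache = {0x1000: "cached_data"}
        cpu = Processor(cache, MemoryInterconnect(directory=set(cache)))
        print(cpu.load(0x1000))  # served from cache; the DRAM read was cancelled
        print(cpu.load(0x2000))  # directory miss, so the speculative DRAM read proceeds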

    COORDINATED GARBAGE COLLECTION OF FLASH DEVICES IN A DISTRIBUTED STORAGE SYSTEM

    Publication Number: US20170123718A1

    Publication Date: 2017-05-04

    Application Number: US15046435

    Application Date: 2016-02-17

    Abstract: A distributed storage system can include a storage node (125, 130, 135). The storage node (125, 130, 135) can include a Solid State Drive (SSD) or other storage device that employs garbage collection (140, 145, 150, 155, 160, 165, 225, 230), a device garbage collection monitor (205), a garbage collection coordinator (210), an Input/Output (I/O) redirector (215), and an I/O resynchronizer (220). The device garbage collection monitor (205) can determine whether any storage devices (140, 145, 150, 155, 160, 165, 225, 230) need to perform garbage collection. The garbage collection coordinator (210) can schedule when the storage device (140, 145, 150, 155, 160, 165, 225, 230) can perform garbage collection. The I/O redirector (215) can redirect read requests (905) and write requests (1005) away from the storage device (140, 145, 150, 155, 160, 165, 225, 230) when it is performing garbage collection. The I/O resynchronizer (220) can ensure that data on the storage device (140, 145, 150, 155, 160, 165, 225, 230) is up-to-date after garbage collection finishes.
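
    The abstract names four cooperating components: a device garbage collection monitor, a garbage collection coordinator, an I/O redirector, and an I/O resynchronizer. The Python sketch below models one pass of that coordination loop; the function names, the one-device-at-a-time policy, and the print-based resynchronization are illustrative assumptions.

        # One pass of the coordination loop (assumed names and scheduling policy).
        class StorageDevice:
            def __init__(self, name, needs_gc=False):
                self.name = name
                self.needs_gc = needs_gc
                self.in_gc = False

            def run_gc(self):
                self.needs_gc = False

        def gc_monitor(devices):
            """Device garbage collection monitor: find devices that need GC."""
            return [d for d in devices if d.needs_gc]

        def redirect_io(request, devices):
            """I/O redirector: send the request to any device not currently in GC."""
            return next(d for d in devices if not d.in_gc)

        def resynchronize(device):
            """I/O resynchronizer: bring data written elsewhere during GC back onto the device."""
            print(f"{device.name} resynchronized")

        def coordinate_gc(devices):
            """Garbage collection coordinator: let one device at a time collect."""
            for device in gc_monitor(devices):
                device.in_gc = True
                target = redirect_io("write", devices)  # traffic avoids the collecting device
                print(f"write redirected to {target.name}")
                device.run_gc()
                device.in_gc = False
                resynchronize(device)

        devices = [StorageDevice("ssd-a", needs_gc=True), StorageDevice("ssd-b")]
        coordinate_gc(devices)  # prints: write redirected to ssd-b / ssd-a resynchronized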
