SYSTEMS, METHODS, AND DEVICES FOR PAGE RELOCATION FOR GARBAGE COLLECTION

    Publication Number: US20230019878A1

    Publication Date: 2023-01-19

    Application Number: US17504495

    Filing Date: 2021-10-18

    Abstract: A method for page management in a memory system may include allocating a page of a mirror memory, copying a valid page from a block of device memory at a device to the page of the mirror memory, remapping the valid page from the block of device memory to the mirror memory, and modifying the block of device memory. The method may further include copying the valid page from the mirror memory to a free page at the device, and remapping the valid page from the mirror memory to the free page at the device. The remapping may be performed using a memory coherent interface. The method may further include deallocating a portion of the mirror memory associated with the valid page based on copying the valid page from the mirror memory.
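
The flow in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the `PageTable` class, the dict-based device model, and the address scheme are all hypothetical stand-ins for the remapping performed over the memory coherent interface.

```python
# Hypothetical sketch of the page-relocation flow: a valid page is staged in
# mirror memory while the device block is modified, then copied back to a
# free device page. All names and data structures here are illustrative.

class PageTable:
    """Maps logical page numbers to (location, address) pairs."""
    def __init__(self):
        self.map = {}

    def remap(self, lpn, location, addr):
        self.map[lpn] = (location, addr)

def relocate_for_gc(page_table, device, mirror, block_id, lpn):
    # 1. Allocate a page of mirror memory.
    mirror_addr = len(mirror)
    mirror.append(None)
    # 2. Copy the valid page from the device block to the mirror page.
    mirror[mirror_addr] = device[block_id][lpn]
    # 3. Remap the valid page to the mirror memory.
    page_table.remap(lpn, "mirror", mirror_addr)
    # 4. Modify (here: erase) the device block.
    device[block_id] = {}
    # 5. Copy the valid page from mirror memory to a free device page.
    device[block_id][lpn] = mirror[mirror_addr]
    # 6. Remap back to the device and deallocate the mirror portion.
    page_table.remap(lpn, "device", (block_id, lpn))
    mirror[mirror_addr] = None

pt = PageTable()
device = {0: {7: b"valid-data"}}
mirror = []
relocate_for_gc(pt, device, mirror, 0, 7)
```

Because the page is remapped to the mirror before the block is modified, reads can proceed throughout the relocation.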

    OFFLOADED DEVICE-DRIVEN ERASURE CODING

    Publication Number: US20220326852A1

    Publication Date: 2022-10-13

    Application Number: US17850984

    Filing Date: 2022-06-27

    Abstract: A method for storing data may include receiving user data at a group of storage devices, wherein the storage devices are interconnected, erasure coding the user data into redundancy blocks at the group of storage devices, and storing the redundancy blocks on at least two of the storage devices. The erasure encoding may be distributed among at least two of the storage devices. The redundancy blocks may be arranged in reliability groups. The redundancy blocks may be grouped by the storage devices independently of the partitioning of the user data by the user. The method may further include recovering data based on redundancy blocks. A storage device may include a storage medium, a network interface configured to communicate with one or more other storage devices, and a storage processing unit configured to erasure code user data into redundancy blocks cooperatively with the one or more other storage devices.
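
A minimal sketch of the idea, using single XOR parity as a stand-in for a general erasure code; the striping layout and all names are assumptions for illustration, not the device-driven scheme itself.

```python
# User data blocks are striped across a group of devices and a redundancy
# (parity) block is stored on another device; a single lost block can then
# be recovered from the survivors. XOR parity is an illustrative stand-in.

def xor_parity(blocks):
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def store_with_redundancy(devices, user_blocks):
    # Stripe the data blocks across devices; store parity on the last one.
    for i, block in enumerate(user_blocks):
        devices[i % len(devices)].append(("data", block))
    parity = xor_parity(user_blocks)
    devices[-1].append(("parity", parity))
    return parity

def recover(lost_index, user_blocks, parity):
    # A lost data block is the XOR of the parity and the surviving blocks.
    survivors = [b for i, b in enumerate(user_blocks) if i != lost_index]
    return xor_parity(survivors + [parity])

devices = [[], [], [], []]
blocks = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
parity = store_with_redundancy(devices, blocks)
```

In the offloaded scheme the encoding work would run cooperatively on the storage processing units rather than on the host, but the parity arithmetic is the same.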

    SYSTEMS, METHODS, AND DEVICES FOR SHUFFLE ACCELERATION

    Publication Number: US20220164122A1

    Publication Date: 2022-05-26

    Application Number: US17225085

    Filing Date: 2021-04-07

    Abstract: A method of shuffling data may include shuffling a first batch of data using a first memory on a first level of a memory hierarchy to generate a first batch of shuffled data, shuffling a second batch of data using the first memory to generate a second batch of shuffled data, and storing the first batch of shuffled data and the second batch of shuffled data in a second memory on a second level of the memory hierarchy. The method may further include merging the first batch of shuffled data and the second batch of shuffled data. A data shuffling device may include a buffer memory configured to stream one or more records to a partitioning circuit and transfer, by random access, one or more records to a grouping circuit.
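
The two-level batching can be sketched as below. This is a software analogy only: `random.shuffle` stands in for the partitioning and grouping circuits, and the "memory tiers" are plain Python lists.

```python
# Batches are shuffled one at a time in a small fast first-level memory,
# staged into a larger second-level memory, then merged. The tier model and
# use of random.shuffle are illustrative assumptions.

import random

def shuffle_batches(batches, seed=0):
    rng = random.Random(seed)
    second_level = []                  # larger, slower memory tier
    for batch in batches:
        first_level = list(batch)      # stage one batch in fast memory
        rng.shuffle(first_level)       # shuffle within the first level
        second_level.append(first_level)
    # Merge the shuffled batches held in the second-level memory.
    return [rec for batch in second_level for rec in batch]

result = shuffle_batches([[1, 2, 3], [4, 5, 6]])
```

Working batch-by-batch keeps the shuffle's working set inside the fast first-level memory regardless of total data size.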

    STORAGE DEVICE WITH FAULT RESILIENT READ-ONLY MODE

    Publication Number: US20220012130A1

    Publication Date: 2022-01-13

    Application Number: US17109041

    Filing Date: 2020-12-01

    Abstract: A storage device, and a method for operating a storage device. In some embodiments, the storage device includes storage media, and the method includes: determining, by the storage device, that the storage device is in a fault state from which partial recovery is possible by operating the storage device in a first read-only mode; and operating the storage device in the first read-only mode, the operating in the first read-only mode including: determining that the age of a first data item stored in a page of the storage device has exceeded a threshold age, and copying the first data item into a rescue space in the storage device.
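
The rescue step can be sketched as follows; the page fields, the timestamp-based age model, and the list-based rescue space are hypothetical simplifications of the device-internal mechanism.

```python
# In the first read-only mode, any data item whose age exceeds a threshold
# is copied into a reserved rescue space so it remains readable despite the
# fault. Field names and the age model are illustrative assumptions.

def run_read_only_mode(pages, rescue_space, now, threshold_age):
    for page in pages:
        age = now - page["written_at"]
        if age > threshold_age:
            # Retention risk: copy the data item into the rescue space.
            rescue_space.append(page["data"])
    return rescue_space

pages = [
    {"written_at": 0,  "data": b"old"},   # age 100 > threshold
    {"written_at": 90, "data": b"new"},   # age 10, left in place
]
rescued = run_read_only_mode(pages, [], now=100, threshold_age=50)
```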

    BLOCK INTERFACE EMULATION FOR KEY VALUE DEVICE

    Publication Number: US20210182211A1

    Publication Date: 2021-06-17

    Application Number: US16824689

    Filing Date: 2020-03-19

    Inventor: Yang Seok KI

    Abstract: A Key-Value (KV) storage device is disclosed. The KV storage device may include storage for a first object and a second object. Each object may include data associated with a key. A KV translation layer may translate a key to a physical address in the storage where the data is stored. A KV interface may receive a KV request involving an object, and a block interface may receive a block request involving an object. A block emulator may generate a KV request including a key generated from the block request.
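
A minimal sketch of the emulation path: a block request's logical block address is turned into a generated key, which the KV translation layer resolves to a physical address. The class names and the `"blk:<lba>"` key format are assumptions for illustration.

```python
# Block-interface emulation on a key-value device: the block emulator
# generates a key from the block request and issues an ordinary KV request.

class KVDevice:
    def __init__(self):
        self.storage = {}       # physical address -> data
        self.translation = {}   # KV translation layer: key -> address
        self.next_addr = 0

    def kv_put(self, key, value):
        self.translation[key] = self.next_addr
        self.storage[self.next_addr] = value
        self.next_addr += 1

    def kv_get(self, key):
        return self.storage[self.translation[key]]

class BlockEmulator:
    """Translates block requests into KV requests with generated keys."""
    def __init__(self, device):
        self.device = device

    def write_block(self, lba, data):
        self.device.kv_put(f"blk:{lba}", data)

    def read_block(self, lba):
        return self.device.kv_get(f"blk:{lba}")

dev = KVDevice()
emu = BlockEmulator(dev)
emu.write_block(42, b"sector-data")
```

Both interfaces coexist: the same object remains reachable through the native KV interface and through the emulated block interface.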

    SYSTEM, DEVICE AND METHOD FOR STORAGE DEVICE ASSISTED LOW-BANDWIDTH DATA REPAIR

    Publication Number: US20200349006A1

    Publication Date: 2020-11-05

    Application Number: US16932679

    Filing Date: 2020-07-17

    Abstract: According to one general aspect, an apparatus may include a regeneration-code-aware (RCA) storage device configured to calculate at least one type of data regeneration code for data error correction. The RCA storage device may include a memory configured to store data in chunks which, in turn, comprise data blocks. The RCA storage device may include a processor configured to compute, when requested by an external host device, a data regeneration code based upon a selected number of data blocks. The RCA storage device may include an external interface configured to transmit the data regeneration code to the external host device.
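
The bandwidth saving can be sketched as below: the device computes the regeneration code over selected blocks locally and transmits only the result, instead of shipping every raw block to the host. XOR is used as a stand-in code, and the class and method names are assumptions.

```python
# A regeneration-code-aware (RCA) device computes a code over a selected
# number of data blocks on-device; only the code crosses the external
# interface, reducing repair traffic. XOR stands in for the real code.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

class RCAStorageDevice:
    def __init__(self, chunks):
        self.chunks = chunks        # chunk id -> list of data blocks

    def regeneration_code(self, chunk_id, block_indexes):
        # Computed by the device's processor when the host requests it;
        # only the single resulting block is transmitted.
        selected = [self.chunks[chunk_id][i] for i in block_indexes]
        return xor_blocks(selected)

dev = RCAStorageDevice({0: [b"\x0f", b"\xf0", b"\xff"]})
code = dev.regeneration_code(0, [0, 1])
```

Here the host receives one block-sized code instead of the two raw blocks, and the saving grows with the number of blocks combined on-device.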

    CONDITIONAL TRANSCODING FOR ENCODED DATA

    Publication Number: US20200295779A1

    Publication Date: 2020-09-17

    Application Number: US16820665

    Filing Date: 2020-03-16

    Abstract: A transcoder is disclosed. The transcoder may comprise a buffer to store input encoded data. An index mapper may map an input dictionary to an output dictionary. A current encode buffer may store a modified current encoded data, which may be responsive to the input encoded data, the input dictionary, and the map from the input dictionary to the output dictionary. A previous encode buffer may store a modified previous encoded data, which may be responsive to the input encoded data, the input dictionary, and the map from the input dictionary to the output dictionary. A rule evaluator may generate an output stream responsive to the modified current encoded data in the current encode buffer, the modified previous encoded data in the previous encode buffer, and transcoding rules.
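
The index-mapper step can be sketched as follows. This reduces the design to its core idea, remapping encoded symbols from one dictionary to another without fully decoding; the buffers and rule evaluator are omitted, and all names are illustrative.

```python
# Encoded symbols that index an input dictionary are remapped, via an index
# map, to the indexes of the same values in an output dictionary. A minimal
# stand-in for the transcoder's index mapper; names are assumptions.

def build_index_map(input_dict, output_dict):
    # For each input-dictionary index, find the output-dictionary index
    # of the same underlying value.
    return {i: output_dict.index(v) for i, v in enumerate(input_dict)}

def transcode(encoded, input_dict, output_dict):
    index_map = build_index_map(input_dict, output_dict)
    # Rewrite symbols without decompressing the underlying data.
    return [index_map[sym] for sym in encoded]

input_dict = ["cat", "dog", "bird"]
output_dict = ["bird", "cat", "dog"]
out = transcode([0, 2, 1], input_dict, output_dict)
```

Decoding the output with `output_dict` yields the same values the input encoded, which is the point of transcoding in the compressed domain.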

    PLATFORM FOR CONCURRENT EXECUTION OF GPU OPERATIONS

    Publication Number: US20200234115A1

    Publication Date: 2020-07-23

    Application Number: US16442440

    Filing Date: 2019-06-14

    Abstract: Computing resources may be optimally allocated for a multipath neural network using a multipath neural network analyzer that includes an interface and a processing device. The interface receives a multipath neural network. The processing device generates the multipath neural network to include one or more layers of a critical path through the multipath neural network that are allocated a first allocation of computing resources that are available to execute the multipath neural network. The critical path limits throughput of the multipath neural network. The first allocation of computing resources reduces an execution time of the multipath neural network to be less than a baseline execution time of a second allocation of computing resources for the multipath neural network. The first allocation of computing resources for a first layer of the critical path is different than the second allocation of computing resources for the first layer of the critical path.
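
A toy sketch of the allocation idea: the critical path limits throughput, so it receives a larger share of compute than a uniform baseline would give it. The half-to-critical heuristic and the linear speedup model are illustrative assumptions, not the analyzer's actual policy.

```python
# Critical-path-aware allocation: identify the path with the longest
# baseline time and give it a larger share of the available compute units.
# The split heuristic and linear speedup model are assumptions.

def allocate(paths, total_units):
    # paths: name -> baseline execution time with one unit of compute.
    critical = max(paths, key=paths.get)
    # Give non-critical paths an even share of half the units; the
    # critical path gets everything that remains.
    alloc = {name: max(1, (total_units // 2) // (len(paths) - 1))
             for name in paths if name != critical}
    alloc[critical] = total_units - sum(alloc.values())
    # Network execution time = slowest path under linear speedup.
    times = {name: paths[name] / alloc[name] for name in paths}
    return alloc, max(times.values())

paths = {"path_a": 100.0, "path_b": 20.0, "path_c": 20.0}
alloc, exec_time = allocate(paths, 8)
```

With 8 units, the critical `path_a` gets 4 and finishes in 25.0 time units, versus 37.5 under an even 8/3 split, illustrating how a skewed allocation beats the uniform baseline.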

    DISTRIBUTED REAL-TIME COMPUTING FRAMEWORK USING IN-STORAGE PROCESSING

    Publication Number: US20170192821A1

    Publication Date: 2017-07-06

    Application Number: US15462797

    Filing Date: 2017-03-17

    CPC classification number: G06F9/5027

    Abstract: According to a general aspect, a method may include receiving a computing task, wherein the computing task includes a plurality of operations. The method may include allocating the computing task to a data node, wherein the data node includes at least one host processor and an intelligent storage medium, wherein the intelligent storage medium comprises at least one controller processor, and a non-volatile memory, wherein each data node includes at least three processors between the at least one host processor and the at least one controller processor. The method may include dividing the computing task into at least a first chain of operations and a second chain of operations. The method may include assigning the first chain of operations to the intelligent storage medium of the data node. The method may further include assigning the second chain of operations to the central processor of the data node.
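
The task division can be sketched as below. The heuristic used here, sending data-reducing operations to the storage side, is a common in-storage-processing rationale but is an assumption; the operation names and node layout are illustrative.

```python
# A computing task's operations are divided into two chains: one assigned
# to the intelligent storage medium's controller processor, the other to
# the host processor. The split heuristic and names are assumptions.

DATA_REDUCING = {"scan", "filter", "aggregate"}

def divide_task(operations):
    # Data-reducing operations benefit from running near the data.
    storage_chain = [op for op in operations if op in DATA_REDUCING]
    host_chain = [op for op in operations if op not in DATA_REDUCING]
    return storage_chain, host_chain

def assign(data_node, operations):
    storage_chain, host_chain = divide_task(operations)
    data_node["intelligent_storage"].extend(storage_chain)
    data_node["host_processor"].extend(host_chain)
    return data_node

node = {"intelligent_storage": [], "host_processor": []}
assign(node, ["scan", "filter", "join", "sort"])
```

Running the first chain inside the storage medium means only the reduced intermediate results cross to the host processor.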
