ERROR CORRECTION IN DATA STORAGE DEVICES
    Invention Application

    Publication Number: US20200264953A1

    Publication Date: 2020-08-20

    Application Number: US16281039

    Filing Date: 2019-02-20

    Inventor: Minghai QIN

    Abstract: Systems and methods are disclosed for error correction in data storage devices. In some implementations, a method is provided. The method includes obtaining configuration data indicating a logical arrangement for a set of blocks. The logical arrangement includes rows and columns of blocks. The configuration data also indicates a number of row parity blocks in a set of row parity blocks and a number of diagonal parity blocks in a set of diagonal parity blocks. The method also includes configuring a set of storage devices based on the configuration data, wherein a first number of data blocks in the set of diagonal parity blocks is less than a second number of data blocks in a column.
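The row/diagonal parity arrangement can be illustrated with a minimal sketch. This assumes simple XOR parity over a toy 3x4 grid of one-byte blocks and a `(row + column) mod COLS` diagonal assignment; the patent's actual encoding and block sizing are not reproduced here.

```python
# Sketch of row + diagonal parity over a logical grid of data blocks.
# Assumes XOR parity and a toy 3x4 byte grid; not the patent's actual scheme.
from functools import reduce

ROWS, COLS = 3, 4
grid = [[bytes([r * COLS + c]) for c in range(COLS)] for r in range(ROWS)]

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

# One row-parity block per row.
row_parity = [xor_blocks(grid[r]) for r in range(ROWS)]

# Diagonal parity: block (r, c) belongs to diagonal (r + c) mod COLS.
diag_parity = []
for d in range(COLS):
    members = [grid[r][c] for r in range(ROWS) for c in range(COLS)
               if (r + c) % COLS == d]
    diag_parity.append(xor_blocks(members))

# Recover a single lost data block from the surviving row and its parity.
lost_r, lost_c = 1, 2
survivors = [grid[lost_r][c] for c in range(COLS) if c != lost_c]
recovered = xor_blocks(survivors + [row_parity[lost_r]])
```

Combining row parity with an independent set of diagonal parity blocks is what lets such layouts survive multi-block failures that row parity alone cannot correct.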

    NON-BINARY ECCS FOR LOW LATENCY READ, FRACTIONAL BITS PER CELL OF NAND FLASH

    Publication Number: US20180181317A1

    Publication Date: 2018-06-28

    Application Number: US15388623

    Filing Date: 2016-12-22

    CPC classification number: G11C29/52 G06F11/1072

    Abstract: The present disclosure generally relates to methods of reading data from a memory device using non-binary ECCs. The memory device includes multiple memory cells where each memory cell has multiple pages that are arranged in distinct layouts for physical addresses thereof. When a read request is received from a host device to obtain data from a specific page of a specific memory cell of a memory device, rather than reading the data from all pages of the memory cell, the data can be read from just the desired page and then decoded. Following decoding, the data can be delivered to the host device. Because only the data from a specific page of a memory cell is read, rather than the entire memory cell, the read latency is reduced when compared to reading the entire memory cell.
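A toy illustration of reading a single logical page from multi-level cells, assuming a conventional 3-bit Gray mapping from cell level to (upper, middle, lower) page bits. The patent's non-binary-ECC page layout is not reproduced; this only shows why one page can be extracted without decoding the others.

```python
# Toy per-page read from TLC cells: extract one page's bit from each cell
# level without touching the other pages. Gray mapping is an assumption.
GRAY = [0b111, 0b110, 0b100, 0b101, 0b001, 0b000, 0b010, 0b011]

def read_page(levels, page):
    """Return one page's bits (0=lower, 1=middle, 2=upper) from cell levels."""
    return [(GRAY[lv] >> page) & 1 for lv in levels]

lower_bits = read_page([0, 3, 5, 7], 0)  # only the lower-page bit of each cell
```

Because each page bit depends on only a subset of the threshold comparisons, a controller can sense just that subset, which is the source of the latency reduction the abstract describes.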

    REDUCING DATA COMMUNICATIONS IN DISTRIBUTED INFERENCE SCHEMES

    Publication Number: US20240095540A1

    Publication Date: 2024-03-21

    Application Number: US18343173

    Filing Date: 2023-06-28

    CPC classification number: G06N3/098

    Abstract: Methods and apparatus for processing data in a distributed inference scheme based on sparse inputs. An example method includes receiving an input at a first node. A first sparsified input is generated for a second node based on a set of features associated with the second node, which are identified based on a weight mask having non-zero values for weights associated with features upon which processing by the second node depends and zeroed values for weights associated with other features. The first sparsified input is transmitted to the second node for generating an output of the second node. A second sparsified input is received from the second node and combined into a combined input. The combined input is processed into an output of the first node. The neural network is configured to generate an inference based on processing the outputs of the first node and the second node.
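The weight-mask-driven sparsification can be sketched in a few lines. This is a pure-Python illustration; the function name, mask shape, and feature layout are assumptions for clarity, not the patent's API.

```python
# Sketch: zero out features the destination node's weights do not depend on,
# so only the needed features are transmitted. Illustrative names only.
def sparsify_for_node(features, weight_mask_rows):
    """Zero each feature that no weight row of the destination depends on."""
    n = len(features)
    used = [any(row[j] != 0 for row in weight_mask_rows) for j in range(n)]
    return [x if used[j] else 0.0 for j, x in enumerate(features)]

features = [0.5, -1.2, 3.3, 0.7, -0.1, 2.0]
# Destination node's weight mask is non-zero only for features 1 and 4.
mask = [[0, 1, 0, 0, 1, 0],
        [0, 1, 0, 0, 0, 0]]
sparse = sparsify_for_node(features, mask)
```

Transmitting the sparsified vector (or only its non-zero entries plus indices) is what reduces inter-node communication in the distributed inference scheme.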

    ITERATION-ADAPTIVE DECODING FOR NON-BINARY LDPC CODES

    Publication Number: US20180351577A1

    Publication Date: 2018-12-06

    Application Number: US15608174

    Filing Date: 2017-05-30

    Abstract: Embodiments of a data storage device include a non-volatile memory and a controller coupled to the non-volatile memory. The controller includes a decoder configured to decode a non-binary code, such as a low-density parity-check (LDPC) code. The decoder decodes the code by generating variable-node-to-check-node message vectors and by generating check-node-to-variable-node message vectors. When generating variable-node-to-check-node message vectors, the decoder considers a first number and then a second, greater number of components of those vectors. Embodiments of a method of decoding non-binary codes, such as non-binary LDPC codes, include generating variable node message vectors and check node message vectors in logarithm form. The check node message vectors are generated first at a complexity less than the full complexity of considering all components of the variable node message vectors, and then at a second complexity greater than the first.
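The iteration-adaptive idea, keeping only the most likely symbol components early and more of them later, can be sketched as a truncation of log-domain message vectors. The values and the top-k rule here are illustrative; this is not an actual LDPC decoder.

```python
# Sketch: truncate a non-binary message vector to its k strongest components.
# Early iterations use a small k (cheap), later iterations a larger k.
import math

def truncate_message(llrs, k):
    """Keep the k largest log-likelihood components; drop the rest to -inf."""
    keep = sorted(range(len(llrs)), key=lambda s: llrs[s], reverse=True)[:k]
    return [llrs[s] if s in keep else -math.inf for s in range(len(llrs))]

msg = [0.1, -2.3, 1.7, -0.4, -5.0, 0.9, -1.1, -3.2]  # toy GF(8) symbol LLRs

k1, k2 = 2, 4
early = truncate_message(msg, k1)  # cheap first pass over the message vector
late = truncate_message(msg, k2)   # fuller second pass if decoding stalls
```

Processing fewer components per check-node update is where the first, lower complexity comes from; widening to more components recovers accuracy when the cheap pass fails to converge.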

    DATA MAPPING ENABLING FAST READ MULTI-LEVEL 3D NAND TO IMPROVE LIFETIME CAPACITY

    Publication Number: US20180182453A1

    Publication Date: 2018-06-28

    Application Number: US15472326

    Filing Date: 2017-03-29

    CPC classification number: G11C11/5628 G11C16/0483 G11C16/08 G11C2211/5641

    Abstract: This disclosure describes data mapping based on three-dimensional lattices that achieves an improved sum rate (i.e., lifetime capacity) with low read latency. During a write, a memory location is written to multiple times prior to erasure. Specifically, the first write has 4/3 bits per cell available, corresponding to about 10.67 kB per cell used for data storage. The second write then has one bit per cell, or 8 kB per cell for data storage. Considering a block with 128 different cells and writing 32 kB of data, the first write yields 42.66 data writes while the second write yields 32 writes, for a total of 74.66 writes. Previously, the number of writes for 32 kB would be 64. Thus, by writing twice prior to erasure, more data can be stored.
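The write-count arithmetic above can be checked directly, assuming a baseline block that holds 32 kB at one bit per cell, so capacity scales linearly with bits per cell:

```python
# Check the two-write arithmetic: first write at 4/3 bits/cell, second at 1.
BLOCK_KB = 32
first_write = BLOCK_KB * 4 / 3   # ~42.66 units of data on the first write
second_write = BLOCK_KB * 1      # 32 units at one bit per cell
total = first_write + second_write   # ~74.66 before an erase is needed
baseline = 2 * BLOCK_KB              # two plain 1-bit/cell writes = 64
kb_per_cell = 8 * 4 / 3              # 1 bit/cell = 8 kB  ->  4/3 bits ~ 10.67 kB
```

The gain (74.66 vs. 64) comes entirely from the fractional 4/3 bits per cell of the first write; the second write contributes the same 32 units as a conventional write.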

    DECODING DATA USING DECODERS AND NEURAL NETWORKS

    Publication Number: US20200099401A1

    Publication Date: 2020-03-26

    Application Number: US16141806

    Filing Date: 2018-09-25

    Inventor: Minghai QIN

    Abstract: Systems and methods are disclosed for decoding data. A first block of data may be obtained from a storage medium or received from a computing device. The first block of data includes a first codeword generated based on an error correction code. A first set of likelihood values is obtained from a neural network. The first set of likelihood values indicates probabilities that the first codeword will be decoded into one of a plurality of decoded values. A second set of likelihood values is obtained from a decoder based on the first block of data. The second set of likelihood values indicates probabilities that the first codeword will be decoded into one of the plurality of decoded values. The first codeword is decoded to obtain a decoded value based on the first set of likelihood values and the second set of likelihood values.
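Combining the two likelihood sets can be sketched as follows. This assumes the two sources are fused by elementwise product and renormalization (treating them as independent evidence); the patent's actual combining rule is not specified here.

```python
# Sketch: fuse neural-network and decoder likelihoods over candidate
# decoded values, then pick the most likely candidate. Illustrative only.
def combine_likelihoods(nn_probs, decoder_probs):
    """Elementwise product of two probability sets, renormalized to sum 1."""
    fused = [a * b for a, b in zip(nn_probs, decoder_probs)]
    z = sum(fused)
    return [p / z for p in fused]

nn_probs = [0.70, 0.20, 0.10]       # neural network's view of 3 candidates
decoder_probs = [0.30, 0.60, 0.10]  # decoder's view of the same candidates
fused = combine_likelihoods(nn_probs, decoder_probs)
decoded = max(range(len(fused)), key=fused.__getitem__)
```

Either source alone can be wrong where the other is confident; fusing them lets the stronger evidence dominate the final decoded value.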
