Storage space management method and apparatus

    Publication Number: US10261715B2

    Publication Date: 2019-04-16

    Application Number: US15625385

    Filing Date: 2017-06-16

    Abstract: A storage space management method and apparatus, where the method and apparatus are applied to a non-volatile memory (NVM). In a feature set that includes M image features of M idle blocks in storage space of the NVM, an idle block whose image feature is highly similar to an image feature of data to be written into the NVM is determined such that the data is written into the idle block. In this way, wear and energy consumption problems are considered during storage space allocation, and a write operation of an idle block in storage space of an NVM can consume less energy, thereby extending a life span of the NVM and reducing write operation energy consumption.
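
    The selection step in this abstract is essentially a nearest-neighbour search over per-block features. A minimal Python sketch of that step follows, assuming a numeric feature vector per idle block and cosine similarity as the similarity measure; the feature definition, the metric, and the names (IdleBlock, pick_idle_block) are illustrative assumptions, not the patented implementation.

        import math
        from dataclasses import dataclass

        @dataclass
        class IdleBlock:
            block_id: int
            feature: list[float]  # assumed numeric feature describing the idle block

        def cosine_similarity(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm if norm else 0.0

        def pick_idle_block(feature_set: list[IdleBlock], data_feature: list[float]) -> IdleBlock:
            """Return the idle block whose feature is most similar to the data's feature."""
            return max(feature_set, key=lambda blk: cosine_similarity(blk.feature, data_feature))

        # Usage: choose among three idle blocks for an incoming write.
        blocks = [IdleBlock(0, [0.9, 0.1]), IdleBlock(1, [0.2, 0.8]), IdleBlock(2, [0.5, 0.5])]
        target = pick_idle_block(blocks, data_feature=[0.85, 0.15])
        print(f"write data into idle block {target.block_id}")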

    FILE MANAGEMENT METHOD, DISTRIBUTED STORAGE SYSTEM, AND MANAGEMENT NODE

    Publication Number: US20190073130A1

    Publication Date: 2019-03-07

    Application Number: US16178220

    Filing Date: 2018-11-01

    Abstract: A file management method, a distributed storage system, and a management node are disclosed. In the distributed storage system, after receiving a file creation request sent by a host to create a file in the distributed storage system, a management node allocates first virtual space to the file from global virtual address space of the distributed storage system, where the local virtual address space of each storage node in the distributed storage system corresponds to a part of the global virtual address space. The management node then records metadata of the file, where the metadata of the file includes information about the first virtual space, and the information about the first virtual space points to the local virtual address space of a storage node that is used to store the file. Further, the management node sends the information about the first virtual space to the host.
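
    As a rough illustration of the allocation described above, the following Python sketch shows a management node that carves a file's first virtual space out of a global address space whose slices correspond to storage nodes' local address spaces, and records that mapping as file metadata. The bump allocator, the fixed per-node region size, and all names are assumptions made for the example, not details from the patent.

        from dataclasses import dataclass

        NODE_REGION_SIZE = 1 << 30  # assumed: each storage node backs a 1 GiB slice of the global space

        @dataclass
        class FileMetadata:
            global_start: int   # start of the first virtual space in the global address space
            length: int
            node_id: int        # storage node whose local virtual space backs this region
            local_offset: int   # offset inside that node's local virtual address space

        class ManagementNode:
            def __init__(self, num_storage_nodes: int):
                self.num_storage_nodes = num_storage_nodes
                self.next_free = 0                       # simple bump allocator over the global space
                self.metadata: dict[str, FileMetadata] = {}

            def create_file(self, name: str, size: int) -> FileMetadata:
                """Allocate 'first virtual space' for the file and record its metadata."""
                start = self.next_free
                self.next_free += size
                node_id = (start // NODE_REGION_SIZE) % self.num_storage_nodes
                meta = FileMetadata(global_start=start, length=size, node_id=node_id,
                                    local_offset=start % NODE_REGION_SIZE)
                self.metadata[name] = meta
                return meta            # information returned to the host, as in the abstract

        # Usage: the host asks the management node to create a file.
        mgmt = ManagementNode(num_storage_nodes=4)
        info = mgmt.create_file("/data/log.bin", size=4096)
        print(info)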

    Processing Node, Computer System, and Transaction Conflict Detection Method

    Publication Number: US20180373634A1

    Publication Date: 2018-12-27

    Application Number: US16047892

    Filing Date: 2018-07-27

    Abstract: A processing node, a computer system, and a transaction conflict detection method, where the processing node includes a processor and a transactional cache. When obtaining a first operation instruction in a transaction for accessing shared data, the processor accesses the transactional cache, which caches shared data of transactions processed by the processing node. If the transactional cache determines that the first operation instruction fails to hit a cache line in the transactional cache, the transactional cache sends a first destination address in the first operation instruction to a transactional cache in another processing node. After receiving, from the other processing node, status information of a cache line hit by the first destination address, the transactional cache determines, based on the received status information, whether the first operation instruction conflicts with a second operation instruction executed by the other processing node.
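
    The final decision in the abstract, whether the local instruction conflicts with a remote transaction given the reported status of the hit cache line, can be illustrated with a small rule table. The sketch below assumes a simplified status set and the usual read/write rule (two reads never conflict, any write does); the actual states and protocol in the patent may differ.

        from enum import Enum

        class LineStatus(Enum):
            """Assumed status values reported by the remote transactional cache."""
            SHARED_READ = "shared_read"     # remote transaction has only read the line
            MODIFIED = "modified"           # remote transaction has written the line
            NOT_PRESENT = "not_present"

        def conflicts(local_op: str, remote_status: LineStatus) -> bool:
            """Return True if the local operation conflicts with the remote transaction.

            Rule assumed here: two reads never conflict; any combination that
            involves a write to the same cache line does.
            """
            if remote_status == LineStatus.NOT_PRESENT:
                return False
            if local_op == "read" and remote_status == LineStatus.SHARED_READ:
                return False
            return True

        # Usage: the local cache missed, asked the remote cache about the destination
        # address, and received the remote line status in reply.
        print(conflicts("write", LineStatus.SHARED_READ))  # True: local write vs. remote read
        print(conflicts("read", LineStatus.SHARED_READ))   # False: read vs. read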

    File Access Method and Apparatus, and Storage Device

    Publication Number: US20170262172A1

    Publication Date: 2017-09-14

    Application Number: US15606423

    Filing Date: 2017-05-26

    Abstract: A file access method and apparatus, and a storage device are presented, where the file access method is applied to a storage device in which a file system is established based on a memory. The storage device obtains, according to a file identifier of a to-be-accessed first target file, an index node of the first target file in metadata, where the index node of the first target file stores information about first virtual space of the first target file in global virtual space. The storage device maps the first virtual space onto second virtual space of a process, and performs addressing by using an added file management register to access the first target file according to a start address of the first virtual space and a base address, stored in the file management register, of a page directory of a global file page table.
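
    The addressing step resembles a page-table walk that starts from a base held in the file management register. The sketch below translates an offset within a file's first virtual space through an assumed two-level global file page table with 4 KiB pages; the table layout, sizes, and names are illustrative, not the register or table format defined in the patent.

        PAGE_SIZE = 4096
        ENTRIES = 512

        # Assumed two-level "global file page table":
        # directory slot -> page table, page table slot -> physical frame number.
        page_tables = {0: {i: 1000 + i for i in range(8)}}
        page_directory = {0: 0}   # in hardware, its base address would sit in the file management register

        def resolve(first_vs_start: int, offset: int) -> int:
            """Translate (start of first virtual space + offset) into a physical address."""
            vaddr = first_vs_start + offset
            dir_index = (vaddr // (PAGE_SIZE * ENTRIES)) % ENTRIES
            table_index = (vaddr // PAGE_SIZE) % ENTRIES
            frame = page_tables[page_directory[dir_index]][table_index]
            return frame * PAGE_SIZE + (vaddr % PAGE_SIZE)

        # Usage: access byte 5000 of a file whose first virtual space starts at offset 0.
        print(hex(resolve(first_vs_start=0, offset=5000)))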

    Haptic stimulation systems and methods

    Publication Number: US11975359B2

    Publication Date: 2024-05-07

    Application Number: US17511281

    Filing Date: 2021-10-26

    Inventors: Jun Xu; Fei Liu

    CPC classification numbers: B06B1/0284; H04R1/00

    Abstract: The disclosure relates to technology for haptic stimulation. A haptic stimulation device comprises a set of haptic stimulation elements. Each haptic stimulation element comprises a transducer configured to generate a pressure wave and an enclosure coupled to the transducer thereby forming a cavity bounded by the enclosure and the transducer. The haptic stimulation device comprises a controller configured to drive the transducers to generate a haptic stimulation pattern based on pressure waves in the cavities.
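
    As a loose software analogy for the controller described above, the sketch below turns a per-element intensity pattern into per-transducer drive waveforms by scaling a common carrier. The drive frequency, sample rate, and pattern format are assumptions for the example, not values from the patent.

        import math

        DRIVE_FREQ_HZ = 200.0      # assumed transducer drive frequency
        SAMPLE_RATE_HZ = 8000.0    # assumed controller output sample rate

        def drive_samples(pattern: list[float], n_samples: int) -> list[list[float]]:
            """Return per-element drive waveforms for one frame of the haptic pattern.

            'pattern' holds one relative intensity (0..1) per haptic stimulation element.
            """
            frames = []
            for n in range(n_samples):
                t = n / SAMPLE_RATE_HZ
                carrier = math.sin(2 * math.pi * DRIVE_FREQ_HZ * t)
                frames.append([intensity * carrier for intensity in pattern])
            return frames

        # Usage: four elements, with the second and third driven most strongly.
        for row in drive_samples([0.1, 0.9, 0.7, 0.0], n_samples=4):
            print([round(v, 3) for v in row])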

    Data updating technology
    Invention Grant

    Publication Number: US11698728B2

    Publication Date: 2023-07-11

    Application Number: US17863443

    Filing Date: 2022-07-13

    Abstract: A storage system includes a management node and a plurality of storage nodes forming a redundant array of independent disks (RAID). When the management node determines, based on a received write request, that not all data in an entire stripe is updated, the management node sends an update data chunk obtained from the to-be-written data to a corresponding storage node. The storage node does not directly update, based on the received update data chunk, a data block stored in a storage device of the storage node, but instead stores the update data chunk into a non-volatile memory (NVM) cache of the storage node and sends the update data chunk to another storage node for backup. According to this data updating method, write amplification caused in a stripe update process can be reduced, thereby improving update performance of the storage system.
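
    A minimal sketch of the storage-node behaviour described above: on receiving an update chunk for a partial stripe update, the node stages the chunk in an NVM cache and mirrors it to a backup node instead of immediately rewriting the stripe. The dictionaries standing in for NVM and disk, and all names, are assumptions made for the example.

        from dataclasses import dataclass, field

        @dataclass
        class StorageNode:
            node_id: int
            nvm_cache: dict = field(default_factory=dict)   # stands in for the NVM write cache
            disk: dict = field(default_factory=dict)        # stands in for the RAID data blocks

            def handle_update_chunk(self, stripe_id: int, chunk: bytes, backup: "StorageNode"):
                """Stage a partial-stripe update instead of rewriting the stripe in place."""
                self.nvm_cache[stripe_id] = chunk            # 1. persist the update chunk in NVM
                backup.nvm_cache[stripe_id] = chunk          # 2. replicate it to another node
                # The stripe on disk is rewritten later (e.g. when the cache is flushed),
                # so a small update does not trigger an immediate read-modify-write.

            def flush(self, stripe_id: int):
                if stripe_id in self.nvm_cache:
                    self.disk[stripe_id] = self.nvm_cache.pop(stripe_id)

        # Usage: the management node determined the write covers only part of stripe 7.
        primary, mirror = StorageNode(0), StorageNode(1)
        primary.handle_update_chunk(7, b"partial-update", backup=mirror)
        primary.flush(7)
        print(primary.disk[7], mirror.nvm_cache[7])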

    DATA UPDATING TECHNOLOGY
    Invention Application

    Publication Number: US20220342541A1

    Publication Date: 2022-10-27

    Application Number: US17863443

    Filing Date: 2022-07-13

    Abstract: A storage system includes a management node and a plurality of storage nodes forming a redundant array of independent disks (RAID). When the management node determines, based on a received write request, that not all data in an entire stripe is updated, the management node sends an update data chunk obtained from the to-be-written data to a corresponding storage node. The storage node does not directly update, based on the received update data chunk, a data block stored in a storage device of the storage node, but instead stores the update data chunk into a non-volatile memory (NVM) cache of the storage node and sends the update data chunk to another storage node for backup. According to this data updating method, write amplification caused in a stripe update process can be reduced, thereby improving update performance of the storage system.

    Model parameter fusion method and apparatus

    Publication Number: US11373116B2

    Publication Date: 2022-06-28

    Application Number: US15980496

    Filing Date: 2018-05-15

    Abstract: Embodiments of the present invention provide a model parameter fusion method and apparatus, which relate to the field of machine learning and are intended to reduce the amount of data transmitted and to enable dynamic adjustment of computing resources during model parameter fusion. The method includes: dividing, by an ith node, a model parameter of the ith node into N blocks, where the ith node is any node of N nodes that participate in a fusion, and 1≤i≤N≤M; receiving, by the ith node, ith model parameter blocks respectively sent by the other nodes of the N nodes; fusing, by the ith node, an ith model parameter block of the ith node and the ith model parameter blocks respectively sent by the other nodes, so as to obtain an ith general model parameter block; and distributing, by the ith node, the ith general model parameter block to the other nodes of the N nodes.
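
    The block-wise fusion can be simulated in a few lines. The sketch below uses element-wise averaging as the fusion function, which is an assumed choice; the abstract does not fix the fusion rule. It mimics node i gathering the ith block from every node, fusing it, and redistributing the result so each node can reassemble the full fused model.

        def split_into_blocks(params: list[float], n: int) -> list[list[float]]:
            size = (len(params) + n - 1) // n
            return [params[k * size:(k + 1) * size] for k in range(n)]

        def fuse_blocks(blocks: list[list[float]]) -> list[float]:
            """Element-wise average of one block received from every node (assumed fusion rule)."""
            return [sum(vals) / len(vals) for vals in zip(*blocks)]

        # Three nodes, each holding its own copy of a 6-element model parameter vector.
        node_params = [
            [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
            [2.0, 2.0, 2.0, 2.0, 2.0, 2.0],
            [3.0, 4.0, 1.0, 0.0, 5.0, 4.0],
        ]
        N = len(node_params)
        node_blocks = [split_into_blocks(p, N) for p in node_params]

        # Node i gathers the ith block from every node and fuses it ...
        general_blocks = [fuse_blocks([node_blocks[j][i] for j in range(N)]) for i in range(N)]

        # ... then distributes its general block to the others, so each node can
        # reassemble the full fused parameter vector.
        fused_model = [x for block in general_blocks for x in block]
        print(fused_model)   # ≈ [2.0, 2.67, 2.0, 2.0, 4.0, 4.0]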

    Capacity expansion method and apparatus

    Publication Number: US11216310B2

    Publication Date: 2022-01-04

    Application Number: US16523028

    Filing Date: 2019-07-26

    Abstract: A capacity expansion method includes obtaining a measured workload of a service of an application, obtaining an application model of the application, and obtaining a measured workload of each upper-level service of the service; determining a predicted workload of the service based on the measured workload of the service and the measured workload of each upper-level service of the service, and determining a first workload ratio corresponding to a first calling relationship; and determining a predicted workload of each lower-level service based on the predicted workload of the service and a second workload ratio corresponding to a second calling relationship.
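
    One plausible reading of the ratio-based prediction is sketched below: workload ratios are taken from measured values along the calling relationships and then used to propagate a predicted workload from a service down to its lower-level services. The forecast rule, the ratio definition, and the numbers are assumptions for the example, not details from the patent.

        def workload_ratio(caller_measured: float, callee_measured: float) -> float:
            """Ratio of callee workload to caller workload along one calling relationship."""
            return callee_measured / caller_measured

        # Measured workloads (e.g. requests per second).
        upper_measured = 200.0                              # upper-level service calling the service
        service_measured = 400.0                            # the service itself
        lower_measured = {"db": 800.0, "cache": 1200.0}     # lower-level services it calls

        # Assumed forecast: the service's measured workload is expected to grow by 50 %.
        service_predicted = service_measured * 1.5                      # 600.0

        # First calling relationship (upper-level -> service) and its workload ratio,
        # kept so the prediction could also be driven from upper-level measurements.
        first_ratio = workload_ratio(upper_measured, service_measured)  # 2.0

        # Second calling relationships (service -> lower-level) propagate the prediction.
        lower_predicted = {
            name: service_predicted * workload_ratio(service_measured, measured)
            for name, measured in lower_measured.items()
        }
        print(first_ratio, service_predicted, lower_predicted)
        # 2.0 600.0 {'db': 1200.0, 'cache': 1800.0}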
