Abstract:
A data processing method and apparatus are provided. In the method, after receiving a hardware thread reservation request, an operating system loads task code to a reserved hardware thread such that the reserved hardware thread subsequently executes the task code. Alternatively, in a process in which the operating system loads task code to a hardware thread for execution, when the hardware thread reads a flag of a small task code, the hardware thread loads the small task code to a reserved hardware thread for execution, without a need to create a thread for the task code corresponding to each task.
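The idea of a pre-reserved execution thread that runs task code without per-task thread creation can be illustrated with a minimal user-space sketch. This is not the patented hardware mechanism; `ReservedWorker`, `load_task`, and the queue-based hand-off are illustrative assumptions.

```python
import queue
import threading

class ReservedWorker:
    """A minimal sketch of a 'reserved' execution thread: one pre-created
    worker executes every submitted task, so no new thread is created per
    task. Names and structure are illustrative, not from the patent."""

    def __init__(self):
        self._tasks = queue.Queue()
        self._results = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()  # reserve the worker once, up front

    def _run(self):
        while True:
            task = self._tasks.get()
            if task is None:               # shutdown sentinel
                break
            self._results.put(task())      # execute the loaded task code

    def load_task(self, task):
        """Analogue of loading task code onto the reserved thread."""
        self._tasks.put(task)

    def get_result(self):
        return self._results.get()

    def shutdown(self):
        self._tasks.put(None)
        self._thread.join()
```

Submitting a small task such as `lambda: 2 + 3` reuses the same reserved worker each time, which is the cost the abstract says is avoided: creating one thread per task.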
Abstract:
A memory management method implemented by a requesting node includes sending first indication information used for indicating a length of memory required by the requesting node, and receiving second indication information used for indicating first remote memory provided to the requesting node by a target contributing node among at least one contributing node that can provide remote memory. The method also includes determining, from available virtual addresses, a first virtual address corresponding to the first remote memory, and, when first data whose pointer is within a range of the first virtual address needs to be read/written, sending a first data read/write instruction for the first data, where the first data read/write instruction includes third indication information used for indicating storage space, in the first remote memory, for storing the first data.
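The handshake above can be sketched as follows, under the assumption that the "third indication information" resolves a local pointer to a (node, remote address) pair. `RequestingNode`, the contributor tuples, and the base addresses are all illustrative, not the patented protocol.

```python
class RequestingNode:
    """Sketch of the described flow: request `length` bytes, accept a grant
    from a contributing node, map it to a local virtual range, and translate
    pointers in that range into remote storage indications."""

    def __init__(self, va_base=0x1000_0000):
        self.va_base = va_base          # next available virtual address
        self.mappings = []              # (va_start, length, node, remote_base)

    def request_memory(self, length, contributors):
        # "first indication information": the required length.
        # contributors: iterable of (node_id, remote_base, free_bytes).
        node, remote_base = next(
            (n, base) for n, base, free in contributors if free >= length)
        va_start = self.va_base         # first virtual address for this grant
        self.va_base += length
        self.mappings.append((va_start, length, node, remote_base))
        return va_start

    def build_rw_instruction(self, pointer):
        # "third indication information": storage space in the remote memory.
        for va_start, length, node, remote_base in self.mappings:
            if va_start <= pointer < va_start + length:
                return {"node": node,
                        "remote_address": remote_base + (pointer - va_start)}
        raise ValueError("pointer not in any mapped remote range")
```

The translation is a plain base-plus-offset calculation: the pointer's offset inside the local virtual range equals its offset inside the granted remote memory.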
Abstract:
A data caching method and a computer system are provided. In the method, when an access request misses and the cache needs to determine a to-be-replaced cache line, not only the historical access frequency of each cache line but also the type of memory corresponding to the cache line is considered. A cache line corresponding to the DRAM type may be preferentially replaced, which reduces the caching amount in the cache for data stored in a DRAM and relatively increases the caching amount for data stored in an NVM. For an access request for accessing data stored in the NVM, corresponding data can be found in the cache whenever possible, thereby reducing cases of reading data from the NVM. Thus, a delay in reading data from the NVM is reduced, and access efficiency is effectively improved.
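The victim-selection rule described above can be sketched in a few lines, assuming least-frequently-accessed as the concrete "historical access frequency" criterion (the abstract does not fix one). The function name and data layout are illustrative.

```python
def choose_victim(cache_lines):
    """Sketch of the described replacement policy: prefer evicting a
    DRAM-backed line; among the candidates, evict the one with the lowest
    historical access count. `cache_lines` maps tag -> (memory_type, count)."""
    dram = {t: c for t, (mem, c) in cache_lines.items() if mem == "DRAM"}
    # Fall back to all lines only if no DRAM-backed line exists.
    pool = dram if dram else {t: c for t, (mem, c) in cache_lines.items()}
    return min(pool, key=pool.get)
```

With this rule an NVM-backed line survives even when it is the coldest line in the set, which is exactly the bias toward keeping NVM data cached that the abstract describes.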
Abstract:
A cross-page prefetching method, apparatus, and system are disclosed, which can improve the prefetching hit ratio of a prefetching device and further improve the efficiency of memory access. The method includes: receiving an indication message, sent by a cache, that a physical address is missing, where the indication message carries the mapped-to first physical address and contiguity information of a first physical page to which the first physical address belongs; acquiring a prefetching address according to the first physical address and a step size stored in the prefetching device; and if the page number of the physical page to which the prefetching address belongs is different from the page number of the first physical page, and it is determined, according to the contiguity information of the first physical page, that the first physical page is contiguous, prefetching data at the prefetching address.
Abstract:
A method and an apparatus for accessing a hardware resource are provided. The method includes configuring permission for one or more privileged instructions that are used for hardware access such that when the privileged instructions are used by a user-mode application program, the application program can access a hardware resource without trapping into a kernel, and executing the privileged instructions that are encapsulated in a privileged application programming interface (API) called at the code level by the application program. A privileged instruction for direct access to a hardware resource is set and encapsulated into an API, which is deployed in user space in order to reduce system overheads for accessing the hardware resource and improve processing efficiency.
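A user-space analogue of this design can be sketched as follows. Real privileged instructions and kernel traps cannot be modelled faithfully here; `PrivilegedAPI`, `configure_permission`, and the `PermissionError` fallback are purely illustrative stand-ins for the configured-permission gate the abstract describes.

```python
class PrivilegedAPI:
    """Sketch of the described design: privileged operations are
    encapsulated behind an API, and permission is configured once so that
    permitted callers execute them directly (no trap into a kernel path)."""

    def __init__(self):
        self._permitted = set()

    def configure_permission(self, name):
        """One-time configuration step granting user-mode access."""
        self._permitted.add(name)

    def call(self, name, op):
        if name in self._permitted:
            return op()   # direct user-mode execution: the fast path
        raise PermissionError(f"{name} not configured for user-mode access")
```

The point of the pattern is that the permission check happens once at configuration time, so each subsequent call avoids the mode switch that a syscall-based access would pay.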
Abstract:
A multilevel cache-based data read/write method and a computer system are provided. The method includes acquiring a query address of a physical memory data block in which data is to be read/written; acquiring a cache location attribute of the physical memory data block; querying, according to the query address and in descending order of the levels of caches that can store the physical memory data block, where the levels are indicated by the cache location attribute, whether a cache is hit, until one cache is hit or all caches are missed; and, if one cache is hit, reading/writing the data at the query address of the physical memory data block in the hit cache, or, if all caches are missed, reading/writing the data at the query address of the physical memory data block in a memory.
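The lookup order can be sketched as below. The dictionary-based cache model and the descending numeric ordering of levels are assumptions for illustration; the abstract only fixes that the probe order comes from the cache location attribute.

```python
def read_data(query_addr, storable_levels, caches, memory):
    """Sketch of the described lookup: probe only the cache levels listed
    in the block's cache location attribute, in descending order of level;
    on a hit, read from that cache, otherwise fall back to memory.
    `caches` maps level -> {address: data}; `memory` maps address -> data."""
    for level in sorted(storable_levels, reverse=True):
        if query_addr in caches.get(level, {}):
            return caches[level][query_addr], level   # one cache is hit
    return memory[query_addr], None                   # all caches missed
```

Because the attribute restricts which levels are probed at all, a block that can never reside in some level skips that level's lookup entirely, rather than paying a guaranteed miss there.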
Abstract:
A memory system, a method for processing a memory access request, and a computer system are provided. The memory system includes a first memory and a second memory that are of different types and are separately configured to store operating data of a processor; a memory indexing table that stores the fetch address of each data unit block located in the first memory; and a buffer scheduler configured to receive a memory access request from a memory controller, determine whether the data unit block corresponding to the fetch address is stored in the first memory or the second memory, and complete the fetch operation of the memory access request in the determined memory. A memory access request may thus be completed in different types of memory, which is transparent to the operating system, does not cause a page fault, and can improve the memory access speed.
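The buffer scheduler's dispatch step can be sketched as a table lookup. The dictionary-backed memories and the set-based indexing table are illustrative data structures, not the patented hardware.

```python
class BufferScheduler:
    """Sketch of the described dispatch: a memory indexing table records
    the fetch addresses of data unit blocks held in the first memory;
    requests for indexed addresses are served from the first memory and
    all others from the second, with no involvement of the OS."""

    def __init__(self, first_memory, second_memory, index_table):
        self.first = first_memory     # e.g. the fast memory: {addr: data}
        self.second = second_memory   # e.g. the large memory: {addr: data}
        self.index = index_table      # set of fetch addresses in first memory

    def fetch(self, addr):
        """Complete the fetch operation in whichever memory holds addr."""
        if addr in self.index:
            return self.first[addr], "first"
        return self.second[addr], "second"
```

Because the decision is made below the memory controller, software sees a single address space, which is why the scheme is transparent to the operating system and avoids page faults.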
Abstract:
A method and an apparatus for constructing a file system in a key-value storage system are provided. According to the method for constructing a file system in a key-value storage system disclosed by the present invention, a directory number corresponding to the directory path of the directory at each level is acquired first; then, according to the directory number and the files stored in the directory at each level, the corresponding keys (Key) of the directories and files are constructed.
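The construction can be sketched as below. The sorted-order numbering and the `"<number>/<name>"` key format are assumptions for illustration; the abstract specifies only that keys are built from a directory number plus the stored files.

```python
def build_keys(directories):
    """Sketch of the described construction: assign each directory path a
    directory number, then build each file's key-value key from its
    directory's number and the file name. `directories` maps a directory
    path to the list of file names stored in it."""
    dir_numbers = {path: i for i, path in enumerate(sorted(directories))}
    keys = {}
    for path, files in directories.items():
        for name in files:
            # Full path -> key usable in the key-value store.
            keys[f"{path}/{name}"] = f"{dir_numbers[path]}/{name}"
    return keys
```

Replacing the variable-length directory path with a fixed directory number keeps keys short and makes files in the same directory share a common key prefix, which is the usual motivation for this kind of scheme.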