Abstract:
An access request processing apparatus is disclosed. The apparatus comprises a processor and a non-volatile memory (NVM). When receiving a write request, the processor determines an object cache page according to the write request. After determining that the NVM stores a log chain of the object cache page, the processor inserts, into the log chain of the object cache page, a second data node recording information about a second log data chunk. The log chain already includes a first data node recording information about a first log data chunk. The second log data chunk is at least part of the to-be-written data of the write request. Then, the processor sets, in the first data node, data that is in the first log data chunk and that overlaps the second log data chunk to invalid data.
Abstract:
A mapping processing method and apparatus for a cache address are disclosed. The method includes acquiring a physical address corresponding to an access address sent by a processing core, where the physical address includes a physical page number (PPN) and a page offset, and mapping the physical address to a Cache address, where the Cache address includes, in sequence, a Cache set index 1, a Cache tag, a Cache set index 2, and a Cache block offset. The Cache set index 1, holding the high-order bits, and the Cache set index 2, holding the low-order bits, together form a Cache set index, and the Cache set index 1 falls within the range of the PPN. Because some bits of the PPN of a huge page are mapped to the set index of the Cache, those bits can be colored by the operating system.
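As a rough illustration of the address split described in this abstract, the C sketch below maps a physical address into the [Cache set index 1 | Cache tag | Cache set index 2 | Cache block offset] layout. The field widths (48-bit physical address, 64-byte blocks, a 12-bit set index split into 4 high-order and 8 low-order bits) are assumptions chosen for the example, not values from the abstract.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative field widths (assumptions, not from the abstract). */
#define BLOCK_OFFSET_BITS 6    /* 64-byte cache lines                      */
#define INDEX2_BITS       8    /* low-order part of the set index          */
#define INDEX1_BITS       4    /* high-order part, taken from PPN bits     */
#define TAG_BITS          (48 - BLOCK_OFFSET_BITS - INDEX2_BITS - INDEX1_BITS)

struct cache_addr {
    uint64_t set_index;        /* index 1 (high bits) concatenated with index 2 */
    uint64_t tag;
    uint64_t block_offset;
};

/* Map a physical address to the split-index layout
 * [ index1 | tag | index2 | block offset ] described in the abstract. */
static struct cache_addr map_physical(uint64_t paddr)
{
    struct cache_addr c;
    uint64_t p = paddr;

    c.block_offset = p & ((1ULL << BLOCK_OFFSET_BITS) - 1);
    p >>= BLOCK_OFFSET_BITS;

    uint64_t index2 = p & ((1ULL << INDEX2_BITS) - 1);
    p >>= INDEX2_BITS;

    c.tag = p & ((1ULL << TAG_BITS) - 1);
    p >>= TAG_BITS;

    uint64_t index1 = p & ((1ULL << INDEX1_BITS) - 1);

    /* index1 lies inside the PPN, so the OS can color it even for huge pages. */
    c.set_index = (index1 << INDEX2_BITS) | index2;
    return c;
}

int main(void)
{
    struct cache_addr c = map_physical(0x0000123456789ABCULL);
    printf("set=%llu tag=0x%llx off=%llu\n",
           (unsigned long long)c.set_index,
           (unsigned long long)c.tag,
           (unsigned long long)c.block_offset);
    return 0;
}
```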
Abstract:
An access request processing method and apparatus, and a computer system are disclosed. The computer system includes a processor and a non-volatile memory (NVM). When receiving a write request, the processor determines an object cache page according to the write request. After determining that the NVM stores a log chain of the object cache page, the processor inserts, into the log chain of the object cache page, a second data node recording information about a second log data chunk. The log chain already includes a first data node recording information about a first log data chunk. The second log data chunk is at least part of the to-be-written data of the write request. Then, the processor sets, in the first data node, data that is in the first log data chunk and that overlaps the second log data chunk to invalid data.
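To make the log-chain mechanism in this abstract concrete, here is a minimal C sketch. It is an assumption-laden in-memory model rather than the patented NVM layout: each cache page has a chain of data nodes, and inserting a node for newly written data marks the overlapping bytes of earlier nodes as invalid (byte-granular validity flags are an assumed representation).

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One data node of a cache page's log chain (sketch representation). */
struct log_node {
    size_t   offset;              /* offset of the log data chunk in the cache page */
    size_t   length;              /* length of the log data chunk                   */
    uint8_t *data;                /* the log data chunk                             */
    bool    *valid;               /* valid[i] == false: byte i has been superseded  */
    struct log_node *next;
};

struct log_chain {
    struct log_node *head;        /* oldest data node */
    struct log_node *tail;        /* newest data node */
};

/* Append a data node for newly written data and mark the overlapping bytes
 * of every earlier node as invalid. */
static void log_chain_insert(struct log_chain *lc, const uint8_t *buf,
                             size_t offset, size_t length)
{
    struct log_node *n = calloc(1, sizeof(*n));
    n->offset = offset;
    n->length = length;
    n->data   = malloc(length);
    n->valid  = malloc(length * sizeof(bool));
    memcpy(n->data, buf, length);
    for (size_t i = 0; i < length; i++)
        n->valid[i] = true;

    for (struct log_node *old = lc->head; old != NULL; old = old->next) {
        size_t lo = offset > old->offset ? offset : old->offset;
        size_t hi = offset + length < old->offset + old->length
                        ? offset + length : old->offset + old->length;
        for (size_t pos = lo; pos < hi; pos++)   /* empty overlap: loop is skipped */
            old->valid[pos - old->offset] = false;
    }

    if (lc->tail) lc->tail->next = n; else lc->head = n;
    lc->tail = n;
}

int main(void)
{
    struct log_chain lc = { NULL, NULL };
    uint8_t a[8], b[4];
    memset(a, 'A', sizeof(a));
    memset(b, 'B', sizeof(b));
    log_chain_insert(&lc, a, 0, sizeof(a));      /* first data node                 */
    log_chain_insert(&lc, b, 4, sizeof(b));      /* second node, overlaps bytes 4-7 */
    for (struct log_node *n = lc.head; n; n = n->next)
        for (size_t i = 0; i < n->length; i++)
            printf("page offset %zu: '%c' (%s)\n", n->offset + i, n->data[i],
                   n->valid[i] ? "valid" : "invalid");
    return 0;
}
```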
Abstract:
A message-based memory access apparatus and an access method thereof are disclosed. The message-based memory access apparatus includes: a message-based command bus, configured to transmit a message-based memory access instruction generated by the CPU to instruct a memory system to perform a corresponding operation; a message-based memory controller, configured to package a CPU request into a message packet, send the packet to a storage module, parse a message packet returned by the storage module, and return data to the CPU; a message channel, configured to transmit a request message packet and a response message packet; and the storage module, which includes a buffer scheduler and is configured to receive the request packet from the message-based memory controller and process the corresponding request.
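The C sketch below models one possible shape of the request and response message packets and the controller-side packaging and parsing. All field names and sizes are assumptions for illustration; the abstract does not define a packet format, and the buffer scheduler inside the storage module is omitted.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum msg_op { MSG_READ = 0, MSG_WRITE = 1 };

/* Illustrative packet layouts (assumed, not specified by the abstract). */
struct msg_request {
    uint16_t msg_id;       /* matches the response to the originating request */
    uint8_t  op;           /* MSG_READ or MSG_WRITE                           */
    uint8_t  length;       /* payload length in bytes                         */
    uint64_t addr;         /* memory address requested by the CPU             */
    uint8_t  payload[64];  /* write data, if any                              */
};

struct msg_response {
    uint16_t msg_id;
    uint8_t  status;       /* 0 = success                                     */
    uint8_t  length;
    uint8_t  payload[64];  /* read data returned by the storage module        */
};

/* Memory-controller side: package a CPU read request into a message packet. */
static struct msg_request package_read(uint16_t id, uint64_t addr, uint8_t len)
{
    struct msg_request req = {0};
    req.msg_id = id;
    req.op     = MSG_READ;
    req.length = len;
    req.addr   = addr;
    return req;
}

/* Memory-controller side: parse a response packet and copy data back for the CPU. */
static int parse_response(const struct msg_response *rsp, uint8_t *out, uint8_t cap)
{
    if (rsp->status != 0 || rsp->length > cap)
        return -1;
    memcpy(out, rsp->payload, rsp->length);
    return rsp->length;
}

/* Storage-module side (buffer scheduler omitted): serve a request packet. */
static struct msg_response serve_request(const struct msg_request *req)
{
    struct msg_response rsp = {0};
    rsp.msg_id = req->msg_id;
    if (req->length > sizeof(rsp.payload)) { rsp.status = 1; return rsp; }
    rsp.length = req->length;
    memset(rsp.payload, 0xAB, req->length);    /* fake memory contents */
    return rsp;
}

int main(void)
{
    uint8_t buf[64];
    struct msg_request  req = package_read(1, 0x1000, 16);
    struct msg_response rsp = serve_request(&req);   /* travels over the message channel */
    int n = parse_response(&rsp, buf, sizeof(buf));
    printf("got %d bytes back for message %d\n", n, req.msg_id);
    return 0;
}
```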
Abstract:
A memory monitoring method and a computing system are disclosed. The computing system includes a processor, a memory, and a monitor. The monitor obtains memory unit access information and process information of the computing system. The memory unit access information includes the number of access times of each memory unit of the memory. The process information includes information about a mapping relationship between a virtual address and a physical address of each memory unit accessed by the currently running process. After generating monitoring information, which includes the frequency at which the currently running process accesses each memory unit, according to the memory unit access information and the process information, the monitor feeds the monitoring information back to the processor. Thus, the processor can perform memory management according to the monitoring information.
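A minimal C sketch of how the monitor might join the two inputs: per-unit access counts and the running process's virtual-to-physical mapping. Here "frequency" is computed as each unit's share of the process's observed accesses, which is only one possible reading of the abstract; the record formats are likewise assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative record formats (assumed, not specified by the abstract). */
struct unit_access {          /* one entry per memory unit                         */
    uint64_t phys_unit;       /* physical unit number                              */
    uint64_t access_count;    /* number of accesses observed by the monitor        */
};

struct va_pa_map {            /* process information: VA -> PA mapping of a unit    */
    uint64_t virt_unit;
    uint64_t phys_unit;
};

struct monitor_entry {        /* monitoring information fed back to the processor   */
    uint64_t virt_unit;
    double   frequency;       /* share of the process's accesses hitting this unit  */
};

/* Join the access counters with the running process's mappings and turn the
 * counts into per-unit access frequencies. */
static size_t build_monitor_info(const struct unit_access *acc, size_t n_acc,
                                 const struct va_pa_map *map, size_t n_map,
                                 struct monitor_entry *out)
{
    uint64_t total = 0;
    size_t n_out = 0;

    for (size_t m = 0; m < n_map; m++)
        for (size_t a = 0; a < n_acc; a++)
            if (acc[a].phys_unit == map[m].phys_unit)
                total += acc[a].access_count;

    for (size_t m = 0; m < n_map; m++)
        for (size_t a = 0; a < n_acc; a++)
            if (acc[a].phys_unit == map[m].phys_unit) {
                out[n_out].virt_unit = map[m].virt_unit;
                out[n_out].frequency =
                    total ? (double)acc[a].access_count / (double)total : 0.0;
                n_out++;
            }
    return n_out;
}

int main(void)
{
    struct unit_access acc[] = { { 100, 30 }, { 101, 10 }, { 200, 60 } };
    struct va_pa_map   map[] = { { 7, 100 }, { 8, 101 } };  /* units of the running process */
    struct monitor_entry out[2];
    size_t n = build_monitor_info(acc, 3, map, 2, out);
    for (size_t i = 0; i < n; i++)
        printf("virtual unit %llu: frequency %.2f\n",
               (unsigned long long)out[i].virt_unit, out[i].frequency);
    return 0;
}
```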
Abstract:
A method and an apparatus for pushing memory data from a memory to a push destination storage, which stores data prefetched by a central processing unit (CPU) in a computing system, are disclosed. In the method, a memory controller of the computing system periodically generates a push command according to a push period. The memory controller then acquires a push parameter of to-be-pushed data according to the push command and sends at least one memory access request to the memory according to the push parameter, where the at least one memory access request is used to request the to-be-pushed data from the memory. After receiving the to-be-pushed data that the memory sends according to the at least one memory access request, the memory controller buffers the to-be-pushed data in a data buffer and pushes it from the data buffer to the push destination storage.
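The C sketch below walks through one push period with the memory and the push destination storage replaced by stubs. The parameter names (start address, length, per-request size) and the fixed-count "period" loop are assumptions chosen for illustration.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical push parameters derived from a push command (names are assumed). */
struct push_param {
    uint64_t start_addr;     /* first address of the to-be-pushed data        */
    size_t   length;         /* total bytes to push                           */
    size_t   request_size;   /* bytes fetched per memory access request       */
};

static uint8_t data_buffer[4096];    /* controller-side buffer for fetched data */

/* Stand-ins for the memory and the push destination storage. */
static void memory_read(uint64_t addr, uint8_t *dst, size_t len)
{
    memset(dst, (int)(addr & 0xFF), len);          /* fake data for the sketch */
}
static void push_to_destination(const uint8_t *src, size_t len)
{
    printf("pushed %zu bytes to destination storage\n", len);
    (void)src;
}

/* One push period: derive the parameters from the push command, fetch the data
 * with one or more memory access requests, buffer it, then push it onward. */
static void run_push_period(const struct push_param *p)
{
    size_t done = 0;
    while (done < p->length) {
        size_t chunk = p->length - done;
        if (chunk > p->request_size) chunk = p->request_size;
        memory_read(p->start_addr + done, data_buffer + done, chunk);
        done += chunk;
    }
    push_to_destination(data_buffer, p->length);
}

int main(void)
{
    struct push_param p = { .start_addr = 0x1000, .length = 256, .request_size = 64 };
    for (int period = 0; period < 3; period++)     /* "periodically", simplified */
        run_push_period(&p);
    return 0;
}
```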
Abstract:
A memory management method and a device are disclosed. The method includes: receiving a memory access request, where the memory access request carries a virtual address; determining a page fault type of the virtual address if no page table entry corresponding to the virtual address is found in a translation lookaside buffer (TLB) or in a memory; allocating a corresponding page to the virtual address if the page fault type of the virtual address is a blank-page-caused page fault, where a blank-page-caused page fault means that no corresponding page has been allocated to the virtual address; and updating the page table entry corresponding to the virtual address to the memory and the TLB. When a blank-page-caused page fault occurs, the memory manager does not trigger page fault processing but directly allocates a corresponding page to the virtual address. Therefore, the number of page fault occurrences is reduced, thereby improving memory management efficiency.
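A minimal C sketch of the translation path described above, assuming a toy single-level page table and a small TLB: when neither holds an entry and the fault is blank-page-caused, a page is allocated directly and both structures are updated, instead of raising a page fault. Other fault types are not modelled.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define NUM_VPAGES 1024                  /* toy virtual address space of 1024 pages */
#define TLB_SIZE   16

static uint64_t page_table[NUM_VPAGES];  /* 0 means "no page table entry" here      */
static struct { uint64_t vpn, pfn; bool valid; } tlb[TLB_SIZE];
static uint64_t next_free_frame = 1;     /* trivial frame allocator (assumption)    */

static bool tlb_lookup(uint64_t vpn, uint64_t *pfn)
{
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) { *pfn = tlb[i].pfn; return true; }
    return false;
}

static void tlb_fill(uint64_t vpn, uint64_t pfn)
{
    static int slot = 0;                 /* round-robin replacement */
    tlb[slot].vpn = vpn;
    tlb[slot].pfn = pfn;
    tlb[slot].valid = true;
    slot = (slot + 1) % TLB_SIZE;
}

/* Translate a virtual address. On a blank-page-caused page fault (no page
 * allocated yet), allocate a page directly and update the page table and the
 * TLB, without raising a page fault. */
static uint64_t translate(uint64_t vaddr)
{
    uint64_t vpn = vaddr >> PAGE_SHIFT, pfn;
    uint64_t off = vaddr & ((1ULL << PAGE_SHIFT) - 1);

    if (tlb_lookup(vpn, &pfn))
        return (pfn << PAGE_SHIFT) | off;

    if (page_table[vpn] == 0)            /* blank-page-caused page fault */
        page_table[vpn] = next_free_frame++;

    pfn = page_table[vpn];
    tlb_fill(vpn, pfn);
    return (pfn << PAGE_SHIFT) | off;
}

int main(void)
{
    printf("0x%llx\n", (unsigned long long)translate((3ULL << PAGE_SHIFT) | 0x10));
    printf("0x%llx\n", (unsigned long long)translate((3ULL << PAGE_SHIFT) | 0x20));
    return 0;
}
```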
Abstract:
A memory access processing method based on memory chip interconnection, a memory chip, and a system are disclosed, which relate to the field of electronic devices and can shorten the time delay in processing a memory access request and improve the utilization of system bandwidth. The method of the present invention includes: receiving, by a first memory chip, a memory access request; and if the first memory chip is not the target memory chip corresponding to the memory access request, sending, according to a preconfigured routing rule, the memory access request to a next memory chip connected to the first memory chip, until the target memory chip corresponding to the memory access request is determined. Embodiments of the present invention are mainly used in the process of processing a memory access request.
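As an illustration of the forwarding behaviour, the C sketch below models daisy-chained memory chips where each chip either serves a request whose address falls in its range or forwards it to the next chip along a preconfigured route. The topology and the address-range routing rule are assumptions for the example.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of daisy-chained memory chips. */
struct mem_chip {
    int      id;
    uint64_t base, size;              /* address range served by this chip       */
    struct mem_chip *next;            /* preconfigured route to the next chip    */
};

/* Deliver a memory access request: each chip either serves it or forwards it
 * along the preconfigured route until the target chip is reached. */
static int route_request(struct mem_chip *first, uint64_t addr)
{
    for (struct mem_chip *c = first; c != NULL; c = c->next) {
        if (addr >= c->base && addr < c->base + c->size) {
            printf("chip %d serves address 0x%llx\n", c->id, (unsigned long long)addr);
            return c->id;
        }
        printf("chip %d forwards request for 0x%llx\n", c->id, (unsigned long long)addr);
    }
    return -1;                        /* no chip owns this address */
}

int main(void)
{
    struct mem_chip c2 = { 2, 0x200000, 0x100000, NULL };
    struct mem_chip c1 = { 1, 0x100000, 0x100000, &c2 };
    struct mem_chip c0 = { 0, 0x000000, 0x100000, &c1 };
    route_request(&c0, 0x250000);     /* forwarded twice, served by chip 2 */
    return 0;
}
```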