Abstract:
A system and method relates to detecting a hardware event, determining a first virtual memory address associated with the hardware event, wherein the first virtual memory address is associated with a first processing thread, identifying, using the first virtual memory address, an entry of a logical address table, the entry comprising a file descriptor and a file offset associated with a file, identifying a memory address table associated with the file descriptor, translating, using the memory address table, the file offset into a second virtual memory address associated with a second processing thread, and transmitting, to the second processing thread, a notification comprising the second virtual memory address.
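A minimal sketch of how such a two-step translation might look in software; the table layouts and field names below are hypothetical and only illustrate the lookup chain described above (first thread's virtual address → file descriptor and offset → second thread's virtual address), not the claimed implementation.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical entry of the logical address table: maps a range of the
 * first thread's virtual addresses to a (file descriptor, file offset). */
struct logical_entry {
    uintptr_t vaddr_start;
    size_t    length;
    int       fd;
    uint64_t  file_offset;
};

/* Hypothetical entry of a per-file memory address table: maps a file
 * offset range to the second thread's virtual mapping of that file. */
struct mapping_entry {
    int       fd;
    uint64_t  file_offset;
    size_t    length;
    uintptr_t peer_vaddr;   /* base of the second thread's mapping */
};

/* Translate a virtual address of the first thread into the equivalent
 * virtual address in the second thread's address space via the file. */
static int translate_to_peer(uintptr_t fault_vaddr,
                             const struct logical_entry *lt, size_t nlt,
                             const struct mapping_entry *mt, size_t nmt,
                             uintptr_t *peer_vaddr_out)
{
    for (size_t i = 0; i < nlt; i++) {
        if (fault_vaddr >= lt[i].vaddr_start &&
            fault_vaddr <  lt[i].vaddr_start + lt[i].length) {
            uint64_t off = lt[i].file_offset +
                           (fault_vaddr - lt[i].vaddr_start);
            for (size_t j = 0; j < nmt; j++) {
                if (mt[j].fd == lt[i].fd &&
                    off >= mt[j].file_offset &&
                    off <  mt[j].file_offset + mt[j].length) {
                    *peer_vaddr_out = mt[j].peer_vaddr +
                                      (off - mt[j].file_offset);
                    return 0;   /* caller notifies the second thread */
                }
            }
        }
    }
    return -1; /* no mapping found */
}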
Abstract:
A memory swap operation comprises writing information about a process in which a page fault occurred, into a temporary memory using a processor of a host, copying a page in which the page fault occurred, from a memory device recognized as a swap memory into a main memory of the host, and after completing the copying of the page, resuming the process in which the page fault occurred, using the information about the process, written in the temporary memory.
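A sketch of the swap-in flow under stated assumptions; the fault_record layout and the swap_device_read, map_page, and resume_process hooks are hypothetical stand-ins for the host's actual facilities.

#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Hypothetical record of the faulting process, written into temporary
 * memory while the page is copied in from the swap device. */
struct fault_record {
    int       pid;
    uintptr_t fault_vaddr;
    uint64_t  swap_slot;    /* location of the page on the swap device */
};

/* Hypothetical device/OS hooks. */
extern void swap_device_read(uint64_t slot, void *dst, size_t len);
extern void map_page(int pid, uintptr_t vaddr, void *page);
extern void resume_process(int pid, uintptr_t resume_vaddr);

/* Swap-in flow described above: save process info, copy the page from the
 * memory device recognized as swap into main memory, then resume. */
void handle_page_fault(struct fault_record *tmp, int pid,
                       uintptr_t fault_vaddr, uint64_t swap_slot,
                       void *main_memory_page)
{
    /* 1. Write information about the faulting process into temporary memory. */
    tmp->pid = pid;
    tmp->fault_vaddr = fault_vaddr;
    tmp->swap_slot = swap_slot;

    /* 2. Copy the faulting page from the swap device into main memory. */
    swap_device_read(tmp->swap_slot, main_memory_page, PAGE_SIZE);
    map_page(tmp->pid, tmp->fault_vaddr, main_memory_page);

    /* 3. Resume the process using the saved information. */
    resume_process(tmp->pid, tmp->fault_vaddr);
}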
Abstract:
Prefetching is permitted to cross from one physical memory page to another. More specifically, if a stream of access requests contains virtual addresses that map to more than one physical memory page, then prefetching can continue from a first physical memory page to a second physical memory page. The prefetching advantageously continues to the second physical memory page based on the confidence level and prefetch distance established while the first physical memory page was the target of the access requests.
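A minimal software model of the idea, assuming 4 KiB pages; the prefetch_stream fields and the same_virtual_stream signal are hypothetical. The point it illustrates is that confidence and distance built on the first physical page are carried across the boundary rather than reset.

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12   /* assumed 4 KiB physical pages */

/* Hypothetical per-stream prefetcher state. */
struct prefetch_stream {
    uint64_t last_paddr;   /* last physical address seen for this stream */
    int      confidence;   /* confidence accumulated on the current stream */
    int      distance;     /* how far ahead to prefetch, in cache lines */
};

/* On a new demand access: if the access moves to a different physical page
 * but belongs to the same virtual stream, keep the confidence and distance
 * so prefetching continues onto the second physical page. */
void prefetch_update(struct prefetch_stream *s, uint64_t paddr,
                     bool same_virtual_stream)
{
    bool crossed_page =
        (paddr >> PAGE_SHIFT) != (s->last_paddr >> PAGE_SHIFT);

    if (crossed_page && !same_virtual_stream) {
        /* Unrelated page: start over with no confidence. */
        s->confidence = 0;
        s->distance   = 1;
    }
    /* Same virtual stream: state established on the first physical page
     * is retained across the page boundary. */

    s->last_paddr = paddr;
    if (s->confidence < 8)
        s->confidence++;
}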
Abstract:
An apparatus and method to translate virtual addresses to physical addresses in a base plus offset addressing mode are disclosed. In an embodiment, a method includes performing a first translation lookaside buffer (TLB) lookup based on a base address value to retrieve a speculative physical address. While performing the TLB lookup based on the base address value, the base address value is added to an offset value to generate an effective address value. The method also includes performing a comparison of the base address value and the effective address value based on a variable page size to determine whether the speculative physical address corresponds to the effective address.
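A sketch of the validity check only, under the assumption that the TLB entry reports its (variable) page size as a shift amount; the tlb_result struct and helper names are hypothetical. The speculative physical address from the base-address lookup is usable only if the base and effective addresses fall in the same page of that size.

#include <stdint.h>
#include <stdbool.h>

/* Result of a hypothetical TLB lookup keyed by the base address. */
struct tlb_result {
    uint64_t phys_page;   /* physical page number */
    unsigned page_shift;  /* log2 of the page size (variable page size) */
};

/* The speculative physical address is valid for base + offset only if the
 * base and effective addresses lie in the same page of the reported size. */
bool speculative_pa_valid(uint64_t base, uint64_t offset,
                          const struct tlb_result *t)
{
    uint64_t effective = base + offset;   /* computed in parallel with the lookup */
    return (base >> t->page_shift) == (effective >> t->page_shift);
}

/* If valid, the physical address is the TLB page joined with the effective
 * address's in-page offset; otherwise a lookup with the effective address
 * is required. */
uint64_t speculative_pa(uint64_t base, uint64_t offset,
                        const struct tlb_result *t)
{
    uint64_t effective = base + offset;
    uint64_t page_mask = ((uint64_t)1 << t->page_shift) - 1;
    return (t->phys_page << t->page_shift) | (effective & page_mask);
}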
Abstract:
A fetch section of a processor comprises an instruction cache and a pipeline of several stages for obtaining instructions. Instructions may cross cache line boundaries. The pipeline stages process two addresses to recover a complete boundary-crossing instruction. During such processing, if the second piece of the instruction is not in the cache, the fetch with regard to the first line is invalidated and recycled. On this first pass, processing of the address for the second part of the instruction is treated as a pre-fetch request to load instruction data to the cache from higher-level memory, without passing any of that data to the later stages of the processor. When the first line address passes through the fetch stages again, the second line address follows in the normal order, and both pieces of the instruction can be fetched from the cache and combined in the normal manner.
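A control-flow sketch of the recycle path only, modeled in C for clarity; the icache_hit, icache_prefetch, invalidate_fetch, recycle_fetch, and deliver_crossing_instruction hooks are hypothetical abstractions of the fetch hardware.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hooks into the fetch section. */
extern bool icache_hit(uint64_t line_addr);
extern void icache_prefetch(uint64_t line_addr);  /* fill from higher-level memory */
extern void invalidate_fetch(uint64_t line_addr);
extern void recycle_fetch(uint64_t line_addr);    /* send the address around again */
extern void deliver_crossing_instruction(uint64_t first_line, uint64_t second_line);

/* Handle an instruction that crosses a cache line boundary. */
void fetch_crossing(uint64_t first_line, uint64_t second_line)
{
    if (!icache_hit(second_line)) {
        /* Second piece missing: invalidate and recycle the first-line fetch,
         * and treat the second line purely as a prefetch into the cache. */
        invalidate_fetch(first_line);
        icache_prefetch(second_line);
        recycle_fetch(first_line);
        return;
    }
    /* Both lines present: fetch both pieces and combine them normally. */
    deliver_crossing_instruction(first_line, second_line);
}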
Abstract:
An end of a queue or a page-crossing within a queue is detected. A virtual memory address for the head of the queue or for the next queue page is pre-translated into a physical memory address while the last entry in the queue or in the current queue page is being serviced.
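A minimal sketch of a consumer that hides translation latency this way, assuming 4 KiB pages; the queue layout and the translate hook are hypothetical. The next address is pre-translated while the last entry of the queue or of the current page is still being serviced.

#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096u

/* Hypothetical translation hook: resolves a virtual address now so the
 * physical address is ready before it is needed. */
extern uint64_t translate(uintptr_t vaddr);

struct queue {
    uintptr_t base;        /* virtual address of the queue */
    size_t    entry_size;
    size_t    num_entries;
    size_t    head;        /* index of the entry being serviced */
    uint64_t  next_paddr;  /* pre-translated address for the next page/head */
};

void service_entry(struct queue *q)
{
    uintptr_t entry_vaddr = q->base + q->head * q->entry_size;
    uintptr_t next_vaddr  = q->base +
        ((q->head + 1) % q->num_entries) * q->entry_size;

    /* While the current entry is being serviced, pre-translate the next
     * address if it wraps to the queue head or crosses into a new page. */
    int wraps      = (q->head + 1 == q->num_entries);
    int crosses_pg = (next_vaddr / PAGE_SIZE) != (entry_vaddr / PAGE_SIZE);
    if (wraps || crosses_pg)
        q->next_paddr = translate(next_vaddr);

    /* ... service the entry at entry_vaddr ... */
    q->head = (q->head + 1) % q->num_entries;
}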
Abstract:
A data processing system having a virtual memory comprising pages which are relocatable between various levels of physical storage. The virtual memory primarily contains information comprising instructions arranged sequentially so that once a virtual address has been translated to the physical address in high speed memory, the physical address can be incremented to fetch the next sequential information. Branch instructions may branch to the address of information on the same or another page. The branch instruction includes an indicator as to whether the branch address is a physical address on the same or another page or a virtual address on another page. Only when a virtual address is encountered is the relocation table employed to convert the virtual address to a physical address and to load the page if necessary.
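A sketch of the branch-resolution decision only; the branch descriptor and the relocation_lookup and load_page_if_absent hooks are hypothetical. It illustrates that the relocation table is consulted only when the indicator marks the target as a virtual address on another page.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical branch descriptor: the indicator says whether the target is
 * already a physical address (same or already-resident page) or a virtual
 * address on another page. */
struct branch {
    bool     target_is_virtual;
    uint64_t target;            /* physical or virtual, per the indicator */
};

/* Hypothetical relocation machinery. */
extern uint64_t relocation_lookup(uint64_t vaddr);   /* virtual -> physical */
extern void     load_page_if_absent(uint64_t vaddr); /* bring the page in */

/* Resolve the next fetch address: a physical target is used directly, just
 * like an incremented sequential address; only a virtual target needs the
 * relocation table and, if necessary, a page load. */
uint64_t resolve_branch(const struct branch *b)
{
    if (!b->target_is_virtual)
        return b->target;              /* already physical: no translation */

    load_page_if_absent(b->target);    /* load the page if necessary */
    return relocation_lookup(b->target);
}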
Abstract:
In a microprogrammed processor, a pair of register means and an associative store are arranged to eliminate the need to translate, for each microinstruction, a logical address to a real address to access main storage. Translation is required only once for each program or machine level (macro) instruction. The real addresses of the first bytes of the current instruction and its operand(s) are stored in a first one of the register means and are normally incremented to access the remainder of the instruction and operands byte-by-byte. In addition, the first register means and incrementer can be used to access sequentially stored instructions in a program sequence without address translation. When a page boundary is crossed during said incrementing, the logical page address of the current instruction or operand (which is at the boundary) is read from the second register means and is incremented to form the logical address of the next sequential page. This new logical address is searched in the associative array. If a match occurs, the new logical address is stored in the second register means, and the corresponding real address is stored in the first register means. This hardware translate means significantly reduces translate time.
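A software model of the boundary-crossing path only, assuming a 2 KiB page for illustration; the register pair, assoc_lookup, and full_translate names are hypothetical. Within a page the real address is simply incremented; only a page crossing consults the associative store.

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE 2048u   /* assumed page size for illustration */

/* First register means: real (physical) byte address, incremented to walk
 * the instruction and operand bytes. Second register means: logical address
 * of the current page. */
struct addr_regs {
    uint32_t real_addr;
    uint32_t logical_page_addr;
};

/* Hypothetical associative store lookup: logical page -> real page base. */
extern bool     assoc_lookup(uint32_t logical_page, uint32_t *real_page_base);
extern uint32_t full_translate(uint32_t logical_page);   /* slow path */

/* Advance to the next byte without retranslating; only a page-boundary
 * crossing requires a lookup. */
void next_byte(struct addr_regs *r)
{
    r->real_addr++;

    if ((r->real_addr % PAGE_SIZE) != 0)
        return;                       /* still inside the page: no translation */

    /* Crossed a page boundary: form the logical address of the next
     * sequential page and search the associative array. */
    uint32_t next_logical = r->logical_page_addr + PAGE_SIZE;
    uint32_t real_base;

    r->logical_page_addr = next_logical;
    if (assoc_lookup(next_logical, &real_base))
        r->real_addr = real_base;             /* first byte of the new page */
    else
        r->real_addr = full_translate(next_logical);
}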
Abstract:
Methods that may be performed by a host controller of a computing device for synchronizing logical-to-physical (L2P) tables before entering a hibernate mode are disclosed. Embodiment methods may include determining whether a first L2P table stored in a dynamic random-access memory (DRAM) communicatively connected to the host controller is out of synchronization with a second L2P table stored in a static random-access memory (SRAM) of a universal flash storage (UFS) device communicatively connected to the host controller via a link. If the first and second L2P tables are out of sync, the host controller may retrieve at least one modified L2P map entry of the second L2P table from the UFS device when the UFS device is configured to enter a hibernate mode, and update the first L2P table with the at least one modified L2P map entry before the link and the UFS device enter the hibernate mode.
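A minimal host-side sketch of the synchronization step; the ufs_l2p_dirty, ufs_read_modified_entries, and ufs_enter_hibernate hooks and the flat DRAM table layout are hypothetical, standing in for the actual host controller and UFS interfaces.

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* One hypothetical L2P map entry: logical block address -> physical address. */
struct l2p_entry {
    uint32_t lba;
    uint32_t ppa;
};

/* Hypothetical host/UFS hooks. */
extern bool   ufs_l2p_dirty(void);    /* first and second tables out of sync? */
extern size_t ufs_read_modified_entries(struct l2p_entry *buf, size_t max);
extern void   ufs_enter_hibernate(void);

/* Bring the host's DRAM L2P table up to date with the device's SRAM L2P
 * table before the link and the UFS device enter hibernate. */
void sync_l2p_before_hibernate(uint32_t *dram_l2p, size_t table_len)
{
    if (ufs_l2p_dirty()) {
        struct l2p_entry modified[64];   /* arbitrary batch size */
        size_t n = ufs_read_modified_entries(modified, 64);
        for (size_t i = 0; i < n; i++)
            if (modified[i].lba < table_len)
                dram_l2p[modified[i].lba] = modified[i].ppa;
    }
    ufs_enter_hibernate();
}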