Abstract:
A data processing system includes at least one system processor, chipset core logic, main memory to store computer software and data including operating system software, and a graphics address remapping table (GART). The chipset logic operates on first-sized real memory pages, while the operating system operates on larger, second-sized virtual memory pages. In an embodiment, GART driver software maps each virtual page to Z contiguous or non-contiguous real pages by filling the GART with Z entries per virtual page, where Z is the rounded integer number of first-sized pages per second-sized page. In another embodiment, an address translation function converts a target address issuing from a processor, corresponding to an address within a virtual page, into a second address corresponding to the base address of a real page in main memory. Also described are an integrated circuit and a computer-readable medium to map memory pages of disparate sizes.
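A minimal sketch of the GART-fill idea in C, assuming illustrative page sizes of 4 KiB (chipset) and 16 KiB (operating system) and a flat array standing in for the GART; the function and variable names are not from the source:

```c
/* Illustrative GART fill: each OS virtual page of VIRT_PAGE_SIZE bytes is
 * backed by Z chipset-sized real pages, so the driver writes Z entries
 * (one real-page base address each) per virtual page. */
#include <stdint.h>
#include <stdio.h>

#define REAL_PAGE_SIZE 4096u                 /* first-sized (chipset) page */
#define VIRT_PAGE_SIZE 16384u                /* second-sized (OS) page     */
#define Z ((VIRT_PAGE_SIZE + REAL_PAGE_SIZE - 1) / REAL_PAGE_SIZE) /* rounded */

/* Fill the Z GART entries for one virtual page; the real-page base
 * addresses may be contiguous or scattered. */
static void gart_map_virtual_page(uint64_t *gart, unsigned virt_page_index,
                                  const uint64_t real_bases[Z])
{
    for (unsigned i = 0; i < Z; i++)
        gart[virt_page_index * Z + i] = real_bases[i];
}

int main(void)
{
    uint64_t gart[4 * Z] = {0};                                 /* 4 virtual pages */
    uint64_t scattered[Z] = {0x10000, 0x4000, 0x9000, 0x2000};  /* non-contiguous  */

    gart_map_virtual_page(gart, 1, scattered);

    /* Address translation: a target address inside virtual page 1 becomes
     * the base address of the selected real page plus the page offset. */
    uint64_t target = 1 * VIRT_PAGE_SIZE + 0x1234;
    uint64_t real   = gart[target / REAL_PAGE_SIZE] + target % REAL_PAGE_SIZE;
    printf("0x%llx -> 0x%llx\n", (unsigned long long)target,
           (unsigned long long)real);
    return 0;
}
```

Because the virtual page size in this sketch is an exact multiple of the real page size, dividing the target address by the real page size indexes straight into the table; the entry supplies the real-page base and the low-order bits supply the offset.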
Abstract:
A computer system supports virtual memory and a paging mechanism. When a new process is created, it occupies one or more memory regions. In one embodiment, at least a first memory region occupied by the process at a first virtual address has predefined, fixed page characteristics (for example, page size), and these may not be optimal for the performance of the process. To address this, a routine in a shared library is invoked to copy a component of the process from the first memory region into a second memory region. The second memory region either has different page characteristics from the first memory region (for example, a different page size) or is modifiable to have such different page characteristics. The second memory region is then reallocated in virtual memory so that it replaces the first memory region at the first virtual address. As a result, at least one component of the process can operate with a more suitable page characteristic (such as page size), leading to improved performance.
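As a hedged, Linux-specific sketch of this copy-and-replace pattern (not the shared-library routine itself), one might stage the data in a second anonymous mapping, request larger page characteristics for it with madvise(MADV_HUGEPAGE), and then move it over the original virtual address with mremap; alignment requirements, error handling, and the huge-page request being only a hint are all simplified here:

```c
/* Linux-specific sketch (assumptions: a page-aligned first region, length a
 * multiple of the page size).  The data is copied into a second anonymous
 * mapping for which larger pages are requested, and that mapping is then
 * moved onto the original virtual address, replacing the first region. */
#define _GNU_SOURCE
#include <assert.h>
#include <string.h>
#include <sys/mman.h>

static void *replace_with_larger_pages(void *first, size_t len)
{
    /* Second region with different (requested) page characteristics. */
    void *second = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    assert(second != MAP_FAILED);
    madvise(second, len, MADV_HUGEPAGE);        /* hint only; may be ignored */

    /* Copy the component out of the first region. */
    memcpy(second, first, len);

    /* Reallocate the second region so it replaces the first region at the
     * original virtual address (the old mapping there is discarded). */
    void *replaced = mremap(second, len, len,
                            MREMAP_MAYMOVE | MREMAP_FIXED, first);
    assert(replaced == first);
    return replaced;
}

int main(void)
{
    size_t len = (size_t)4 << 20;               /* 4 MiB component */
    char *first = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    assert(first != MAP_FAILED);
    first[0] = 42;

    char *same = replace_with_larger_pages(first, len);
    assert(same == first && same[0] == 42);     /* same address, data preserved */
    return 0;
}
```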
Abstract:
Various approaches for demoting a memory page are described. In one approach, a first new page is established from a subpage of a base page in response to a request to demote a specified subpage. The size of the first new page is selected from a plurality of page sizes. Each portion of the base page outside the first new page is divided into one or more pages of a selected size, where the selected size is the largest of the plurality of page sizes that is less than or equal to the size of the portion. If the resulting pages do not encompass the entire portion, the largest feasible size is selected again and the remaining part of the portion is further divided into one or more pages.
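The splitting rule can be illustrated with a short greedy routine; the list of supported page sizes (16 MiB, 1 MiB, 64 KiB, 4 KiB) and the added alignment check are assumptions made only for this example:

```c
/* Greedy split of the leftover portions of a base page after demotion: the
 * portion is covered with the largest supported page size that fits, then
 * the remainder is covered the same way.  The size list and the alignment
 * check are assumptions for this example. */
#include <stdint.h>
#include <stdio.h>

static const uint64_t page_sizes[] = {          /* largest first */
    1u << 24, 1u << 20, 1u << 16, 1u << 12      /* 16 MiB, 1 MiB, 64 KiB, 4 KiB */
};
#define NSIZES (sizeof page_sizes / sizeof page_sizes[0])

/* start is assumed to be aligned to the smallest page size. */
static void split_portion(uint64_t start, uint64_t len)
{
    while (len >= page_sizes[NSIZES - 1]) {
        for (size_t i = 0; i < NSIZES; i++) {
            uint64_t sz = page_sizes[i];
            if (sz <= len && start % sz == 0) { /* largest size that fits here */
                printf("page @ 0x%08llx size 0x%llx\n",
                       (unsigned long long)start, (unsigned long long)sz);
                start += sz;
                len   -= sz;
                break;
            }
        }
    }
}

int main(void)
{
    /* Demote a 64 KiB subpage at offset 0xF00000 of a 16 MiB base page:
     * the portions before and after the new page are split separately. */
    uint64_t base_size      = 1u << 24;
    uint64_t new_page_start = 0xF00000;
    uint64_t new_page_size  = 1u << 16;

    split_portion(0, new_page_start);                              /* before */
    split_portion(new_page_start + new_page_size,
                  base_size - (new_page_start + new_page_size));   /* after  */
    return 0;
}
```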
Abstract:
A method and apparatus for efficiently storing an effective address (EA) in an effective-to-real address translation (ERAT) table that supports multiple page sizes. PSI fields, whose number is based on the number of unique page sizes supported, are added to each ERAT entry, and a single ERAT entry stores an EA for a memory page, regardless of page size, by setting the PSI fields to indicate the page size.
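A possible software model of such an entry, with field widths and the supported-size list chosen purely for illustration, looks like this:

```c
/* Model of a single ERAT entry that can map any supported page size: the PSI
 * field records the page size, so one entry per page suffices.  Field widths
 * and the supported-size table are illustrative. */
#include <stdbool.h>
#include <stdint.h>

/* Supported page sizes, indexed by the PSI value stored in the entry. */
static const uint64_t psi_page_size[] = {
    1ull << 12, 1ull << 16, 1ull << 20, 1ull << 24
};

typedef struct {
    uint64_t ea_page;   /* effective address with the page-offset bits cleared */
    uint64_t ra_page;   /* base real address of the mapped page                */
    uint8_t  psi;       /* index into psi_page_size[]                          */
    bool     valid;
} erat_entry_t;

/* Match an EA against one entry: the PSI decides how many low-order bits
 * belong to the page offset and are therefore ignored in the compare. */
static bool erat_hit(const erat_entry_t *e, uint64_t ea, uint64_t *ra)
{
    uint64_t offset_mask = psi_page_size[e->psi] - 1;
    if (!e->valid || (ea & ~offset_mask) != e->ea_page)
        return false;
    *ra = e->ra_page | (ea & offset_mask);
    return true;
}
```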
Abstract:
A method and system for resolving virtual addresses using a page size tag are described herein. In one embodiment, the method comprises translating a virtual memory address into a physical memory address. According to the method, the translating includes producing a first page size tag and choosing an entry in a translation lookaside buffer, wherein the entry stores a second page size tag and a page frame number. The method further includes comparing the first page size tag with the second page size tag, and using the page frame number to form the physical memory address if the first page size tag is less than or equal to the second page size tag.
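One way to read the comparison rule, sketched under the assumption that a page size tag encodes log2 of the page size, is the following; the structure and function names are illustrative:

```c
/* Sketch of the page-size-tag rule, assuming a tag is log2 of a page size:
 * a lookup tag is derived for the virtual address, and the entry's page frame
 * number is used to form the physical address only when the lookup tag is
 * less than or equal to the entry's stored tag and the page numbers match. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t vpn;       /* virtual page number at the entry's own page size */
    uint64_t pfn;       /* physical page frame number                       */
    uint8_t  size_tag;  /* second page size tag: log2(page size)            */
    bool     valid;
} tlb_entry_t;

static bool tlb_translate(const tlb_entry_t *e, uint64_t vaddr,
                          uint8_t lookup_tag, uint64_t *paddr)
{
    if (!e->valid || lookup_tag > e->size_tag)      /* first tag must be <= second */
        return false;
    if ((vaddr >> e->size_tag) != e->vpn)           /* page number mismatch        */
        return false;
    /* Physical address = frame number plus the in-page offset. */
    *paddr = (e->pfn << e->size_tag) | (vaddr & ((1ull << e->size_tag) - 1));
    return true;
}
```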
Abstract:
The present invention, in various embodiments, provides techniques for managing memory in computer systems. In one embodiment, each memory page is divided into relocation blocks located at various physical locations, and a relocation table is created with entries used to locate these blocks. To access memory for a particular piece of data, a program first uses a virtual address of the data, which, through a translation look-aside buffer, is translated into a physical address within the computer system. Using the relocation table, the physical address is then translated to a relocation address that identifies the relocation block containing the requested data. From the identified relocation block, the data is returned to the program.
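A simplified model of the second translation step might look as follows, with the page size, relocation-block size, and table layout chosen only for illustration:

```c
/* Two-step model: the TLB yields a physical address, and the relocation
 * table then maps the relocation block within that page to the block's
 * actual location.  Page size, block size, and table shape are illustrative. */
#include <stdint.h>

#define PAGE_SIZE        4096u
#define RELOC_BLOCK_SIZE 256u
#define BLOCKS_PER_PAGE  (PAGE_SIZE / RELOC_BLOCK_SIZE)
#define NUM_PAGES        64u

/* relocation_table[page][block] = base address of that relocation block. */
static uint64_t relocation_table[NUM_PAGES][BLOCKS_PER_PAGE];

/* Translate a TLB-produced physical address into the relocation address
 * that identifies where the requested data actually resides. */
static uint64_t to_relocation_address(uint64_t paddr)
{
    uint64_t page   = paddr / PAGE_SIZE;
    uint64_t block  = (paddr % PAGE_SIZE) / RELOC_BLOCK_SIZE;
    uint64_t offset = paddr % RELOC_BLOCK_SIZE;
    return relocation_table[page][block] + offset;
}
```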
Abstract:
A data processor that increases the speed of address translation is disclosed. The translation lookaside buffer is divided into a buffer for data and a buffer for instructions, address translation information for instructions is also stored in the translation lookaside buffer for data, and when a translation miss occurs in the translation lookaside buffer for instructions, new address translation information is fetched from the translation lookaside buffer for data. This makes address translation faster than obtaining the address translation information from an external address translation table each time a translation miss occurs in the translation lookaside buffer for instructions.
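The refill path can be sketched in software as below; the TLB sizes, the direct-mapped replacement, and the identity-mapped page-table stub are assumptions, not details from the source:

```c
/* Sketch of the refill path: on an instruction-TLB miss the translation is
 * first sought in the data TLB (which also holds instruction translations)
 * and only then obtained from the external translation table. */
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint64_t vpn, pfn; bool valid; } tlb_entry;

#define ITLB_SIZE 16
#define DTLB_SIZE 64
static tlb_entry itlb[ITLB_SIZE], dtlb[DTLB_SIZE];

static bool lookup(const tlb_entry *tlb, int n, uint64_t vpn, tlb_entry *out)
{
    for (int i = 0; i < n; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) { *out = tlb[i]; return true; }
    return false;
}

/* Slow path: walk the external address translation table (stubbed here). */
static tlb_entry page_table_walk(uint64_t vpn)
{
    tlb_entry e = { vpn, vpn, true };
    return e;
}

static uint64_t translate_instruction_fetch(uint64_t vaddr)
{
    uint64_t vpn = vaddr >> 12;
    tlb_entry e;
    if (!lookup(itlb, ITLB_SIZE, vpn, &e)) {
        /* Instruction-TLB miss: the data TLB is consulted before the much
         * slower external page-table walk. */
        if (!lookup(dtlb, DTLB_SIZE, vpn, &e))
            e = page_table_walk(vpn);
        itlb[vpn % ITLB_SIZE] = e;              /* refill the instruction TLB */
    }
    return (e.pfn << 12) | (vaddr & 0xFFF);
}
```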
Abstract:
A computer micro-architecture employing a prevalidated cache tag design includes circuitry to support virtual address aliasing and multiple page sizes. Support for various levels of address aliasing is provided through a physical address CAM, page size mask compares, and a column-copy tag function. Also supported are address aliasing that invalidates aliased lines, address aliasing with TLB entries of the same page size, and address aliasing with TLB entries of different page sizes. Multiple page sizes are supported with extensions to the prevalidated cache tag design by adding page size mask RAMs and virtual and physical address RAMs.
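The page size mask compares can be illustrated with a small sketch; the CAM and column-copy hardware are not modelled, and the field names and mask convention are assumptions:

```c
/* Mask-based compares used when entries can map pages of different sizes:
 * bits covered by an entry's page-offset mask are ignored in the compare. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t vaddr;      /* virtual address stored with the entry          */
    uint64_t paddr;      /* physical address stored with the entry         */
    uint64_t size_mask;  /* 1-bits mark page-offset bits (page size - 1)   */
} tlb_tag_t;

/* Virtual match for one entry under its own page size mask. */
static bool virtual_match(const tlb_tag_t *e, uint64_t vaddr)
{
    return ((e->vaddr ^ vaddr) & ~e->size_mask) == 0;
}

/* Two entries alias the same physical memory if their physical addresses
 * agree outside the larger of the two page-offset masks. */
static bool physical_alias(const tlb_tag_t *a, const tlb_tag_t *b)
{
    uint64_t mask = a->size_mask | b->size_mask;
    return ((a->paddr ^ b->paddr) & ~mask) == 0;
}
```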
Abstract:
A flexible address mapping method and mechanism allows regions of a microcontroller's memory and I/O address spaces to be mapped for a variety of applications: memory regions are defined and mapped to one of a set of physical devices by a programmable address mapper controlled by a set of programmable address registers. The mapping allows attributes to be set for a memory region to prohibit writes, caching, and code execution. A deterministic priority scheme allows memory regions to overlap, mapping addresses in overlapping regions to the device specified by the highest-priority programmable address register.
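A software model of the deterministic priority scheme might look like this, treating lower register indices as higher priority; the attribute bits, register count, and names are illustrative:

```c
/* Model of the programmable address mapper: each programmable address
 * register describes a region, its target device, and its attributes;
 * overlaps are resolved by taking the highest-priority (lowest-index)
 * matching register. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

enum { ATTR_NO_WRITE = 1 << 0, ATTR_NO_CACHE = 1 << 1, ATTR_NO_EXEC = 1 << 2 };

typedef struct {
    uint32_t base, limit;   /* region covered: [base, limit]           */
    uint8_t  device;        /* physical device the region maps to      */
    uint8_t  attrs;         /* ATTR_* bits                             */
    bool     enabled;
} addr_map_reg_t;

#define NUM_MAP_REGS 8
static addr_map_reg_t map_regs[NUM_MAP_REGS];   /* index 0 = highest priority */

/* Return the register an address maps through, or NULL if unmapped. */
static const addr_map_reg_t *map_address(uint32_t addr)
{
    for (int i = 0; i < NUM_MAP_REGS; i++) {
        const addr_map_reg_t *r = &map_regs[i];
        if (r->enabled && addr >= r->base && addr <= r->limit)
            return r;                           /* first match wins */
    }
    return NULL;
}
```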
Abstract:
A cache with a translation lookaside buffer (TLB) that reduces the time required for retrieval of a physical address from the TLB when accessing the cache in a system that supports variable page sizing. The TLB includes a content addressable memory (CAM) containing the virtual page numbers corresponding to pages in the cache and a random access memory (RAM) storing the physical page numbers of the pages corresponding to the virtual page numbers in the CAM. The physical page number RAM stores a page mask along with the physical page numbers, and includes local multiplexers which perform virtual address bypassing of the physical page number when the page has been masked.
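The virtual address bypassing performed by the local multiplexers can be expressed per bit as a mask-controlled selection; the mask convention below (1-bits mark page-offset bits) is an assumption:

```c
/* Per-bit bypass: where the stored page mask marks a bit as page offset, the
 * physical address takes that bit from the virtual address rather than from
 * the physical page number RAM. */
#include <stdint.h>

/* e.g. a 4 KiB page stores mask 0xFFF, a 16 MiB page stores mask 0xFFFFFF. */
static uint64_t form_physical_address(uint64_t ppn_field, uint64_t vaddr,
                                      uint64_t page_mask)
{
    return (ppn_field & ~page_mask) | (vaddr & page_mask);
}
```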