Software assisted translation lookaside buffer search mechanism
    31.
    Granted Patent
    Software assisted translation lookaside buffer search mechanism (Expired)

    Publication Number: US08364933B2

    Publication Date: 2013-01-29

    Application Number: US12641766

    Filing Date: 2009-12-18

    CPC classification number: G06F12/1018 G06F12/1027 G06F2212/652

    Abstract: A computer implemented method searches a unified translation lookaside buffer. Responsive to a request to access the unified translation lookaside buffer, a first order code within a first entry of a search priority configuration register is identified. A unified translation lookaside buffer is then searched according to the first order code for a hashed page entry. If the hashed page entry is not found when searching a unified translation lookaside buffer according to the first order code, a second order code is identified within a second entry of the search priority configuration register. The unified translation lookaside buffer is then searched according to the second order code for the hashed page entry.

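    As a rough illustration of the search flow in the abstract above, the sketch below models the search priority configuration register as an ordered list of "order codes", interpreted here as page-size classes that feed the hash. The entry layout, the hash, and all names are assumptions made for illustration, not the patented implementation.

        /* Minimal C model: probe the hashed TLB once per configured order code,
         * falling back to the next code on a miss. */
        #include <stdint.h>
        #include <stdbool.h>

        #define TLB_SETS   64
        #define PRIO_SLOTS 4

        typedef struct {
            bool     valid;
            uint64_t vpn;         /* virtual page number             */
            uint64_t pfn;         /* physical frame number           */
            unsigned page_shift;  /* page size class this entry maps */
        } tlb_entry_t;

        static tlb_entry_t tlb[TLB_SETS];

        /* Search priority configuration register: each slot holds an
         * "order code", modelled as a page-size shift to try (0 = end). */
        static unsigned prio_reg[PRIO_SLOTS] = { 12, 21, 30, 0 };

        static unsigned hash_index(uint64_t vpn, unsigned shift)
        {
            return (unsigned)((vpn ^ (vpn >> 7) ^ shift) % TLB_SETS);
        }

        /* Walk the priority register in order; for each order code, probe the
         * hashed slot. Returns true and fills *pfn on a hit. */
        static bool tlb_lookup(uint64_t vaddr, uint64_t *pfn)
        {
            for (int i = 0; i < PRIO_SLOTS && prio_reg[i] != 0; i++) {
                unsigned     shift = prio_reg[i];
                uint64_t     vpn   = vaddr >> shift;
                tlb_entry_t *e     = &tlb[hash_index(vpn, shift)];

                if (e->valid && e->vpn == vpn && e->page_shift == shift) {
                    *pfn = e->pfn;
                    return true;      /* hashed page entry found */
                }
            }
            return false;             /* miss after all configured order codes */
        }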

    TLB EXCLUSION RANGE
    33.
    Patent Application
    TLB EXCLUSION RANGE (In Force)

    Publication Number: US20130024648A1

    Publication Date: 2013-01-24

    Application Number: US13618730

    Filing Date: 2012-09-14

    CPC classification number: G06F12/1027 G06F2212/652 G06F2212/654

    Abstract: A system and method for accessing memory are provided. The system comprises a lookup buffer for storing one or more page table entries, wherein each of the one or more page table entries comprises at least a virtual page number and a physical page number; a logic circuit for receiving a virtual address from said processor, said logic circuit for matching the virtual address to the virtual page number in one of the page table entries to select the physical page number in the same page table entry, said page table entry having one or more bits set to exclude a memory range from a page.

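    The abstract above describes a page table entry that maps a page while marking part of it excluded. Below is a minimal sketch of that idea, assuming a 4 KiB page and a page-relative exclusion window; the field names and return codes are illustrative, not taken from the patent.

        /* Translate through one entry, honouring an excluded sub-range. */
        #include <stdint.h>
        #include <stdbool.h>

        #define PAGE_SHIFT 12
        #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

        typedef struct {
            uint64_t vpn;         /* virtual page number                 */
            uint64_t pfn;         /* physical page number                */
            bool     exclude;     /* exclusion bit(s) set                */
            uint32_t excl_start;  /* excluded range, page-relative bytes */
            uint32_t excl_end;
        } pte_t;

        typedef enum { XLATE_OK, XLATE_MISS, XLATE_EXCLUDED } xlate_res_t;

        static xlate_res_t translate(const pte_t *e, uint64_t vaddr, uint64_t *paddr)
        {
            uint32_t off = (uint32_t)(vaddr & PAGE_MASK);

            if ((vaddr >> PAGE_SHIFT) != e->vpn)
                return XLATE_MISS;
            if (e->exclude && off >= e->excl_start && off < e->excl_end)
                return XLATE_EXCLUDED;   /* offsets here do not use this mapping */

            *paddr = (e->pfn << PAGE_SHIFT) | off;
            return XLATE_OK;
        }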

    LARGE-PAGE OPTIMIZATION IN VIRTUAL MEMORY PAGING SYSTEMS
    34.
    Patent Application
    LARGE-PAGE OPTIMIZATION IN VIRTUAL MEMORY PAGING SYSTEMS (Pending, Published)

    Publication Number: US20120265963A1

    Publication Date: 2012-10-18

    Application Number: US13529473

    Filing Date: 2012-06-21

    Applicant: Ole AGESEN

    Inventor: Ole AGESEN

    Abstract: A computer system that is programmed with virtual memory accesses to physical memory employs multi-bit counters associated with its page table entries. When a page walker visits a page table entry, the multi-bit counter associated with that page table entry is incremented by one. The computer operating system uses the counts in the multi-bit counters of different page table entries to determine where large pages can be deployed effectively. In a virtualized computer system having a nested paging system, multi-bit counters associated with both its primary page table entries and its nested page table entries are used. These multi-bit counters are incremented during nested page walks. Subsequently, the guest operating systems and the virtual machine monitors use the counts in the appropriate multi-bit counters to determine where large pages can be deployed effectively.

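    As a toy model of the counter-guided policy in the abstract above, the sketch below gives each page table entry a small saturating counter that a page walker bumps, and promotes a 2 MiB region when its small pages are collectively hot. The 4-bit counter width, the region size, and the threshold are illustrative assumptions.

        #include <stdint.h>
        #include <stdbool.h>

        #define SMALL_PAGES_PER_LARGE 512              /* 2 MiB / 4 KiB     */
        #define COUNTER_MAX           15               /* 4-bit counter     */
        #define PROMOTE_THRESHOLD     (SMALL_PAGES_PER_LARGE * 4)

        typedef struct {
            uint64_t pfn;
            uint8_t  walk_count;                       /* multi-bit counter */
        } pte_t;

        /* Called on each page-walker visit to a page table entry. */
        static void pte_walked(pte_t *pte)
        {
            if (pte->walk_count < COUNTER_MAX)
                pte->walk_count++;
        }

        /* OS policy: sum the counters of the small pages backing one
         * large-page-aligned region and decide whether to promote it. */
        static bool should_promote(const pte_t region[SMALL_PAGES_PER_LARGE])
        {
            unsigned total = 0;
            for (int i = 0; i < SMALL_PAGES_PER_LARGE; i++)
                total += region[i].walk_count;
            return total >= PROMOTE_THRESHOLD;
        }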

    Extended page size using aggregated small pages
    35.
    Granted Patent
    Extended page size using aggregated small pages (In Force)

    Publication Number: US08195917B2

    Publication Date: 2012-06-05

    Application Number: US12496335

    Filing Date: 2009-07-01

    CPC classification number: G06F12/1009 G06F12/1027 G06F2212/652

    Abstract: A processor including a virtual memory paging mechanism. The virtual memory paging mechanism enables an operating system operating on the processor to use pages of a first size and a second size, the second size being greater than the first size. The mechanism further enables the operating system to use superpages including two or more contiguous pages of the first size. The size of a superpage is less than the second size. The processor further includes a page table having a separate entry for each of the pages included in each superpage. The operating system accesses each superpage using a single virtual address. The mechanism interprets a single entry in a translation lookaside buffer (TLB) as referring to a region of memory comprising a set of pages that correspond to a superpage, in response to detecting that a superpage enable indicator associated with the entry in the TLB is asserted.

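    To make the aggregation idea in the abstract above concrete, the sketch below shows one TLB entry whose superpage-enable bit makes it cover several contiguous 4 KiB pages. The entry layout and the aggregation count are assumptions for illustration only.

        #include <stdint.h>
        #include <stdbool.h>

        #define BASE_SHIFT 12                    /* 4 KiB base pages */

        typedef struct {
            bool     valid;
            bool     sp_enable;    /* superpage enable indicator         */
            unsigned sp_pages;     /* base pages aggregated by the entry */
            uint64_t vpn;          /* first virtual page of the region   */
            uint64_t pfn;          /* first physical page of the region  */
        } tlb_entry_t;

        /* With sp_enable set, one entry is interpreted as mapping sp_pages
         * contiguous base pages; contiguity lets a single translation serve
         * the whole superpage. */
        static bool tlb_translate(const tlb_entry_t *e, uint64_t vaddr, uint64_t *paddr)
        {
            uint64_t vpn  = vaddr >> BASE_SHIFT;
            uint64_t span = (e->valid && e->sp_enable) ? e->sp_pages : 1;

            if (!e->valid || vpn < e->vpn || vpn >= e->vpn + span)
                return false;

            *paddr = ((e->pfn + (vpn - e->vpn)) << BASE_SHIFT)
                     | (vaddr & ((1u << BASE_SHIFT) - 1));
            return true;
        }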

    TRANSLATION LOOKASIDE BUFFER
    36.
    Patent Application
    TRANSLATION LOOKASIDE BUFFER (In Force)

    Publication Number: US20120066475A1

    Publication Date: 2012-03-15

    Application Number: US13298800

    Filing Date: 2011-11-17

    CPC classification number: G06F12/1027 G06F2212/652

    Abstract: A translation lookaside buffer (TLB) formed using RAM and synthesisable logic circuits is disclosed. The TLB provides logic within the synthesisable logic for paring down the number of memory locations that must be searched to find a translation to a physical address from a received virtual address. The logic provides a hashing circuit for hashing the received virtual address and uses the hashed virtual address to index the RAM to locate a line within the RAM that provides the translation.

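    The abstract above amounts to indexing a RAM by a hash of the virtual address so that only one line needs to be compared. The sketch below models that in software; the line count, ways per line, and hash are illustrative assumptions, not the disclosed circuit.

        #include <stdint.h>
        #include <stdbool.h>

        #define TLB_LINES  128
        #define TLB_WAYS   4
        #define PAGE_SHIFT 12

        typedef struct {
            bool     valid;
            uint64_t vpn;
            uint64_t pfn;
        } tlb_way_t;

        static tlb_way_t tlb_ram[TLB_LINES][TLB_WAYS];

        /* Stand-in for the hashing circuit: fold the VPN onto a line index. */
        static unsigned hash_vpn(uint64_t vpn)
        {
            return (unsigned)((vpn ^ (vpn >> 13) ^ (vpn >> 27)) % TLB_LINES);
        }

        static bool tlb_lookup(uint64_t vaddr, uint64_t *paddr)
        {
            uint64_t   vpn  = vaddr >> PAGE_SHIFT;
            tlb_way_t *line = tlb_ram[hash_vpn(vpn)];    /* one indexed line      */

            for (int w = 0; w < TLB_WAYS; w++) {         /* compare only its ways */
                if (line[w].valid && line[w].vpn == vpn) {
                    *paddr = (line[w].pfn << PAGE_SHIFT)
                             | (vaddr & ((1u << PAGE_SHIFT) - 1));
                    return true;
                }
            }
            return false;
        }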

    Prefetching in a virtual memory system based upon repeated accesses across page boundaries
    37.
    Granted Patent
    Prefetching in a virtual memory system based upon repeated accesses across page boundaries (Expired)

    Publication Number: US07958315B2

    Publication Date: 2011-06-07

    Application Number: US12015656

    Filing Date: 2008-01-17

    Abstract: A system and method of improved handling of large pages in a virtual memory system. A data memory management unit (DMMU) detects sequential access of a first sub-page and a second sub-page out of a set of sub-pages that comprise a same large page. Then, the DMMU receives a request for the first sub-page and in response to such a request, the DMMU instructs a pre-fetch engine to pre-fetch at least the second sub-page if the number of detected sequential accesses equals or exceeds a predetermined value.

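    A rough software model of the detector described above is sketched below: one tracking slot follows accesses within a large page, and once the count of consecutive sequential sub-page accesses reaches a threshold, the next sub-page is prefetched. The threshold, page sizes, and the prefetch hook are assumptions for illustration.

        #include <stdint.h>

        #define SUBPAGE_SHIFT   12     /* 4 KiB sub-pages           */
        #define LARGEPAGE_SHIFT 21     /* 2 MiB large pages         */
        #define SEQ_THRESHOLD   2      /* the "predetermined value" */

        typedef struct {
            uint64_t large_vpn;        /* large page being tracked    */
            uint64_t last_subpage;     /* last sub-page index seen    */
            unsigned seq_count;        /* consecutive sequential hits */
        } seq_tracker_t;

        /* Hypothetical hook standing in for the pre-fetch engine. */
        static void prefetch_subpage(uint64_t large_vpn, uint64_t subpage)
        {
            (void)large_vpn; (void)subpage;
        }

        static void dmmu_access(seq_tracker_t *t, uint64_t vaddr)
        {
            uint64_t large_vpn = vaddr >> LARGEPAGE_SHIFT;
            uint64_t subpage   = (vaddr >> SUBPAGE_SHIFT)
                                 & ((1u << (LARGEPAGE_SHIFT - SUBPAGE_SHIFT)) - 1);

            if (large_vpn == t->large_vpn && subpage == t->last_subpage + 1)
                t->seq_count++;          /* sequential within the same large page */
            else if (large_vpn != t->large_vpn || subpage != t->last_subpage)
                t->seq_count = 0;        /* pattern broken: reset                 */

            if (t->seq_count >= SEQ_THRESHOLD)
                prefetch_subpage(large_vpn, subpage + 1);

            t->large_vpn    = large_vpn;
            t->last_subpage = subpage;
        }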

    Processing System Implementing Variable Page Size Memory Organization Using a Multiple Page Per Entry Translation Lookaside Buffer
    38.
    Patent Application
    Processing System Implementing Variable Page Size Memory Organization Using a Multiple Page Per Entry Translation Lookaside Buffer (In Force)

    Publication Number: US20110125983A1

    Publication Date: 2011-05-26

    Application Number: US13018492

    Filing Date: 2011-02-01

    Applicant: Brian Stecher

    Inventor: Brian Stecher

    CPC classification number: G06F12/1036 G06F2212/652

    Abstract: A processing system includes a page table including a plurality of page table entries. Each of the plurality of page table entries includes information for translating a virtual address page to a corresponding physical address page. The processing system also includes a translation lookaside buffer adapted to cache page table information. The processing system also includes memory management software responsive to changes in the page table to consolidate a run of contiguous page table entries into one or more page table entries having a larger memory page size, Y. The memory management software further determines whether the run of contiguous page table entries may be cached in an entry of the translation lookaside buffer that caches multiple page table entries, X, in a single translation lookaside buffer entry.

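    The consolidation test implied by the abstract above can be sketched as follows: a run of page table entries fits one multi-page TLB entry when the run is suitably aligned and maps physically contiguous frames. The value of X, the alignment requirement, and the entry layout are assumptions for illustration.

        #include <stdint.h>
        #include <stdbool.h>

        #define PAGES_PER_TLB_ENTRY 8              /* "X" in the abstract */

        typedef struct {
            bool     valid;
            uint64_t pfn;                          /* physical page number */
        } pte_t;

        /* vpn_base is the virtual page number of ptes[0]. The run qualifies
         * if it starts on an X-page boundary and maps X contiguous, likewise
         * aligned frames, so one TLB entry can cover all of it. */
        static bool run_fits_one_tlb_entry(uint64_t vpn_base,
                                           const pte_t ptes[PAGES_PER_TLB_ENTRY])
        {
            if (vpn_base % PAGES_PER_TLB_ENTRY != 0)
                return false;
            if (!ptes[0].valid || ptes[0].pfn % PAGES_PER_TLB_ENTRY != 0)
                return false;
            for (int i = 1; i < PAGES_PER_TLB_ENTRY; i++)
                if (!ptes[i].valid || ptes[i].pfn != ptes[0].pfn + (uint64_t)i)
                    return false;
            return true;
        }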

    EXTENDED PAGE SIZE USING AGGREGATED SMALL PAGES
    39.
    Patent Application
    EXTENDED PAGE SIZE USING AGGREGATED SMALL PAGES (In Force)

    Publication Number: US20110004739A1

    Publication Date: 2011-01-06

    Application Number: US12496335

    Filing Date: 2009-07-01

    CPC classification number: G06F12/1009 G06F12/1027 G06F2212/652

    Abstract: A processor including a virtual memory paging mechanism. The virtual memory paging mechanism enables an operating system operating on the processor to use pages of a first size and a second size, the second size being greater than the first size. The mechanism further enables the operating system to use superpages including two or more contiguous pages of the first size. The size of a superpage is less than the second size. The processor further includes a page table having a separate entry for each of the pages included in each superpage. The operating system accesses each superpage using a single virtual address. The mechanism interprets a single entry in a translation lookaside buffer (TLB) as referring to a region of memory comprising a set of pages that correspond to a superpage, in response to detecting that a superpage enable indicator associated with the entry in the TLB is asserted.


    AUTOMATICALLY USING SUPERPAGES FOR STACK MEMORY ALLOCATION
    40.
    Patent Application
    AUTOMATICALLY USING SUPERPAGES FOR STACK MEMORY ALLOCATION (Pending, Published)

    Publication Number: US20100332788A1

    Publication Date: 2010-12-30

    Application Number: US12495509

    Filing Date: 2009-06-30

    CPC classification number: G06F12/1027 G06F2212/652

    Abstract: In one embodiment, the present invention includes a page fault handler to create page table entries and TLB entries in response to a page fault, the page fault handler to determine if a page fault resulted from a stack access, to create a superpage table entry if the page fault did result from a stack access, and to create a TLB entry for the superpage. Other embodiments are described and claimed.

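    A minimal sketch of the fault-handling flow in the abstract above: on a page fault the handler checks whether the faulting address lies in the task's stack region and, if so, maps a superpage and installs a matching TLB entry, otherwise it maps a base page. The stack-range test, page sizes, and helper names are assumptions for illustration.

        #include <stdint.h>
        #include <stdbool.h>

        #define BASE_PAGE_SHIFT  12    /* 4 KiB */
        #define SUPER_PAGE_SHIFT 21    /* 2 MiB */

        typedef struct {
            uint64_t stack_low;        /* current stack bounds for this task */
            uint64_t stack_high;
        } task_t;

        /* Stubs standing in for the page table and TLB back-ends. */
        static void map_page(uint64_t vaddr, unsigned page_shift)
        {
            (void)vaddr; (void)page_shift;    /* would write the PTE(s) here   */
        }

        static void tlb_insert(uint64_t vaddr, unsigned page_shift)
        {
            (void)vaddr; (void)page_shift;    /* would load the TLB entry here */
        }

        static bool is_stack_access(const task_t *t, uint64_t fault_addr)
        {
            return fault_addr >= t->stack_low && fault_addr < t->stack_high;
        }

        static void page_fault_handler(const task_t *t, uint64_t fault_addr)
        {
            unsigned shift = is_stack_access(t, fault_addr)
                                 ? SUPER_PAGE_SHIFT     /* stack: use a superpage */
                                 : BASE_PAGE_SHIFT;     /* otherwise a base page  */
            uint64_t base  = fault_addr & ~((1ull << shift) - 1);

            map_page(base, shift);     /* create the (super)page table entry */
            tlb_insert(base, shift);   /* create the matching TLB entry      */
        }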
