1.
Publication No.: US20170046274A1
Publication Date: 2017-02-16
Application No.: US14827255
Filing Date: 2015-08-14
Applicant: QUALCOMM Incorporated
Inventor: Andres Alejandro OPORTUS VALENZUELA , Gurvinder Singh CHHABRA , Nieyan GENG , John BRENNEN , BalaSubrahmanyam CHINTAMNEEDI
IPC: G06F12/10
CPC classification number: G06F12/1036 , G06F12/023 , G06F12/0253 , G06F12/04 , G06F12/1027 , G06F2212/1044 , G06F2212/50
Abstract: Systems and methods pertain to memory management. Gaps are unused portions of physical memory within sections that are mapped to virtual addresses by entries of a translation look-aside buffer (TLB). The sizes and alignment of the sections may be based on the number of entries in the TLB, which gives rise to the gaps. One or more gaps identified in the physical memory are reclaimed or reused: the gaps are collected to form a dynamic buffer by mapping their physical addresses to virtual addresses of the dynamic buffer.
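The gap-reclamation idea in the abstract above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: section size, addresses, and all function names are assumptions chosen for the example. Each TLB-mapped section may leave an unused tail (a gap), and the gaps are stitched into one contiguous virtual dynamic buffer through a remap table.

```python
# Illustrative sketch only: sizes, addresses, and names are assumptions,
# not taken from the patent.

SECTION_SIZE = 0x1000  # assume each TLB entry maps a 4 KiB section

def collect_gaps(sections):
    """sections: list of (phys_base, used_bytes), one per TLB-mapped section.
    Returns a remap table: list of (virt_offset, phys_addr, length)."""
    remap, virt_off = [], 0
    for phys_base, used in sections:
        gap_len = SECTION_SIZE - used
        if gap_len > 0:
            # the unused tail of this section becomes part of the buffer
            remap.append((virt_off, phys_base + used, gap_len))
            virt_off += gap_len
    return remap

def virt_to_phys(remap, virt_addr):
    """Translate a dynamic-buffer virtual offset to a physical address."""
    for virt_off, phys, length in remap:
        if virt_off <= virt_addr < virt_off + length:
            return phys + (virt_addr - virt_off)
    raise ValueError("address not in dynamic buffer")

# three sections: 0x400 bytes free, fully used, 0x800 bytes free
table = collect_gaps([(0x10000, 0x0C00), (0x11000, 0x1000), (0x12000, 0x0800)])
```

The resulting buffer is contiguous in virtual space even though its backing gaps are scattered across physical memory, which is the reuse the abstract describes.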
2.
Publication No.: US20170371797A1
Publication Date: 2017-12-28
Application No.: US15192984
Filing Date: 2016-06-24
Applicant: QUALCOMM Incorporated
Inventor: Andres Alejandro OPORTUS VALENZUELA , Nieyan GENG , Gurvinder Singh CHHABRA , Richard SENIOR , Anand JANAKIRAMAN
IPC: G06F12/0877 , G06F12/0842
CPC classification number: G06F12/0877 , G06F12/023 , G06F12/0842 , G06F12/0855 , G06F12/0886 , G06F2212/1024 , G06F2212/401 , G06F2212/604 , H03M7/30
Abstract: Some aspects of the disclosure relate to a pre-fetch mechanism for a cache line compression system that increases RAM capacity and optimizes overflow area reads. For example, a pre-fetch mechanism may allow the memory controller to pipeline the reads from an area with fixed size slots (main compressed area) and the reads from an overflow area. The overflow area is arranged so that a cache line most likely containing the overflow data for a particular line may be calculated by a decompression engine. In this manner, the cache line decompression engine may fetch, in advance, the overflow area before finding the actual location of the overflow data.
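The pre-fetch mechanism above can be sketched with a simple deterministic layout. This is an assumed model, not the patent's actual address arithmetic: the overflow area is laid out so that the likely overflow location is computable from the line index alone, letting the controller issue both reads back to back instead of waiting for the slot to reveal an overflow pointer.

```python
# Hypothetical layout parameters; all names and constants are assumptions.

SLOT_SIZE = 64          # fixed-size slot per compressed line (bytes)
OVERFLOW_BASE = 0x8000  # assumed start of the overflow area
OVERFLOW_ENTRY = 64     # one overflow cache line reserved per line index

def slot_addr(line_index, slots_base=0x0):
    # main compressed area: fixed-size slots, directly addressable
    return slots_base + line_index * SLOT_SIZE

def predicted_overflow_addr(line_index):
    # deterministic layout: the overflow for line i most likely lands here,
    # so the decompression engine can compute it without a pointer fetch
    return OVERFLOW_BASE + line_index * OVERFLOW_ENTRY

def pipelined_read(line_index, read):
    """Issue the slot read and the speculative overflow read back to back;
    `read` models a memory access taking an address."""
    slot = read(slot_addr(line_index))                    # main area
    overflow = read(predicted_overflow_addr(line_index))  # speculative
    return slot, overflow
```

Because the second address needs no data from the first read, the two accesses can overlap in the memory pipeline, which is the latency win the abstract claims.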
-
3.
Publication No.: US20170371792A1
Publication Date: 2017-12-28
Application No.: US15193001
Filing Date: 2016-06-24
Applicant: QUALCOMM Incorporated
Inventor: Andres Alejandro OPORTUS VALENZUELA , Nieyan GENG , Christopher Edward KOOB , Gurvinder Singh CHHABRA , Richard SENIOR , Anand JANAKIRAMAN
IPC: G06F12/0871 , G06F12/0868
CPC classification number: G06F12/0871 , G06F12/02 , G06F12/0802 , G06F12/0868 , G06F2212/1024 , G06F2212/1044 , G06F2212/281 , G06F2212/282 , G06F2212/313 , G06F2212/401 , G06F2212/601 , G06F2212/608
Abstract: In an aspect, high-priority lines are stored starting at an address aligned to a cache line size (for instance, 64 bytes), and low-priority lines are stored in the memory space left by the compression of the high-priority lines. The space left by the high-priority lines, and hence the low-priority lines themselves, are managed through pointers that are also stored in memory. In this manner, the contents of low-priority lines can be moved to different memory locations as needed. The efficiency of higher-priority compressed memory accesses is improved by removing the indirection otherwise required to find and access compressed memory lines; this is especially advantageous for immutable compressed contents. The use of pointers for low-priority lines is advantageous due to the full flexibility of placement, especially for mutable compressed contents that may need to move within memory, for instance as they change in size over time.
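The priority-based layout above can be sketched as follows. This is an illustrative model under assumed parameters, not the patented design: each high-priority line starts at a cache-line-aligned address, so it is located by arithmetic with no indirection, while low-priority lines are packed into the leftover tails and tracked through an explicit pointer table.

```python
# Illustrative sketch; the 64-byte line size matches the abstract's
# example, but all names and the packing policy are assumptions.

LINE = 64  # cache line size in bytes

def high_addr(index, base=0x0):
    # high-priority line i starts at a fixed aligned address: no pointer
    # lookup is needed to find it
    return base + index * LINE

def pack_low_priority(high_sizes, low_sizes, base=0x0):
    """Place low-priority lines into the tails left by compressed
    high-priority lines; return a pointer table {low_index: address}."""
    # free regions: (start, length) after each compressed high line
    free = [(high_addr(i, base) + s, LINE - s)
            for i, s in enumerate(high_sizes)]
    pointers = {}
    for j, size in enumerate(low_sizes):
        for k, (start, length) in enumerate(free):
            if size <= length:  # first-fit into a leftover tail
                pointers[j] = start
                free[k] = (start + size, length - size)
                break
        else:
            raise MemoryError("no gap large enough")
    return pointers
```

Because low-priority lines are reached only through the pointer table, they can be repacked or relocated when a mutable line grows, while high-priority addresses never change.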