CACHE MEMORY BUDGETED BY CHUNKS BASED ON MEMORY ACCESS TYPE
    3.
    Invention Application
    CACHE MEMORY BUDGETED BY CHUNKS BASED ON MEMORY ACCESS TYPE (Active)

    Publication No.: US20160350227A1

    Publication Date: 2016-12-01

    Application No.: US14890898

    Application Date: 2014-12-14

    Abstract: A set associative cache memory, comprising: an array of storage elements arranged as M sets by N ways, each set belonging to one of L mutually exclusive groups; an allocation unit that allocates the storage elements in response to memory accesses that miss in the cache, each memory access having an associated memory access type (MAT) from a plurality of predetermined MATs; and a mapping that, for each of the L mutually exclusive groups and for each MAT, associates the MAT with a subset of the N ways. For each memory access, the allocation unit allocates into a way of the subset that the mapping associates with the MAT of the memory access and with the group to which the selected set belongs.

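The allocation policy claimed above can be sketched in a few lines. This is a minimal, hypothetical model — the group-assignment rule, the LRU replacement within the budgeted subset, and all names are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical sketch: a set associative cache whose sets fall into L
# mutually exclusive groups, with a per-group mapping from memory access
# type (MAT) to the subset of ways that misses of that MAT may allocate into.

class MATBudgetedCache:
    def __init__(self, num_sets, num_ways, num_groups, mat_way_masks):
        # mat_way_masks[group][mat] -> way indices allowed for allocations
        # of that MAT within sets belonging to that group.
        self.num_sets = num_sets
        self.num_ways = num_ways
        self.num_groups = num_groups
        self.mat_way_masks = mat_way_masks
        # lru[set] orders ways from least- to most-recently used.
        self.lru = [list(range(num_ways)) for _ in range(num_sets)]

    def group_of(self, set_index):
        # One simple way to form L mutually exclusive groups of sets
        # (the patent does not prescribe this rule).
        return set_index % self.num_groups

    def allocate_way(self, set_index, mat):
        # On a miss, pick the least-recently-used way among the subset of
        # ways the mapping associates with this MAT and this set's group.
        allowed = self.mat_way_masks[self.group_of(set_index)][mat]
        for way in self.lru[set_index]:          # LRU-first order
            if way in allowed:
                self.lru[set_index].remove(way)
                self.lru[set_index].append(way)  # now most recently used
                return way
        raise ValueError("MAT has no ways budgeted in this group")

# 4 sets, 4 ways, 2 groups; in group 0, 'code' accesses may only
# allocate into ways {0, 1} and 'data' accesses into ways {2, 3}.
cache = MATBudgetedCache(4, 4, 2, [
    {"code": [0, 1], "data": [2, 3]},
    {"code": [0],    "data": [1, 2, 3]},
])
print(cache.allocate_way(0, "code"))  # set 0 is in group 0 -> way 0
print(cache.allocate_way(0, "code"))  # next LRU allowed way -> way 1
```

Budgeting ways per MAT this way keeps, say, streaming data misses from evicting the whole set out from under instruction fetches.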

    System and method for high performance, power efficient store buffer forwarding
    4.
    Invention Grant
    System and method for high performance, power efficient store buffer forwarding (Active)

    Publication No.: US08775740B2

    Publication Date: 2014-07-08

    Application No.: US11214501

    Application Date: 2005-08-30

    Abstract: The present disclosure describes a system and method for high performance, power efficient store buffer forwarding. Some illustrative embodiments may include a system, comprising: a processor coupled to an address bus; a cache memory that couples to the address bus and comprises cache data (the cache memory divided into a plurality of ways); and a store buffer that couples to the address bus, and comprises store buffer data, a store buffer way and a store buffer index. The processor selects the store buffer data for use by a data load operation if a selected way of the plurality of ways matches the store buffer way, and if at least part of the bus address matches the store buffer index.

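The forwarding condition in the abstract — forward buffered store data to a load only when both the selected way and the index bits match — can be modeled directly. Field names and widths below are assumptions for illustration:

```python
# Illustrative model of the forwarding condition: the store buffer entry
# is forwarded to a load only when the load's selected way matches the
# stored way AND the load's set-index bits match the stored index.

from dataclasses import dataclass

@dataclass
class StoreBufferEntry:
    data: int
    way: int        # cache way the buffered store resolved to
    index: int      # set-index bits of the store's address

def forward_store(load_way, load_index, entry):
    """Return buffered data if forwarding applies, else None (read cache)."""
    if load_way == entry.way and load_index == entry.index:
        return entry.data
    return None

entry = StoreBufferEntry(data=0xABCD, way=2, index=0x1F)
print(forward_store(2, 0x1F, entry))   # both match -> forward 43981
print(forward_store(1, 0x1F, entry))   # way mismatch -> None, read cache
```

Comparing a small way number and partial index, instead of a full address, is what makes the forwarding check cheap in both latency and power.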

    MEMORY DEVICE, PROCESSOR, AND CACHE MEMORY CONTROL METHOD
    5.
    Invention Application
    MEMORY DEVICE, PROCESSOR, AND CACHE MEMORY CONTROL METHOD (Pending, Published)

    Publication No.: US20140115264A1

    Publication Date: 2014-04-24

    Application No.: US14018464

    Application Date: 2013-09-05

    Inventor: Yuji SHIRAHIGE

    Abstract: A memory device includes a plurality of ways; a register configured to hold an access history of accesses to the plurality of ways; and a way control unit configured to select one or more of the plurality of ways according to an access request and the access history, put the selected way or ways in an operation state, and put the remaining ways in a non-operation state. The way control unit dynamically changes the number of ways to be selected according to the access request.

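A rough sketch of the way-control idea follows. The specific policy — enable only the distinct ways seen in a short hit history, and check all ways when the history is empty — is an assumption chosen for illustration, not the patent's control algorithm:

```python
# Sketch: keep a small history of which ways recently hit, put only those
# ways in the operation state for the next access, and let the enabled
# set grow or shrink dynamically as the history changes.

from collections import deque

class WayController:
    def __init__(self, num_ways, history_len=4):
        self.num_ways = num_ways
        self.history = deque(maxlen=history_len)  # ways of recent hits

    def select_ways(self):
        # Ways to put in the operation state; all others stay powered down.
        if not self.history:
            return set(range(self.num_ways))  # no history: check all ways
        return set(self.history)

    def record_hit(self, way):
        self.history.append(way)

ctrl = WayController(num_ways=8)
print(sorted(ctrl.select_ways()))  # cold start: all 8 ways enabled
ctrl.record_hit(3)
ctrl.record_hit(3)
ctrl.record_hit(5)
print(sorted(ctrl.select_ways()))  # history narrows this to [3, 5]
```

The power saving comes from the non-operation state: ways outside the selected set never fire their tag or data arrays for the access.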

    Reducing energy consumption of set associative caches by reducing checked ways of the set association
    6.
    Invention Grant
    Reducing energy consumption of set associative caches by reducing checked ways of the set association (Expired)

    Publication No.: US08341355B2

    Publication Date: 2012-12-25

    Application No.: US12787122

    Application Date: 2010-05-25

    Abstract: Mechanisms for accessing a set associative cache of a data processing system are provided. A set of cache lines, in the set associative cache, associated with an address of a request is identified. Based on a determined mode of operation for the set, the following may be performed: determining whether a cache hit occurs in a preferred cache line without accessing the other cache lines in the set; retrieving data from the preferred cache line, without accessing the other cache lines in the set, if there is a cache hit in the preferred cache line; and accessing each of the other cache lines in the set, to determine whether any of them hits, only in response to a cache miss in the preferred cache line.

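The two-phase lookup described above can be sketched as follows. The flat `(tag, data)` per-way layout is an illustrative simplification; the returned access count stands in for the energy cost of the probes:

```python
# Minimal model: probe only the preferred line first, and touch the
# remaining lines in the set only on a miss in the preferred line.

def lookup(cache_set, preferred_way, tag):
    """cache_set: list of (tag, data) per way. Returns (data, ways_accessed)."""
    ways_accessed = 1
    if cache_set[preferred_way][0] == tag:          # hit in preferred line
        return cache_set[preferred_way][1], ways_accessed
    for way, (t, data) in enumerate(cache_set):     # fall back: check rest
        if way == preferred_way:
            continue
        ways_accessed += 1
        if t == tag:
            return data, ways_accessed
    return None, ways_accessed                      # full miss

cache_set = [(0x10, "a"), (0x20, "b"), (0x30, "c"), (0x40, "d")]
print(lookup(cache_set, 1, 0x20))  # ('b', 1): only the preferred way read
print(lookup(cache_set, 1, 0x40))  # ('d', 4): preferred miss, then scan
```

When the preferred-line prediction is usually right, most accesses read one way instead of all N, which is where the energy reduction comes from.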

    INDEX GENERATION FOR CACHE MEMORIES
    7.
    Invention Application
    INDEX GENERATION FOR CACHE MEMORIES (Active)

    Publication No.: US20120166756A1

    Publication Date: 2012-06-28

    Application No.: US13402796

    Application Date: 2012-02-22

    CPC classification number: G06F12/0864 G06F2212/6082

    Abstract: Embodiments of the present invention provide a system that generates an index for a cache memory. The system starts by receiving a request to access the cache memory, wherein the request includes address information. The system then obtains non-address information associated with the request. Next, the system generates the index using both the address information and the non-address information. The system then uses the index to access the cache memory.

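One plausible instance of combining address and non-address information into an index is XOR-folding a thread id into the set-index bits. This is a speculative sketch — the patent does not specify which non-address information or mixing function is used, and the bit widths here are arbitrary:

```python
# Speculative sketch: derive the cache set index from the address bits
# XORed with non-address information (here, a thread id).

def cache_index(address, thread_id, index_bits=6, line_bits=6):
    set_bits = (address >> line_bits) & ((1 << index_bits) - 1)
    # Mix in non-address information so different threads with the same
    # address bits can map to different sets, reducing conflict misses.
    return set_bits ^ (thread_id & ((1 << index_bits) - 1))

addr = 0x0001_2A40
print(cache_index(addr, thread_id=0))  # address bits alone -> 41
print(cache_index(addr, thread_id=5))  # same address, thread 5 -> 44
```

The mixing function must be deterministic per request so that lookups and fills for the same request always land in the same set.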

    Cache system and control method of way prediction for cache memory
    8.
    Invention Application
    Cache system and control method of way prediction for cache memory (Pending, Published)

    Publication No.: US20110072215A1

    Publication Date: 2011-03-24

    Application No.: US12923385

    Application Date: 2010-09-17

    Abstract: A cache device according to an exemplary aspect of the present invention includes a way information memory that stores way information, i.e., the result of selecting a way for an instruction that accesses a cache memory; and a control unit that, while a series of instruction groups is repeatedly executed, controls a store process and a read process, the store process storing the way information for the instruction groups into the way information memory, and the read process reading the way information from the way information memory.


    Methods for reducing data cache access power in a processor using way selection bits
    9.
    Invention Grant
    Methods for reducing data cache access power in a processor using way selection bits (Active)

    Publication No.: US07657708B2

    Publication Date: 2010-02-02

    Application No.: US11505869

    Application Date: 2006-08-18

    Abstract: Methods for reducing data cache access power in a processor. In an embodiment, a micro tag array is used to store base address or base register data bits, offset data bits, a carry bit, and way selection data bits associated with cache accesses. When a LOAD or a STORE instruction is fetched, at least a portion of the base address and at least a portion of the offset of the instruction are compared to data stored in the micro tag array. If a micro tag array hit occurs, the micro tag array generates a cache dataram enable signal. This signal activates only the cache dataram that stores the needed data.

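The micro tag array lookup described above (and in the related grant below) can be modeled in a few lines. Entry fields, the fully associative search, and the matching rule are assumptions for the sketch, not the patent's exact structure:

```python
# Illustrative model of a micro tag array: each entry records partial
# base-address and offset bits plus the way a previous access resolved
# to; on a hit, only the remembered way's dataram enable is asserted.

class MicroTagArray:
    def __init__(self):
        self.entries = []  # (base_bits, offset_bits, way)

    def record(self, base_bits, offset_bits, way):
        self.entries.append((base_bits, offset_bits, way))

    def dataram_enables(self, base_bits, offset_bits, num_ways):
        # Micro tag hit: enable only the remembered way's dataram.
        # Micro tag miss: every dataram must be read (the expensive case).
        for b, o, way in self.entries:
            if b == base_bits and o == offset_bits:
                return [w == way for w in range(num_ways)]
        return [True] * num_ways

mta = MicroTagArray()
mta.record(base_bits=0x3A, offset_bits=0x4, way=2)
print(mta.dataram_enables(0x3A, 0x4, 4))  # hit: only way 2's dataram on
print(mta.dataram_enables(0x3B, 0x4, 4))  # miss: all datarams enabled
```

Matching on base/offset bits before the address adder completes is what lets the enable signal arrive early enough to gate the dataram reads.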

    Micro tag array having way selection bits for reducing data cache access power
    10.
    Invention Grant
    Micro tag array having way selection bits for reducing data cache access power (Active)

    Publication No.: US07650465B2

    Publication Date: 2010-01-19

    Application No.: US11505865

    Application Date: 2006-08-18

    Abstract: Processors and systems having a micro tag array that reduces data cache access power. The processors and systems include a cache that has a plurality of datarams, a processor pipeline register, and a micro tag array. The micro tag array is coupled to the cache and the processor pipeline register. The micro tag array stores base address data bits or base register data bits, offset data bits, a carry bit, and way selection data bits. When a LOAD or a STORE instruction is fetched, at least a portion of the base address and at least a portion of the offset of the instruction are compared to data stored in the micro tag array. If a micro tag array hit occurs, the micro tag array generates a cache dataram enable signal. This signal enables only a single dataram of the cache.

