Highly Efficient Design of Storage Array Utilizing Multiple Cache Lines for Use in First and Second Cache Spaces and Memory Subsystems
    81.
    Invention Application
    Status: Pending (Published)

    Publication No.: US20140237174A1

    Publication Date: 2014-08-21

    Application No.: US14187539

    Filing Date: 2014-02-24

    Abstract: A method of operating a cache memory includes storing a set of data, associated with a set of tags, in a first space in the cache memory. A subset of that data is stored in a second space in the cache memory and is associated with a tag from a subset of the set of tags. The tag portion of an address is compared with the tag associated with the subset of data in the second space, and the subset of data is read when the two match. The tag portion of the address is also compared with the set of tags associated with the data in the first space, and the data in the first space is read when the tag portion matches one of those tags and does not match the tag associated with the subset of data in the second space.
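    The two-level lookup order described in the abstract can be sketched as a small simulation. All names here (`TwoSpaceCache`, its fields) are hypothetical illustrations, not the patent's implementation:

```python
class TwoSpaceCache:
    """Toy model of the two-space lookup: a small second space
    (one tagged subset of the data) is checked before the full
    tag set of the first space."""

    def __init__(self, first_space, second_tag, second_data):
        self.first_space = first_space    # dict: tag -> data set
        self.second_tag = second_tag      # single tag for the subset
        self.second_data = second_data    # subset of the data

    def read(self, addr_tag):
        # Step 1: compare the address tag with the second-space tag.
        if addr_tag == self.second_tag:
            return self.second_data       # fast hit in the second space
        # Step 2: only on a second-space mismatch, search the first space.
        if addr_tag in self.first_space:
            return self.first_space[addr_tag]
        return None                       # overall miss

cache = TwoSpaceCache({"A": [1, 2, 3], "B": [4, 5, 6]}, "A", [1])
print(cache.read("A"))  # hit in the second space: [1]
print(cache.read("B"))  # second space mismatches, first space hits: [4, 5, 6]
```

    The point of the ordering is that the second space holds a much smaller tag set, so the common-case comparison is cheap; the full first-space tag search runs only after a second-space mismatch.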


    CACHE SET SELECTIVE POWER UP
    83.
    Invention Application
    Status: In Force

    Publication No.: US20130339596A1

    Publication Date: 2013-12-19

    Application No.: US13524574

    Filing Date: 2012-06-15

    Abstract: Embodiments of the disclosure include selectively powering up a cache set of a multi-set associative cache by receiving an instruction fetch address and determining that the instruction fetch address corresponds to one of a plurality of entries of a content addressable memory. Based on that determination, the cache set of the multi-set associative cache that contains the cache line referenced by the instruction fetch address is identified, and only a subset of the cache is powered up. If the identified cache set is not already powered up, it is selectively powered up, and one or more instructions stored in the cache line referenced by the instruction fetch address are transmitted to a processor.
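    One plausible reading of this scheme, sketched as a toy simulation: a content addressable memory maps fetch addresses to the cache set holding the referenced line, so only that set needs power. The CAM contents, set count, and miss behavior below are assumptions for illustration:

```python
def power_up_sets(fetch_addr, cam, powered_sets, n_sets=4):
    """Return the identified cache set (or None) and update the set
    of powered-up cache sets in place."""
    if fetch_addr in cam:
        target = cam[fetch_addr]      # set containing the cache line
        if target not in powered_sets:
            powered_sets.add(target)  # selectively power up only this set
        return target
    # No CAM entry: the holding set is unknown, so power all sets
    # (an assumption; the patent text does not specify the miss path).
    powered_sets.update(range(n_sets))
    return None

cam = {0x1000: 2, 0x2040: 0}          # hypothetical CAM entries
powered = set()
print(power_up_sets(0x1000, cam, powered))  # -> 2
print(powered)                              # -> {2}: only one set is powered
```

    The power saving comes from the hit path: a CAM hit pins down a single set, so the other sets of the multi-set associative cache stay unpowered for that fetch.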


    METHOD AND APPARATUS FOR SAVING POWER BY EFFICIENTLY DISABLING WAYS FOR A SET-ASSOCIATIVE CACHE
    84.
    Invention Application
    Status: In Force

    Publication No.: US20130219205A1

    Publication Date: 2013-08-22

    Application No.: US13843885

    Filing Date: 2013-03-15

    Abstract: A method and apparatus for disabling ways of a cache memory in response to history-based usage patterns is described herein. Way-predicting logic keeps track of cache accesses to the ways and determines whether accesses to some ways should be disabled to save power, based upon way power signals having a logical state representing a predicted miss to the way. One or more counters associated with the ways count accesses; a power signal is set to the predicted-miss state when one of the counters reaches a saturation value. Control logic adjusts the counters according to the accesses.
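    The abstract leaves the counting policy open; one plausible reading is that each way's counter tracks how long the way has gone unused, and saturation flips that way's power signal to "predicted miss". The saturation value and reset-on-access policy below are assumptions:

```python
SATURATION = 3  # hypothetical saturation value

def update_way_counters(accessed_way, counters, predicted_miss):
    """Adjust per-way counters after one cache access; saturating a
    counter sets that way's predicted-miss power signal (disabling it)."""
    for way in range(len(counters)):
        if way == accessed_way:
            counters[way] = 0                # recently used: keep enabled
            predicted_miss[way] = False
        elif counters[way] < SATURATION:
            counters[way] += 1
            if counters[way] == SATURATION:
                predicted_miss[way] = True   # power signal: predicted miss

counters = [0, 0, 0, 0]
predicted_miss = [False] * 4
for _ in range(3):                  # three consecutive accesses to way 0
    update_way_counters(0, counters, predicted_miss)
print(predicted_miss)               # [False, True, True, True]
```

    Under this reading, a way that keeps missing the access stream saturates its counter and is powered down, while any access to it immediately re-enables it.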


    Microprocessor having a power-saving instruction cache way predictor and instruction replacement scheme
    86.
    Granted Patent
    Status: In Force

    Publication No.: US07899993B2

    Publication Date: 2011-03-01

    Application No.: US12421268

    Filing Date: 2009-04-09

    Applicant: Matthias Knoth

    Inventor: Matthias Knoth

    Abstract: Microprocessor having a power-saving instruction cache way predictor and instruction replacement scheme. In one embodiment, the processor includes a multi-way set associative cache, a way predictor, a policy counter, and a cache refill circuit. The policy counter provides a signal to the way predictor that determines whether the way predictor operates in a first mode or a second mode. Following a cache miss, the cache refill circuit selects a way of the cache and compares a layer number associated with a dataram field of the way to a way set layer number. The cache refill circuit writes a block of data to the field if the layer number is not equal to the way set layer number. If the layer number is equal to the way set layer number, the cache refill circuit repeats the above steps for additional ways until the block of memory is written to the cache.
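    The refill walk in the last three sentences can be sketched as follows. The field names and data layout are illustrative, not the patent's structures:

```python
def refill(ways, way_set_layer, block):
    """Visit ways in order; write the block into the first way whose
    stored layer number differs from the way-set layer number, and
    stamp that way with the way-set layer number."""
    for way in ways:
        if way["layer"] != way_set_layer:  # layer mismatch: way is usable
            way["data"] = block
            way["layer"] = way_set_layer   # mark as filled in this pass
            return way
    return None  # every way already carries the way-set layer number

ways = [{"layer": 1, "data": None}, {"layer": 0, "data": None}]
chosen = refill(ways, way_set_layer=1, block="BLK")
print(chosen)  # the second way (layer 0 != 1) receives the block
```

    Stamping the filled way with the way-set layer number means each way is written at most once per pass: a matching layer number signals "already refilled", so the walk moves on to additional ways until the block lands somewhere.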


    DATA CACHE WAY PREDICTION
    87.
    Invention Application
    Status: In Force

    Publication No.: US20100049912A1

    Publication Date: 2010-02-25

    Application No.: US12194936

    Filing Date: 2008-08-20

    Abstract: A microprocessor includes one or more N-way caches and a way prediction logic that selectively enables and disables the cache ways so as to reduce the power consumption. The way prediction logic receives an address and predicts in which one of the cache ways the data associated with the address is likely to be stored. The way prediction logic causes an enabling signal to be supplied only to the way predicted to contain the requested data. The remaining (N−1) of the cache ways do not receive the enabling signal. The power consumed by the cache is thus significantly reduced.
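    The enable-signal fan-out described here is simple to sketch. The predictor itself is unspecified in the abstract, so the address-hash stand-in below is purely an assumption:

```python
def way_enable_signals(predicted_way, n_ways):
    """Only the predicted way receives the enabling signal; the
    remaining N-1 ways stay disabled, which is the power saving."""
    return [way == predicted_way for way in range(n_ways)]

def predict_way(addr, n_ways):
    # Hypothetical predictor: the patent does not specify the
    # prediction function; a simple address hash stands in here.
    return addr % n_ways

signals = way_enable_signals(predict_way(0x1F2, 4), 4)
print(signals)  # exactly one way enabled
```

    On a misprediction a real design would fall back to probing the remaining ways, at an extra-cycle cost; that recovery path is outside this sketch.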


    Data Cache Virtual Hint Way Prediction, and Applications Thereof
    88.
    Invention Application
    Status: In Force

    Publication No.: US20100011166A1

    Publication Date: 2010-01-14

    Application No.: US12563840

    Filing Date: 2009-09-21

    Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.
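    The speculative-forward-then-verify step can be sketched as a small function. The argument names and return convention are illustrative, not the patent's signals:

```python
def load_with_hint(hinted_data, hinted_phys_tag, physical_addr_tag):
    """Forward data from the way chosen by the virtual hint before
    address translation completes, then verify it once the physical
    address (and hence its tag) is available."""
    forwarded = hinted_data  # forwarded to dependent instructions early
    if hinted_phys_tag == physical_addr_tag:
        return forwarded, "hit"   # forwarded data confirmed correct
    # Mismatch: instructions that consumed the forwarded data must be
    # invalidated and/or replayed (recovery is outside this sketch).
    return forwarded, "miss"

data, signal = load_with_hint("cached-value", 0x3A, 0x3A)
print(signal)  # "hit": the speculation was correct
```

    The benefit is latency hiding: dependent instructions start executing on the hinted data, and only the (hopefully rare) tag mismatch forces a replay.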


    Data cache virtual hint way prediction, and applications thereof
    89.
    Granted Patent
    Status: In Force

    Publication No.: US07594079B2

    Publication Date: 2009-09-22

    Application No.: US11545706

    Filing Date: 2006-10-11

    Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.


    ARITHMETIC PROCESSING APPARATUS FOR EXECUTING INSTRUCTION CODE FETCHED FROM INSTRUCTION CACHE MEMORY
    90.
    Invention Application
    Status: Pending (Published)

    Publication No.: US20090119487A1

    Publication Date: 2009-05-07

    Application No.: US12260269

    Filing Date: 2008-10-29

    Inventor: Soichiro HOSODA

    Abstract: An arithmetic processing apparatus includes a cache block that stores a plurality of instruction codes from a main memory, a central processing unit that fetch-accesses the cache block and sequentially loads and executes the instruction codes, and a repeat buffer that stores, from among the instruction codes held in the cache block, an instruction code group sized to fit the buffer and ranging from the head instruction code to a terminal instruction code of a repeat block that is repeatedly executed in the processing program. The apparatus further includes an instruction cache control unit that performs control so that the instruction code group stored in the repeat buffer is selected and supplied to the central processing unit when the repeat block is repeatedly executed.
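    The buffering behavior can be sketched as a toy model: once the repeat block's codes are captured, repeated executions are served from the small buffer instead of the instruction cache. Capacities, addresses, and mnemonics below are made up for illustration:

```python
class RepeatBuffer:
    """Toy repeat buffer keyed by program counter."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.codes = {}  # pc -> instruction code

    def capture(self, block):
        # Store the repeat block's codes, head first, up to the buffer size.
        for pc, code in list(block.items())[: self.capacity]:
            self.codes[pc] = code

    def fetch(self, pc, cache):
        if pc in self.codes:
            return self.codes[pc]  # supplied by the repeat buffer
        return cache[pc]           # normal instruction-cache fetch

cache = {0: "LOAD", 4: "ADD", 8: "BRANCH"}  # hypothetical cache contents
buf = RepeatBuffer(capacity=2)
buf.capture({0: "LOAD", 4: "ADD"})          # head..terminal of repeat block
print(buf.fetch(0, cache))   # "LOAD" from the repeat buffer
print(buf.fetch(8, cache))   # "BRANCH" from the cache block
```

    Serving tight loops from a small dedicated buffer avoids re-accessing the larger instruction cache on every iteration, which is typically where the power saving comes from in designs of this kind.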

