Management of cache size
    21.
    Granted invention patent
    Management of cache size (Active)

    Publication number: US09021207B2

    Publication date: 2015-04-28

    Application number: US13723093

    Application date: 2012-12-20

    Abstract: In response to a processor core exiting a low-power state, a cache is set to a minimum size so that fewer than all of the cache's entries are available to store data, thus reducing the cache's power consumption. Over time, the size of the cache can be increased to account for heightened processor activity, thus ensuring that processing efficiency is not significantly impacted by a reduced cache size. In some embodiments, the cache size is increased based on a measured processor performance metric, such as an eviction rate of the cache. In some embodiments, the cache size is increased at regular intervals until a maximum size is reached.
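The resizing policy in the abstract can be sketched as a small model. This is an illustrative assumption, not the patented implementation: the way counts, the eviction-rate threshold, and the doubling step are all invented names for the sketch.

```python
# Hypothetical sketch of the eviction-rate-driven cache resizing policy
# described above. MIN_WAYS, MAX_WAYS, and EVICTION_THRESHOLD are
# illustrative assumptions, not values from the patent.

MIN_WAYS, MAX_WAYS = 2, 16          # smallest and largest cache configurations
EVICTION_THRESHOLD = 0.10           # evictions per access that triggers growth

def on_core_wakeup():
    """On exit from a low-power state, start at the minimum size."""
    return MIN_WAYS

def next_cache_size(current_ways, evictions, accesses):
    """Grow the cache one step when the measured eviction rate is high."""
    if accesses == 0:
        return current_ways
    rate = evictions / accesses
    if rate > EVICTION_THRESHOLD and current_ways < MAX_WAYS:
        return current_ways * 2     # enable more ways/entries
    return current_ways
```

The same `next_cache_size` hook could also be called at regular intervals, matching the abstract's alternative of periodic growth up to the maximum size.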


    SPECIALIZED MEMORY DISAMBIGUATION MECHANISMS FOR DIFFERENT MEMORY READ ACCESS TYPES
    22.
    Invention patent application
    SPECIALIZED MEMORY DISAMBIGUATION MECHANISMS FOR DIFFERENT MEMORY READ ACCESS TYPES (Active)

    Publication number: US20150067305A1

    Publication date: 2015-03-05

    Application number: US14015282

    Application date: 2013-08-30

    Abstract: A system and method for efficient prediction and processing of memory access dependencies. A computing system includes control logic that marks a detected load instruction as a first type in response to predicting that the load instruction has high locality and is a candidate for store-to-load (STL) data forwarding. The control logic marks the detected load instruction as a second type in response to predicting that the load instruction has low locality and is not a candidate for STL data forwarding. The control logic processes a load instruction marked as the first type as if the load instruction is dependent on an older store operation. The control logic processes a load instruction marked as the second type as if the load instruction is independent of any older store operation.
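A minimal sketch of the two-type marking, assuming a simple saturating-counter locality predictor keyed by load PC; the abstract does not specify the predictor, so the counter table, threshold, and type labels here are all invented for illustration.

```python
# Illustrative two-type load classification per the abstract above.
# The saturating-counter predictor is a stand-in assumption.

HIGH_LOCALITY = "type1"   # candidate for store-to-load forwarding
LOW_LOCALITY = "type2"    # treated as independent of older stores

class LoadClassifier:
    def __init__(self, threshold=2):
        self.counters = {}          # load PC -> saturating locality counter
        self.threshold = threshold

    def record_hit(self, pc):
        """Note that this load reused recently stored data."""
        self.counters[pc] = min(self.counters.get(pc, 0) + 1, 3)

    def classify(self, pc):
        """Mark the load as first type (high locality) or second type."""
        if self.counters.get(pc, 0) >= self.threshold:
            return HIGH_LOCALITY
        return LOW_LOCALITY
```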


    METHOD AND APPARATUS FOR MEMORY MANAGEMENT
    23.
    Invention patent application
    METHOD AND APPARATUS FOR MEMORY MANAGEMENT (Pending, published)

    Publication number: US20150067264A1

    Publication date: 2015-03-05

    Application number: US14012475

    Application date: 2013-08-28

    CPC classification number: G06F12/126 Y02D10/13

    Abstract: In some embodiments, a method of managing cache memory includes identifying a group of cache lines in a cache memory, based on a correlation between the cache lines. The method also includes tracking evictions of cache lines in the group from the cache memory and, in response to a determination that a criterion regarding eviction of cache lines in the group from the cache memory is satisfied, selecting one or more (e.g., all) remaining cache lines in the group for eviction.
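The group-eviction criterion can be modeled with a small tracker. The grouping itself (correlation detection) and the fraction-based criterion below are assumptions for the sketch; the abstract leaves both unspecified.

```python
# Sketch of group-based eviction tracking per the abstract above.
# The eviction-fraction criterion is an illustrative assumption.

class GroupEvictionTracker:
    def __init__(self, group_lines, evict_fraction=0.5):
        self.group = set(group_lines)     # correlated cache lines
        self.evicted = set()
        self.evict_fraction = evict_fraction

    def on_eviction(self, line):
        """Track an eviction; return remaining group lines to evict, if
        the criterion is satisfied, else an empty set."""
        if line not in self.group:
            return set()
        self.evicted.add(line)
        if len(self.evicted) / len(self.group) >= self.evict_fraction:
            # Criterion met: select all remaining lines in the group.
            return self.group - self.evicted
        return set()
```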


    SYSTEMS AND METHODS FOR DISABLING FAULTY CORES USING PROXY VIRTUAL MACHINES

    Publication number: US20240152434A1

    Publication date: 2024-05-09

    Application number: US18502941

    Application date: 2023-11-06

    Inventor: Srilatha Manne

    CPC classification number: G06F11/2023

    Abstract: A device for disabling faulty cores using proxy virtual machines includes a processor, a faulty core, and a physical memory. The processor executes a hypervisor that is configured to assign a proxy virtual machine, carrying only a minimal workload, to the faulty core. Various other methods, systems, and computer-readable media are also disclosed.

    Batching modified blocks to the same DRAM page
    26.
    Granted invention patent
    Batching modified blocks to the same DRAM page (Active)

    Publication number: US09529718B2

    Publication date: 2016-12-27

    Application number: US14569175

    Application date: 2014-12-12

    Abstract: To efficiently transfer data from a cache to a memory, it is desirable that more data corresponding to the same page in the memory be loaded into a row buffer. Writing data to a memory page that is not currently loaded in a row buffer requires closing an old page and opening a new page. Both operations consume energy and clock cycles and potentially delay more critical memory read requests. Hence it is desirable to have more than one write going to the same DRAM page, to amortize the cost of opening and closing DRAM pages. A desirable approach is to batch write-backs to the same DRAM page by retaining modified blocks in the cache until a sufficient number of modified blocks belonging to the same memory page are ready for write-back.
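The batching idea can be sketched as follows. The page size and batch threshold are assumptions for illustration; the abstract does not state concrete values.

```python
# Sketch of write-back batching by DRAM page, per the abstract above.
# PAGE_SIZE and BATCH_THRESHOLD are illustrative assumptions.

PAGE_SIZE = 4096        # bytes per DRAM page (assumed)
BATCH_THRESHOLD = 4     # dirty blocks per page before writing back

class WriteBackBatcher:
    def __init__(self):
        self.pending = {}   # page number -> list of dirty block addresses

    def mark_dirty(self, addr):
        """Retain a modified block; flush its whole page as one batch
        once enough dirty blocks accumulate, amortizing one row
        activation over the batch. Returns the flushed batch, if any."""
        page = addr // PAGE_SIZE
        blocks = self.pending.setdefault(page, [])
        blocks.append(addr)
        if len(blocks) >= BATCH_THRESHOLD:
            return self.pending.pop(page)
        return []
```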


    METHODS AND SYSTEMS OF SYNCHRONIZER SELECTION
    27.
    Invention patent application
    METHODS AND SYSTEMS OF SYNCHRONIZER SELECTION (Active)

    Publication number: US20150188649A1

    Publication date: 2015-07-02

    Application number: US14146654

    Application date: 2014-01-02

    Abstract: A circuit includes a plurality of synchronizers to adapt a signal from a first clock domain to a second clock domain. Each synchronizer of the plurality of synchronizers includes a synchronizer input to receive the signal from the first clock domain and a synchronizer output to provide the signal as adapted to the second clock domain. The circuit also includes a multiplexer (mux) that includes a plurality of mux inputs and a mux output. Each mux input is coupled to the synchronizer output of a respective synchronizer of the plurality of synchronizers. The mux output provides the signal, as adapted to the second clock domain, from the synchronizer output of a selected synchronizer of the plurality of synchronizers.
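A behavioral model of the circuit: several synchronizers adapt the same signal to the destination clock domain, and a mux selects which synchronizer's output to forward. The two-flip-flop shift-register model and its depth are assumptions; the abstract does not fix the synchronizer type.

```python
# Behavioral model of the synchronizer-plus-mux circuit above.
# The shift-register synchronizer model is an illustrative assumption.

class TwoFlopSynchronizer:
    """Output lags input by `depth` destination-clock edges."""
    def __init__(self, depth=2):
        self.stages = [0] * depth

    def clock(self, signal_in):
        # Shift the input through the flop chain on each dest-clock edge.
        self.stages = [signal_in] + self.stages[:-1]
        return self.stages[-1]

def mux(synchronizer_outputs, select):
    """Forward the output of the selected synchronizer."""
    return synchronizer_outputs[select]
```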


    MECHANISMS TO BOUND THE PRESENCE OF CACHE BLOCKS WITH SPECIFIC PROPERTIES IN CACHES
    28.
    Invention patent application
    MECHANISMS TO BOUND THE PRESENCE OF CACHE BLOCKS WITH SPECIFIC PROPERTIES IN CACHES (Active)

    Publication number: US20140181414A1

    Publication date: 2014-06-26

    Application number: US14055869

    Application date: 2013-10-16

    Abstract: A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, wherein a first bank is powered down. In response to a write request to a second bank for data indicated to be stored in the powered-down first bank, the cache controller determines a respective bypass condition for the data. If the bypass condition exceeds a threshold, the cache controller invalidates any copy of the data stored in the second bank. If the bypass condition does not exceed the threshold, the cache controller stores the data with a clean state in the second bank. In both cases, the cache controller writes the data to a lower-level memory.
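The bypass decision can be modeled directly from the abstract. How the bypass condition is computed, and its threshold, are left unspecified there, so the score parameter and threshold value below are assumptions.

```python
# Sketch of the bypass decision per the abstract above. The
# bypass-condition metric and BYPASS_THRESHOLD are assumptions.

BYPASS_THRESHOLD = 3

def handle_write(second_bank, addr, data, bypass_score):
    """Handle a write destined for the powered-down first bank.
    Returns the entry stored in the second bank, or None if bypassed.
    In both cases the data also goes to lower-level memory (modeled
    by the caller)."""
    if bypass_score > BYPASS_THRESHOLD:
        second_bank.pop(addr, None)        # invalidate any cached copy
        return None
    second_bank[addr] = ("clean", data)    # store with a clean state
    return second_bank[addr]
```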


    SPILL DATA MANAGEMENT
    29.
    Invention patent application
    SPILL DATA MANAGEMENT (Pending, published)

    Publication number: US20140164708A1

    Publication date: 2014-06-12

    Application number: US13708090

    Application date: 2012-12-07

    CPC classification number: G06F12/0875 G06F12/0891 G06F12/123 Y02D10/13

    Abstract: A processor discards spill data from a memory hierarchy once the final access to the spill data has been performed by a compiled program executing at the processor. In some embodiments, the final access is determined based on a special-purpose load instruction configured for this purpose. In some embodiments, the determination is made based on the location of a stack pointer indicating that a method of the executing program has returned, so that data of the returned method remaining in the stack frame will no longer be accessed. Because the spill data is discarded after the final access, it is not transferred through the memory hierarchy.
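The stack-pointer variant can be sketched as follows, assuming a downward-growing stack so that returning from a method moves the stack pointer to a higher address; the frame-range check and the dirty-line bookkeeping are illustrative assumptions.

```python
# Sketch of the stack-pointer-based spill-data discard above. When a
# frame is popped, dirty spill lines inside it are dropped instead of
# being written back down the memory hierarchy. Illustrative only.

class SpillAwareCache:
    def __init__(self):
        self.dirty = {}     # address -> spill data awaiting write-back

    def spill(self, addr, data):
        self.dirty[addr] = data

    def on_frame_return(self, old_sp, new_sp):
        """Discard dirty lines in the popped frame [old_sp, new_sp).
        Assumes a downward-growing stack, so old_sp < new_sp."""
        discarded = [a for a in self.dirty if old_sp <= a < new_sp]
        for a in discarded:
            del self.dirty[a]   # never transferred through the hierarchy
        return sorted(discarded)
```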

