Power Aware Padding
    31.
    Invention Application (Active)

    Publication No.: US20160055094A1

    Publication Date: 2016-02-25

    Application No.: US14462773

    Filing Date: 2014-08-19

    Abstract: Aspects include computing devices, systems, and methods for implementing cache memory access requests for data smaller than a cache line and eliminating overfetching from a main memory by combining the data with padding data whose size equals the difference between the cache-line size and the size of the data. A processor may determine whether the data, uncompressed or compressed, is smaller than a cache line using a size of the data or a compression ratio of the data. The processor may generate the padding data using constant data values or a pattern of data values. The processor may send a write cache memory access request for the combined data to a cache memory controller, which may write the combined data to a cache memory. The cache memory controller may send a write memory access request to a memory controller, which may write the combined data to a memory.

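    The padding step the abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation: the 64-byte line size, the zero pad byte, and the function name are all assumptions.

    ```python
    CACHE_LINE_SIZE = 64  # bytes; a typical line size, assumed for illustration

    def pad_to_cache_line(data: bytes, pad_byte: int = 0x00) -> bytes:
        """Combine write data with padding so that a full cache line is written,
        avoiding a read-modify-write (overfetch) from main memory.

        The padding size is the difference between the cache-line size and the
        data size, as described in the abstract."""
        if len(data) >= CACHE_LINE_SIZE:
            return data[:CACHE_LINE_SIZE]
        padding = bytes([pad_byte]) * (CACHE_LINE_SIZE - len(data))
        return data + padding
    ```

    A pattern of data values could be used instead of a constant pad byte, as the abstract also notes.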

    Supplemental Write Cache Command For Bandwidth Compression
    32.
    Invention Application (Active)

    Publication No.: US20160055093A1

    Publication Date: 2016-02-25

    Application No.: US14462763

    Filing Date: 2014-08-19

    Abstract: Aspects include computing devices, systems, and methods for implementing cache memory access requests for data smaller than a cache line and eliminating overfetching from a main memory by writing supplemental data to the unfilled portions of the cache line. A cache memory controller may receive a cache memory access request with a supplemental write command for data smaller than a cache line. The cache memory controller may write supplemental data to the portions of the cache line not filled by the data in response to a write cache memory access request or a cache miss during a read cache memory access request. In the event of a cache miss, the cache memory controller may retrieve the data from the main memory, excluding any overfetch data, and write the data and the supplemental data to the cache line. Eliminating overfetching reduces the bandwidth and power required to retrieve data from main memory.

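    The supplemental write command described above can be sketched roughly as follows. The names, the 64-byte line size, and the zero supplemental byte are illustrative assumptions, not details from the filing:

    ```python
    CACHE_LINE_SIZE = 64  # bytes, assumed

    def handle_supplemental_write(line: bytearray, data: bytes, supplemental: int = 0x00) -> None:
        """Model a cache controller handling a supplemental write command:
        the data (smaller than a line) is written at the start of the line, and
        the unfilled remainder is filled with supplemental bytes rather than
        overfetched from main memory."""
        assert len(line) == CACHE_LINE_SIZE and len(data) <= CACHE_LINE_SIZE
        line[:len(data)] = data
        for i in range(len(data), CACHE_LINE_SIZE):
            line[i] = supplemental
    ```

    On a read miss the same fill step would apply after fetching only the data itself, excluding any overfetch bytes.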

    Cache Line Compaction of Compressed Data Segments

    Publication No.: US20160041905A1

    Publication Date: 2016-02-11

    Application No.: US14451639

    Filing Date: 2014-08-05

    Abstract: Methods, devices, and non-transitory process-readable storage media for compacting data within cache lines of a cache. An aspect method may include identifying, by a processor of the computing device, a base address (e.g., a physical or virtual cache address) for a first data segment, identifying a data size (e.g., based on a compression ratio) for the first data segment, obtaining a base offset based on the identified data size and the base address of the first data segment, and calculating an offset address by offsetting the base address with the obtained base offset, wherein the calculated offset address is associated with a second data segment. In some aspects, the method may include identifying a parity value for the first data segment based on the base address and obtaining the base offset by performing a lookup on a stored table using the identified data size and identified parity value.
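    The address arithmetic in this abstract can be sketched as below. The offset-table contents, the parity derivation, and the function names are hypothetical placeholders; the patent only states that the base offset is looked up from a stored table using the data size and a parity value derived from the base address.

    ```python
    # Hypothetical offset table keyed by (data size in bytes, parity value).
    # Values are illustrative: they pair each half-line slot with its companion.
    OFFSET_TABLE = {
        (32, 0): 32, (32, 1): -32,
        (16, 0): 16, (16, 1): -16,
    }

    def parity_of(base_addr: int, data_size: int) -> int:
        # Parity of the segment's slot index within the line (an assumption).
        return (base_addr // data_size) & 1

    def companion_address(base_addr: int, data_size: int) -> int:
        """Calculate the offset address associated with the second data segment
        by offsetting the first segment's base address with the looked-up base
        offset."""
        parity = parity_of(base_addr, data_size)
        return base_addr + OFFSET_TABLE[(data_size, parity)]
    ```

    With this sketch, two compressed 32-byte segments at addresses 0 and 32 resolve to each other and can be compacted into one 64-byte line.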

    SYSTEM AND METHOD FOR CONTROLLING CENTRAL PROCESSING UNIT POWER WITH GUARANTEED TRANSIENT DEADLINES
    35.
    Invention Application (Active)

    Publication No.: US20130151879A1

    Publication Date: 2013-06-13

    Application No.: US13759709

    Filing Date: 2013-02-05

    Abstract: Methods, systems and devices that include a dynamic clock and voltage scaling (DCVS) solution configured to compute and enforce performance guarantees for a group of processors, ensuring that the processors do not remain in a busy state (e.g., due to transient workloads) for a combined period exceeding, by more than a predetermined amount of time, the period required for one of the processors to complete its pre-computed steady-state workload. The DCVS may adjust the frequency and/or voltage of one or more of the processors based on a variable delay to ensure that the multiprocessor system only falls behind its steady-state workload by, at most, a predefined maximum amount of work, irrespective of the operating frequency or voltage of the processors.

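    The governing loop the abstract implies might look like the following sketch. This is a simplified illustration of bounding the backlog behind the steady-state workload; the function name, units, and step policy are all assumptions, not the patented algorithm:

    ```python
    def next_frequency(backlog_work: float, max_backlog: float,
                       freq: float, freq_min: float, freq_max: float,
                       step: float) -> float:
        """Raise the clock when pending (transient) work exceeds the predefined
        maximum backlog above the steady-state workload; otherwise let the
        frequency drop to save power. Frequencies are clamped to [freq_min,
        freq_max]."""
        if backlog_work > max_backlog:
            return min(freq + step, freq_max)
        return max(freq - step, freq_min)
    ```

    In a real DCVS governor this decision would run periodically per processor group, with the step size derived from the variable delay the abstract mentions.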

    Cache line compaction of compressed data segments

    Publication No.: US10261910B2

    Publication Date: 2019-04-16

    Application No.: US15077534

    Filing Date: 2016-03-22

    Abstract: Methods, devices, and non-transitory process-readable storage media for compacting data within cache lines of a cache. An aspect method may include identifying, by a processor of the computing device, a base address (e.g., a physical or virtual cache address) for a first data segment, identifying a data size (e.g., based on a compression ratio) for the first data segment, obtaining a base offset based on the identified data size and the base address of the first data segment, and calculating an offset address by offsetting the base address with the obtained base offset, wherein the calculated offset address is associated with a second data segment. In some aspects, the method may include identifying a parity value for the first data segment based on the base address and obtaining the base offset by performing a lookup on a stored table using the identified data size and identified parity value.
