1. DRAM COMPRESSION SCHEME TO REDUCE POWER CONSUMPTION IN MOTION COMPENSATION AND DISPLAY REFRESH
    Invention Application (In Force)

    Publication No.: US20140204105A1

    Publication Date: 2014-07-24

    Application No.: US13995575

    Filing Date: 2011-12-21

    Abstract: Systems and methods of operating a memory controller may provide for receiving a write request from a motion compensation module, wherein the write request includes video data. A compression of the video data may be conducted to obtain compressed data, wherein the compression of the video data is transparent to the motion compensation module. In addition, the compressed data can be stored to one or more memory chips. Moreover, a read request may be received, wherein stored data is retrieved from at least one of the one or more memory chips in response to the request. Additionally, a decompression of the stored data may be conducted to obtain decompressed data.

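    As a rough illustration of the idea in this abstract, the sketch below models a write path that compresses data before it reaches a simulated DRAM array and a read path that decompresses on the way back, so the caller only ever handles raw data. The run-length codec, the block size, and the flat-array DRAM model are assumptions made for the example; the abstract does not name a particular compression algorithm.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Toy run-length codec standing in for whatever lossless scheme the
         * controller would actually use. */
        static size_t rle_compress(const uint8_t *src, size_t len, uint8_t *dst)
        {
            size_t out = 0;
            for (size_t i = 0; i < len; ) {
                uint8_t v = src[i];
                size_t run = 1;
                while (i + run < len && src[i + run] == v && run < 255)
                    run++;
                dst[out++] = (uint8_t)run;
                dst[out++] = v;
                i += run;
            }
            return out;
        }

        static size_t rle_decompress(const uint8_t *src, size_t len, uint8_t *dst)
        {
            size_t out = 0;
            for (size_t i = 0; i + 1 < len; i += 2) {
                memset(dst + out, src[i + 1], src[i]);
                out += src[i];
            }
            return out;
        }

        /* "DRAM" modeled as fixed-size blocks; 2x headroom covers the
         * worst-case RLE expansion. */
        #define BLOCK_BYTES 4096u
        #define NUM_BLOCKS  16u
        static uint8_t dram[NUM_BLOCKS][2 * BLOCK_BYTES];
        static size_t  stored_len[NUM_BLOCKS];

        /* Write path: the motion compensation module hands over raw video
         * data; compression happens inside the controller. */
        static void controller_write(unsigned block, const uint8_t *data, size_t len)
        {
            stored_len[block] = rle_compress(data, len, dram[block]);
        }

        /* Read path: fetch the compressed block and return decompressed data. */
        static size_t controller_read(unsigned block, uint8_t *out)
        {
            return rle_decompress(dram[block], stored_len[block], out);
        }

        int main(void)
        {
            uint8_t tile[BLOCK_BYTES], back[BLOCK_BYTES];
            memset(tile, 0x80, sizeof tile);           /* a flat gray tile */
            controller_write(0, tile, sizeof tile);
            size_t n = controller_read(0, back);
            printf("stored %zu bytes for a %u-byte tile; round trip ok: %d\n",
                   stored_len[0], BLOCK_BYTES,
                   n == sizeof tile && memcmp(tile, back, n) == 0);
            return 0;
        }

    Because compression and decompression live entirely behind controller_write and controller_read, the caller's interface never changes, which is one way to read the "transparent to the motion compensation module" property described above.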

2. EFFICIENT LOCKING OF MEMORY PAGES
    Invention Application (In Force)

    Publication No.: US20130311738A1

    Publication Date: 2013-11-21

    Application No.: US13996438

    Filing Date: 2012-03-30

    IPC Classification: G06F12/14

    Abstract: An apparatus is described that contains a processing core comprising a CPU core and at least one accelerator coupled to the CPU core. The CPU core comprises a pipeline having a translation look-aside buffer. The CPU core also comprises logic circuitry to set a lock bit in attribute data of an entry within the translation look-aside buffer to lock a page of memory reserved for the accelerator.

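    A hedged software model of the mechanism described above: each TLB entry carries attribute bits, one of which is a lock flag, and victim selection refuses to evict locked entries, so a page reserved for the accelerator stays mapped. The entry layout, the eight-way organization, and the function names are assumptions for illustration, not details taken from the patent.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define TLB_WAYS      8
        #define TLB_ATTR_LOCK (1u << 0)   /* lock bit inside the attribute field */

        /* Software model of a TLB entry whose attribute data carries a lock bit. */
        struct tlb_entry {
            uint64_t vpn;    /* virtual page number */
            uint64_t pfn;    /* physical frame number */
            uint32_t attr;   /* attribute bits, including TLB_ATTR_LOCK */
            bool     valid;
        };

        static struct tlb_entry tlb[TLB_WAYS];

        /* Set the lock bit on the entry mapping a page reserved for the
         * accelerator, so it cannot be chosen as an eviction victim. */
        static void tlb_lock_page(uint64_t vpn)
        {
            for (int i = 0; i < TLB_WAYS; i++)
                if (tlb[i].valid && tlb[i].vpn == vpn)
                    tlb[i].attr |= TLB_ATTR_LOCK;
        }

        /* Victim selection for a refill: prefer an invalid way, otherwise any
         * unlocked way; -1 means every way is locked. */
        static int tlb_pick_victim(void)
        {
            for (int i = 0; i < TLB_WAYS; i++)
                if (!tlb[i].valid)
                    return i;
            for (int i = 0; i < TLB_WAYS; i++)
                if (!(tlb[i].attr & TLB_ATTR_LOCK))
                    return i;
            return -1;
        }

        int main(void)
        {
            /* Map the accelerator's page into way 0, then lock it. */
            tlb[0] = (struct tlb_entry){ .vpn = 0x1000, .pfn = 0x8000,
                                         .attr = 0, .valid = true };
            tlb_lock_page(0x1000);

            /* Fill the remaining ways so only the lock bit decides the victim. */
            for (int i = 1; i < TLB_WAYS; i++)
                tlb[i] = (struct tlb_entry){ .vpn = (uint64_t)i, .pfn = (uint64_t)i,
                                             .attr = 0, .valid = true };

            printf("victim way: %d (way 0 is locked and skipped)\n",
                   tlb_pick_victim());
            return 0;
        }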

3. Scheduling Workloads Based On Cache Asymmetry
    Invention Application (In Force)

    Publication No.: US20120233393A1

    Publication Date: 2012-09-13

    Application No.: US13042547

    Filing Date: 2011-03-08

    IPC Classification: G06F12/06, G06F12/08

    Abstract: In one embodiment, a processor includes a first cache and a second cache, a first core associated with the first cache and a second core associated with the second cache. The caches are of asymmetric sizes, and a scheduler can intelligently schedule threads to the cores based at least in part on awareness of this asymmetry and resulting cache performance information obtained during a training phase of at least one of the threads.

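    The scheduling idea can be sketched as a simple policy: profile each thread's cache behavior during a training phase, then place threads that look cache-sensitive on the core backed by the larger cache. The core sizes, the miss-rate threshold, and the sample threads below are invented for the example; the abstract covers the general approach rather than this particular heuristic.

        #include <stdio.h>

        /* Two cores whose caches have asymmetric sizes (invented numbers). */
        struct core { const char *name; unsigned cache_kb; };
        static const struct core cores[2] = {
            { "core0/large-cache", 2048 },
            { "core1/small-cache",  512 },
        };

        /* Per-thread cache performance gathered during a training phase,
         * e.g. from hardware performance counters; values are made up. */
        struct thread { const char *name; double train_miss_rate; };

        /* Policy sketch: a thread whose training run shows many misses is
         * treated as cache-sensitive and scheduled onto the larger cache. */
        static int pick_core(const struct thread *t)
        {
            return t->train_miss_rate > 0.05 ? 0 : 1;
        }

        int main(void)
        {
            struct thread threads[] = {
                { "frame_decode", 0.12 },   /* large working set */
                { "ui_poll_loop", 0.01 },   /* tiny footprint    */
            };
            for (unsigned i = 0; i < sizeof threads / sizeof threads[0]; i++) {
                int c = pick_core(&threads[i]);
                printf("%-12s -> %s (%u KB)\n", threads[i].name,
                       cores[c].name, cores[c].cache_kb);
            }
            return 0;
        }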

6. Embedded Branch Prediction Unit
    Invention Application (In Force)

    Publication No.: US20140019736A1

    Publication Date: 2014-01-16

    Application No.: US13992723

    Filing Date: 2011-12-30

    IPC Classification: G06F9/38

    CPC Classification: G06F9/3806, G06F9/30058

    Abstract: In accordance with some embodiments of the present invention, a branch prediction unit for an embedded controller may be placed in association with the instruction fetch unit instead of the decode stage. In addition, the branch prediction unit may include no branch predictor. Also, the return address stack may be associated with the instruction decode stage and is structurally separate from the branch prediction unit. In some cases, this arrangement reduces the area of the branch prediction unit, as well as power consumption.

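    To make the arrangement concrete, the toy model below keeps a small branch-target-buffer-style structure with no taken/not-taken predictor in the fetch stage (a hit simply redirects fetch to the stored target), while the return address stack is a separate structure owned by the decode stage. The table sizes, direct-mapped indexing, and function names are assumptions, not details from the patent.

        #include <stdint.h>
        #include <stdio.h>

        /* Fetch-stage prediction unit: a small branch target buffer and
         * nothing else -- there is no direction predictor here. */
        #define BTB_ENTRIES 16u
        struct btb_entry { uint32_t pc; uint32_t target; int valid; };
        static struct btb_entry btb[BTB_ENTRIES];

        static uint32_t fetch_predict(uint32_t pc)
        {
            const struct btb_entry *e = &btb[(pc >> 2) % BTB_ENTRIES];
            /* BTB hit: redirect to the cached target; miss: fall through. */
            return (e->valid && e->pc == pc) ? e->target : pc + 4;
        }

        static void fetch_train(uint32_t pc, uint32_t target)
        {
            btb[(pc >> 2) % BTB_ENTRIES] = (struct btb_entry){ pc, target, 1 };
        }

        /* Decode-stage return address stack, structurally separate from the
         * fetch-side prediction unit. */
        #define RAS_DEPTH 8
        static uint32_t ras[RAS_DEPTH];
        static int ras_top;

        static void decode_saw_call(uint32_t pc)
        {
            if (ras_top < RAS_DEPTH)
                ras[ras_top++] = pc + 4;               /* push return address */
        }

        static uint32_t decode_saw_return(void)
        {
            return ras_top > 0 ? ras[--ras_top] : 0;   /* 0 = stack empty */
        }

        int main(void)
        {
            fetch_train(0x100, 0x200);                 /* a branch at 0x100 */
            printf("fetch predicts 0x%x for pc 0x100\n",
                   (unsigned)fetch_predict(0x100));
            decode_saw_call(0x100);                    /* decode pushes 0x104 */
            printf("decode pops return address 0x%x\n",
                   (unsigned)decode_saw_return());
            return 0;
        }

    Keeping the return address stack at decode rather than inside the fetch-side unit mirrors the "structurally separate" wording of the abstract.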

7. Scheduling workloads based on cache asymmetry
    Invention Grant (In Force)

    Publication No.: US08898390B2

    Publication Date: 2014-11-25

    Application No.: US13042547

    Filing Date: 2011-03-08

    Abstract: In one embodiment, a processor includes a first cache and a second cache, a first core associated with the first cache and a second core associated with the second cache. The caches are of asymmetric sizes, and a scheduler can intelligently schedule threads to the cores based at least in part on awareness of this asymmetry and resulting cache performance information obtained during a training phase of at least one of the threads.


9. Dram compression scheme to reduce power consumption in motion compensation and display refresh
    Invention Grant (In Force)

    Publication No.: US09268723B2

    Publication Date: 2016-02-23

    Application No.: US13995575

    Filing Date: 2011-12-21

    Abstract: Systems and methods of operating a memory controller may provide for receiving a write request from a motion compensation module, wherein the write request includes video data. A compression of the video data may be conducted to obtain compressed data, wherein the compression of the video data is transparent to the motion compensation module. In addition, the compressed data can be stored to one or more memory chips. Moreover, a read request may be received, wherein stored data is retrieved from at least one of the one or more memory chips in response to the request. Additionally, a decompression of the stored data may be conducted to obtain decompressed data.
