Abstract:
Systems and methods that allow for Dynamic Clock and Voltage Scaling (DCVS) aware interprocessor communications among processors such as those used in or with a portable computing device ("PCD") are presented. During operation of the PCD, at least one data packet is received at a first processing component. Additionally, the first processing component also receives workload information about a second processing component operating under dynamic clock and voltage scaling (DCVS). A determination is made, based at least in part on the received workload information, whether to send the at least one data packet from the first processing component to the second processing component or to a buffer, providing a cost-effective ability to reduce power consumption and improve battery life in PCDs with multiple cores or CPUs implementing DCVS algorithms or logic.
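The routing decision described above can be read as a simple threshold check on the second component's reported workload. The sketch below illustrates that reading; the function names, the workload metric, and the 80% threshold are illustrative assumptions, not details taken from the abstract.

/* Minimal sketch of the routing decision: a first processing component
 * either forwards a packet to a second (DCVS-managed) component or
 * defers it to a buffer based on the second component's reported
 * workload.  All names and the threshold value are assumptions. */
#include <stdio.h>
#include <stdbool.h>

#define BUSY_THRESHOLD_PCT 80   /* assumed workload level above which we buffer */

typedef struct {
    unsigned workload_pct;      /* reported utilization of the second component */
} dcvs_status_t;

/* Returns true if the packet should be sent now, false if it should be
 * placed in a buffer so the second component can remain at a lower
 * clock/voltage operating point. */
static bool send_now(const dcvs_status_t *peer)
{
    return peer->workload_pct < BUSY_THRESHOLD_PCT;
}

int main(void)
{
    dcvs_status_t peer = { .workload_pct = 92 };

    if (send_now(&peer))
        printf("forward packet to second processing component\n");
    else
        printf("defer packet to buffer\n");
    return 0;
}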
Abstract:
Systems and methods for improved implementation of low power modes in a multi-core system-on-a-chip (SoC) are presented. A cache memory of the multi-core SoC not being accessed by other components of the SoC is identified and a number of dirty cache lines present in the cache memory is determined. For a low power mode of the core, an entry latency based on the number of dirty cache lines is determined, and an exit latency is determined. An entry power cost for the low power mode is also determined based on the number of dirty cache lines. A determination is made whether the low power mode for the cache memory results in a power savings over an active mode for the cache memory based at least on the entry power cost and the entry latency of the cache memory entering the low power mode.
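One way to read this power-savings determination is as a comparison between the energy spent entering, residing in, and exiting the low power mode and the energy of simply staying active over the same idle window, with both the entry latency and the entry power cost scaling with the number of dirty lines to flush. The constants and helper below are assumed values used only to illustrate that comparison.

/* Illustrative sketch of the power-savings check: entry cost and entry
 * latency grow with the number of dirty cache lines that must be written
 * back, and the low power mode is chosen only if it beats staying active
 * for the expected idle window.  All constants are assumptions. */
#include <stdio.h>
#include <stdbool.h>

#define FLUSH_ENERGY_PER_LINE_NJ  5.0    /* assumed cost to write back one dirty line */
#define FLUSH_TIME_PER_LINE_US    0.01   /* assumed time to write back one dirty line */
#define EXIT_LATENCY_US           50.0   /* assumed cache wake-up latency */
#define ACTIVE_POWER_MW           120.0  /* assumed active/leakage power of the cache */
#define LPM_POWER_MW              5.0    /* assumed retention power in the low power mode */

static bool lpm_saves_power(unsigned dirty_lines, double idle_us)
{
    double entry_latency_us = dirty_lines * FLUSH_TIME_PER_LINE_US;
    double entry_cost_uj    = dirty_lines * FLUSH_ENERGY_PER_LINE_NJ / 1000.0;

    /* Not enough idle time to enter and exit the mode at all. */
    if (idle_us <= entry_latency_us + EXIT_LATENCY_US)
        return false;

    double resident_us      = idle_us - entry_latency_us - EXIT_LATENCY_US;
    double lpm_energy_uj    = entry_cost_uj + LPM_POWER_MW * resident_us / 1000.0;
    double active_energy_uj = ACTIVE_POWER_MW * idle_us / 1000.0;

    return lpm_energy_uj < active_energy_uj;
}

int main(void)
{
    printf("few dirty lines, long idle:  %s\n",
           lpm_saves_power(100, 10000.0) ? "enter LPM" : "stay active");
    printf("many dirty lines, short idle: %s\n",
           lpm_saves_power(50000, 500.0) ? "enter LPM" : "stay active");
    return 0;
}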
Abstract:
A dynamic cache extension in a multi-cluster heterogeneous processor architecture is described. One embodiment is a system comprising a first processor cluster having a first level two (L2) cache and a second processor cluster having a second L2 cache. The system further comprises a controller in communication with the first and second L2 caches. The controller receives a processor workload input and a cache workload input from the first processor cluster. Based on the processor workload input and the cache workload input, the controller determines whether a current task associated with the first processor cluster is limited by a size threshold of the first L2 cache or a performance threshold of the first processor cluster. If the current task is limited by the size threshold of the first L2 cache, the controller uses at least a portion of the second L2 cache as an extension of the first L2 cache.
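A minimal sketch of the controller's choice follows, assuming the cache workload input is summarized as an L2 miss rate and the processor workload input as a utilization percentage; the thresholds, action names, and decision rule are illustrative assumptions rather than details from the source.

/* Sketch of the controller decision: given a processor workload input
 * and a cache workload input from the first cluster, decide whether the
 * running task is limited by L2 size (in which case part of the second
 * cluster's L2 is used as an extension) or by processor performance. */
#include <stdio.h>

#define CPU_SATURATION_PCT  90   /* assumed performance threshold of the first cluster */
#define L2_MISS_RATE_LIMIT  20   /* assumed miss-rate %, standing in for the size threshold */

enum cache_action { NO_CHANGE, EXTEND_INTO_SECOND_L2, SCALE_UP_CLUSTER };

static enum cache_action decide(unsigned cpu_load_pct, unsigned l2_miss_rate_pct)
{
    if (l2_miss_rate_pct > L2_MISS_RATE_LIMIT && cpu_load_pct < CPU_SATURATION_PCT)
        return EXTEND_INTO_SECOND_L2;      /* task looks cache-size limited */
    if (cpu_load_pct >= CPU_SATURATION_PCT)
        return SCALE_UP_CLUSTER;           /* task looks performance limited */
    return NO_CHANGE;
}

int main(void)
{
    static const char *name[] = { "no change",
                                  "extend first L2 into second L2",
                                  "raise cluster performance" };
    printf("load=60%% miss=35%% -> %s\n", name[decide(60, 35)]);
    printf("load=95%% miss=10%% -> %s\n", name[decide(95, 10)]);
    return 0;
}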
Abstract:
Performance-hint-driven dynamic resource management, including: receiving workload requirements and sensor inputs of a system; determining a new allocation for resources of the system; reconfiguring the resources of the system using the new allocation; evaluating performance of the system based on the reconfigured resources of the system; and generating performance hints based on the evaluated performance of the system.
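The five steps enumerated above form a feedback loop in which the evaluated performance feeds hints back into the next allocation decision. The sketch below shows one possible shape of that loop; the input, allocation, and hint structures, along with the simple policy, are illustrative assumptions only.

/* Illustrative control loop: receive workload requirements and sensor
 * inputs, determine a new resource allocation, reconfigure, evaluate the
 * result, and generate performance hints for the next iteration. */
#include <stdio.h>

typedef struct { unsigned target_fps; unsigned temp_c; } inputs_t;        /* workload + sensor inputs */
typedef struct { unsigned cpu_freq_mhz; unsigned gpu_freq_mhz; } alloc_t; /* managed resources */
typedef struct { int raise_cpu; int raise_gpu; } hints_t;                 /* performance hints */

/* Step 2: determine a new allocation from the inputs and previous hints. */
static alloc_t determine_allocation(alloc_t prev, inputs_t in, hints_t h)
{
    alloc_t a = prev;
    if (h.raise_cpu && in.temp_c < 80) a.cpu_freq_mhz += 200;
    if (h.raise_gpu && in.temp_c < 80) a.gpu_freq_mhz += 100;
    return a;
}

/* Step 4: stand-in for measuring achieved performance after reconfiguration. */
static unsigned evaluate(alloc_t a)
{
    return (a.cpu_freq_mhz + a.gpu_freq_mhz) / 30;
}

int main(void)
{
    inputs_t in    = { .target_fps = 60, .temp_c = 55 };
    hints_t  hints = { 0, 0 };
    alloc_t  alloc = { 1000, 500 };                            /* initial allocation */

    for (int i = 0; i < 3; i++) {
        alloc = determine_allocation(alloc, in, hints);        /* steps 2-3 */
        unsigned fps = evaluate(alloc);                        /* step 4 */
        hints.raise_cpu = hints.raise_gpu = (fps < in.target_fps); /* step 5 */
        printf("iteration %d: cpu=%u MHz gpu=%u MHz fps=%u\n",
               i, alloc.cpu_freq_mhz, alloc.gpu_freq_mhz, fps);
    }
    return 0;
}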
Abstract:
Systems and methods that allow for dynamic quality of service (QoS) levels for an application processor in a multi-core system-on-a-chip (SoC) in a portable computing device (PCD) are presented. During operation of the PCD, an operational load of a co-processor of the SoC is determined, where the co-processor is in communication with an application processor of the SoC. Based on the determined load, the co-processor determines a QoS level required from the application processor. The QoS level is communicated to the application processor. The application processor determines whether it can implement power optimization measures, such as entering into a low power mode (LPM), based at least in part on the dynamically communicated QoS level from the co-processor. The present disclosure provides a cost-effective ability to reduce power consumption in PCDs implementing one or more cores or CPUs that are dependent upon the application processor.
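A minimal sketch of this co-processor/application-processor exchange follows, assuming the QoS level is expressed as a maximum tolerable wake-up latency and that the application processor selects the deepest low power mode whose exit latency fits within it; the mode table, latencies, and load thresholds are assumed values for illustration.

/* Sketch: the co-processor maps its current load to a required QoS level
 * (here, a tolerable wake-up latency), and the application processor
 * picks the deepest low power mode whose exit latency honors that level. */
#include <stdio.h>

typedef struct { const char *name; unsigned exit_latency_us; } lpm_t;

static const lpm_t modes[] = {              /* shallow to deep, assumed values */
    { "clock-gated",      10 },
    { "retention",       200 },
    { "power-collapse", 2000 },
};

/* Co-processor side: derive the required QoS (max tolerable latency) from load. */
static unsigned required_latency_us(unsigned coproc_load_pct)
{
    if (coproc_load_pct > 75) return 50;    /* busy: needs fast responses */
    if (coproc_load_pct > 25) return 500;
    return 5000;                            /* nearly idle: relaxed QoS */
}

/* Application processor side: deepest LPM whose exit latency fits the QoS level. */
static const lpm_t *pick_lpm(unsigned qos_latency_us)
{
    const lpm_t *best = NULL;
    for (unsigned i = 0; i < sizeof modes / sizeof modes[0]; i++)
        if (modes[i].exit_latency_us <= qos_latency_us)
            best = &modes[i];
    return best;
}

int main(void)
{
    for (unsigned load = 10; load <= 90; load += 40) {
        unsigned qos = required_latency_us(load);
        const lpm_t *m = pick_lpm(qos);
        printf("coproc load %u%%: QoS %u us -> %s\n",
               load, qos, m ? m->name : "stay active");
    }
    return 0;
}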