Abstract:
Systems and methods are disclosed for conserving power consumption in a memory system. One such system comprises a system on chip (SoC) and an encoder. The SoC comprises one or more memory clients for accessing a dynamic random access memory (DRAM) memory system coupled to the SoC. The encoder resides on the SoC and is configured to reduce a data activity factor of memory data received from the memory clients by encoding the received memory data according to a compression scheme and providing the encoded memory data to the DRAM memory system. The DRAM memory system is configured to decode the encoded memory data according to the compression scheme into the received memory data.
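As a minimal, illustrative sketch (not the disclosed implementation, and the abstract does not name a particular compression scheme), the following C program uses simple run-length encoding to stand in for the encoder: data is compressed on the SoC side before it is driven onto the memory bus, reducing the number of bytes, and hence bit transitions (the data activity factor), that the DRAM side must receive and decode.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical run-length encoder: the abstract does not specify the
 * compression scheme, so RLE stands in for "a compression scheme" that
 * reduces the number of bytes driven to the DRAM memory system. */
static size_t rle_encode(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        uint8_t run = 1;
        while (i + run < n && in[i + run] == in[i] && run < 255)
            run++;
        out[o++] = run;      /* run length */
        out[o++] = in[i];    /* byte value */
        i += run;
    }
    return o;                /* encoded size sent over the memory bus */
}

static size_t rle_decode(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i + 1 < n; i += 2)
        for (uint8_t r = 0; r < in[i]; r++)
            out[o++] = in[i + 1];
    return o;                /* original data recovered on the DRAM side */
}

int main(void)
{
    const uint8_t write_data[16] = {0};   /* highly compressible all-zero block */
    uint8_t bus[32], dram[16];
    size_t sent = rle_encode(write_data, sizeof write_data, bus);
    size_t got  = rle_decode(bus, sent, dram);
    printf("bytes on bus: %zu of %zu, round-trip ok: %d\n",
           sent, sizeof write_data,
           got == sizeof write_data && memcmp(write_data, dram, got) == 0);
    return 0;
}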
Abstract:
Systems and methods are disclosed for reducing memory I/O power. One embodiment is a system comprising a system on chip (SoC), a DRAM memory device, and a data masking power reduction module. The SoC comprises a memory controller. The DRAM memory device is coupled to the memory controller via a plurality of DQ pins. The data masking power reduction module comprises logic configured to drive the DQ pins to a power saving state during a data masking operation.
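A hedged sketch of the masking decision in C, assuming a simplified single-byte-lane model and an arbitrary parking level (the actual power-saving state of the DQ pins is not specified by the abstract): when the data-mask signal is asserted for a beat, the lane is driven to a static level instead of toggling with don't-care write data.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified model of one byte lane: if the data-mask (DM) signal is asserted
 * for this beat, the DQ pins are parked at a hypothetical low-power level
 * (here, all zeros) instead of toggling with don't-care write data. */
static uint8_t drive_dq_lane(uint8_t write_byte, bool dm_asserted)
{
    const uint8_t DQ_POWER_SAVE_LEVEL = 0x00;  /* assumed parking value */
    return dm_asserted ? DQ_POWER_SAVE_LEVEL : write_byte;
}

int main(void)
{
    uint8_t burst[4]  = {0xA5, 0xFF, 0x3C, 0x81};
    bool    masked[4] = {false, true, true, false};  /* beats 1 and 2 masked */

    for (int beat = 0; beat < 4; beat++)
        printf("beat %d: DQ = 0x%02X\n", beat,
               drive_dq_lane(burst[beat], masked[beat]));
    return 0;
}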
Abstract:
Various embodiments of methods and systems for hardware (“HW”) based dynamic memory management in a portable computing device (“PCD”) are disclosed. One exemplary method includes generating a lookup table (“LUT”) to track each memory page located across multiple portions of a volatile memory. The records in the LUT are updated to keep track of data locations. When the PCD enters a sleep state to conserve energy, the LUT may be queried to determine which specific memory pages in a first portion of volatile memory (e.g., an upper bank) contain data content and which pages in a second portion of volatile memory (e.g., a lower bank) are available for receipt of content. Based on the query, the location of the data in the memory pages of the upper bank is known and can be quickly migrated to memory pages in the lower bank which are identified for receipt of the data.
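The following C sketch illustrates, under assumptions not taken from the disclosure (tiny bank sizes, a one-flag-per-page LUT record, and hypothetical function names), how a LUT query on sleep entry can drive migration of occupied upper-bank pages into free lower-bank pages; the actual data copy is omitted and only the bookkeeping is shown.

#include <stdbool.h>
#include <stdio.h>

#define PAGES_PER_BANK 4   /* tiny illustrative sizes */

/* Hypothetical lookup-table entry: one record per memory page. */
struct lut_entry {
    bool has_content;
};

static struct lut_entry upper[PAGES_PER_BANK];  /* first portion of volatile memory  */
static struct lut_entry lower[PAGES_PER_BANK];  /* second portion of volatile memory */

/* On sleep entry: query the LUT, then migrate occupied upper-bank pages into
 * free lower-bank pages so the upper bank can be put into a low-power state. */
static void migrate_for_sleep(void)
{
    int dst = 0;
    for (int src = 0; src < PAGES_PER_BANK; src++) {
        if (!upper[src].has_content)
            continue;
        while (dst < PAGES_PER_BANK && lower[dst].has_content)
            dst++;                               /* find a free lower-bank page */
        if (dst == PAGES_PER_BANK)
            break;                               /* lower bank full; stop migrating */
        /* (actual data copy omitted; only the LUT bookkeeping is shown) */
        lower[dst].has_content = true;
        upper[src].has_content = false;
        printf("migrated upper page %d -> lower page %d\n", src, dst);
    }
}

int main(void)
{
    upper[1].has_content = true;
    upper[3].has_content = true;
    lower[0].has_content = true;
    migrate_for_sleep();
    return 0;
}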
Abstract:
Systems and methods are disclosed for providing memory channel interleaving with selective power or performance optimization. One such method involves configuring a memory address map for two or more memory devices accessed via two or more respective memory channels with an interleaved region and a linear region. The interleaved region comprises an interleaved address space for relatively higher performance use cases. The linear region comprises a linear address space for relatively lower power use cases. Memory requests are received from one or more clients. The memory requests comprise a preference for power savings or performance. Received memory requests are assigned to the linear region or the interleaved region according to the preference for power savings or performance.
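A minimal sketch in C of the address-map split, with an assumed two-channel system and a hypothetical 1 KiB interleave granularity: requests that prefer performance are spread across channels by the interleaved mapping, while requests that prefer power savings are kept in a single channel so the other channel can idle.

#include <stdio.h>

enum pref { PREF_POWER, PREF_PERFORMANCE };

#define NUM_CHANNELS     2
#define INTERLEAVE_BYTES 1024u   /* assumed interleave granularity */

/* Map a request to a channel: interleaved allocations alternate channels on
 * 1 KiB boundaries for bandwidth; linear allocations stay in channel 0 so the
 * other channel can remain in a low-power state. */
static unsigned channel_for(unsigned addr, enum pref p)
{
    if (p == PREF_PERFORMANCE)
        return (addr / INTERLEAVE_BYTES) % NUM_CHANNELS;  /* interleaved region */
    /* linear region: a real allocator would place this request in a dedicated
     * linear address range that is served entirely by channel 0 */
    return 0;
}

int main(void)
{
    printf("perf  request @0x0400 -> channel %u\n", channel_for(0x0400, PREF_PERFORMANCE));
    printf("perf  request @0x0800 -> channel %u\n", channel_for(0x0800, PREF_PERFORMANCE));
    printf("power request @0x0400 -> channel %u\n", channel_for(0x0400, PREF_POWER));
    return 0;
}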
Abstract:
Systems and methods are disclosed for conserving power consumption in a memory system. One such system comprises a DRAM memory system and a system on chip (SoC). The SoC is coupled to the DRAM memory system via a memory bus. The SoC comprises one or more memory controllers for processing memory requests from one or more memory clients for accessing the DRAM memory system. The one or more memory controllers are configured to selectively conserve memory power consumption by dynamically resizing a bus width of the memory bus.
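As an illustration only (the thresholds and widths below are assumptions, not values from the disclosure), a controller policy for dynamically resizing the bus width might look like the following C sketch, which narrows the effective DQ width when demanded bandwidth is low.

#include <stdio.h>

/* Hypothetical policy: the controller monitors demanded bandwidth and picks a
 * bus width (in bits), powering down unused DQ lanes when demand is low. */
static unsigned select_bus_width(unsigned demanded_mbps)
{
    if (demanded_mbps > 6000) return 64;   /* full width for heavy traffic  */
    if (demanded_mbps > 1500) return 32;   /* half width                    */
    return 16;                             /* quarter width saves I/O power */
}

int main(void)
{
    unsigned samples[] = {200, 2500, 9000};
    for (unsigned i = 0; i < 3; i++)
        printf("%5u MB/s -> %u-bit bus\n", samples[i], select_bus_width(samples[i]));
    return 0;
}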
Abstract:
Systems and methods are directed to reducing power consumption of data transfer between a processor and a memory. Data to be transferred on a data bus between the processor and the memory is checked for a first data pattern, and if the first data pattern is present, transfer of the first data pattern is suppressed on the data bus. Instead, a first address corresponding to the first data pattern is transferred on a second bus between the processor and the memory. The first address is smaller than the first data pattern. The processor comprises a processor-side first-in-first-out (FIFO) buffer and the memory comprises a memory-side FIFO, wherein the first data pattern is present at the first address in the processor-side FIFO and at the first address in the memory-side FIFO.
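A hedged C sketch of the pattern-suppression path, using hypothetical names and a small mirrored FIFO on each side: when outgoing data matches an entry already present in both FIFOs, only the FIFO index (the smaller "first address") is sent on the side bus and the memory side reconstructs the data locally; otherwise the full data is driven on the data bus and both FIFOs learn the new pattern.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FIFO_DEPTH 4

/* Mirrored pattern FIFOs: identical contents are maintained on the processor
 * side and the memory side, so an index (the "first address") is enough to
 * reconstruct a matched data pattern without driving it on the data bus. */
static uint64_t cpu_fifo[FIFO_DEPTH];
static uint64_t mem_fifo[FIFO_DEPTH];
static unsigned wr_ptr;

/* Processor side: returns true and the FIFO index if the pattern is present;
 * otherwise inserts it into both FIFOs (modelling the synchronized update). */
static bool try_suppress(uint64_t data, unsigned *index_out)
{
    for (unsigned i = 0; i < FIFO_DEPTH; i++)
        if (cpu_fifo[i] == data) { *index_out = i; return true; }
    cpu_fifo[wr_ptr] = data;
    mem_fifo[wr_ptr] = data;               /* memory side learns the same entry */
    wr_ptr = (wr_ptr + 1) % FIFO_DEPTH;
    return false;
}

int main(void)
{
    uint64_t writes[] = {0xDEADBEEF, 0x12345678, 0xDEADBEEF};
    for (unsigned i = 0; i < 3; i++) {
        unsigned idx;
        if (try_suppress(writes[i], &idx))
            printf("hit:  send index %u on side bus, 0x%llx rebuilt from memory-side FIFO\n",
                   idx, (unsigned long long)mem_fifo[idx]);
        else
            printf("miss: send full data 0x%llx on data bus\n",
                   (unsigned long long)writes[i]);
    }
    return 0;
}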
Abstract:
Aspects include computing devices, systems, and methods for reorganizing the storage of data in memory so that fewer than all of the memory devices of a memory module are energized for read or write transactions. The memory devices may be connected to individual select lines such that re-order logic may determine which memory devices to energize for a transaction according to a re-ordered memory map. The re-order logic may re-order memory addresses such that memory addresses provided by a processor for a transaction are converted to re-ordered memory addresses according to the re-ordered memory map, without the processor having to change its memory address scheme. The re-ordered memory map may provide for reduced energy consumption by the memory devices, or a balance of energy consumption and performance speed for latency-tolerant processes.
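The following C sketch, with illustrative device counts and sizes that are not taken from the disclosure, shows the basic re-order idea: the processor's address is translated through a re-ordered map so that a contiguous region fills one memory device before spilling into the next, allowing the select lines of the remaining devices to stay de-asserted for accesses within that region.

#include <stdio.h>

#define NUM_DEVICES  4
#define DEVICE_BYTES 0x1000u   /* tiny illustrative device size */

/* Hypothetical re-ordered memory map: the processor keeps its own address
 * scheme; the re-order logic translates each address so that a region fills
 * one device completely before using the next, letting the other devices
 * remain de-selected (and in low power) for accesses in that region. */
static void reorder(unsigned cpu_addr, unsigned *device, unsigned *dev_addr)
{
    *device   = (cpu_addr / DEVICE_BYTES) % NUM_DEVICES;  /* which select line to assert */
    *dev_addr =  cpu_addr % DEVICE_BYTES;                 /* offset within that device   */
}

int main(void)
{
    unsigned addrs[] = {0x0000, 0x0800, 0x1000, 0x2FF0};
    for (unsigned i = 0; i < 4; i++) {
        unsigned dev, off;
        reorder(addrs[i], &dev, &off);
        printf("cpu 0x%04X -> assert CS%u, device offset 0x%04X\n", addrs[i], dev, off);
    }
    return 0;
}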
Abstract:
Various aspects include methods for managing memory subsystems on a computing device. Various aspect methods may include determining a period of time to force a memory subsystem on the computing device into a low power mode, inhibiting memory access requests to the memory subsystem during the determined period of time, forcing the memory subsystem into the low power mode for the determined period of time, and executing the memory access requests to the memory subsystem inhibited during the determined period of time in response to expiration of the determined period of time.
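A minimal C sketch of the inhibit-and-defer flow, with hypothetical names and a fixed-size queue standing in for whatever buffering the aspect methods use: requests submitted while the subsystem is forced into low power are held, and they are executed when the determined period expires.

#include <stdio.h>

#define MAX_PENDING 8

/* Hypothetical deferral queue: memory requests issued while the subsystem is
 * forced into low power are held here and executed when the window expires. */
static int pending[MAX_PENDING];
static int pending_count;
static int in_low_power;

static void submit_request(int request_id)
{
    if (in_low_power && pending_count < MAX_PENDING) {
        pending[pending_count++] = request_id;   /* inhibit: defer the access */
        printf("request %d deferred\n", request_id);
    } else {
        printf("request %d executed now\n", request_id);
    }
}

static void enter_low_power(int duration_ms)
{
    in_low_power = 1;
    printf("memory subsystem forced into low power for %d ms\n", duration_ms);
}

static void exit_low_power(void)
{
    in_low_power = 0;
    for (int i = 0; i < pending_count; i++)      /* drain deferred requests */
        printf("request %d executed after window\n", pending[i]);
    pending_count = 0;
}

int main(void)
{
    submit_request(1);
    enter_low_power(5);       /* determined period begins */
    submit_request(2);
    submit_request(3);
    exit_low_power();         /* period expires; inhibited requests run */
    return 0;
}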