Abstract:
Systems and methods are disclosed for providing secure access to a non-volatile random access memory. One such method comprises sending an unlock password to a non-volatile random access memory (NVRAM) in response to a trusted boot program executing on a system on chip (SoC). The NVRAM compares the unlock password to a pass gate value provisioned in the NVRAM. If the unlock password matches the pass gate value, a pass gate is unlocked to enable the SoC to access a non-volatile cell array in the NVRAM.
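As a rough behavioral sketch only (not taken from the abstract), the pass-gate check could look like the following, where the password width, cell-array size, and method names are illustrative assumptions:

    # Hypothetical model of the NVRAM pass gate described above.
    class Nvram:
        def __init__(self, pass_gate_value: int):
            self._pass_gate_value = pass_gate_value   # value provisioned in the NVRAM
            self._gate_unlocked = False
            self._cell_array = bytearray(256)         # stand-in non-volatile cell array

        def unlock(self, unlock_password: int) -> bool:
            # Compare the unlock password sent by the trusted boot program
            # to the provisioned pass gate value; unlock on a match.
            self._gate_unlocked = (unlock_password == self._pass_gate_value)
            return self._gate_unlocked

        def read(self, addr: int) -> int:
            # The SoC may access the cell array only after the pass gate is unlocked.
            if not self._gate_unlocked:
                raise PermissionError("pass gate locked")
            return self._cell_array[addr]

    nv = Nvram(pass_gate_value=0xC0DE1234)
    assert nv.unlock(0xC0DE1234)
    print(nv.read(0))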
Abstract:
Systems, methods, and computer programs are disclosed for selectively compressing/decompressing flash storage data. An embodiment of a system comprises a compression/decompression component, a flash memory device, a flash controller in communication with the flash memory device, and a storage driver in communication with the compression/decompression component and the flash controller. The storage driver is configured to selectively control compression and decompression of data stored in the flash memory device, via the compression/decompression component, according to a storage usage collar comprising an upper usage threshold and a lower usage threshold.
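A minimal sketch of the storage usage "collar" decision is given below, assuming the driver can query the fraction of capacity in use; the threshold values, class name, and use of zlib as the compression component are assumptions, not details from the abstract:

    import zlib

    UPPER_USAGE_THRESHOLD = 0.85   # enable compression above this fraction used (assumed value)
    LOWER_USAGE_THRESHOLD = 0.60   # disable compression below this fraction used (assumed value)

    class StorageDriver:
        def __init__(self):
            self.compression_enabled = False

        def update_policy(self, used_fraction: float) -> None:
            # Hysteresis between the two collar bounds: turn compression on
            # above the upper threshold, off again only below the lower one.
            if used_fraction >= UPPER_USAGE_THRESHOLD:
                self.compression_enabled = True
            elif used_fraction <= LOWER_USAGE_THRESHOLD:
                self.compression_enabled = False

        def write(self, data: bytes) -> bytes:
            # Route writes through the compression component only while the
            # collar policy has compression enabled.
            return zlib.compress(data) if self.compression_enabled else data

    drv = StorageDriver()
    drv.update_policy(0.90)
    print(len(drv.write(b"x" * 4096)))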
Abstract:
Systems and methods are disclosed for providing memory channel interleaving with selective power or performance optimization. One such method involves configuring a memory address map for two or more memory devices accessed via two or more respective memory channels with an interleaved region and a linear region. The interleaved region comprises an interleaved address space for relatively higher performance use cases. The linear region comprises a linear address space for relatively lower power use cases. Memory requests are received from one or more clients. The memory requests comprise a preference for power savings or performance. Received memory requests are assigned to the linear region or the interleaved region according to the preference for power savings or performance.
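As a simplified illustration of the request-routing step, the following sketch assigns a request to the interleaved or linear address space based on its stated preference; the address ranges and request format are assumptions made for the example:

    INTERLEAVED_REGION = (0x0000_0000, 0x3FFF_FFFF)  # spread across both memory channels
    LINEAR_REGION      = (0x4000_0000, 0x7FFF_FFFF)  # channel 0 fills before channel 1

    def assign_region(prefers_performance: bool) -> tuple:
        # Higher-performance use cases go to the interleaved address space;
        # lower-power use cases go to the linear address space so an idle
        # channel (and its memory device) can enter a low-power state.
        return INTERLEAVED_REGION if prefers_performance else LINEAR_REGION

    print(hex(assign_region(prefers_performance=True)[0]))
    print(hex(assign_region(prefers_performance=False)[0]))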
Abstract:
Systems and methods are disclosed for reducing memory power consumption via pre-filled dynamic random access memory (DRAM) values. One embodiment is a method for providing DRAM values. A fill request is received from an executing program to fill an allocated portion of the DRAM with a predetermined pattern of values. The predetermined pattern of values is stored in a fill value memory residing in the DRAM. A fill command is sent to the DRAM. In response to the fill command, a plurality of sense amp latches are connected to the fill value memory to update the corresponding sense amp latch bits with the predetermined pattern of values stored in the fill value memory.
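The behavioral sketch below models only the data flow of the fill command; the fill value memory and sense-amp latches are represented as byte arrays, and the row size and method names are assumptions. Actual DRAM circuitry and timing are not represented:

    ROW_BYTES = 8   # assumed latch width for illustration

    class DramRow:
        def __init__(self):
            self.sense_amp_latch = bytearray(ROW_BYTES)

    class Dram:
        def __init__(self, rows: int):
            self.fill_value_memory = bytearray(ROW_BYTES)   # holds the predetermined pattern
            self.rows = [DramRow() for _ in range(rows)]

        def store_fill_pattern(self, pattern: bytes) -> None:
            self.fill_value_memory[:] = pattern[:ROW_BYTES]

        def fill_command(self, row_index: int) -> None:
            # Connect the sense-amp latches to the fill value memory so the
            # latch bits take on the predetermined pattern without the data
            # being driven over the memory bus by the SoC.
            self.rows[row_index].sense_amp_latch[:] = self.fill_value_memory

    dram = Dram(rows=4)
    dram.store_fill_pattern(bytes([0x00] * ROW_BYTES))   # e.g. zero-fill on allocation
    dram.fill_command(0)
    print(dram.rows[0].sense_amp_latch.hex())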
Abstract:
Systems and methods are disclosed for implementing error correction control (ECC) regions in a memory device without the need to ECC-protect the entire memory device. An exemplary method comprises defining one or more ECC regions in a memory device, the memory device coupled to a system on a chip (SoC). An ECC block is provided on the SoC, the ECC block in communication with the one or more ECC regions in the memory device. A determination is made with the ECC block whether to store data in a first of the one or more ECC regions. Responsive to the determination, ECC bits are generated for, and interleaved with, the received data, and the interleaved ECC bits and data are caused to be written to the first ECC region. Otherwise, received data is sent to a non-ECC region of the memory device.
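The sketch below illustrates only the routing decision made by the on-SoC ECC block, with a toy parity byte standing in for real ECC generation; the region boundaries, chunk size, and write interface are assumptions for the example:

    ECC_REGION_START, ECC_REGION_END = 0x0000, 0x0FFF   # assumed bounds of the first ECC region
    memory = {}

    def ecc_byte(chunk: bytes) -> int:
        # Toy stand-in for ECC generation: one parity byte per 8-byte chunk.
        p = 0
        for b in chunk:
            p ^= b
        return p

    def write(addr: int, data: bytes) -> None:
        if ECC_REGION_START <= addr <= ECC_REGION_END:
            # Generate ECC bits for the received data and interleave them
            # with it before writing to the ECC region.
            interleaved = bytearray()
            for i in range(0, len(data), 8):
                chunk = data[i:i + 8]
                interleaved += chunk + bytes([ecc_byte(chunk)])
            memory[addr] = bytes(interleaved)
        else:
            # Otherwise pass the data through to the non-ECC region unchanged.
            memory[addr] = data

    write(0x0010, b"critical")
    write(0x8000, b"bulk data")
    print(len(memory[0x0010]), len(memory[0x8000]))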
Abstract:
Systems, methods, and computer programs are disclosed for allocating memory in a hybrid parallel/serial memory system. One method comprises configuring a memory address map for a multi-rank memory system with a dedicated serial access region in a first memory rank and a dedicated parallel access region in a second memory rank. A request is received for a virtual memory page. If the request comprises a performance hint, the virtual memory page is selectively assigned to a free physical page in the dedicated serial access region in the first memory rank or the dedicated parallel access region in the second memory rank.
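A simplified allocator sketch follows; the free-page lists, rank layout, and the assumption that a performance hint is served from the parallel access region are all illustrative choices, not details stated in the abstract:

    serial_region_free_pages   = list(range(0x1000, 0x1010))  # dedicated serial access region, first rank
    parallel_region_free_pages = list(range(0x2000, 0x2010))  # dedicated parallel access region, second rank

    def alloc_page(performance_hint: bool) -> int:
        # Assumed policy: a request carrying a performance hint is backed by the
        # parallel access region; otherwise the serial access region is used.
        pool = parallel_region_free_pages if performance_hint else serial_region_free_pages
        return pool.pop()

    print(hex(alloc_page(performance_hint=True)))
    print(hex(alloc_page(performance_hint=False)))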
Abstract:
A computing device and methods for exposing a solid-state non-volatile memory element to multiple masters in a computing device are disclosed. A portion of a solid-state non-volatile memory element includes code and data for use by a non-boot processing resource. A host controller in communication with the solid-state non-volatile memory element is modified to receive and respond to a resource identifier unique to the processing resource that is requesting read access to the solid-state non-volatile memory element. Logic executed by a boot master and logic executed by a non-boot processing resource are synchronized in response to a set of indicators.
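As an illustration of the access check only, the sketch below models a host controller that keys read requests on a resource identifier (RID); the RID values, partition table, and the single "ready" indicator used for synchronization are assumptions made for the example:

    PARTITIONS = {
        "boot_master":   (0x0000, 0x7FFF),   # boot code/data
        "non_boot_core": (0x8000, 0x8FFF),   # code/data for the non-boot processing resource
    }
    RID_TO_MASTER = {0: "boot_master", 1: "non_boot_core"}
    indicators = {"non_boot_region_ready": False}   # synchronization flag set by the boot master

    def host_controller_read(rid: int, addr: int) -> str:
        master = RID_TO_MASTER[rid]
        lo, hi = PARTITIONS[master]
        if not (lo <= addr <= hi):
            raise PermissionError(f"{master} may not read {addr:#x}")
        if master == "non_boot_core" and not indicators["non_boot_region_ready"]:
            raise RuntimeError("boot master has not released this region yet")
        return f"data@{addr:#x} for {master}"

    indicators["non_boot_region_ready"] = True      # boot master signals readiness
    print(host_controller_read(rid=1, addr=0x8000))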
Abstract:
Systems and methods are disclosed for expanding memory for a system on chip (SoC). A memory card loaded in an expandable memory socket is electrically coupled to the SoC via an expansion bus. The memory card comprises a first volatile memory device. In response to detecting the memory card, an expanded virtual memory map is configured. The expanded virtual memory map comprises a first virtual memory space associated with the first volatile memory device and a second virtual memory space associated with a second volatile memory device electrically coupled to the SoC via a memory bus. One or more peripheral images associated with the second virtual memory space are relocated to a first portion of the first virtual memory space. A second portion of the first virtual memory space is configured as a block device for performing swap operations associated with the second virtual memory space.
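A high-level sketch of the expanded virtual memory map is given below; the sizes, the names of the peripheral images, and the even split of the card's space into a relocation area and a swap block device are illustrative assumptions:

    def build_expanded_map(card_detected: bool):
        # Second virtual memory space: internal DRAM on the memory bus.
        vmap = {"internal_dram": {"size": 2 << 30, "peripheral_images": ["modem", "dsp"]}}
        if not card_detected:
            return vmap
        card_size = 4 << 30   # first volatile memory device on the expansion card (assumed size)
        # First portion of the card's virtual space receives the relocated
        # peripheral images; the second portion is exposed as a block device
        # used for swap on behalf of the internal memory space.
        vmap["expansion_card"] = {
            "relocated_images": vmap["internal_dram"].pop("peripheral_images"),
            "swap_block_device_bytes": card_size // 2,
        }
        return vmap

    print(build_expanded_map(card_detected=True))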
Abstract:
Systems and methods are disclosed for providing memory channel interleaving with selective power or performance optimization. One such method comprises configuring a memory address map for two or more memory devices accessed via two or more respective memory channels with an interleaved region and a linear region. The interleaved region comprises an interleaved address space for relatively higher performance tasks. The linear region comprises a linear address space for relatively lower power tasks. A request is received from a process for a virtual memory page. The request comprises a preference for power savings or performance. The virtual memory page is assigned to a free physical page in the linear region or the interleaved region according to the preference for power savings or performance.
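The page-assignment step can be sketched as below, where the per-region free-page lists and the shape of the request are assumptions and only the selection described in the abstract is modeled:

    free_pages = {
        "linear":      list(range(0x100, 0x110)),   # lower-power tasks
        "interleaved": list(range(0x200, 0x210)),   # higher-performance tasks
    }

    def assign_virtual_page(prefers_performance: bool) -> int:
        # Back the requested virtual memory page with a free physical page
        # from the region matching the stated preference.
        region = "interleaved" if prefers_performance else "linear"
        return free_pages[region].pop()

    print(hex(assign_virtual_page(True)), hex(assign_virtual_page(False)))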
Abstract:
Systems, methods, and computer programs are disclosed for scheduling volatile memory maintenance events. One embodiment is a method comprising: a memory controller determining a time-of-service (ToS) window for executing a maintenance event for a volatile memory device coupled to the memory controller via a memory data interface; the memory controller providing a signal to each of a plurality of processors on a system on chip (SoC) for scheduling the maintenance event; each of the plurality of processors independently generating, in response to the signal, a corresponding schedule notification for the maintenance event; and the memory controller determining when to execute the maintenance event in response to receiving one or more of the schedule notifications generated by the plurality of processors and based on a processor priority scheme.
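A sketch of the scheduling handshake is shown below: the controller signals every processor, collects whatever schedule notifications come back, and picks an execution time using a processor priority scheme. The priority ordering, notification format, and fallback behavior are illustrative assumptions:

    PROCESSOR_PRIORITY = {"cpu": 0, "gpu": 1, "dsp": 2}   # lower value = higher priority (assumed)

    def choose_maintenance_time(tos_window, notifications):
        # notifications: {processor: proposed execution time within the ToS window}
        start, end = tos_window
        for proc in sorted(notifications, key=lambda p: PROCESSOR_PRIORITY[p]):
            t = notifications[proc]
            if start <= t <= end:
                return t            # honor the highest-priority in-window proposal
        return start                # fall back to the start of the ToS window

    window = (1000, 1200)               # time-of-service window (arbitrary units)
    notes = {"gpu": 1100, "dsp": 1050}  # e.g. the CPU did not respond in time
    print(choose_maintenance_time(window, notes))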