Abstract:
A data access system including a processor and a storage system including a main memory and a cache module. The cache module includes a final level cache (FLC) controller and a cache. The cache is configured as an FLC to be accessed prior to accessing the main memory. The processor is coupled to levels of cache separate from the FLC. In response to data required by the processor not being in those levels of cache, the processor generates a physical address corresponding to a physical location in the storage system. The FLC controller generates a virtual address based on the physical address; the virtual address corresponds to a physical location within the FLC or the main memory. In response to the virtual address not corresponding to a physical location within the FLC, the cache module causes the data required by the processor to be retrieved from the main memory.
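The lookup flow the abstract describes can be sketched as follows; this is a minimal illustration, and all names (`FLCController`, `translate`, `read`) and the identity address mapping are assumptions, not details from the patent.

```python
# Illustrative sketch: the FLC controller derives a virtual address from
# the processor's physical address, checks the FLC, and falls back to
# main memory on a miss. The identity translation is a placeholder.

class FLCController:
    def __init__(self, flc_lines, main_memory):
        self.flc_lines = flc_lines      # virtual address -> cached data
        self.main_memory = main_memory  # backing store: address -> data

    def translate(self, physical_address):
        # Hypothetical mapping; a real controller would maintain a
        # lookup table from processor PAs to FLC virtual addresses.
        return physical_address

    def read(self, physical_address):
        virtual_address = self.translate(physical_address)
        if virtual_address in self.flc_lines:     # FLC hit
            return self.flc_lines[virtual_address]
        data = self.main_memory[virtual_address]  # FLC miss: main memory
        self.flc_lines[virtual_address] = data    # fill the FLC
        return data

flc = FLCController(flc_lines={}, main_memory={0x1000: b"payload"})
assert flc.read(0x1000) == b"payload"  # miss, retrieved from main memory
assert 0x1000 in flc.flc_lines         # subsequently cached in the FLC
```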
Abstract:
A data access system including a storage drive, a processor, and a cache module. The processor, in response to data required by the processor not being cached within one or more levels of cache of the processor, generates a first physical address (PA). The cache module includes a memory and first and second controllers. The memory is a final level of cache. The first controller converts the first PA into a virtual address. The second controller: converts the virtual address into a second PA; based on the second PA, determines whether the data is cached within the memory; and if the data is cached, accesses and forwards the data to the processor. The first or second controller determines whether a cache miss has occurred and, in response to a cache miss and based on the second PA or a third PA of the storage drive, retrieves the data from the storage drive.
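The two-stage translation and the storage-drive miss path can be sketched as below; the function names, the table contents, and the direct use of the second PA to index the storage drive are illustrative assumptions.

```python
# Sketch of the two-controller pipeline: controller 1 maps the
# processor's PA to a virtual address; controller 2 maps that to a
# second PA, checks the final-level-cache memory, and on a miss
# retrieves the data from the storage drive.

def first_controller(pa, pa_to_va):
    return pa_to_va[pa]                 # first PA -> virtual address

def second_controller(va, va_to_pa2, flc_memory, storage_drive):
    pa2 = va_to_pa2[va]                 # virtual address -> second PA
    if pa2 in flc_memory:               # data cached in final level
        return flc_memory[pa2]
    data = storage_drive[pa2]           # cache miss: read storage drive
    flc_memory[pa2] = data              # fill for subsequent accesses
    return data

flc_memory = {}
data = second_controller(
    first_controller(0x40, {0x40: 0xA0}),
    {0xA0: 0x7},
    flc_memory,
    storage_drive={0x7: "blk"},
)
assert data == "blk" and flc_memory[0x7] == "blk"
```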
Abstract:
Systems, methods, and computer programs are disclosed for adaptive compression-based demand paging. Two or more compressed software image segments are stored in each of one or more memories. Each compressed software image segment corresponds to at least one software task and includes one or more pages that are compressed in accordance with a compression characteristic different from that of the other software image segments. If it is determined that a page request associated with an executing software task identifies a page that is not stored in the system memory, then a portion of the compressed software image segment containing the identified page is decompressed, and the decompressed page is stored in the system memory.
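The demand-paging flow can be sketched with standard-library compression standing in for the segment-specific "compression characteristic"; the zlib levels, page size, and helper names are assumptions for illustration only.

```python
import zlib

# Sketch: each segment's pages are compressed with a segment-specific
# characteristic (here, a zlib level). On a page fault, the segment
# holding the page is decompressed and the page is stored in system
# memory for the executing task.

PAGE_SIZE = 16  # illustrative page size

def build_segment(pages, level):
    # One compressed software image segment for one software task.
    return zlib.compress(b"".join(pages), level)

def handle_page_request(page_index, segment, system_memory):
    if page_index in system_memory:      # page already resident
        return system_memory[page_index]
    blob = zlib.decompress(segment)      # decompress the segment
    page = blob[page_index * PAGE_SIZE:(page_index + 1) * PAGE_SIZE]
    system_memory[page_index] = page     # store the decompressed page
    return page

segment = build_segment([b"A" * PAGE_SIZE, b"B" * PAGE_SIZE], level=9)
system_memory = {}
assert handle_page_request(1, segment, system_memory) == b"B" * PAGE_SIZE
assert 1 in system_memory  # subsequent requests hit system memory
```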
Abstract:
Examples of enabling cache read optimization for mobile memory devices are described. One or more access commands may be received, from a host, at a memory device. The one or more access commands may instruct the memory device to access at least two data blocks. The memory device may generate pre-fetch information for the at least two data blocks based at least in part on an order of accessing the at least two data blocks.
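A minimal sketch of deriving pre-fetch information from access order follows; the "predict the observed successor of each block" policy and the function name are assumptions, as the abstract does not specify the prediction scheme.

```python
# Sketch: the memory device records the order in which the host's
# access commands touch data blocks, and derives pre-fetch hints by
# mapping each block to the block observed to follow it.

def generate_prefetch_info(access_order):
    # Later entries overwrite earlier ones, so the hint reflects the
    # most recently observed successor of each block.
    return {cur: nxt for cur, nxt in zip(access_order, access_order[1:])}

hints = generate_prefetch_info([10, 11, 12, 11, 12])
assert hints == {10: 11, 11: 12, 12: 11}
```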
Abstract:
A dual-mode, dual-display shared resource computing (SRC) device is usable to stream SRC content from a host SRC device while in an on-line mode and maintain functionality with the content during an off-line mode. Such remote SRC devices can be used to maintain multiple user-specific caches and to back-up cached content for multi-device systems.
Abstract:
Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure that the correct virtual function is invoked whether the call is made from the CPU or the GPU.
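One way to picture the virtual-function guarantee is a dispatch table keyed by both method and device, so the same shared object resolves to the implementation compiled for wherever the call originates. The sketch below is purely illustrative; the registry design and all names are assumptions, not the patent's mechanism.

```python
# Sketch: per-device implementations of each virtual method are
# registered, and dispatch selects by (class, method, device), so the
# "right" version runs whether the caller is the CPU or the GPU.

class SharedObject:
    _impls = {}  # (class, method name, device) -> callable

    @classmethod
    def register(cls, method, device, fn):
        cls._impls[(cls, method, device)] = fn

    def call(self, method, device):
        return self._impls[(type(self), method, device)](self)

class Shape(SharedObject):
    pass

Shape.register("area", "cpu", lambda self: "cpu area routine")
Shape.register("area", "gpu", lambda self: "gpu area kernel")

s = Shape()
assert s.call("area", "cpu") == "cpu area routine"
assert s.call("area", "gpu") == "gpu area kernel"
```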
Abstract:
Systems and methods are provided that may be implemented to detect impeded flow of cooling air within a chassis enclosure of an information handling system based on sensed air pressure and/or air pressure changes occurring within the cooling air flow while the system is actively running. The systems and methods may be further implemented to take one or more thermal management actions based on sensed air pressure within the chassis enclosure together with other optional sensed parameters (e.g., such as sensed temperatures and/or sensed user operating mode based on accelerometer and/or gyroscope sensor input).
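The decision logic can be sketched as a simple pressure-deviation check combined with a temperature reading; every threshold, unit, and action name below is an illustrative assumption, since the abstract names no specific values.

```python
# Sketch: a sustained chassis-pressure deviation from baseline suggests
# impeded cooling-air flow; the thermal action then also considers
# other sensed parameters such as temperature.

def detect_impeded_airflow(pressure_pa, baseline_pa, tolerance_pa=50.0):
    # Assumed rule: pressure rising well above baseline implies a
    # blocked vent or obstructed airflow path.
    return (pressure_pa - baseline_pa) > tolerance_pa

def thermal_action(pressure_pa, baseline_pa, temperature_c):
    if detect_impeded_airflow(pressure_pa, baseline_pa):
        # Combine pressure with a sensed temperature before acting.
        return "throttle" if temperature_c > 85.0 else "raise_fan_speed"
    return "none"

assert thermal_action(1101.0, 1000.0, 90.0) == "throttle"
assert thermal_action(1101.0, 1000.0, 70.0) == "raise_fan_speed"
assert thermal_action(1010.0, 1000.0, 90.0) == "none"
```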
Abstract:
Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
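The partial-sharing idea can be pictured as a designated window of the virtual address space that both devices see, with everything outside it remaining private; the window base, size, and class name below are illustrative assumptions.

```python
# Sketch: only one region of the virtual address space is shared, so a
# discrete GPU need only mirror that window rather than the process's
# entire address space, while an integrated GPU can map it directly.

class SharedRegion:
    def __init__(self, base, size):
        self.base, self.size = base, size

    def contains(self, va):
        # Addresses inside the window are visible to both CPU and GPU.
        return self.base <= va < self.base + self.size

region = SharedRegion(base=0x8000_0000, size=0x1000_0000)
assert region.contains(0x8000_1000)      # shared: visible to both
assert not region.contains(0x0000_4000)  # outside the window: private
```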