Abstract:
A system and technique for cache line memory access includes a processor, a sectored cache, a memory, a memory controller, and logic. The logic is executable to, responsive to a miss in the cache of a sector address requested by the processor, request a cache line from the memory. The cache line request is divided into first and second cache subline requests. A determination is made as to which of the first and second cache subline requests corresponds to the requested sector address. Responsive to determining that the first cache subline request corresponds to the requested sector address, the first cache subline request is placed into a high priority queue of the memory controller and the second cache subline request is placed into a low priority queue of the memory controller. Requests from the high priority queue are serviced before requests from the low priority queue.
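Illustrative sketch (not part of the abstract above): the routing step might look roughly like the following C fragment, a minimal sketch assuming a 128-byte line split into two 64-byte sublines, with toy fixed-size FIFOs standing in for the memory controller's priority queues. All identifiers here (issue_line_fill, enqueue, and so on) are hypothetical.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE    128u            /* full cache line in bytes (assumed) */
#define SUBLINE_SIZE (LINE_SIZE / 2) /* two sublines per line (assumed) */

typedef struct { uint64_t addr; uint32_t len; } subline_req_t;

/* Toy fixed-size FIFO standing in for a memory controller queue. */
typedef struct { subline_req_t buf[16]; int n; } queue_t;
static queue_t high_q, low_q;

static void enqueue(queue_t *q, subline_req_t r) { q->buf[q->n++] = r; }

/* On a sector miss, divide the line fill into two subline requests and
 * route the subline containing the requested sector address to the high
 * priority queue; the other half goes to the low priority queue. */
static void issue_line_fill(uint64_t line_addr, uint64_t sector_addr)
{
    subline_req_t first  = { line_addr,                SUBLINE_SIZE };
    subline_req_t second = { line_addr + SUBLINE_SIZE, SUBLINE_SIZE };
    bool in_first = (sector_addr - line_addr) < SUBLINE_SIZE;

    enqueue(in_first ? &high_q : &low_q,  first);
    enqueue(in_first ? &low_q  : &high_q, second);
}

int main(void)
{
    issue_line_fill(0x1000, 0x1050);  /* requested sector is in the second subline */
    printf("demand subline at 0x%llx is serviced first\n",
           (unsigned long long)high_q.buf[0].addr);  /* prints 0x1040 */
    return 0;
}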
Abstract:
According to one aspect of the present disclosure, a method and technique for performance-driven cache line memory access is disclosed. The method includes: receiving, by a memory controller of a data processing system, a request for a cache line; dividing the request into a plurality of cache subline requests, wherein at least one of the cache subline requests comprises a high priority data request and at least one of the cache subline requests comprises a low priority data request; servicing the high priority data request; and delaying servicing of the low priority data request until a low priority condition has been satisfied.
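Illustrative sketch: the servicing order might be arbitrated as below. This assumes the low priority condition is simply an idle high priority queue, which is one plausible reading; the disclosure leaves the condition unspecified, and queue_t and its helpers are hypothetical.

#include <stdbool.h>

/* Hypothetical queue of pending subline requests; only a depth counter
 * is modeled here. */
typedef struct { int pending; } queue_t;

static bool queue_empty(const queue_t *q) { return q->pending == 0; }
static void service_next(queue_t *q)      { if (q->pending) q->pending--; }

/* One arbitration step: the high priority (demand) request is always
 * serviced first; a low priority request is serviced only once the
 * assumed low priority condition -- an idle high priority queue --
 * has been satisfied. */
static void memory_controller_tick(queue_t *high, queue_t *low)
{
    if (!queue_empty(high))
        service_next(high);   /* demand data: never delayed */
    else if (!queue_empty(low))
        service_next(low);    /* condition met: drain low priority work */
}

int main(void)
{
    queue_t high = { 1 }, low = { 2 };
    memory_controller_tick(&high, &low);  /* services the demand request */
    memory_controller_tick(&high, &low);  /* high now empty: low proceeds */
    return 0;
}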
Abstract:
According to one aspect of the present disclosure, a system and technique for performance-driven cache line memory access is disclosed. The system includes: a processor, a cache hierarchy coupled to the processor, and a memory coupled to the cache hierarchy. The system also includes logic executable to, responsive to receiving a request for a cache line: divide the request into a plurality of cache subline requests, wherein at least one of the cache subline requests comprises a high priority data request and at least one of the cache subline requests comprises a low priority data request; service the high priority data request; and delay servicing of the low priority data request until a low priority condition has been satisfied.
Abstract:
A storage subsystem combining solid state drive (SSD) and hard disk drive (HDD) technologies provides low access latency and low complexity. Separate free lists are maintained for the SSD and the HDD, and blocks of file system data are stored uniquely on either the SSD or the HDD. When a read access is made to the subsystem, if the block is present on the SSD, the data is returned; if the block is present on the HDD, it is migrated to the SSD and the HDD block is returned to the HDD free list. On a write access, if the block is present on either the SSD or the HDD, the block is overwritten; if the block is not present in the subsystem, the block is written to the HDD.
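Illustrative sketch: the read and write paths can be modeled in a few lines of C. This toy model keeps everything in memory, so migration and free-list handling collapse to retagging a block; the location tags, block size, and function names are all assumptions for illustration.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NBLOCKS 1024
#define BLKSZ   512

typedef enum { NOWHERE, ON_SSD, ON_HDD } location_t;

/* Toy in-memory model: a location tag and a data buffer per block stand
 * in for the real devices, block maps, and per-device free lists. */
static location_t where[NBLOCKS];
static uint8_t    blocks[NBLOCKS][BLKSZ];

/* Read path: an SSD-resident block is returned directly; an HDD-resident
 * block is first migrated to the SSD and its HDD copy is returned to the
 * HDD free list (modeled here by simply retagging the block). */
static void subsystem_read(uint32_t blk, void *buf)
{
    if (where[blk] == ON_HDD)
        where[blk] = ON_SSD;          /* migrate, free the HDD block */
    memcpy(buf, blocks[blk], BLKSZ);  /* serve the read */
}

/* Write path: overwrite the block wherever it lives; a block new to the
 * subsystem is first written to the HDD. */
static void subsystem_write(uint32_t blk, const void *buf)
{
    if (where[blk] == NOWHERE)
        where[blk] = ON_HDD;
    memcpy(blocks[blk], buf, BLKSZ);
}

int main(void)
{
    uint8_t buf[BLKSZ] = { 0 };
    subsystem_write(7, buf);   /* new block lands on the HDD */
    subsystem_read(7, buf);    /* first read migrates it to the SSD */
    printf("block 7 now on %s\n", where[7] == ON_SSD ? "SSD" : "HDD");
    return 0;
}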
Abstract:
A method, system and computer-usable medium are disclosed for a lock-spin-wait operation for managing multi-threaded applications in a multi-core computing environment. A target processor core, referred to as a “spin-wait core” (SWC), is assigned (or reserved) primarily for running spin-waiting threads. Threads operating in the multi-core computing environment that are identified as spin-waiting are then moved to a run queue associated with the SWC to acquire a lock. The spin-waiting threads are then allocated a lock response time that is less than the default lock response time of the operating system (OS) associated with the SWC. If a spin-waiting thread fails to acquire a lock within the allocated lock response time, the SWC is relinquished, ceding its availability so that other spin-waiting threads in the run queue can acquire a lock. Once a spin-waiting thread acquires a lock, it is migrated to its original, or an available, processor core.
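Illustrative sketch: a rough Linux/pthreads version of the acquire path. The SWC core index, the spin budget, the use of sched_yield to model relinquishing the SWC, and all function names are assumptions for illustration, not the disclosed implementation.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <time.h>

#define SWC_CPU          3      /* core reserved as the spin-wait core (assumed) */
#define LOCK_RESPONSE_NS 50000  /* allocated spin budget, below the OS default (assumed) */

static atomic_flag the_lock = ATOMIC_FLAG_INIT;

static long long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Pin the calling thread to one core, i.e., move it to that core's run queue. */
static void move_to_core(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);
}

/* Acquire path: spin on the SWC for at most the allocated lock response
 * time; on timeout, relinquish the SWC (modeled with sched_yield) so other
 * spin-waiting threads in its run queue can make progress. Once the lock
 * is acquired, migrate back to the original core. */
static void swc_lock(int original_cpu)
{
    move_to_core(SWC_CPU);  /* join the SWC run queue */
    long long deadline = now_ns() + LOCK_RESPONSE_NS;

    while (atomic_flag_test_and_set_explicit(&the_lock, memory_order_acquire)) {
        if (now_ns() > deadline) {
            sched_yield();  /* relinquish the SWC */
            deadline = now_ns() + LOCK_RESPONSE_NS;
        }
    }
    move_to_core(original_cpu);  /* migrate back after acquiring */
}

static void swc_unlock(void)
{
    atomic_flag_clear_explicit(&the_lock, memory_order_release);
}

int main(void)
{
    int original_cpu = sched_getcpu();  /* remember the caller's core */
    swc_lock(original_cpu);
    /* ... critical section ... */
    swc_unlock();
    return 0;
}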