Abstract:
Embodiments of the present invention provide write-access management for a flash memory device among different virtual machines (VMs) in a virtualized computing environment. In one embodiment, a virtualized computing data processing system can include a host computer with at least one processor and memory, and different VMs executing in the host computer. The system also can include a flash memory device coupled to the host computer and accessible by the VMs. Finally, a flash memory controller can manage access to the flash memory device. The controller can include program code enabled to compute a contemporaneous bandwidth of requests for write operations for the flash memory device, to allocate a corresponding number of tokens to the VMs, to accept write requests to the flash memory device from the VMs only when accompanied by a token, and to repeat the computing, allocating, and accepting after a lapse of a pre-determined time period.
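As an illustration of the token scheme, the following C++ sketch shows one possible shape of the controller's admission logic; the TokenAllocator name, the equal split of tokens among VMs, and the use of the prior period's request count as the computed bandwidth are assumptions made for illustration, not details taken from the embodiment.

// Minimal sketch of token-gated write admission (assumptions: a fixed
// allocation period and an equal split of tokens among the VMs).
#include <cstddef>
#include <vector>

class TokenAllocator {
public:
    explicit TokenAllocator(std::size_t numVMs) : tokens_(numVMs, 0) {}

    // Called once per pre-determined period: convert the write-request
    // bandwidth observed in the last period into tokens and give each VM
    // an equal share.
    void reallocate(std::size_t observedWriteRequests) {
        std::size_t perVM = observedWriteRequests / tokens_.size();
        for (std::size_t &t : tokens_) t = perVM;
    }

    // A write request is accepted only when the requesting VM can
    // surrender a token; otherwise it is rejected for this period.
    bool acceptWrite(std::size_t vmId) {
        if (tokens_[vmId] == 0) return false;
        --tokens_[vmId];  // consume one token per accepted write
        return true;
    }

private:
    std::vector<std::size_t> tokens_;  // remaining tokens per VM this period
};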
Abstract:
Embodiments of the invention are directed to optimizing the performance of a split disk cache. In one embodiment, a disk cache includes a primary region having a read portion and a write portion, and one or more smaller sample regions, each also including a read portion and a write portion. The primary region and each sample region have an independently adjustable ratio of read portion to write portion. Cached reads are distributed among the read portions of the primary and sample regions, while cached writes are distributed among the write portions of the primary and sample regions. The performance of the primary region and the performance of each sample region are tracked, such as by obtaining a hit rate for each region during a predefined interval. The read/write ratio of the primary region is then selectively adjusted according to the performance of the one or more sample regions.
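One way to picture the sample-driven adjustment is the C++ sketch below; the Region structure, the use of exactly two oppositely biased sample regions, and the fixed adjustment step are assumptions made for illustration rather than details of the embodiment.

// Minimal sketch of interval-based adjustment of the primary region's
// read/write split toward the better-performing sample region
// (assumptions: two sample regions with opposite biases and a fixed step).
#include <algorithm>

struct Region {
    double readFraction;    // fraction of the region devoted to cached reads
    long   hits = 0;        // hits observed in the current interval
    long   accesses = 0;    // lookups observed in the current interval

    double hitRate() const {
        return accesses ? static_cast<double>(hits) / accesses : 0.0;
    }
};

// At the end of a predefined interval, nudge the primary read/write ratio
// toward whichever sample region achieved the higher hit rate.
void adjustPrimary(Region &primary, const Region &readBiasedSample,
                   const Region &writeBiasedSample, double step = 0.05) {
    if (readBiasedSample.hitRate() > writeBiasedSample.hitRate())
        primary.readFraction = std::min(1.0, primary.readFraction + step);
    else if (writeBiasedSample.hitRate() > readBiasedSample.hitRate())
        primary.readFraction = std::max(0.0, primary.readFraction - step);
}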
Abstract:
Embodiments that dynamically reorganize data of cache lines in non-uniform cache access (NUCA) caches are contemplated. Various embodiments comprise a computing device having one or more processors coupled with one or more NUCA cache elements. The NUCA cache elements may comprise one or more banks of cache memory, wherein ways of the cache are horizontally distributed across multiple banks. To improve access latency of the data by the processors, the computing devices may dynamically propagate cache lines into banks closer to the processors that use those cache lines. To accomplish such dynamic reorganization, embodiments may maintain “direction” bits for cache lines. The direction bits may indicate the processor toward which the data should be moved. Further, embodiments may use the direction bits to make cache line movement decisions.
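A minimal C++ sketch of how direction bits might be kept and consulted follows; the CacheLine fields, the row-of-banks layout, and the one-bank-at-a-time migration rule are illustrative assumptions, not details of the embodiments.

// Minimal sketch of per-line direction bits in a NUCA cache (assumptions:
// banks arranged in a row, a field wide enough to name a processor, and a
// one-bank-per-decision migration policy).
#include <cstdint>

struct CacheLine {
    std::uint64_t tag = 0;
    std::uint8_t  directionBits = 0;  // which processor the line should move toward
    bool          valid = false;
};

// Record the most recent requester in the direction bits; a real design
// might instead saturate per-direction counters before committing to a move.
void recordAccess(CacheLine &line, std::uint8_t requestingProcessorId) {
    line.directionBits = requestingProcessorId;
}

// Move the line one bank closer to the bank nearest the processor named by
// its direction bits ('targetBank'); stay put once it arrives.
int nextBank(int currentBank, int targetBank) {
    if (targetBank == currentBank) return currentBank;
    return currentBank + (targetBank > currentBank ? 1 : -1);
}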
Abstract:
An adaptive prediction threshold scheme for dynamically adjusting prediction thresholds of entries in a Pattern History Table (PHT) by observing global tendencies of the branch or branches that index into the PHT entries. A count value of a prediction state counter representing a prediction state of a prediction state machine for a PHT entry is obtained. Count values in a set of counters allocated to the entry in the PHT are changed based on the count value of the entry's prediction state counter. The prediction threshold of the prediction state machine for the entry may then be adjusted based on the changed count values in the set of counters, wherein the prediction threshold is adjusted by changing a count value in a prediction threshold counter in the entry, and wherein adjusting the prediction threshold redefines predictions provided by the prediction state machine.
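The interplay of the prediction state counter, the tendency counters, and the threshold counter can be sketched in C++ as below; the counter widths, the 64-observation window, and the exact adjustment rule are assumptions chosen for illustration, not the scheme's defined behavior.

// Minimal sketch of a PHT entry with an adjustable prediction threshold
// (assumptions: a 3-bit prediction state counter, a 64-observation window,
// and the specific adjustment rule shown here).
#include <cstdint>

struct PHTEntry {
    std::uint8_t predictionState = 4;  // prediction state counter, 0..7
    std::uint8_t threshold = 4;        // prediction threshold counter
    std::uint8_t towardTaken = 0;      // set of counters recording the global
    std::uint8_t towardNotTaken = 0;   //   tendency of branches indexing this entry
};

// Predict taken when the state counter meets or exceeds the entry's threshold.
bool predictTaken(const PHTEntry &e) { return e.predictionState >= e.threshold; }

void update(PHTEntry &e, bool taken) {
    // Conventional saturating update of the prediction state counter.
    if (taken  && e.predictionState < 7) ++e.predictionState;
    if (!taken && e.predictionState > 0) --e.predictionState;

    // Change the tendency counters based on the state counter's value.
    if (e.predictionState > 4) ++e.towardTaken;
    else if (e.predictionState < 3) ++e.towardNotTaken;

    // When one tendency dominates the window, adjust the threshold counter,
    // redefining which state values the state machine maps to "taken".
    if (e.towardTaken + e.towardNotTaken >= 64) {
        if (e.towardTaken > e.towardNotTaken && e.threshold > 1) --e.threshold;
        else if (e.towardNotTaken > e.towardTaken && e.threshold < 7) ++e.threshold;
        e.towardTaken = e.towardNotTaken = 0;
    }
}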
Abstract:
The illustrative embodiments provide a method, a computer program product, and an apparatus for managing a cache. For each thread in a number of threads, a probability of a future request by that thread for data to be stored in a portion of the cache is identified, forming a number of probabilities. Responsive to receiving the future request for the data from the thread in the number of threads, the data is stored in the portion of the cache with a rank in a number of ranks. The rank is selected using the probability in the number of probabilities for the thread.
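A C++ sketch of how a per-thread probability might be mapped to an insertion rank follows; the CachePortion structure, the linear mapping, and the convention that rank 0 is the least protected are illustrative assumptions rather than details of the embodiments.

// Minimal sketch of rank selection at insertion time (assumptions: the
// per-thread probabilities are already identified and held in a table, the
// number of ranks is at least one, and rank 0 is the least-protected rank).
#include <cstddef>
#include <vector>

struct CachePortion {
    std::size_t numRanks;                   // ranks available in this portion of the cache
    std::vector<double> threadProbability;  // identified probability per thread, in [0, 1]
};

// When the future request from a thread arrives, select a rank for the data:
// a higher probability of further requests yields a higher, better-protected rank.
std::size_t selectRank(const CachePortion &portion, std::size_t threadId) {
    double probability = portion.threadProbability[threadId];
    return static_cast<std::size_t>(probability * (portion.numRanks - 1) + 0.5);
}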
Abstract:
A system and method to optimize runahead operation for a processor without use of a separate explicit runahead cache structure. Rather than simply dropping store instructions in processor runahead mode, store instructions write their results into the existing processor store queue, although store instructions are not allowed to update processor caches or system memory. Use of the store queue during runahead mode to hold store instruction results allows more recent runahead load instructions to search retired store queue entries in the store queue for matching addresses and utilize data from the retired, but still searchable, store instructions. The retired store instructions may be either runahead store instructions that have retired or store instructions that retired before the processor entered runahead mode.
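The role of the store queue in runahead mode can be sketched as follows in C++; the entry layout, the youngest-first search, and keeping retired entries in the same container are assumptions made for illustration.

// Minimal sketch of store-to-load forwarding from the store queue during
// runahead, with retired entries kept searchable instead of draining
// (assumptions: the entry layout and the youngest-first search shown here).
#include <cstdint>
#include <optional>
#include <vector>

struct StoreQueueEntry {
    std::uint64_t address;
    std::uint64_t data;
    bool          retired;  // retired entries remain searchable in runahead mode
};

struct StoreQueue {
    std::vector<StoreQueueEntry> entries;  // oldest first

    // Runahead stores write their results here; they are never allowed to
    // update the processor caches or system memory.
    void runaheadStore(std::uint64_t addr, std::uint64_t data) {
        entries.push_back({addr, data, false});
    }

    // A more recent runahead load searches youngest to oldest, matching both
    // in-flight entries and retired (but still present) entries.
    std::optional<std::uint64_t> runaheadLoad(std::uint64_t addr) const {
        for (auto it = entries.rbegin(); it != entries.rend(); ++it)
            if (it->address == addr) return it->data;
        return std::nullopt;  // no match: obtain the data from the memory hierarchy
    }
};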
Abstract:
A processing system is disclosed. The processing system includes a memory and a first core configured to process applications. The first core includes a first cache. The processing system also includes a mechanism configured to capture a sequence of addresses of the application that miss the first cache in the first core and to place the sequence of addresses in a storage array, and a second core configured to process at least one software algorithm. The at least one software algorithm utilizes the sequence of addresses from the storage array to generate a sequence of prefetch addresses. The second core issues prefetch requests for the sequence of prefetch addresses to the memory to obtain prefetched data, and the prefetched data is provided to the first core if requested.
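A C++ sketch of the kind of software algorithm the second core might run follows; the stride heuristic, the prefetch depth, and the prefetch stand-in function are illustrative assumptions rather than the algorithm of the embodiment.

// Minimal sketch of the helper-core prefetch algorithm (assumptions: the
// stride heuristic, the prefetch depth, and the prefetch() stand-in for an
// actual prefetch request issued to memory).
#include <cstdint>
#include <vector>

void prefetch(std::uint64_t /*address*/) {
    // Stand-in for issuing a prefetch request to memory on the second core.
}

// Runs on the second core: read the miss addresses captured in the storage
// array, detect a constant stride, and prefetch ahead along that stride.
void generatePrefetches(const std::vector<std::uint64_t> &missAddresses, int depth = 4) {
    if (missAddresses.size() < 2) return;
    std::int64_t stride =
        static_cast<std::int64_t>(missAddresses.back()) -
        static_cast<std::int64_t>(missAddresses[missAddresses.size() - 2]);
    if (stride == 0) return;
    for (int i = 1; i <= depth; ++i)
        prefetch(missAddresses.back() + static_cast<std::uint64_t>(stride * i));
}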
Abstract:
A single unified level one instruction cache in which some lines may contain traces and other lines in the same congruence class may contain blocks of instructions consistent with conventional cache lines. A mechanism is described for indexing into the cache and selecting the desired line. Control is exercised over which lines are contained within the cache. Provision is made for selection between a trace line and a conventional line when both match during a tag compare step.
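The selection step can be sketched in C++ as below; the ILine layout and the choice to prefer the trace line when both kinds match are assumptions for illustration, since the abstract does not specify which line is selected.

// Minimal sketch of lookup in a congruence class that mixes trace lines and
// conventional lines (assumption: when both kinds match the tag, the trace
// line is selected).
#include <cstdint>
#include <vector>

struct ILine {
    std::uint64_t tag = 0;
    bool          valid = false;
    bool          isTrace = false;  // true: trace line, false: conventional block
};

using CongruenceClass = std::vector<ILine>;  // the ways of one set

// Tag-compare every way of the indexed set; prefer a matching trace line,
// otherwise return a matching conventional line, otherwise report a miss.
const ILine *selectLine(const CongruenceClass &set, std::uint64_t tag) {
    const ILine *conventional = nullptr;
    for (const ILine &way : set) {
        if (!way.valid || way.tag != tag) continue;
        if (way.isTrace) return &way;   // trace line wins the selection
        conventional = &way;
    }
    return conventional;                // conventional match, or nullptr on a miss
}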
Abstract:
A single unified level one instruction cache in which some lines may contain traces and other lines in the same congruence class may contain blocks of instructions consistent with conventional cache lines. Control is exercised over which lines are contained within the cache. This invention avoids inefficiencies in the cache by removing from the cache trace lines that experience early exits or trace lines that are short. A few bits of information about the accuracy of the control flow in a trace cache line are maintained and used, in addition to the LRU (Least Recently Used) bits that maintain the recency information of a cache line, in order to make a replacement decision.
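One possible form of the replacement decision is sketched below in C++; the accuracy and LRU field widths and the particular weighting of accuracy against recency are assumptions for illustration, not the policy defined by the invention.

// Minimal sketch of victim selection that combines LRU recency with the
// per-line control-flow accuracy bits (assumption: the specific weighting of
// accuracy against recency shown here).
#include <cstddef>
#include <cstdint>
#include <vector>

struct Line {
    bool          valid = false;
    bool          isTrace = false;
    std::uint8_t  accuracy = 3;  // few bits: how often the trace's control flow held
    std::uint8_t  lruAge = 0;    // larger means less recently used
};

// Trace lines that exit early or are short accumulate low accuracy and are
// evicted first; otherwise the choice falls back to ordinary LRU.
std::size_t chooseVictim(const std::vector<Line> &set) {
    std::size_t victim = 0;
    int worstScore = -1;
    for (std::size_t i = 0; i < set.size(); ++i) {
        if (!set[i].valid) return i;  // free way: use it directly
        int penalty = set[i].isTrace ? (3 - set[i].accuracy) * 4 : 0;
        int score = set[i].lruAge + penalty;  // higher score = better victim
        if (score > worstScore) { worstScore = score; victim = i; }
    }
    return victim;
}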
Abstract:
A design structure embodied in a machine readable storage medium for at least one of designing, manufacturing, and testing a design for a single unified level one instruction cache, in which some lines may contain traces and other lines in the same congruence class may contain blocks of instructions consistent with conventional cache lines, is provided. Instruction branches are predicted taken or not taken using a highly accurate branch history table (BHT). Branches that are predicted not taken are appended to a trace buffer, and the next basic block is constructed from the remaining instructions in the fetch buffer. Branches that are predicted taken flush the remaining fetch buffer, and the next address is determined using a Branch Target Address Cache (BTAC).
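The trace-construction loop can be sketched in C++ as follows; the Instruction layout and the stand-in BHT and BTAC lookups are assumptions for illustration, not the predictor of the design.

// Minimal sketch of trace construction driven by branch predictions
// (assumptions: the Instruction layout and the stand-in BHT/BTAC lookups,
// which a real predictor would index with the branch address and history).
#include <cstdint>
#include <deque>
#include <vector>

struct Instruction {
    std::uint64_t pc;
    bool          isBranch;
};

struct Predictors {
    // Stand-ins for the branch history table and branch target address cache.
    bool          bhtPredictTaken(std::uint64_t pc) const { return (pc & 1) != 0; }
    std::uint64_t btacTarget(std::uint64_t pc) const { return pc + 8; }
};

// Append instructions from the fetch buffer to the trace buffer. A branch
// predicted not taken is appended and the next basic block is built from the
// remaining fetched instructions; a branch predicted taken flushes the rest
// of the fetch buffer, and the returned next address comes from the BTAC.
std::uint64_t buildTrace(std::deque<Instruction> &fetchBuffer,
                         std::vector<Instruction> &traceBuffer,
                         const Predictors &pred,
                         std::uint64_t fallThroughAddress) {
    while (!fetchBuffer.empty()) {
        Instruction inst = fetchBuffer.front();
        fetchBuffer.pop_front();
        traceBuffer.push_back(inst);
        if (inst.isBranch && pred.bhtPredictTaken(inst.pc)) {
            fetchBuffer.clear();              // flush the remaining fetch buffer
            return pred.btacTarget(inst.pc);  // next address from the BTAC
        }
    }
    return fallThroughAddress;  // fetch buffer exhausted without a taken branch
}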