Abstract:
The invention sets forth a crossbar unit that includes multiple virtual channels, each virtual channel being a logical flow of data within the crossbar unit. Arbitration logic coupled to source client subsystems is configured to select a virtual channel for transmitting a data request or a data packet to a destination client subsystem based on the type of the source client subsystem and/or the type of data request. Higher priority traffic is transmitted over virtual channels that are configured to transmit data without causing deadlocks and/or stalls. Lower priority traffic is transmitted over virtual channels that can be stalled.
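A minimal sketch of the arbitration policy in Python; the client/request categories, channel names, and the two-way split are illustrative assumptions, not the patented implementation:

```python
HIGH_PRIORITY_VC = 0   # channel configured to never deadlock or stall
LOW_PRIORITY_VC = 1    # channel that may be stalled

def select_virtual_channel(client_type, request_type):
    """Pick a virtual channel from the source client subsystem type
    and/or the type of data request."""
    # Assumed policy: latency-critical clients and read requests use
    # the non-stalling channel; everything else may be stalled.
    if client_type == "latency_critical" or request_type == "read":
        return HIGH_PRIORITY_VC
    return LOW_PRIORITY_VC
```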
Abstract:
Systems and methods are disclosed for managing the number of affirmatively associated cache lines within the different sets of a data cache unit. A tag look-up unit implements two thresholds, which may be configurable, to manage the number of cache lines in a given set that store dirty data or are reserved for in-flight read requests. If the number of affirmatively associated cache lines in a given set equals a maximum threshold, the tag look-up unit stalls future requests that require an available cache line within that set to be affirmatively associated. To reduce the number of stalled requests, the tag look-up unit transmits a high-priority clean notification to frame buffer logic when the number of affirmatively associated cache lines in a given set approaches the maximum threshold. The frame buffer logic then processes requests associated with that set preemptively.
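The two-threshold policy could be sketched as follows; the watermark values, field names, and the `notify_clean` interface are hypothetical:

```python
HIGH_WATERMARK = 28   # assumed: send a high-priority clean notification
MAX_THRESHOLD = 32    # assumed: stall requests needing a new line in the set

def check_set(set_state, frame_buffer_logic):
    # Lines that are "affirmatively associated": dirty or reserved for
    # an in-flight read.
    busy = set_state.dirty_lines + set_state.inflight_read_lines
    if busy >= MAX_THRESHOLD:
        set_state.stalled = True   # cannot reserve another line in this set
    elif busy >= HIGH_WATERMARK:
        # Ask frame buffer logic to clean this set preemptively, so fewer
        # future requests hit the stall above.
        frame_buffer_logic.notify_clean(set_state.index, priority="high")
```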
Abstract:
A method for cleaning dirty data in an intermediate cache is disclosed. A dirty data notification, including a memory address and a data class, is transmitted by a level 2 (L2) cache to frame buffer logic when dirty data is stored in the L2 cache. The data classes may include evict first, evict normal, and evict last. In one embodiment, data belonging to the evict first data class is raster operations data with little reuse potential. The frame buffer logic uses a notification sorter to organize dirty data notifications, where an entry in the notification sorter stores the DRAM bank page number, a first count of cache lines that have resident dirty data, and a second count of cache lines that have resident evict first dirty data associated with that DRAM bank page. The frame buffer logic transmits dirty data associated with an entry when the first count reaches a threshold.
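A rough sketch of the notification sorter, assuming hypothetical field names and a flush threshold of 8 (the abstract does not specify the value):

```python
from dataclasses import dataclass

FLUSH_THRESHOLD = 8  # assumed: write back a page once 8 dirty lines accumulate

@dataclass
class SorterEntry:
    bank_page: int              # DRAM bank page number
    dirty_count: int = 0        # cache lines with resident dirty data
    evict_first_count: int = 0  # subset tagged evict first

def on_dirty_notification(entries, bank_page, is_evict_first):
    entry = entries.setdefault(bank_page, SorterEntry(bank_page))
    entry.dirty_count += 1
    if is_evict_first:
        entry.evict_first_count += 1
    # Once enough dirty lines target the same DRAM bank page, write them
    # back together to amortize the page activate cost.
    if entry.dirty_count >= FLUSH_THRESHOLD:
        return entries.pop(bank_page)   # schedule this page for writeback
    return None
```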
Abstract:
Write operations to a unit of compressible memory, known as a compression tile, are examined to see if data blocks to be written completely cover a single compression tile. If the data blocks completely cover a single compression tile, the write operations are coalesced into a single write operation and the single compression tile is overwritten with the data blocks. Coalescing multiple write operations into a single write operation improves performance, because it avoids the read-modify-write operations that would otherwise be needed.
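The coverage test that gates coalescing might look like the following sketch; the 256-byte tile size and the `(address, length)` write representation are assumptions:

```python
TILE_BYTES = 256  # hypothetical compression-tile size

def covers_full_tile(writes, tile_base):
    """Return True if the pending write blocks, given as (address, length)
    pairs, completely cover the compression tile starting at tile_base."""
    covered = set()
    for addr, length in writes:
        lo = max(addr, tile_base)
        hi = min(addr + length, tile_base + TILE_BYTES)
        covered.update(range(lo, hi))
    return len(covered) == TILE_BYTES

# If covers_full_tile(...) is True, the writes can be coalesced into one
# write that overwrites the tile, skipping the read-modify-write path.
```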
Abstract:
One embodiment of the present invention sets forth a mechanism to schedule read data transmissions and write data transmissions between a cache and frame buffer logic on the L2 bus. When processing a read or a write command, a scheduling arbiter examines a bus schedule to determine whether a read-read conflict, a read-write conflict, or a write-read conflict exists, and allocates an available memory space in a read buffer to store the read data causing the conflict until the read return data transmission can be scheduled. In the case of a write command, the scheduling arbiter then transmits a write request to a request buffer. When processing a write request, the request arbiter examines the request buffers to determine whether a write-write conflict exists. If so, the request arbiter allocates a memory space in a request buffer to store the write request until the write data transmission can be scheduled.
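A simplified sketch of the two arbiters, with an assumed slot-indexed schedule; the conflict handling is condensed relative to the mechanism described above:

```python
def schedule_read(bus_schedule, slot, read_op, read_buffer):
    # Read-read, read-write, or write-read conflict: the slot on the
    # shared bus is taken, so park the read return data in the read
    # buffer until a later slot can be scheduled.
    if slot in bus_schedule:
        read_buffer.append(read_op)
    else:
        bus_schedule[slot] = ("read", read_op)

def schedule_write(bus_schedule, slot, write_op, request_buffer):
    # Write commands first become write requests; on a write-write
    # conflict the request arbiter holds the request in a request
    # buffer until the write data transmission can be scheduled.
    if slot in bus_schedule:
        request_buffer.append(write_op)
    else:
        bus_schedule[slot] = ("write", write_op)
```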
Abstract:
A graphics system utilizes virtual memory pages and has a partitioned graphics memory that includes memory elements. The system supports having a non-power-of-two number of active memory elements. Additionally, a partition swizzling operation is used to adjust the partition numbers associated with individual units of virtual memory allocation on particular virtual memory pages to achieve a selected partition interleaving pattern.
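One way to picture partition swizzling is the following sketch; the modular-rotation formula is an illustrative assumption, not the patent's specific mapping:

```python
def swizzled_partition(unit_index, page_number, num_partitions):
    """Rotate the partition assignment of each unit of virtual memory
    allocation by the page number, so successive pages start their
    interleaving pattern on different partitions."""
    return (unit_index + page_number) % num_partitions
```

For example, with num_partitions = 3 (a non-power-of-two count), unit 0 maps to partitions 0, 1, and 2 on pages 0, 1, and 2 respectively, spreading accesses evenly across all three partitions.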
Abstract:
Systems and methods are disclosed for addressing memory in which data is interleaved across different banks with different interleaving granularities, improving graphics memory bandwidth by distributing graphics data for efficient access during rendering. Various partition strides may be selected to modify the number of sequential addresses mapped to each DRAM and change the interleaving granularity. A memory addressing scheme allows different partition strides for each virtual memory page without causing memory aliasing problems, in which physical memory locations in one virtual memory page are also mapped to another virtual memory page. When a physical memory address lies within a virtual memory page crossing region, the smallest partition stride is used to access the physical memory.
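A sketch of how the crossing-region rule might be applied when decoding an address; the stride value and function signature are assumptions:

```python
SMALL_STRIDE = 256   # assumed: bytes per partition at the finest granularity

def partition_for_address(addr, page_stride, in_crossing_region,
                          num_partitions):
    # In a virtual-memory-page crossing region, always use the smallest
    # stride so that both adjacent pages decode the address to the same
    # partition, which prevents the aliasing described above.
    stride = SMALL_STRIDE if in_crossing_region else page_stride
    return (addr // stride) % num_partitions
```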
Abstract:
In some applications, such as video motion compression processing for example, a request pattern or "stream" of requests for accesses to memory (e.g., DRAM) may have, over a large number of requests, a relatively small number of requests to the same page. Due to the small number of requests to the same page, conventional sorting to aggregate page hits may not be very effective. Reordering the stream can be used to "bury" or "hide" much of the necessary precharge/activate time, which can have a highly positive impact on overall throughput. For example, separating accesses to different rows of the same bank by at least a predetermined number of clocks can effectively hide the overhead involved in precharging/activating the rows.
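The reordering idea can be sketched as a greedy scheduler; the `MIN_GAP` value and the `(bank, row)` request model are assumptions:

```python
MIN_GAP = 10  # assumed precharge+activate latency in clocks

def reorder(requests):
    """requests: list of (bank, row). Greedily issue a request whose bank
    was not used with a different row within the last MIN_GAP slots."""
    schedule = []
    pending = list(requests)
    last_use = {}  # bank -> (slot issued, row)
    while pending:
        for i, (bank, row) in enumerate(pending):
            prev = last_use.get(bank)
            if prev is None or prev[1] == row or \
               len(schedule) - prev[0] >= MIN_GAP:
                break  # safe: same row, or the bank had time to recover
        else:
            i = 0  # no safe candidate: issue the oldest and accept the stall
        bank, row = pending.pop(i)
        last_use[bank] = (len(schedule), row)
        schedule.append((bank, row))
    return schedule
```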
Abstract:
The outcome of a plurality of branch instructions in a computer program is predicted by fetching a plurality or group of instructions in a given slot, along with a corresponding prediction. A group global history (gghist) is maintained to indicate recent program control flow. In addition, a predictor table comprising a plurality of predictions, preferably saturating counters, is maintained. A particular counter is updated when a branch is encountered. The particular counter is associated with a branch instruction by hashing the fetched instruction group's program counter (PC) with the gghist. To predict multiple branch instruction outcomes, the gghist is hashed with the PC to form an index that is used to access naturally aligned but randomly ordered predictions in the predictor table, which are then reordered based on the value of the lower gghist bits. Preferably, instructions are fetched in blocks of eight instructions. The gghist is maintained by shifting in a 1 if a branch in the corresponding group is taken, or a 0 if no branch in the corresponding group is taken. The hashing function is preferably an XOR operation. Preferably, a predictor table counter is incremented when a corresponding branch is taken, but not beyond a maximum value, and is decremented when the corresponding branch is not taken, but not below zero. Preferably, the most significant bit of a counter is used to determine a prediction.
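A compact sketch of such a predictor, following the abstract's XOR hash, two-bit saturating counters, and shifted group history; the 4096-entry table and 12-bit history width are assumptions:

```python
TABLE_SIZE = 4096        # assumed: 4096 two-bit saturating counters
HIST_MASK = 0xFFF        # assumed: 12 bits of group global history
table = [1] * TABLE_SIZE # counters start weakly not-taken
gghist = 0               # group global history register

def predict(pc):
    index = (pc ^ gghist) % TABLE_SIZE   # XOR hash of PC and gghist
    return table[index] >> 1             # MSB of the counter is the prediction

def update(pc, taken):
    """Update the counter for a resolved branch and shift the history."""
    global gghist
    index = (pc ^ gghist) % TABLE_SIZE
    if taken:
        table[index] = min(table[index] + 1, 3)  # saturate at the maximum
    else:
        table[index] = max(table[index] - 1, 0)  # saturate at zero
    # Shift in 1 if a branch in the fetched group was taken, else 0.
    gghist = ((gghist << 1) | int(taken)) & HIST_MASK
```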