Abstract:
An example method of storing a partial target address in an instruction cache includes receiving a branch instruction. The method also includes predicting a direction of the branch instruction as being not taken. The method further includes calculating a destination address based on executing the branch instruction. The method also includes determining a partial target address using the destination address. The method further includes, in response to the predicted direction of the branch instruction changing from not taken to taken, replacing an offset in an instruction cache with the partial target address.
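A minimal sketch of this flow, assuming the partial target address is formed from the low-order bits of the computed destination; the class name, field names, and 16-bit partial width below are illustrative assumptions, not details from the abstract:

```python
# Illustrative model of replacing a branch offset with a partial target
# address once the prediction flips from not-taken to taken.
# The 16-bit partial width is assumed for illustration.

PARTIAL_BITS = 16

class ICacheEntry:
    def __init__(self, pc, offset):
        self.pc = pc            # address of the branch instruction
        self.offset = offset    # original branch offset field
        self.predict_taken = False

    def execute_branch(self):
        # Destination is computed the usual way from PC and offset.
        destination = self.pc + self.offset
        # Keep only the low-order bits as the partial target address.
        partial_target = destination & ((1 << PARTIAL_BITS) - 1)
        return destination, partial_target

    def update_prediction(self, taken, partial_target):
        # When the predicted direction changes from not taken to taken,
        # the offset field is overwritten with the partial target so a
        # later fetch can form the target without recomputing it.
        if taken and not self.predict_taken:
            self.offset = partial_target
        self.predict_taken = taken

entry = ICacheEntry(pc=0x0040_1000, offset=0x80)
dest, partial = entry.execute_branch()
entry.update_prediction(taken=True, partial_target=partial)
print(hex(dest), hex(entry.offset))   # 0x401080 0x1080
```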
Abstract:
In a particular embodiment, an apparatus includes control logic configured to selectively set bits of a multi-bit way prediction mask based on a prediction mask value. The control logic is associated with an instruction cache including a data array. A subset of line drivers of the data array is enabled responsive to the multi-bit way prediction mask. The subset of line drivers includes multiple line drivers.
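A minimal sketch, assuming a 4-way cache where the prediction mask value is a bit-per-way encoding; the way count and the mapping from prediction value to mask bits are assumptions for illustration:

```python
# Illustrative model of enabling only a subset of data-array line drivers
# based on a multi-bit way prediction mask.

NUM_WAYS = 4

def build_way_mask(prediction_value):
    # Selectively set mask bits; here the prediction value is already a
    # bit-per-way encoding, clamped to the number of ways.
    return prediction_value & ((1 << NUM_WAYS) - 1)

def read_data_array(ways, way_mask):
    # Only line drivers whose mask bit is set are enabled; the rest of
    # the data array stays quiescent, saving read power.
    return [ways[i] if (way_mask >> i) & 1 else None for i in range(NUM_WAYS)]

ways = ["way0-line", "way1-line", "way2-line", "way3-line"]
mask = build_way_mask(0b0110)       # predict the access hits way 1 or way 2
print(read_data_array(ways, mask))  # [None, 'way1-line', 'way2-line', None]
```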
Abstract:
Systems and methods for tracking and switching between execution modes in processing systems are disclosed. A processing system is configured to execute instructions in at least two instruction execution modes including a first and second execution mode chosen from a classic/aligned mode and a compressed/unaligned mode. Target addresses of selected instructions such as calls and returns are forcibly misaligned in the compressed mode, such that one or more bits, such as the least significant bits (alignment bits), of the target address in the compressed mode are different from the corresponding alignment bits in the classic mode. When the selected instructions are encountered during execution in the first mode, a decision to switch operation to the second mode is based on analyzing the alignment bits of the target address of the selected instruction.
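A minimal sketch of the mode decision, assuming bit 0 of the target address serves as the alignment bit; the bit position and mode names below are illustrative assumptions:

```python
# Illustrative decode of the alignment bit of a call/return target to
# decide whether to switch between the aligned (classic) mode and the
# unaligned (compressed) mode.

CLASSIC, COMPRESSED = "classic", "compressed"

def mode_for_target(target_address):
    # Targets in compressed mode are forcibly misaligned, so a set
    # alignment bit indicates compressed-mode code.
    return COMPRESSED if (target_address & 0x1) else CLASSIC

def on_call_or_return(current_mode, target_address):
    next_mode = mode_for_target(target_address)
    switch = next_mode != current_mode
    # Clear the alignment bit before fetching from the target.
    fetch_address = target_address & ~0x1
    return next_mode, switch, fetch_address

# Misaligned target seen in classic mode -> switch to compressed, fetch at 0x8000.
print(on_call_or_return(CLASSIC, 0x8001))
```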
Abstract:
A system and method to select a packet format based on a number of executed threads are disclosed. In a particular embodiment, a method includes determining, at a multi-threaded processor, a number of threads of a plurality of threads executing during a time period. A packet format is selected from a plurality of formats based at least in part on the determined number of threads. Data associated with execution of an instruction by a particular thread is stored in accordance with the selected format in a memory (e.g., a buffer).
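A minimal sketch, assuming two hypothetical packet layouts (a wide record and a compact record) and a thread-count threshold chosen only for illustration; the abstract does not specify the formats:

```python
import struct

# Illustrative selection of a trace-packet format based on how many
# threads executed during a time period.

def select_format(active_threads):
    # Fewer threads -> a wider per-thread record; more threads -> a
    # compact record so the buffer holds data for every thread.
    return "wide" if active_threads <= 2 else "compact"

def pack_record(fmt, thread_id, pc):
    if fmt == "wide":
        # thread id (1 byte) + full 64-bit program counter
        return struct.pack("<BQ", thread_id, pc)
    # compact: thread id (1 byte) + truncated 32-bit program counter
    return struct.pack("<BI", thread_id, pc & 0xFFFF_FFFF)

buffer = bytearray()
fmt = select_format(active_threads=4)
buffer += pack_record(fmt, thread_id=3, pc=0x0000_7F00_1234_5678)
print(fmt, len(buffer))   # compact 5
```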
Abstract:
A method includes receiving an instruction to be executed by a processor. The method further includes performing a lookup in a page crossing buffer that includes one or more entries to determine if the instruction has an entry in the page crossing buffer. Each of the entries includes a physical address. The method further includes, when the instruction has an entry in the page crossing buffer, retrieving a particular physical address from the entry in the page crossing buffer.
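A minimal sketch of the lookup, assuming a small table keyed by the fetch virtual address of a page-crossing instruction; the tag choice and 4 KB page size are assumptions for illustration:

```python
# Illustrative page crossing buffer: a small fully associative table
# keyed by the fetch address of an instruction that straddles a page
# boundary, holding the physical address of the second page.

PAGE_SIZE = 4096

class PageCrossingBuffer:
    def __init__(self):
        self.entries = {}   # fetch virtual address -> physical address

    def lookup(self, fetch_va):
        # Returns the stored physical address on a hit, otherwise None,
        # in which case a normal translation would be performed.
        return self.entries.get(fetch_va)

    def fill(self, fetch_va, physical_address):
        self.entries[fetch_va] = physical_address

pcb = PageCrossingBuffer()
pcb.fill(fetch_va=0x1FFE, physical_address=0x8_2000)
print(hex(pcb.lookup(0x1FFE)))   # 0x82000 (hit)
print(pcb.lookup(0x3000))        # None (miss)
```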
Abstract:
Techniques are described for parallel dispatch of coprocessor and thread instructions to a coprocessor coupled to a threaded processor. A first packet of threaded processor instructions is accessed from an instruction fetch queue (IFQ) and a second packet of coprocessor instructions is accessed from the IFQ. The IFQ includes a plurality of thread queues that are each configured to store instructions associated with a specific thread of instructions. A dispatch circuit is configured to select the first packet of threaded processor instructions from the IFQ and the second packet of coprocessor instructions from the IFQ and to send the first packet to the threaded processor and the second packet to the coprocessor in parallel. A data port is configured to share data between the coprocessor and a register file in the threaded processor. Data port operations are accomplished without affecting operations on any thread executing on the threaded processor.
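A minimal sketch of the dispatch path, assuming per-thread queues of tagged packets and one processor packet plus one coprocessor packet selected per cycle; the queue structure and packet tags are assumptions for illustration:

```python
from collections import deque

# Illustrative dispatch circuit that pulls one packet of threaded-processor
# instructions and one packet of coprocessor instructions from a per-thread
# queue and issues them in parallel.

class InstructionFetchQueue:
    def __init__(self, num_threads):
        self.thread_queues = [deque() for _ in range(num_threads)]

    def push(self, thread_id, packet):
        self.thread_queues[thread_id].append(packet)

    def pop(self, thread_id):
        q = self.thread_queues[thread_id]
        return q.popleft() if q else None

def dispatch(ifq, thread_id):
    # Select one processor packet and one coprocessor packet for the same
    # thread and send them out in the same cycle.
    cpu_packet = ifq.pop(thread_id)
    cop_packet = ifq.pop(thread_id)
    return cpu_packet, cop_packet

ifq = InstructionFetchQueue(num_threads=4)
ifq.push(1, {"type": "cpu", "ops": ["add", "ld"]})
ifq.push(1, {"type": "coproc", "ops": ["vmac"]})
print(dispatch(ifq, thread_id=1))
```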
Abstract:
In a particular embodiment, a method includes identifying one or more way prediction characteristics of an instruction. The method also includes selectively reading, based on identification of the one or more way prediction characteristics, a table to identify an entry of the table associated with the instruction that identifies a way of a data cache. The method further includes predicting whether a next access of the data cache based on the instruction will access the way.
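A minimal sketch, assuming the way prediction characteristic is a particular load form and the table is indexed by instruction address; both are assumptions for illustration:

```python
# Illustrative way prediction table for a data cache: the table is only
# consulted for instructions with a way-predictable characteristic (here,
# base+immediate loads), and a hit yields the way expected on the next access.

class WayPredictor:
    def __init__(self):
        self.table = {}   # instruction address -> predicted way

    def has_prediction_characteristic(self, instr):
        return instr["kind"] == "load_base_imm"

    def predict(self, instr):
        if not self.has_prediction_characteristic(instr):
            return None                       # table is not read
        return self.table.get(instr["pc"])    # predicted way, or None on a miss

    def train(self, instr, way_hit):
        self.table[instr["pc"]] = way_hit

wp = WayPredictor()
load = {"pc": 0x2000, "kind": "load_base_imm"}
wp.train(load, way_hit=2)
print(wp.predict(load))                                # 2
print(wp.predict({"pc": 0x2004, "kind": "store"}))     # None: table not read
```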
Abstract:
An apparatus includes a translation lookaside buffer (TLB). The TLB includes at least one entry that includes an entry virtual address and an entry page size indication corresponding to an entry page. The apparatus also includes input logic configured to receive an input page size indication and an input virtual address corresponding to an input page. The apparatus further includes overlap checking logic configured to determine, based at least in part on the entry page size indication and the input page size indication, whether the input page overlaps the entry page.
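A minimal sketch of the overlap check: two naturally aligned pages overlap exactly when their virtual addresses agree after masking with the larger of the two page sizes. Byte page sizes and the helper name are assumptions for illustration:

```python
# Illustrative overlap check between a TLB entry's page and an input page,
# using both page-size indications.

def pages_overlap(entry_va, entry_page_size, input_va, input_page_size):
    larger = max(entry_page_size, input_page_size)
    mask = ~(larger - 1)
    return (entry_va & mask) == (input_va & mask)

# A 4 KB input page inside a 1 MB entry page overlaps it.
print(pages_overlap(0x0010_0000, 1 << 20, 0x0010_3000, 1 << 12))   # True
# A 4 KB page in the next 1 MB region does not.
print(pages_overlap(0x0010_0000, 1 << 20, 0x0020_0000, 1 << 12))   # False
```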