Abstract:
A multithreaded processor includes an interrupt controller for processing a cross-thread interrupt directed from a requesting thread to a destination thread. The interrupt controller in an illustrative embodiment receives a request for delivery of the cross-thread interrupt to the destination thread, determines whether the destination thread of the cross-thread interrupt is enabled for receipt of cross-thread interrupts, and utilizes a thread identifier to control delivery of the cross-thread interrupt to the destination thread if the destination thread is enabled for receipt of cross-thread interrupts. The requesting thread requests delivery of the cross-thread interrupt to the destination thread by setting a corresponding interrupt pending bit in a flag register of the multithreaded processor. The destination thread is enabled for receipt of cross-thread interrupts if a corresponding enable bit is set in an enable register of the multithreaded processor. The flag and enable registers may be implemented within the interrupt controller.
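A minimal C sketch of the pending-bit and enable-bit handling described above, assuming 32-bit flag and enable registers with one bit per destination thread; the names, widths, and the behaviour when the enable bit is clear are illustrative assumptions rather than details taken from the abstract.

    #include <stdint.h>
    #include <stdbool.h>

    static uint32_t flag_reg;    /* one interrupt pending bit per destination thread */
    static uint32_t enable_reg;  /* one enable bit per destination thread            */

    /* Requesting thread asks for delivery by setting the pending bit. */
    void request_cross_thread_interrupt(unsigned dest_tid)
    {
        flag_reg |= 1u << dest_tid;
    }

    /* Interrupt controller: the thread identifier selects the pending and
     * enable bits, and delivery occurs only if the enable bit is set. */
    bool try_deliver(unsigned dest_tid)
    {
        uint32_t mask = 1u << dest_tid;
        if ((flag_reg & mask) && (enable_reg & mask)) {
            flag_reg &= ~mask;   /* clear the pending bit on delivery      */
            /* ...signal the interrupt to hardware thread dest_tid here... */
            return true;
        }
        return false;            /* remains pending in this sketch         */
    }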
Abstract:
A multithreaded processor comprises a plurality of hardware thread units, an instruction decoder coupled to the thread units for decoding instructions received therefrom, and a plurality of execution units for executing the decoded instructions. The multithreaded processor is configured for controlling an instruction issuance sequence for threads associated with respective ones of the hardware thread units. On a given processor clock cycle, only a designated one of the threads is permitted to issue one or more instructions, but the designated thread that is permitted to issue instructions varies over a plurality of clock cycles in accordance with the instruction issuance sequence. The instructions are pipelined in a manner which permits at least a given one of the threads to support multiple concurrent instruction pipelines.
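As a rough illustration of the per-cycle issue rule, the following C sketch models one possible issuance sequence, a simple round robin over four threads; the abstract does not fix a particular sequence or thread count, so both are assumptions.

    #include <stdio.h>

    #define NUM_THREADS 4   /* assumed thread count for illustration */

    int main(void)
    {
        /* Exactly one designated thread may issue on each clock cycle;
         * here the designation simply rotates through the threads.   */
        for (int cycle = 0; cycle < 8; cycle++) {
            int designated = cycle % NUM_THREADS;
            printf("cycle %d: thread %d may issue\n", cycle, designated);
        }
        return 0;
    }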
Abstract:
A cache memory for use in a multithreaded processor includes a number of set-associative thread caches, with one or more of the thread caches each implementing an eviction process based on access request address that reduces the amount of replacement policy storage required in the cache memory. At least a given one of the thread caches in an illustrative embodiment includes a memory array having multiple sets of memory locations, and a directory for storing tags each corresponding to at least a portion of a particular address of one of the memory locations. The directory has multiple entries each storing multiple ones of the tags, such that if there are n sets of memory locations in the memory array, there are n tags associated with each directory entry. The directory is utilized in implementing a set-associative address mapping between access requests and memory locations of the memory array. An entry in a particular one of the memory locations is selected for eviction from the given thread cache in conjunction with a cache miss event, based at least in part on at least a portion of an address in an access request associated with the cache miss event.
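The address-based eviction can be sketched as follows in C, assuming a four-way arrangement in which the n sets of memory locations correspond to the n tags per directory entry; which address bits select the victim is an illustrative assumption.

    #include <stdint.h>

    #define NUM_SETS 4   /* n sets of memory locations (and n tags per directory entry) */

    /* On a cache miss, choose the set to evict from low-order bits of the
     * missing access request address, so no per-entry replacement-policy
     * state (such as LRU bits) needs to be stored.                        */
    unsigned select_victim_set(uint32_t miss_addr)
    {
        return (miss_addr >> 2) & (NUM_SETS - 1);
    }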
Abstract:
Techniques for token triggered multithreading in a multithreaded processor are disclosed. An instruction issuance sequence for a plurality of threads of the multithreaded processor is controlled by associating with each of the threads at least one register which stores a value identifying a next thread to be permitted to issue one or more instructions, and utilizing the stored value to control the instruction issuance sequence. For example, each of a plurality of hardware thread units of the multithreaded processor may include a corresponding local register updatable by that hardware thread unit, with the local register for a given one of the hardware thread units storing a value identifying the next thread to be permitted to issue one or more instructions after the given hardware thread unit has issued one or more instructions. A global register arrangement may also or alternatively be used. The processor may be configured so as to permit the instruction issuance sequence to correspond to an arbitrary alternating even-odd sequence of threads, without introducing blocking conditions leading to thread stalls.
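A short C sketch of the local next-thread register mechanism, assuming four threads and encoding one possible alternating even-odd sequence (0, 3, 2, 1); the table contents and thread count are illustrative assumptions.

    #include <stdio.h>

    #define NUM_THREADS 4

    /* next_thread[t] is the value held in thread t's local register: the
     * identifier of the next thread permitted to issue after thread t.
     * This table encodes the even-odd sequence 0, 3, 2, 1, 0, ...        */
    static const unsigned next_thread[NUM_THREADS] = { 3, 0, 1, 2 };

    int main(void)
    {
        unsigned current = 0;                    /* thread currently issuing */
        for (int cycle = 0; cycle < 8; cycle++) {
            printf("cycle %d: thread %u issues\n", cycle, current);
            current = next_thread[current];      /* hand off to next thread  */
        }
        return 0;
    }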
Abstract:
A cache memory for use in a multithreaded processor includes a number of set-associative thread caches, with one or more of the thread caches each implementing a thread-based eviction process that reduces the amount of replacement policy storage required in the cache memory. At least a given one of the thread caches in an illustrative embodiment includes a memory array having multiple sets of memory locations, and a directory for storing tags each corresponding to at least a portion of a particular address of one of the memory locations. The directory has multiple entries each storing multiple ones of the tags, such that if there are n sets of memory locations in the memory array, there are n tags associated with each directory entry. The directory is utilized in implementing a set-associative address mapping between access requests and memory locations of the memory array. An entry in a particular one of the memory locations is selected for eviction from the given thread cache in conjunction with a cache miss event, based at least in part on at least a portion of a thread identifier of the given thread cache.
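The thread-based eviction can be sketched in the same style, again assuming a four-way arrangement; which thread-identifier bits select the victim is an illustrative assumption.

    #include <stdint.h>

    #define NUM_SETS 4   /* n sets of memory locations (and n tags per directory entry) */

    /* On a cache miss, choose the set to evict from low-order bits of the
     * thread identifier of the given thread cache, again avoiding stored
     * replacement-policy state.                                           */
    unsigned select_victim_set(uint32_t thread_id)
    {
        return thread_id & (NUM_SETS - 1);
    }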
Abstract:
Techniques for thread-based memory access by a multithreaded processor are disclosed. The multithreaded processor determines a thread identifier associated with a particular processor thread, and utilizes at least a portion of the thread identifier to select a particular portion of an associated memory to be accessed by the corresponding processor thread. In an illustrative embodiment, a first portion of the thread identifier is utilized to select one of a plurality of multiple-bank memory elements within the memory, and a second portion of the thread identifier is utilized to select one of a plurality of memory banks within the selected one of the multiple-bank memory elements. The first portion may comprise one or more most significant bits of the thread identifier, while the second portion comprises one or more least significant bits of the thread identifier. Advantageously, the invention reduces memory access times and power consumption, while preventing the stalling of any processor threads.
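A brief C sketch of the two-level selection described above, assuming a 3-bit thread identifier whose single most significant bit selects the memory element and whose single least significant bit selects the bank; all widths and counts here are illustrative assumptions.

    #include <stdint.h>

    #define TID_BITS      3   /* assumed thread-identifier width               */
    #define ELEMENT_BITS  1   /* MSBs: select one multiple-bank memory element */
    #define BANK_BITS     1   /* LSBs: select one bank within that element     */

    struct mem_target {
        unsigned element;     /* which multiple-bank memory element */
        unsigned bank;        /* which bank within that element     */
    };

    struct mem_target select_target(uint32_t thread_id)
    {
        struct mem_target t;
        t.element = (thread_id >> (TID_BITS - ELEMENT_BITS)) & ((1u << ELEMENT_BITS) - 1);
        t.bank    = thread_id & ((1u << BANK_BITS) - 1);
        return t;
    }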