Abstract:
The disclosed computing device can include super flow control unit (flit) generation circuitry configured to generate a super flit containing two or more flits having two or more requests embedded therein, wherein the two or more requests have the same destination node identifiers and the super flit has a variable size based on a flit size and a number of existing requests in a source node that target a same destination node. The device can additionally include authentication circuitry configured to append a message authentication code to a last flit of the super flit. The device can also include communication circuitry configured to send the super flit to a network switch configured to route the super flit to a destination node corresponding to the same destination node identifiers. Various other methods, systems, and computer-readable media are also disclosed.
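A minimal C sketch of the assembly step described above, assuming fixed-size flits, a bounded per-source request queue, and a placeholder MAC routine. The type names, the 16-flit bound, and `compute_mac` are illustrative assumptions, not details from the abstract; a real design would use a keyed MAC.

```c
#include <stdint.h>
#include <string.h>

#define FLIT_BYTES 64
#define MAC_BYTES  8

typedef struct {
    uint16_t dest_node;               /* destination node identifier */
    uint8_t  payload[FLIT_BYTES];
} Flit;

typedef struct {
    Flit     flits[16];               /* variable count, bounded here */
    unsigned count;
} SuperFlit;

/* Placeholder MAC: stands in for a real keyed MAC (e.g., CMAC). */
static uint64_t compute_mac(const Flit *flits, unsigned n) {
    uint64_t acc = 0xfeedfacecafebeefULL;
    for (unsigned i = 0; i < n; i++)
        for (unsigned j = 0; j < FLIT_BYTES; j++)
            acc = acc * 31 + flits[i].payload[j];
    return acc;
}

/* Pack all queued requests that share one destination into one super
 * flit; its size varies with the number of matching requests. */
unsigned build_super_flit(SuperFlit *sf, uint16_t dest,
                          const Flit *queue, unsigned queued) {
    sf->count = 0;
    for (unsigned i = 0; i < queued && sf->count < 16; i++)
        if (queue[i].dest_node == dest)
            sf->flits[sf->count++] = queue[i];
    if (sf->count == 0)
        return 0;
    /* Append the MAC to the tail of the last flit. */
    uint64_t mac = compute_mac(sf->flits, sf->count);
    memcpy(&sf->flits[sf->count - 1].payload[FLIT_BYTES - MAC_BYTES],
           &mac, MAC_BYTES);
    return sf->count;
}
```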
Abstract:
A system and method are presented. Some embodiments include a processing unit, at least one memory coupled to the processing unit, and at least one cache coupled to the processing unit and divided into a series of blocks, wherein at least one of the series of cache blocks includes data identified as being in a modified state. The modified state data is flushed by writing the data to the at least one memory based on a write back policy, and the aggressiveness of the policy is based on at least one factor including the number of idle cores, the time since the last cache flush, the activity of the thread associated with the data, which cores are idle, and whether an idle core is associated with the data.
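One way to read the policy is as a scoring function over the listed factors. The sketch below assumes an additive 0-100 score with invented weights; the abstract does not specify how the factors are combined, so the structure and constants here are assumptions for illustration only.

```c
#include <stdbool.h>

typedef struct {
    int    idle_cores;           /* number of currently idle cores */
    double secs_since_flush;     /* time since the last cache flush */
    bool   owning_thread_active; /* thread that produced the data */
    bool   data_on_idle_core;    /* an idle core holds the dirty data */
} FlushInputs;

/* Higher score -> flush modified lines more aggressively. */
int flush_aggressiveness(const FlushInputs *in) {
    int score = 0;
    score += in->idle_cores * 10;                /* idle capacity to spend  */
    if (in->secs_since_flush > 1.0) score += 20; /* last flush has grown stale */
    if (!in->owning_thread_active)  score += 25; /* data unlikely to change */
    if (in->data_on_idle_core)      score += 25; /* cheap to flush in place */
    return score > 100 ? 100 : score;
}
```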
Abstract:
The described embodiments include a core that uses predictions for store-to-load forwarding. In the described embodiments, the core comprises a load-store unit, a store buffer, and a prediction mechanism. During operation, the prediction mechanism generates a prediction that a load will be satisfied using data forwarded from the store buffer because the load loads data from a memory location in a stack. Based on the prediction, the load-store unit first sends a request for the data to the store buffer in an attempt to satisfy the load using data forwarded from the store buffer. If data is returned from the store buffer, the load is satisfied using the data. However, if the attempt to satisfy the load using data forwarded from the store buffer is unsuccessful, the load-store unit then separately sends a request for the data to a cache to satisfy the load.
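The described flow, a load predicted to forward, the store buffer probed first, and the cache probed only on failure, could look like the following sketch. The stack-range predictor, the 8-entry buffer, and the `cache_read` stub are assumptions introduced for the example.

```c
#include <stdint.h>
#include <stdbool.h>

#define SB_ENTRIES 8

typedef struct { uint64_t addr; uint64_t data; bool valid; } SBEntry;

static SBEntry store_buffer[SB_ENTRIES];

/* Prediction: loads whose address falls in the stack region are
 * likely to hit a recent store still sitting in the store buffer. */
static bool predict_forwarded(uint64_t addr,
                              uint64_t stack_lo, uint64_t stack_hi) {
    return addr >= stack_lo && addr < stack_hi;
}

static bool store_buffer_lookup(uint64_t addr, uint64_t *data) {
    for (int i = SB_ENTRIES - 1; i >= 0; i--)    /* youngest first */
        if (store_buffer[i].valid && store_buffer[i].addr == addr) {
            *data = store_buffer[i].data;
            return true;
        }
    return false;
}

static uint64_t cache_read(uint64_t addr) {
    (void)addr;
    return 0;   /* stub: a real core would access the data cache here */
}

uint64_t execute_load(uint64_t addr,
                      uint64_t stack_lo, uint64_t stack_hi) {
    uint64_t data;
    if (predict_forwarded(addr, stack_lo, stack_hi) &&
        store_buffer_lookup(addr, &data))
        return data;                             /* forwarded hit */
    return cache_read(addr);                     /* fall back to the cache */
}
```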
Abstract:
A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, wherein a first bank is powered down. In response to a write request to a second bank for data indicated to be stored in the powered down first bank, the cache controller determines a respective bypass condition for the data. If the bypass condition exceeds a threshold, then the cache controller invalidates any copy of the data stored in the second bank. If the bypass condition does not exceed the threshold, then the cache controller stores the data with a clean state in the second bank. The cache controller writes the data to a lower-level memory in both cases.
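A sketch of that bypass decision, with `bypass_metric` and the threshold as placeholders for whatever condition the controller actually tracks (the abstract does not name it). Note that the write to lower-level memory happens on both paths.

```c
#include <stdbool.h>

typedef enum { LINE_INVALID, LINE_CLEAN } LineState;

typedef struct {
    int bypass_metric;   /* e.g., a reuse or access-rate score (assumed) */
    int threshold;
} BypassPolicy;

/* Returns the state the second bank should hold for the line; the
 * data is written to lower-level memory in either case. */
LineState handle_displaced_write(const BypassPolicy *p,
                                 bool *write_lower_level) {
    *write_lower_level = true;           /* always write back */
    if (p->bypass_metric > p->threshold)
        return LINE_INVALID;             /* bypass: drop any cached copy */
    return LINE_CLEAN;                   /* keep a clean copy in bank 2 */
}
```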
Abstract:
A multi-core data processor includes multiple data processor cores and a power controller. Each data processor core has a first input for receiving a clock signal, a second input for receiving a power supply voltage, and an output for providing an idle signal. The power controller is coupled to each of the data processor cores for providing the clock signal and the power supply voltage to each of the data processor cores. The power controller provides at least one of the clock signal and the power supply voltage to an active one of the data processor cores in dependence on a number of idle signals received from the data processor cores.
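An illustrative mapping from the count of idle signals to a clock/voltage operating point for an active core; the four-entry table and its frequency and voltage values are invented for the example, since the abstract only states that the provided clock or supply depends on the number of idle signals.

```c
typedef struct { unsigned mhz; unsigned mv; } PState;

/* More idle cores -> more power/thermal headroom for the active core. */
PState pick_pstate(unsigned idle_count) {
    static const PState table[] = {
        { 2000,  900 },   /* 0 idle: baseline */
        { 2400,  975 },
        { 2800, 1050 },
        { 3200, 1125 },   /* 3 or more idle: maximum boost */
    };
    unsigned idx = idle_count;
    if (idx > 3) idx = 3;
    return table[idx];
}
```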
Abstract:
A method of storing stack data in a cache hierarchy is provided. The cache hierarchy comprises a data cache and a stack filter cache. Responsive to a request to access a stack data block, the method stores the stack data block in the stack filter cache, wherein the stack filter cache is configured to store any requested stack data block.
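A sketch of the lookup-and-fill path with a small direct-mapped stack filter cache in front of the data cache. The 32-line capacity, 64-byte lines, and direct-mapped organization are assumptions for the example; the abstract only requires that any requested stack block be stored in the filter.

```c
#include <stdint.h>
#include <stdbool.h>

#define SFC_LINES  32
#define LINE_SHIFT 6                     /* 64-byte lines (assumed) */

typedef struct { uint64_t tag; bool valid; } SfcLine;

static SfcLine sfc[SFC_LINES];

static bool sfc_lookup(uint64_t addr) {
    uint64_t line = addr >> LINE_SHIFT;
    SfcLine *e = &sfc[line % SFC_LINES];
    return e->valid && e->tag == line;
}

/* Any requested stack block is installed in the filter cache. */
static void sfc_fill(uint64_t addr) {
    uint64_t line = addr >> LINE_SHIFT;
    SfcLine *e = &sfc[line % SFC_LINES];
    e->tag = line;
    e->valid = true;
}

void access_stack_block(uint64_t addr) {
    if (!sfc_lookup(addr))
        sfc_fill(addr);   /* miss: bring the stack block into the filter */
}
```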
Abstract:
A processing system includes one or more processing units to perform operations and one or more sensors to measure a temperature concurrently with the one or more processing units performing the operations. The processing system also includes a controller to receive feedback indicating the temperature and to determine a peak temperature and a thermal time constant for heating of the processing system based on a comparison of the measured temperature to a first temperature that is predicted based on the peak temperature and a previously determined thermal time constant for heating. Some embodiments of the controller can control a performance state of the processing system based on the peak temperature and the thermal time constant for heating of the processing system.
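The comparison described above fits a first-order heating model, T(t) = T_peak - (T_peak - T0) * exp(-t / tau). The sketch below predicts a temperature from the model and nudges T_peak and tau toward the sensor reading; the update gains are illustrative assumptions, since the abstract does not specify the fitting rule.

```c
#include <math.h>

typedef struct {
    double t_peak;   /* predicted steady-state peak temperature */
    double tau;      /* thermal time constant for heating, in seconds */
} ThermalModel;

/* First-order heating from initial temperature t0 after `elapsed` s. */
double predict_temp(const ThermalModel *m, double t0, double elapsed) {
    return m->t_peak - (m->t_peak - t0) * exp(-elapsed / m->tau);
}

/* Nudge the model toward the measured temperature (gains assumed). */
void update_model(ThermalModel *m, double t0, double elapsed,
                  double measured) {
    double err = measured - predict_temp(m, t0, elapsed);
    m->t_peak += 0.5 * err;                  /* absorb the level error */
    m->tau    *= (err > 0.0) ? 0.98 : 1.02;  /* heating faster -> shrink tau */
}
```

A performance controller could then cap the performance state whenever `predict_temp` approaches a thermal limit, which is the use the abstract points to.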
Abstract:
A method of partitioning a data cache comprising a plurality of sets, the plurality of sets comprising a plurality of ways, is provided. Responsive to a stack data request, the method stores a cache line associated with the stack data in one of a plurality of designated ways of the data cache, wherein the plurality of designated ways is configured to store all requested stack data.
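A sketch of the victim-selection side of such way partitioning, assuming an 8-way set with ways 0 and 1 designated for stack data. The abstract does not say whether non-stack data may also occupy the designated ways, so this sketch keeps the two partitions disjoint.

```c
#include <stdbool.h>

#define WAYS       8
#define STACK_WAYS 2    /* ways 0..1 reserved for stack data (assumed) */

/* Pick a victim way for a fill; stack fills are confined to the
 * designated ways so all requested stack data lands there. */
unsigned pick_victim_way(bool is_stack_data,
                         const unsigned lru_counter[WAYS]) {
    unsigned lo = is_stack_data ? 0 : STACK_WAYS;
    unsigned hi = is_stack_data ? STACK_WAYS : WAYS;
    unsigned victim = lo;
    for (unsigned w = lo + 1; w < hi; w++)
        if (lru_counter[w] > lru_counter[victim])
            victim = w;   /* evict the least recently used candidate */
    return victim;
}
```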
Abstract:
A method includes controlling active frequency states of a plurality of heterogeneous processing units based on frequency sensitivity metrics indicating performance coupling between different types of processing units in the plurality of heterogeneous processing units. A processor includes a plurality of heterogeneous processing units and a performance controller to control active frequency states of the plurality of heterogeneous processing units based on frequency sensitivity metrics indicating performance coupling between different types of processing units in the plurality of heterogeneous processing units. The active frequency state of a first type of processing unit in the plurality of heterogeneous processing units is controlled based on a first activity metric associated with the first type of processing unit and a second activity metric associated with a second type of processing unit.
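A sketch of how a sensitivity metric built from both units' activity might select an active frequency state, for example raising one unit's frequency only when the other unit's activity shows the workload is coupled to it. The linear weighting and the P-state mapping are invented for the example; the abstract does not define the metric's form.

```c
typedef struct {
    double self_activity;    /* activity of the unit being controlled, 0..1 */
    double peer_activity;    /* activity of the other unit type, 0..1 */
} ActivityMetrics;

/* Frequency sensitivity: how much this unit's speed is expected to
 * matter given both units' activity (illustrative linear form). */
static double frequency_sensitivity(const ActivityMetrics *m) {
    return 0.7 * m->self_activity + 0.3 * m->peer_activity;
}

/* Map sensitivity to an active frequency state; index 0 is slowest.
 * Assumes num_pstates >= 1. */
unsigned pick_frequency_state(const ActivityMetrics *m,
                              unsigned num_pstates) {
    double s = frequency_sensitivity(m);
    if (s < 0.0) s = 0.0;
    if (s > 1.0) s = 1.0;
    return (unsigned)(s * (num_pstates - 1) + 0.5);
}
```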