Abstract:
When legacy instructions that can only operate on smaller registers are mixed with new instructions in a processor with larger registers, special handling and architecture are used to prevent the legacy instructions from corrupting the data in the upper portion of the registers, i.e., the portion that they cannot directly access. In some embodiments, the upper portion of the registers is saved to temporary storage while the legacy instructions are operating, and restored to the upper portion of the registers when the new instructions are operating. A special instruction may also be used to disable this save/restore operation if the new instructions are not going to use the upper part of the registers.
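As a software-level illustration only (not taken from the abstract), the C sketch below models the save/restore behavior for a hypothetical 256-bit register split into a legacy-visible lower half and an upper half. The register layout, the scratch area, and the `save_restore_on` flag are assumptions made for this sketch.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical 256-bit register: legacy code sees only the lower 128 bits. */
typedef struct {
    uint64_t lo[2];   /* lower 128 bits, visible to legacy instructions   */
    uint64_t hi[2];   /* upper 128 bits, visible only to new instructions */
} wide_reg_t;

static wide_reg_t regs[4];             /* architectural registers              */
static uint64_t   scratch[4][2];       /* temporary storage for upper halves   */
static int        save_restore_on = 1; /* cleared by the "disable" instruction */

/* Executed when control transfers to legacy code. */
static void enter_legacy_mode(void) {
    if (!save_restore_on) return;
    for (int i = 0; i < 4; i++)
        memcpy(scratch[i], regs[i].hi, sizeof regs[i].hi);
}

/* Executed when control returns to code using the new instructions. */
static void enter_new_mode(void) {
    if (!save_restore_on) return;
    for (int i = 0; i < 4; i++)
        memcpy(regs[i].hi, scratch[i], sizeof regs[i].hi);
}

/* Models the special instruction that turns the save/restore off
 * because upcoming new instructions will not use the upper halves. */
static void disable_upper_save_restore(void) {
    save_restore_on = 0;
}

int main(void) {
    regs[0].hi[0] = 0xDEADBEEF;   /* data only new instructions can see     */
    enter_legacy_mode();          /* upper halves parked in scratch storage */
    regs[0].lo[0] = 42;           /* legacy instruction writes lower half   */
    enter_new_mode();             /* upper halves restored                  */
    printf("upper half preserved: %#llx\n",
           (unsigned long long)regs[0].hi[0]);
    disable_upper_save_restore(); /* optional opt-out                       */
    return 0;
}
```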
Abstract:
An approach for data bus power control. Data input sense amplifiers of a request agent are enabled prior to the data phase of a read transaction according to a data bus power control signal. Once enabled, the data input sense amplifiers can capture data provided during the data phase of the read transaction. The data input sense amplifiers of the request agent are then disabled according to the power control signal once the data phase of the read transaction is complete.
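A minimal C sketch of the described sequencing, assuming a hypothetical `set_sense_amps` control and a simplified transaction model; none of these names come from the abstract.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical power-control state for the request agent's data inputs. */
static bool sense_amps_enabled = false;

static void set_sense_amps(bool on) {
    sense_amps_enabled = on;
    printf("sense amplifiers %s\n", on ? "enabled" : "disabled");
}

/* Simplified read transaction: the power control signal turns the sense
 * amplifiers on just before the data phase and off right after it. */
static void read_transaction(const int *data, int len, int *out) {
    /* request/address phase: amplifiers stay off to save power */
    set_sense_amps(true);            /* asserted before the data phase     */
    for (int i = 0; i < len; i++)    /* data phase: capture incoming data  */
        out[i] = data[i];
    set_sense_amps(false);           /* data phase complete: power back down */
}

int main(void) {
    int bus_data[4] = {1, 2, 3, 4}, captured[4];
    read_transaction(bus_data, 4, captured);
    printf("captured %d %d %d %d\n",
           captured[0], captured[1], captured[2], captured[3]);
    return 0;
}
```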
Abstract:
For one embodiment, a computer system includes both high power and low power buses coupling a processor to a controller. When the processor is in a high power mode, its cache is snooped by the controller via the high power bus. When the processor is in a low power mode, its cache is snooped by the controller via the low power bus.
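The bus selection can be pictured with the short C sketch below; the enum names and the selection function are illustrative assumptions, since the abstract only states that separate high power and low power buses couple the processor to the controller.

```c
#include <stdio.h>

typedef enum { HIGH_POWER, LOW_POWER } power_mode_t;

/* Hypothetical bus identifiers for the two buses named in the abstract. */
typedef enum { HIGH_POWER_BUS, LOW_POWER_BUS } bus_t;

/* Controller-side selection: route the snoop over the bus that matches
 * the processor's current power mode. */
static bus_t select_snoop_bus(power_mode_t mode) {
    return (mode == HIGH_POWER) ? HIGH_POWER_BUS : LOW_POWER_BUS;
}

static void snoop_cache(power_mode_t mode, unsigned long address) {
    bus_t bus = select_snoop_bus(mode);
    printf("snooping address %#lx via %s bus\n", address,
           bus == HIGH_POWER_BUS ? "high power" : "low power");
}

int main(void) {
    snoop_cache(HIGH_POWER, 0x1000);
    snoop_cache(LOW_POWER, 0x2000);
    return 0;
}
```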
Abstract:
A system is described that includes a microprocessor and a thermal control subsystem. The microprocessor includes execution resources to support processing of instructions and consumes power. The microprocessor also includes at least one throttling mechanism to reduce the amount of heat generated by the microprocessor. The thermal control subsystem is configured to estimate the amount of power used by the microprocessor and to control the throttling mechanism based on the estimated power usage so that the junction temperature does not exceed the maximum allowed temperature.
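A sketch of this kind of power-estimate-driven throttling decision, under assumed constants and an assumed first-order thermal model (none of the numbers or names are from the abstract):

```c
#include <stdio.h>

/* Illustrative placeholders: ambient temperature, junction-to-ambient
 * thermal resistance, and the maximum allowed junction temperature. */
#define T_AMBIENT_C     45.0
#define R_JA_C_PER_W     0.6
#define T_JMAX_C       100.0

/* Hypothetical power estimate derived from an activity factor. */
static double estimate_power_watts(double activity_factor) {
    const double idle_w = 10.0, max_dynamic_w = 90.0;
    return idle_w + activity_factor * max_dynamic_w;
}

/* Throttle (e.g., clock modulation) whenever the estimated power would
 * push the projected junction temperature past the limit. */
static int throttle_needed(double estimated_w) {
    double projected_tj = T_AMBIENT_C + estimated_w * R_JA_C_PER_W;
    return projected_tj > T_JMAX_C;
}

int main(void) {
    double activity[] = {0.2, 0.9, 1.0};
    for (int i = 0; i < 3; i++) {
        double p = estimate_power_watts(activity[i]);
        printf("activity %.1f -> %.1f W -> throttle: %s\n",
               activity[i], p, throttle_needed(p) ? "yes" : "no");
    }
    return 0;
}
```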
Abstract:
Controlling a reorder buffer (ROB) to selectively perform functional hardware lock disabling (HLD) is described. One apparatus embodiment includes a unit to enable an ROB to selectively disable a lock upon identifying a lock acquire operation (LAO) associated with a critical section (CS) entry point, a unit to selectively retire the LAO, a unit to cause the ROB to selectively disable the lock, and a unit to snoop a buffer. The apparatus may, based on the snooping, selectively abort a transaction associated with the CS.
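As a rough software analogy of the behavior described (not the hardware mechanism itself), the C sketch below elides the lock acquire, buffers the update, and falls back to a retry with the lock when a conflicting snoop is observed. The function and flag names are assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

static int shared_counter = 0;

/* Stand-in for the buffer snoop: reports whether another agent touched
 * data used by the in-flight critical section. */
static bool snoop_hit(void) {
    static int calls = 0;
    return ++calls == 1;   /* first (elided) attempt conflicts, retry succeeds */
}

/* With the lock elided, the update is buffered and committed only if no
 * conflicting snoop is observed; otherwise the transaction is aborted and
 * the critical section is re-executed with the lock actually acquired. */
static void critical_section(bool lock_elided) {
    int buffered = shared_counter + 1;        /* buffered update       */
    if (lock_elided && snoop_hit()) {
        printf("snoop hit: abort transaction, retry with lock acquired\n");
        critical_section(false);              /* conventional retry    */
        return;
    }
    shared_counter = buffered;                /* commit buffered update */
    printf("committed with lock %s, counter=%d\n",
           lock_elided ? "elided" : "held", shared_counter);
}

int main(void) {
    critical_section(true);   /* lock acquire operation elided by default */
    return 0;
}
```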
Abstract:
A power-aware front-end unit for a processor may include a UOP cache that disables other circuitry within the front-end unit. In an embodiment, a front-end unit may disable instruction synchronization circuitry, instruction decode circuitry and, optionally, instruction fetch circuitry while instruction look-ups are underway in both a block cache and an instruction cache. If the instruction look-up indicates a miss, the disabled circuitry may thereafter be enabled.
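The gating policy can be illustrated with the C sketch below; the clock-gating flags and the hit pattern are invented for the sketch, and the block (UOP) cache lookup is simplified to a single predicate.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative clock-gating flags for the downstream front-end stages. */
static bool fetch_enabled = true, sync_enabled = true, decode_enabled = true;

/* Hypothetical block (UOP) cache lookup; the hit pattern is made up. */
static bool block_cache_hit(unsigned long ip) {
    return (ip & 1) == 0;
}

static void front_end_cycle(unsigned long ip) {
    /* Gate fetch, synchronization and decode while the parallel lookups
     * in the block cache and instruction cache are in flight. */
    fetch_enabled = sync_enabled = decode_enabled = false;

    if (!block_cache_hit(ip)) {
        /* Miss: the conventional fetch/sync/decode path is re-enabled. */
        fetch_enabled = sync_enabled = decode_enabled = true;
    }
    printf("ip %#lx: %s (fetch=%d sync=%d decode=%d)\n", ip,
           block_cache_hit(ip) ? "block cache hit" : "miss",
           fetch_enabled, sync_enabled, decode_enabled);
}

int main(void) {
    front_end_cycle(0x4000);   /* hit: decode path stays gated  */
    front_end_cycle(0x4001);   /* miss: decode path re-enabled  */
    return 0;
}
```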
Abstract:
Distribution of processing activity across processing hardware based on power consumption and/or thermal considerations. One embodiment includes a plurality of processing units and a monitor to obtain monitored values (e.g., power consumption, temperature, or some combination thereof) from the processing units. The monitor transfers a process from one processing unit to another in response to the monitored values from the processing units.
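A minimal C sketch of a migration decision driven by such monitored values; the per-unit values and the "move to the coolest unit" policy are assumptions for illustration.

```c
#include <stdio.h>

#define NUM_UNITS 4

/* Hypothetical per-unit monitored values (e.g., watts or degrees C). */
static double monitor_value[NUM_UNITS] = {18.0, 42.0, 25.0, 12.0};

/* Pick the unit whose monitored value is lowest. */
static int coolest_unit(void) {
    int best = 0;
    for (int i = 1; i < NUM_UNITS; i++)
        if (monitor_value[i] < monitor_value[best]) best = i;
    return best;
}

/* Transfer the process to the coolest unit when its current unit reads
 * higher; the policy here is purely illustrative. */
static int maybe_migrate(int current_unit) {
    int target = coolest_unit();
    if (monitor_value[current_unit] > monitor_value[target]) {
        printf("migrating process: unit %d (%.1f) -> unit %d (%.1f)\n",
               current_unit, monitor_value[current_unit],
               target, monitor_value[target]);
        return target;
    }
    return current_unit;
}

int main(void) {
    int unit = 1;             /* process currently on the hottest unit */
    unit = maybe_migrate(unit);
    printf("process now on unit %d\n", unit);
    return 0;
}
```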
Abstract:
In one embodiment, the present invention includes logic to receive a permute instruction, first and second source operands, and control values, and to perform a permute operation based on an operation between at least two of the control values so that selected portions of the first and second source operands or a predetermined value can be stored into elements of a destination. Multiple permute instructions may be combined to perform efficient table lookups. Other embodiments are described and claimed.
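A much-simplified C model of a two-source permute used for a table lookup spanning both source operands. The control encoding (top bit forces a predetermined zero, next bit selects the source) is an assumption for illustration and does not model the operation between control values described in the abstract.

```c
#include <stdint.h>
#include <stdio.h>

#define N 16   /* elements per source operand in this illustration */

/* For each destination element, the control value selects an element from
 * src1 or src2, or forces a predetermined value (zero) when its top bit
 * is set. */
static void permute2(const uint8_t *src1, const uint8_t *src2,
                     const uint8_t *ctrl, uint8_t *dst) {
    for (int i = 0; i < N; i++) {
        uint8_t c = ctrl[i];
        if (c & 0x80)
            dst[i] = 0;                   /* predetermined value   */
        else if (c & 0x10)
            dst[i] = src2[c & 0x0F];      /* element from source 2 */
        else
            dst[i] = src1[c & 0x0F];      /* element from source 1 */
    }
}

int main(void) {
    uint8_t table[2 * N];                 /* 32-entry lookup table */
    for (int i = 0; i < 2 * N; i++) table[i] = (uint8_t)(i * 3);

    /* Table lookup: indices 0..31 span both source operands, so each
     * control value picks both the half and the element within it. */
    uint8_t idx[N]  = {0, 5, 17, 31, 2, 16, 9, 30, 1, 3, 20, 7, 25, 4, 19, 8};
    uint8_t ctrl[N], out[N];
    for (int i = 0; i < N; i++) ctrl[i] = (uint8_t)(idx[i] & 0x1F);

    permute2(table, table + N, ctrl, out);
    for (int i = 0; i < N; i++)
        printf("table[%2u] = %3u\n", (unsigned)idx[i], (unsigned)out[i]);
    return 0;
}
```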
Abstract:
In one embodiment, the present invention includes logic to receive a permute instruction, first and second source operands, and control values, and to perform a permute operation based on an operation between at least two of the control values. Multiple permute instructions may be combined to perform efficient table lookups. Other embodiments are described and claimed.