Abstract:
Certain aspects of the present disclosure provide a method for performing parallel data processing, including: receiving data for parallel processing from a data processing requestor; generating a plurality of data sub-blocks; determining a plurality of data portions in each data sub-block of the plurality of data sub-blocks; changing an order of the plurality of data portions in at least one data sub-block of the plurality of data sub-blocks; providing the plurality of data sub-blocks, including the at least one data sub-block comprising the changed order of the plurality of data portions, to a plurality of processing units for parallel processing; and receiving processed data associated with the plurality of data sub-blocks from the plurality of processing units.
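As a rough illustration only (the abstract specifies no implementation), the following Python sketch splits received data into sub-blocks, divides each sub-block into portions, changes the portion order in one sub-block, and hands the sub-blocks to a pool of worker threads standing in for the processing units. All function names, the rotation used as the reordering, and the XOR "processing" step are hypothetical.

    from concurrent.futures import ThreadPoolExecutor

    def split_into_sub_blocks(data, num_sub_blocks):
        # Divide the received data into equal-sized sub-blocks.
        size = len(data) // num_sub_blocks
        return [data[i * size:(i + 1) * size] for i in range(num_sub_blocks)]

    def reorder_portions(sub_block, portion_size):
        # Determine the portions of a sub-block and change their order (rotate by one).
        portions = [sub_block[i:i + portion_size]
                    for i in range(0, len(sub_block), portion_size)]
        return b"".join(portions[1:] + portions[:1])

    def process(sub_block):
        # Stand-in for whatever work a processing unit performs on its sub-block.
        return bytes(b ^ 0xFF for b in sub_block)

    data = bytes(range(64))                             # data received from the requestor
    sub_blocks = split_into_sub_blocks(data, 4)         # plurality of data sub-blocks
    sub_blocks[0] = reorder_portions(sub_blocks[0], 4)  # changed order in at least one sub-block

    with ThreadPoolExecutor(max_workers=4) as pool:     # plurality of processing units
        processed = list(pool.map(process, sub_blocks)) # processed data per sub-block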
Abstract:
A low-power state current/power consumption for each volatile memory device in a plurality of volatile memory devices is obtained. Data is copied from a first set of the volatile memory devices to a second set of the volatile memory devices, where the second set of volatile memory devices has a lower current/power consumption than the first set of volatile memory devices. Additionally, a current/power consumption may be obtained for each memory bank within each of the plurality of volatile memory devices. Data is then copied from a first set of memory banks to a second set of memory banks within the same memory device in the second set of memory devices, where the second set of memory banks has lower current/power consumption than the first set of memory banks. The first set of volatile memory devices and/or first set of memory banks are then placed into a power-down state.
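A minimal Python sketch of the device-level consolidation step, under assumptions not stated in the abstract: each device is modeled as a dict holding a measured low-power current figure and its resident data, the lower-consumption half of the devices is kept, data is copied into it, and the vacated half becomes eligible for power-down. The same ranking could be repeated per memory bank inside each retained device; the names and current values are illustrative.

    def consolidate(devices):
        # devices: {name: {"idle_mA": float, "data": list}} -- a toy model, not the claimed hardware flow.
        # Rank devices by their obtained low-power state current consumption.
        ranked = sorted(devices, key=lambda d: devices[d]["idle_mA"])
        keep, vacate = ranked[:len(ranked) // 2], ranked[len(ranked) // 2:]

        # Copy data from the higher-consumption (first) set into the lower-consumption (second) set.
        for src, dst in zip(vacate, keep):
            devices[dst]["data"].extend(devices[src]["data"])
            devices[src]["data"].clear()

        # The vacated devices can now be placed into a power-down state.
        return keep, vacate

    devices = {
        "DRAM0": {"idle_mA": 2.1, "data": ["a"]},
        "DRAM1": {"idle_mA": 1.4, "data": ["b"]},
        "DRAM2": {"idle_mA": 2.6, "data": ["c"]},
        "DRAM3": {"idle_mA": 1.2, "data": ["d"]},
    }
    keep, power_down = consolidate(devices)   # keep == ["DRAM3", "DRAM1"]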
Abstract:
A system is disclosed for mapping operating-system-identified addresses for substantially-identical hardware modules into performance-parameter-based addresses for the hardware modules. The mapping is accomplished by configuring a flexible I/O interface responsive to a characterization of at least one performance parameter for each hardware module.
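One way to picture the mapping, offered as a hypothetical sketch rather than the disclosed interface: characterize each substantially identical module by a performance parameter (here, a made-up latency figure), rank the modules, and build a table that steers each operating-system-identified address to a performance-ranked module.

    def build_remap_table(modules):
        # modules: {os_address: measured_latency_ns} for substantially identical hardware modules.
        # Characterize each module, then order them by the performance parameter.
        by_performance = sorted(modules, key=modules.get)   # best-performing first
        # Map the OS-identified addresses, in order, onto the performance-ranked modules.
        return dict(zip(sorted(modules), by_performance))

    # Example: the OS sees modules 0..3; assumed latencies decide the physical ordering.
    modules = {0: 118.0, 1: 92.5, 2: 131.0, 3: 101.0}
    remap = build_remap_table(modules)   # {0: 1, 1: 3, 2: 0, 3: 2}
    # A flexible I/O interface would be programmed with this table so that an access to
    # OS address k is steered to the module ranked k-th by the performance parameter.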
Abstract:
An apparatus including a printed circuit board (PCB) including a sense resistor; and an integrated circuit (IC) mounted on the PCB, wherein at least a portion of the IC draws current from a power rail, wherein the sense resistor is coupled between the power rail and the IC, wherein the sense resistor is configured to produce a sense voltage in response to the current drawn by the at least a portion of the IC, and wherein the IC includes a current sensor configured to generate a signal indicative of the current drawn by the at least a portion of the IC based on the sense voltage.
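The measurement itself reduces to Ohm's law across the sense resistor: I = V_sense / R_sense. A small Python sketch with illustrative component values (the abstract gives none) shows how a digitized sense voltage could be converted back into the drawn current; the resistor value, ADC step, and amplifier gain are assumptions.

    R_SENSE_OHMS = 0.005          # illustrative shunt value, not specified in the abstract
    ADC_LSB_VOLTS = 0.000125      # illustrative ADC resolution after the sense amplifier

    def current_from_sense(adc_code, gain=50.0):
        # V_sense = adc_code * LSB / amplifier_gain; I = V_sense / R_sense (Ohm's law).
        v_sense = adc_code * ADC_LSB_VOLTS / gain
        return v_sense / R_SENSE_OHMS

    print(current_from_sense(2000))   # ~1.0 A for the assumed values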
Abstract:
Disclosed are ticketed flow control mechanisms in a processing system with one or more masters and one or more slaves. In an aspect, a targeted slave receives a request from a requesting master. If the targeted slave is unavailable to service the request, a ticket for the request is provided to the requesting master. As resources in the targeted slave become available, messages are broadcast for the requesting master to update the ticket value. When the ticket value has been updated to a final value, the requesting master may re-transmit the request.
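A toy Python model of the ticketing handshake, with the data structures and the final ticket value (zero) chosen for illustration rather than taken from the disclosure: a busy slave returns a retry response carrying a ticket, each broadcast moves the master's ticket toward the final value, and the master re-transmits once it arrives there.

    class Slave:
        def __init__(self, capacity):
            self.capacity, self.in_flight, self.waiters = capacity, 0, 0

        def request(self):
            if self.in_flight < self.capacity:
                self.in_flight += 1
                return ("ACCEPT", None)
            self.waiters += 1
            return ("RETRY", self.waiters)      # ticket = position in line

        def release(self):
            self.in_flight -= 1                 # a resource frees up
            return "TICKET_UPDATE"              # broadcast to waiting masters

    class Master:
        def __init__(self):
            self.ticket = None

        def on_response(self, status, ticket):
            if status == "RETRY":
                self.ticket = ticket            # hold the ticket instead of polling

        def on_broadcast(self):
            # Each broadcast moves the ticket toward its final value (here, zero).
            self.ticket -= 1
            return self.ticket == 0             # True -> re-transmit the request

    slave, master = Slave(capacity=1), Master()
    slave.request()                              # first request fills the slave
    status, ticket = slave.request()             # second request gets a ticket
    master.on_response(status, ticket)
    slave.release()                              # resource available, broadcast update
    if master.on_broadcast():                    # ticket reached its final value
        slave.request()                          # re-transmitted request is accepted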
Abstract:
In certain aspects, a method for sending data over a bus comprises: calculating a parity check code for a new data code, wherein the new data code comprises a number of bits in the new data code; calculating a Hamming distance between the new data code and a prior data code; and if the Hamming distance is greater than half of the number of bits in the new data code: inverting the new data code and the parity check code to obtain an inverted new data code and an inverted parity check code; and sending the inverted new data code and the inverted parity check code to the bus.
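A small Python sketch of the inversion decision, assuming a single even-parity bit as the parity check code and an extra flag to mark inverted transfers (neither detail is specified in the abstract): if the new code differs from the prior code in more than half of its bit positions, both the code and its parity are inverted before being sent to the bus.

    def hamming_distance(a, b, width):
        # Number of bit positions in which the two codes differ.
        return bin((a ^ b) & ((1 << width) - 1)).count("1")

    def parity(code, width):
        # Single-bit even parity stands in for the parity check code of the abstract.
        return bin(code & ((1 << width) - 1)).count("1") & 1

    def encode_for_bus(new_code, prior_code, width):
        p = parity(new_code, width)
        mask = (1 << width) - 1
        if hamming_distance(new_code, prior_code, width) > width // 2:
            # Inverting flips fewer bus lines; send the inverted code and inverted parity.
            return (~new_code) & mask, p ^ 1, True   # last flag marks the inverted transfer
        return new_code, p, False

    # Example: 8-bit bus, prior word 0x00, new word 0xFB differs in 7 of 8 bits -> invert.
    print(encode_for_bus(0xFB, 0x00, 8))   # (4, 0, True): inverted code, inverted parity, flag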