Abstract:
Provided is a reconfigurable processor that may process a first type of operation in a first mode using a first group of functional units, and process a second type of operation in a second mode using a second group of functional units. The reconfigurable processor may selectively supply power to either the first group or the second group in response to a mode-switch signal or a mode-switch instruction.
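As a loose illustration only (the group structure, the two operation types, and names such as ReconfigurableProcessor and switchMode are hypothetical, not taken from the abstract), a minimal C++ sketch of mode-dependent power gating might look like this:

#include <cstdio>

// Rough model: two groups of functional units, each handling one type of
// operation; only the group selected by the mode switch receives power.
enum class Mode { First, Second };

struct FunctionalUnitGroup {
    bool powered;
    char opType;                          // '+' for the first group, '*' for the second
    int execute(int a, int b) const {
        if (!powered) return 0;           // a power-gated group does no work
        return (opType == '+') ? a + b : a * b;
    }
};

struct ReconfigurableProcessor {
    Mode mode = Mode::First;
    FunctionalUnitGroup first{false, '+'};
    FunctionalUnitGroup second{false, '*'};

    // Mode-switch signal or instruction: power exactly one group.
    void switchMode(Mode m) {
        mode = m;
        first.powered  = (m == Mode::First);
        second.powered = (m == Mode::Second);
    }

    int run(int a, int b) const {
        return (mode == Mode::First) ? first.execute(a, b) : second.execute(a, b);
    }
};

int main() {
    ReconfigurableProcessor p;
    p.switchMode(Mode::First);
    std::printf("first mode:  %d\n", p.run(2, 3));    // 5
    p.switchMode(Mode::Second);                       // first group is now gated off
    std::printf("second mode: %d\n", p.run(2, 3));    // 6
}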
Abstract:
An apparatus and method for compressing trace data are provided. The apparatus includes a detection unit configured to detect, as valid trace data, trace data corresponding to one or more function units performing a substantially significant operation in a reconfigurable processor, and a compression unit configured to compress the valid trace data.
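A hedged sketch of the two-stage idea, where detectValid stands in for the detection unit and a simple delta encoding stands in for whatever compression scheme the apparatus actually applies (the record layout, names, and encoding are assumptions):

#include <cstdio>
#include <vector>

// Hypothetical trace entry: one record per function unit per cycle.
struct TraceEntry {
    int cycle;
    int unit;
    int opcode;   // 0 = idle / no-op in this sketch
    int result;
};

// "Detection unit": keep only entries whose function unit did real work.
std::vector<TraceEntry> detectValid(const std::vector<TraceEntry>& raw) {
    std::vector<TraceEntry> valid;
    for (const auto& e : raw)
        if (e.opcode != 0) valid.push_back(e);
    return valid;
}

// "Compression unit": delta-encode the cycle numbers of the valid entries.
std::vector<int> compress(const std::vector<TraceEntry>& valid) {
    std::vector<int> out;
    int prevCycle = 0;
    for (const auto& e : valid) {
        out.push_back(e.cycle - prevCycle);  // delta instead of absolute cycle
        out.push_back(e.unit);
        out.push_back(e.result);
        prevCycle = e.cycle;
    }
    return out;
}

int main() {
    std::vector<TraceEntry> raw = {
        {0, 0, 1, 7}, {0, 1, 0, 0}, {1, 0, 0, 0}, {2, 1, 3, 9}};
    auto valid  = detectValid(raw);
    auto packed = compress(valid);
    std::printf("raw=%zu valid=%zu words=%zu\n",
                raw.size(), valid.size(), packed.size());
}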
Abstract:
An interrupt support determining apparatus and method for an equal-model processor, and a processor including the interrupt support determining apparatus, are provided. The interrupt support determining apparatus determines whether an instruction input to a processor decoder is a multiple-latency instruction; if it is, the apparatus compares the current latency of the instruction with a remaining latency and updates the current latency to the remaining latency when the current latency is greater than the remaining latency.
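Read literally, the comparison step might be modeled as below; the field names and the clamping interpretation are assumptions rather than details from the abstract:

#include <cstdio>

// Hypothetical decoder-side bookkeeping for an equal-model processor.
struct LatencyTracker {
    int currentLatency = 0;    // latency attached to the decoded instruction
    int remainingLatency = 0;  // cycles still outstanding in the pipeline

    // For a multiple-latency instruction, compare the current latency with
    // the remaining latency and update it to the remaining latency when the
    // current latency is greater (as described in the abstract).
    void update(bool isMultipleLatency, int instructionLatency) {
        currentLatency = instructionLatency;
        if (isMultipleLatency && currentLatency > remainingLatency)
            currentLatency = remainingLatency;
    }
};

int main() {
    LatencyTracker t;
    t.remainingLatency = 2;    // e.g. a prior result still in flight
    t.update(/*isMultipleLatency=*/true, /*instructionLatency=*/5);
    std::printf("current latency updated to %d\n", t.currentLatency);  // 2
}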
Abstract:
Provided are a hardware debugging apparatus and method for a software-pipelined program. The apparatus and method overcome the currency problem that arises during hardware debugging of a software-pipelined program by guarding certain execution blocks and restarting the processing of the program.
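One very rough way to picture the guard-and-restart idea, with the block structure and every name here hypothetical:

#include <cstdio>
#include <functional>
#include <vector>

// Loose sketch: execution blocks of a software-pipelined loop carry guard
// flags, and after a debug stop the kernel is restarted with guarded blocks
// suppressed so the values the debugger observes are current.
struct ExecBlock {
    bool guarded;                          // set by the debugger
    std::function<void(int)> body;
};

void runPipelinedKernel(std::vector<ExecBlock>& blocks, int iterations) {
    for (int i = 0; i < iterations; ++i)
        for (auto& b : blocks)
            if (!b.guarded) b.body(i);     // guarded blocks are skipped
}

int main() {
    int sum = 0;
    std::vector<ExecBlock> blocks = {
        {false, [&](int i) { sum += i; }},
        {false, [&](int i) { std::printf("stage2: i=%d sum=%d\n", i, sum); }}};

    runPipelinedKernel(blocks, 2);         // normal run

    blocks[1].guarded = true;              // debugger guards the second block
    sum = 0;
    runPipelinedKernel(blocks, 2);         // restart the processing
    std::printf("after restart: sum=%d\n", sum);
}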
Abstract:
A swizzle pattern generator is provided to reduce the overhead caused by executing swizzle instructions in vector processing. The swizzle pattern generator is configured to provide swizzle patterns for data sets of at least one vector register or vector processing unit, and may be reconfigurable to generate various swizzle patterns for different vector operations.
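A swizzle pattern can be pictured as a per-lane index map over a vector register; the 4-lane width and the pattern names below are illustrative assumptions, not details from the abstract:

#include <array>
#include <cstdio>

using Vec4 = std::array<int, 4>;
using SwizzlePattern = std::array<int, 4>;

// Apply a swizzle pattern: destination lane i takes source lane pat[i].
Vec4 applySwizzle(const Vec4& src, const SwizzlePattern& pat) {
    Vec4 dst{};
    for (int lane = 0; lane < 4; ++lane)
        dst[lane] = src[pat[lane]];
    return dst;
}

int main() {
    Vec4 reg = {10, 20, 30, 40};
    SwizzlePattern broadcast = {0, 0, 0, 0};   // one pattern the generator might emit
    SwizzlePattern reverse   = {3, 2, 1, 0};   // a different pattern, same hardware
    Vec4 b = applySwizzle(reg, broadcast);
    Vec4 r = applySwizzle(reg, reverse);
    std::printf("%d %d %d %d | %d %d %d %d\n",
                b[0], b[1], b[2], b[3], r[0], r[1], r[2], r[3]);
}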
Abstract:
Provided are a reconfigurable processor and an operating method thereof. The reconfigurable processor may use a configuration memory distributed to each operation unit. The distributed configuration memory may be separated into a distributed operation configuration memory, which holds configuration information about the operation of a function unit, and a distributed routing configuration memory, which holds configuration information about routing. The distributed operation configuration memory may be activated according to a predicate signal.
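A hedged sketch of the split configuration memory, with stand-in operation and routing behavior (the opcode and port fields are invented purely for illustration):

#include <cstdio>

// Hypothetical per-function-unit configuration slice: the operation part and
// the routing part are stored separately, and the operation part is consulted
// only when the predicate signal is asserted.
struct OperationConfig { int opcode; };
struct RoutingConfig   { int srcPort; int dstPort; };

struct DistributedConfig {
    OperationConfig op;
    RoutingConfig   route;

    // Routing is always applied; the operation configuration is activated
    // only under the predicate.
    int apply(bool predicate, int input) const {
        int routed = input + route.srcPort;            // stand-in for routing
        if (!predicate) return routed;                 // operation memory stays idle
        return (op.opcode == 1) ? routed * 2 : routed; // stand-in for the operation
    }
};

int main() {
    DistributedConfig cfg{{1}, {3, 0}};
    std::printf("predicate off -> %d, predicate on -> %d\n",
                cfg.apply(false, 10), cfg.apply(true, 10));
}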
Abstract:
A format conversion apparatus which converts image data of a band-interleave format into image data of a band-separate format is provided. The apparatus includes a memory which stores the image data of the band-interleave format, and a converting module which reads the memory by increasing a read address for each stride and converts the image data of the band-interleave format into image data of the band-separate format.
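Assuming the stride equals the number of bands, the read-address stepping can be sketched as follows (function and parameter names are illustrative):

#include <cstdio>
#include <vector>

// Stepping the read address by the stride gathers all samples of one band
// before moving on to the next band.
std::vector<unsigned char> interleaveToSeparate(
    const std::vector<unsigned char>& interleaved, int pixels, int bands) {
    std::vector<unsigned char> separate(interleaved.size());
    int w = 0;
    for (int band = 0; band < bands; ++band)               // one output band at a time
        for (int addr = band; addr < pixels * bands; addr += bands)
            separate[w++] = interleaved[addr];             // read address += stride
    return separate;
}

int main() {
    // Three RGB pixels stored band-interleaved: R0 G0 B0 R1 G1 B1 R2 G2 B2.
    std::vector<unsigned char> bip = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    std::vector<unsigned char> bsq = interleaveToSeparate(bip, /*pixels=*/3, /*bands=*/3);
    for (unsigned char v : bsq) std::printf("%d ", v);     // 1 4 7 2 5 8 3 6 9
    std::printf("\n");
}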
Abstract:
A pipeline processor which meets a latency restriction on an equal model is provided. The pipeline processor includes a pipeline processing unit to process an instruction at a plurality of stages, and an equal-model compensator to store the results of processing some or all of the instructions located in the pipeline processing unit and to write those results to a register file based on the latency of each instruction.
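A hedged software model of the compensator, holding each result for its instruction's latency before committing it to the register file (the structure and names are assumptions):

#include <cstdio>
#include <vector>

// Each in-flight result is buffered together with the latency of its
// instruction and written back only when that latency has elapsed.
struct PendingResult {
    int reg;
    int value;
    int cyclesLeft;   // instruction latency still to elapse
};

struct EqualModelCompensator {
    std::vector<PendingResult> pending;
    int regfile[8] = {0};

    void issue(int reg, int value, int latency) {
        pending.push_back({reg, value, latency});
    }

    void tick() {
        std::vector<PendingResult> still;
        for (auto& p : pending) {
            if (--p.cyclesLeft == 0)
                regfile[p.reg] = p.value;   // write back exactly on time
            else
                still.push_back(p);
        }
        pending.swap(still);
    }
};

int main() {
    EqualModelCompensator c;
    c.issue(/*reg=*/1, /*value=*/42, /*latency=*/3);
    for (int cyc = 1; cyc <= 3; ++cyc) {
        c.tick();
        std::printf("cycle %d: r1=%d\n", cyc, c.regfile[1]);
    }
}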
Abstract:
A computing apparatus and method of handling an interrupt are provided. The computing apparatus includes a coarse-grained array, a host processor, and an interrupt supervisor. When an interrupt occurs while the coarse-grained array is performing a loop operation, the host processor processes the interrupt, and the interrupt supervisor may perform mode switching between the coarse-grained array and the host processor.
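A minimal sketch of the mode-switching flow, with the supervisor, array, and host reduced to a few lines (all names here are illustrative):

#include <cstdio>

// While the coarse-grained array (CGA) runs a loop, an interrupt causes the
// supervisor to switch execution to the host processor, then switch back
// once the interrupt has been serviced.
enum class Mode { CGA, Host };

struct InterruptSupervisor {
    Mode mode = Mode::CGA;

    void onInterrupt() {
        mode = Mode::Host;                 // hand control to the host processor
        std::printf("host processor services the interrupt\n");
        mode = Mode::CGA;                  // resume the loop on the array
    }
};

int main() {
    InterruptSupervisor sup;
    for (int i = 0; i < 4; ++i) {
        std::printf("CGA loop iteration %d\n", i);
        if (i == 2) sup.onInterrupt();     // interrupt arrives mid-loop
    }
}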