Abstract:
Provided is an image recognition device. The image recognition device includes a frame data change detector that sequentially receives a plurality of frame data and detects a difference between two consecutive frame data, an ensemble section controller that sets an ensemble section in the plurality of frame data, based on the detected difference, an image recognizer that sequentially identifies classes respectively corresponding to a plurality of section frame data by applying different neural network classifiers to the plurality of section frame data in the ensemble section, and a recognition result classifier that sequentially identifies ensemble classes respectively corresponding to the plurality of section frame data by combining the classes in the ensemble section.
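A minimal sketch of the ensemble-section flow described above, written in Python with NumPy. The change threshold, the rotation of classifiers within a section, and the score-averaging used as the recognition result classifier are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def frame_difference(prev_frame, curr_frame):
    # Frame data change detector: mean absolute difference between two consecutive frames.
    return np.mean(np.abs(curr_frame.astype(float) - prev_frame.astype(float)))

def recognize_sequence(frames, classifiers, change_threshold=5.0):
    """Ensemble-section recognition sketch.

    While consecutive frames stay similar (difference below the threshold),
    the frames are grouped into one ensemble section; a different classifier
    is applied to each frame of the section and their class scores are
    combined into an ensemble class for every frame of that section.
    """
    results = []
    section_scores = []          # per-frame class scores inside the current ensemble section
    for i, frame in enumerate(frames):
        if i > 0 and frame_difference(frames[i - 1], frame) > change_threshold:
            section_scores = []  # large change: close the section and start a new one
        clf = classifiers[len(section_scores) % len(classifiers)]  # rotate classifiers within the section
        scores = clf(frame)                          # per-class score vector for this frame
        section_scores.append(scores)
        ensemble = np.mean(section_scores, axis=0)   # recognition result classifier: combine section scores
        results.append(int(np.argmax(ensemble)))     # ensemble class for this frame
    return results

# Toy usage: two dummy "classifiers" and random frames.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 255, (8, 8)) for _ in range(6)]
    classifiers = [lambda f: np.array([f.mean(), f.std()]),
                   lambda f: np.array([f.max(), f.min()])]
    print(recognize_sequence(frames, classifiers))
```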
Abstract:
The neuromorphic arithmetic device comprises an input monitoring circuit that outputs a monitoring result by monitoring whether first bits of at least one first digit of a plurality of feature data and a plurality of weight data are all zeros; a partial sum data generator that, while performing the arithmetic operation of generating a plurality of partial sum data based on the plurality of feature data and the plurality of weight data, skips the arithmetic operation that generates first partial sum data corresponding to the first bits in response to the monitoring result; and a shift adder that generates the first partial sum data with a zero value and generates result data based on the first partial sum data generated with the zero value and second partial sum data, the second partial sum data being the partial sum data other than the first partial sum data among the plurality of partial sum data.
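The sketch below illustrates the zero-skipping idea in a bit-serial multiply-accumulate loop. For brevity it monitors only the feature bits of each digit; the digit width and the monitoring scheme are illustrative assumptions rather than the patent's circuit.

```python
import numpy as np

def bit_at(values, digit):
    # Extract the bits of one digit position across all values.
    return (values >> digit) & 1

def zero_skip_mac(features, weights, bits=8):
    """Bit-serial multiply-accumulate sketch with zero skipping.

    Partial sums are produced digit by digit from the feature bits.  The input
    monitoring step checks whether the bits of a digit are all zero across
    every feature value; if so, the arithmetic for that digit is skipped and a
    zero partial sum is used instead.  The shift adder then shifts each partial
    sum by its digit position and accumulates the result.
    """
    features = np.asarray(features, dtype=np.int64)
    weights = np.asarray(weights, dtype=np.int64)
    result = 0
    skipped = 0
    for digit in range(bits):
        feature_bits = bit_at(features, digit)
        if not feature_bits.any():         # monitoring result: all bits of this digit are zero
            skipped += 1                   # skip the arithmetic; the partial sum stays zero
            continue
        partial_sum = int(np.dot(feature_bits, weights))  # partial sum for this digit
        result += partial_sum << digit                    # shift adder
    return result, skipped

if __name__ == "__main__":
    f = [0b00001010, 0b00000110, 0b00001100]   # small features: the upper digits are all zero
    w = [3, 5, 7]
    out, skipped = zero_skip_mac(f, w)
    print(out, "digits skipped:", skipped, "reference:", sum(a * b for a, b in zip(f, w)))
```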
Abstract:
A method for controlling a memory from which data is transferred to a neural network processor, and an apparatus thereof, are provided, the method including: generating prefetch information of data by using a blob descriptor and a reference prediction table after history information is input; reading the data from the memory based on the prefetch information and temporarily storing the read data in a prefetch buffer; and, after the data is transferred from the prefetch buffer to the neural network processor, accessing next data in the memory based on the prefetch information and temporarily storing the next data in the prefetch buffer.
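A small Python sketch of this control flow follows. The blob descriptor fields, the dictionary form of the reference prediction table, and the buffer size are assumptions chosen for illustration; the patent does not prescribe these data structures.

```python
from collections import deque

class PrefetchController:
    """Sketch of prefetch-based memory control for a neural network processor.

    A blob descriptor (base address, stride, count) and a reference prediction
    table (last accessed address -> predicted next address) are used to build
    prefetch information; data is read from memory ahead of time and held in a
    small prefetch buffer until the processor consumes it.
    """

    def __init__(self, memory, blob_descriptor, buffer_size=4):
        self.memory = memory                       # backing store: address -> data word
        self.blob = blob_descriptor                # e.g. {"base": 0, "stride": 2, "count": 8}
        self.prediction_table = {}                 # reference prediction table built from history
        self.buffer = deque(maxlen=buffer_size)    # prefetch buffer
        self.next_addr = blob_descriptor["base"]

    def record_history(self, prev_addr, next_addr):
        # Update the reference prediction table from observed access history.
        self.prediction_table[prev_addr] = next_addr

    def prefetch(self):
        # Generate prefetch information and read ahead into the buffer.
        while len(self.buffer) < self.buffer.maxlen:
            addr = self.next_addr
            self.buffer.append((addr, self.memory[addr]))
            predicted = self.prediction_table.get(addr, addr + self.blob["stride"])
            self.next_addr = predicted

    def transfer(self):
        # Transfer one word toward the neural network processor, then refill the buffer.
        addr, data = self.buffer.popleft()
        self.prefetch()
        return addr, data

if __name__ == "__main__":
    memory = {a: a * 10 for a in range(64)}
    ctrl = PrefetchController(memory, {"base": 0, "stride": 2, "count": 8})
    ctrl.record_history(4, 20)     # learned from history: after address 4, address 20 is accessed
    ctrl.prefetch()
    for _ in range(5):
        print(ctrl.transfer())
```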
Abstract:
An artificial neural network apparatus including a plurality of layer processors for performing operations on input data, and an operating method thereof, are disclosed. The artificial neural network apparatus may include: a flag layer processor for outputting a flag according to a comparison result between a pooling output value of a current frame and a pooling output value of a previous frame; and a controller for stopping the operation of a layer processor that performs operations after the flag layer processor among the plurality of layer processors when the flag is output from the flag layer processor, wherein the flag layer processor is the layer processor that performs a pooling operation first among the plurality of layer processors.
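The early-exit behavior can be sketched as follows. Using an exact-equality comparison between the pooling outputs of consecutive frames, and reusing the previous frame's final result when the flag is raised, are illustrative assumptions; the layer structure here is a toy stand-in.

```python
import numpy as np

def max_pool(x, size=2):
    # 2x2 max pooling for a square feature map (stand-in for the first pooling layer).
    h, w = x.shape
    return x[:h - h % size, :w - w % size].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def run_network(frames, later_layers):
    """Sketch of early-exit control with a flag layer processor.

    The first pooling layer compares its output for the current frame with its
    output for the previous frame; when they match, a flag is raised and the
    controller stops the layer processors that follow, reusing the previous
    frame's result.
    """
    prev_pool = None
    prev_result = None
    outputs = []
    for frame in frames:
        pooled = max_pool(frame)                      # flag layer processor: first pooling operation
        flag = prev_pool is not None and np.array_equal(pooled, prev_pool)
        if flag:
            outputs.append(prev_result)               # controller: skip the remaining layer processors
        else:
            x = pooled
            for layer in later_layers:                # layer processors after the flag layer
                x = layer(x)
            prev_result = x
            outputs.append(x)
        prev_pool = pooled
    return outputs

if __name__ == "__main__":
    frame = np.arange(16, dtype=float).reshape(4, 4)
    frames = [frame, frame.copy(), frame + 1.0]       # the second frame repeats the first
    later_layers = [lambda x: x * 2.0, lambda x: float(x.sum())]
    print(run_network(frames, later_layers))
```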
Abstract:
An embodiment of the present invention provides a quantization method for weights of a plurality of batch normalization layers, including: receiving a plurality of previously learned first weights of the plurality of batch normalization layers; obtaining first distribution information of the plurality of first weights; performing a first quantization on the plurality of first weights using the first distribution information to obtain a plurality of second weights; obtaining second distribution information of the plurality of second weights; and performing a second quantization on the plurality of second weights using the second distribution information to obtain a plurality of final weights, thereby reducing an error that may occur when quantizing the weights of the batch normalization layers.
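A minimal sketch of the two-stage procedure is shown below. Using the min/max range as the "distribution information" and uniform quantization are assumptions made only to illustrate the sequence of steps, not the specific statistics the method uses.

```python
import numpy as np

def quantize(weights, dist, levels=256):
    # Uniform quantization over the range given by the distribution information (min, max).
    lo, hi = dist
    step = (hi - lo) / (levels - 1)
    q = np.round((weights - lo) / step) * step + lo
    return np.clip(q, lo, hi)

def two_stage_bn_quantization(first_weights, levels=256):
    """Sketch of two-stage quantization of batch-normalization weights.

    First distribution information is taken from the previously learned weights
    and used for a first quantization; second distribution information is then
    taken from the quantized weights and used for a second quantization that
    yields the final weights.
    """
    first_dist = (first_weights.min(), first_weights.max())        # first distribution information
    second_weights = quantize(first_weights, first_dist, levels)   # first quantization
    second_dist = (second_weights.min(), second_weights.max())     # second distribution information
    final_weights = quantize(second_weights, second_dist, levels)  # second quantization
    return final_weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(loc=1.0, scale=0.2, size=64)       # previously learned BN scale weights
    wq = two_stage_bn_quantization(w, levels=16)
    print("max error:", float(np.abs(w - wq).max()))
```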
Abstract:
Provided is a convolutional neural network system including a data selector configured to output, from among input values of input data, an input value corresponding to a position of a sparse weight on the basis of a sparse index indicating the position of a nonzero value in a sparse weight kernel, and a multiply-accumulate (MAC) computation unit configured to perform a convolution computation on the input value output from the data selector by using the sparse weight kernel.
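The following sketch shows the idea for a single-channel 2-D convolution: the sparse index lists the (row, column) positions of the nonzero kernel weights, only the input values at those positions are selected, and the MAC step accumulates their products. The function and variable names are illustrative.

```python
import numpy as np

def sparse_conv2d(input_data, sparse_values, sparse_index, kernel_size):
    """Sketch of convolution with a sparse weight kernel.

    sparse_index holds the (row, col) positions of the nonzero weights in the
    kernel and sparse_values holds those weights.  For each output position the
    data selector gathers only the input values at the nonzero-weight positions
    and the MAC unit accumulates their products.
    """
    h, w = input_data.shape
    k = kernel_size
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            acc = 0.0
            for (r, c), wv in zip(sparse_index, sparse_values):
                acc += input_data[i + r, j + c] * wv   # multiply-accumulate on selected inputs only
            out[i, j] = acc
    return out

if __name__ == "__main__":
    x = np.arange(25, dtype=float).reshape(5, 5)
    # Dense view of the kernel, for reference only:
    #   [[0, 1, 0],
    #    [0, 0, 2],
    #    [0, 0, 0]]
    sparse_index = [(0, 1), (1, 2)]    # positions of the nonzero values in the kernel
    sparse_values = [1.0, 2.0]         # the weights at those positions
    print(sparse_conv2d(x, sparse_values, sparse_index, kernel_size=3))
```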
Abstract:
When precoding information corresponding to data items of respective layers to be transmitted is received from an upper layer, an encoding apparatus of a multiple-input multiple-output (MIMO) communication system selects a precoding matrix from among a plurality of precoding matrices stored in a storage by using the precoding information, and precodes the data items of the respective layers through simple operations consisting of at least one combination of addition, subtraction, selection, and inversion operations, in accordance with the kind of the selected precoding matrix and a precoding operation pattern.
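As a worked illustration, a two-layer precoder whose matrices contain only 0, ±1, and ±j entries can be applied with additions, subtractions, selections, and sign inversions instead of general matrix multiplication. The small codebook and operation patterns below are assumptions for illustration; they are not the codebook defined by the patent.

```python
import numpy as np

def precode(pmi, layer_symbols):
    """Sketch of precoding by simple operations.

    layer_symbols is a complex vector [x0, x1] holding the two layers' data
    items.  The operation pattern is chosen by the precoding information (pmi)
    received from the upper layer; the common 1/sqrt(2) scaling is omitted for
    clarity.
    """
    x0, x1 = layer_symbols
    if pmi == 0:        # matrix [[1, 1], [1, -1]]: addition and subtraction only
        return np.array([x0 + x1, x0 - x1])
    if pmi == 1:        # matrix [[1, 1], [1j, -1j]]: addition/subtraction plus a j rotation
        return np.array([x0 + x1, 1j * (x0 - x1)])
    if pmi == 2:        # matrix [[1, 0], [0, 1]]: pure selection, no arithmetic
        return np.array([x0, x1])
    raise ValueError("unknown precoding information")

if __name__ == "__main__":
    layers = np.array([1 + 2j, 3 - 1j])
    codebook = [np.array([[1, 1], [1, -1]]),
                np.array([[1, 1], [1j, -1j]]),
                np.eye(2)]
    for pmi, w in enumerate(codebook):
        # Compare the simple-operation result with an explicit matrix multiplication.
        print(pmi, precode(pmi, layers), w @ layers)
    # Note: multiplying by j is itself only a swap of real and imaginary parts
    # with one sign inversion, so pmi == 1 also avoids general multiplication.
```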