Abstract:
Disclosed are a deep neural network lightweight device based on batch normalization, and a method thereof. The deep neural network lightweight device based on batch normalization includes a memory that stores at least one piece of data and at least one processor that executes a network lightweight module. When executing the network lightweight module, the processor performs learning on an input neural network based on sparsity regularization to adaptively determine at least one parameter of the sparsity regularization, performs pruning on the learning result, and performs fine-tuning on the pruning result.
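The learn-prune-fine-tune pipeline above can be sketched as follows. The abstract does not specify the regularizer or the threshold rule, so the L1 penalty on batch-normalization scale factors and the percentile-based adaptive threshold below are illustrative assumptions, not the patent's actual method:

```python
# Sketch (assumed details): L1 sparsity on per-channel BN scales ("gammas"),
# with an adaptively chosen pruning threshold.

def sparsity_penalty(gammas, lam):
    """Assumed L1 sparsity regularization term added to the training loss."""
    return lam * sum(abs(g) for g in gammas)

def adaptive_threshold(gammas, prune_ratio):
    """Adaptively pick a pruning threshold from the gamma distribution."""
    ranked = sorted(abs(g) for g in gammas)
    k = int(len(ranked) * prune_ratio)
    return ranked[min(k, len(ranked) - 1)]

def prune_channels(gammas, threshold):
    """Keep indices of channels whose BN scale survives the threshold."""
    return [i for i, g in enumerate(gammas) if abs(g) >= threshold]

# Per-channel BN scales after sparsity-regularized training (toy values).
gammas = [0.91, 0.02, 0.45, 0.003, 0.60, 0.01]
loss_term = sparsity_penalty(gammas, lam=1e-4)
kept = prune_channels(gammas, adaptive_threshold(gammas, prune_ratio=0.5))
# Near-zero channels are removed; fine-tuning of the pruned network follows.
```

The intuition is that sparsity regularization drives unimportant channels' BN scales toward zero, so thresholding the scales identifies prunable channels.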
Abstract:
A method and system for extracting a visual descriptor using feature selection are provided. The system includes an image input unit configured to receive an image, a candidate feature point group detecting unit configured to detect a point having a local maximum or minimum of local-region filtering in scale-space images as being included in a candidate feature point group, a feature point selecting unit configured to calculate an importance for each candidate feature point depending on its characteristics and select the candidate feature point as a feature point when its importance is greater than a predetermined threshold value, a dominant orientation calculating unit configured to calculate a dominant orientation of the selected feature point, and a visual descriptor extracting unit configured to extract a patch for each feature point according to its scale, location, and dominant orientation, and extract a visual descriptor from the patch.
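The detection-then-selection stages above can be illustrated on a one-dimensional filter response. The importance measure (absolute filter response) and the threshold value are assumptions for illustration; the patent's actual importance criterion is not given in the abstract:

```python
# Toy sketch: candidate detection at local extrema, then importance-based
# selection. A real system operates on 2-D scale-space images.

def detect_candidates(responses):
    """Candidate feature points: local maxima or minima of the filtered signal."""
    cands = []
    for i in range(1, len(responses) - 1):
        prev, cur, nxt = responses[i - 1], responses[i], responses[i + 1]
        if (cur > prev and cur > nxt) or (cur < prev and cur < nxt):
            cands.append(i)
    return cands

def select_features(responses, candidates, threshold):
    """Keep candidates whose (assumed) importance exceeds the threshold."""
    return [i for i in candidates if abs(responses[i]) > threshold]

responses = [0.1, 0.9, 0.2, -0.05, 0.3, 0.02, 0.8, 0.1]
candidates = detect_candidates(responses)               # all local extrema
features = select_features(responses, candidates, threshold=0.5)
```

Selecting only high-importance candidates reduces the number of patches from which descriptors must be extracted, which is the point of the feature selection stage.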
Abstract:
Provided are a method and system for training a dynamic deep neural network. The method for training a dynamic deep neural network includes receiving an output of a last layer of the deep neural network and outputting a first loss, receiving an output of a routing module according to an input class of the deep neural network and outputting a second loss, calculating a third loss based on the first loss and the second loss, and updating a weight of the deep neural network by using the third loss.
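The loss combination and weight update can be sketched minimally. The weighting factor `alpha` and the plain gradient-descent step are assumptions; the abstract does not say how the first and second losses are combined into the third:

```python
# Sketch (assumed combination rule): third loss = task loss + alpha * routing loss.

def third_loss(first_loss, second_loss, alpha=0.5):
    """Combine the last-layer loss and the routing-module loss into one objective."""
    return first_loss + alpha * second_loss

def update_weight(weight, grad, lr=0.1):
    """One gradient-descent step on the combined (third) loss."""
    return weight - lr * grad

loss = third_loss(first_loss=0.8, second_loss=0.4)
w = update_weight(weight=2.0, grad=0.5)
```

Training on the combined loss lets a single backward pass update both the network's task behavior and its input-class routing behavior.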
Abstract:
An apparatus and method for searching a neural network architecture are disclosed. The apparatus may include an architecture searcher and an architecture evaluator. The architecture searcher may search for a topology between nodes included in a basic cell of a network, search for an operation to be applied between the nodes after searching for the topology, and determine the basic cell. The architecture evaluator may evaluate performance of the determined basic cell.
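The two-stage search (topology first, then operations) can be sketched as below. The edge/operation scores and the top-k selection rule are stand-ins for the patent's unspecified search criterion:

```python
# Sketch (assumed scoring): stage 1 keeps the highest-scoring node pairs as the
# cell topology; stage 2 assigns the best-scoring operation to each kept edge.

OPS = ["conv3x3", "conv5x5", "skip", "maxpool"]

def search_topology(edge_scores, keep=3):
    """Stage 1: keep the top-scoring edges between nodes of the basic cell."""
    edges = sorted(edge_scores, key=edge_scores.get, reverse=True)
    return sorted(edges[:keep])

def search_operations(topology, op_scores):
    """Stage 2: assign the best operation to each edge of the fixed topology."""
    return {e: max(OPS, key=lambda op: op_scores[(e, op)]) for e in topology}

# Toy scores over candidate edges of a 4-node cell.
edge_scores = {(0, 1): 0.9, (0, 2): 0.2, (1, 2): 0.7, (1, 3): 0.1, (2, 3): 0.8}
topology = search_topology(edge_scores, keep=3)
op_scores = {(e, op): 1.0 if op == "conv3x3" else 0.5
             for e in topology for op in OPS}
ops = search_operations(topology, op_scores)
# The determined cell (topology + ops) would then go to the architecture evaluator.
```

Decoupling the topology search from the operation search shrinks the search space relative to searching both jointly, which is the apparent motivation for the two-stage design.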