Abstract:
Techniques related to implementing convolutional neural networks for object recognition are discussed. Such techniques may include generating a set of binary neural features via convolutional neural network layers based on input image data and applying a strong classifier to the set of binary neural features to generate an object label for the input image data.
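As a rough illustration of the pipeline this abstract describes, the following Python (NumPy) sketch binarizes convolutional feature maps and scores them with a boosted strong classifier. The thresholding scheme, the decision-stump form of the weak classifiers, and all names are assumptions for illustration, not the patent's actual design.

import numpy as np

def binarize_features(feature_maps, threshold=0.0):
    """Turn real-valued CNN activations into a flat vector of binary neural features."""
    return (feature_maps > threshold).astype(np.uint8).ravel()

def strong_classifier(binary_features, stumps):
    """AdaBoost-style strong classifier: a weighted vote of decision stumps.

    stumps: list of (feature_index, expected_bit, alpha) tuples.
    Returns +1 (object) or -1 (background).
    """
    score = sum(alpha * (1.0 if binary_features[i] == bit else -1.0)
                for i, bit, alpha in stumps)
    return 1 if score >= 0 else -1

# Toy usage: pretend these activations came from CNN layers on input image data.
rng = np.random.default_rng(0)
activations = rng.normal(size=(8, 4, 4))            # (channels, height, width)
features = binarize_features(activations)
stumps = [(3, 1, 0.7), (17, 0, 0.4), (42, 1, 0.9)]  # weak classifiers, learned offline
print("object" if strong_classifier(features, stumps) > 0 else "background")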
Abstract:
Stereo image reconstruction techniques are described. An image from a root viewpoint is translated into an image from another viewpoint, with homography fitting used to perform the translation between viewpoints. Inverse compositional image alignment is used to determine the homography matrix and to determine each pixel of the translated image.
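The warping half of this pipeline can be sketched in a few lines of Python (NumPy): once a homography H has been fitted (the inverse compositional alignment that estimates it is omitted here), each pixel of the translated image is determined by mapping its coordinates through H back into the root view. The nearest-neighbour sampling and image shapes are illustrative assumptions.

import numpy as np

def warp_with_homography(root_image, H, out_shape):
    """Translate an image from the root viewpoint using homography H.

    For each output pixel (x, y), the source location (x', y') ~ H @ (x, y, 1)
    in the root image is computed and its value copied.
    """
    h_out, w_out = out_shape
    out = np.zeros(out_shape, dtype=root_image.dtype)
    for y in range(h_out):
        for x in range(w_out):
            src = H @ np.array([x, y, 1.0])
            xs, ys = src[0] / src[2], src[1] / src[2]   # dehomogenize
            xi, yi = int(round(xs)), int(round(ys))     # nearest-neighbour sample
            if 0 <= yi < root_image.shape[0] and 0 <= xi < root_image.shape[1]:
                out[y, x] = root_image[yi, xi]
    return out

# Toy usage: a small horizontal shift expressed as a homography.
img = np.arange(25, dtype=float).reshape(5, 5)
H = np.array([[1.0, 0.0, 1.0],   # shift the source x-coordinate by +1
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(warp_with_homography(img, H, (5, 5)))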
Abstract:
Disclosed in some examples are various modifications to the shape regression technique for use in real-time applications, as well as methods, systems, and machine-readable media that utilize the resulting facial landmark tracking methods.
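For context, the underlying shape regression technique can be sketched in Python (NumPy) as a cascade that repeatedly refines a landmark estimate. The linear stage regressors and the toy feature indexing below are illustrative stand-ins and do not reflect the patent's specific real-time modifications.

import numpy as np

def track_landmarks(image_features, initial_shape, stages):
    """Cascaded shape regression: each stage nudges the landmark estimate.

    initial_shape: (num_landmarks, 2) array of (x, y) guesses.
    stages: list of (W, b) linear regressors mapping features to shape updates.
    """
    shape = initial_shape.copy()
    for W, b in stages:
        # Sample one feature per landmark at its current estimate (toy indexing).
        feats = np.array([image_features[int(y) % image_features.shape[0],
                                         int(x) % image_features.shape[1]]
                          for x, y in shape])
        shape = shape + (W @ feats).reshape(-1, 2) + b   # additive shape update
    return shape

# Toy usage with three landmarks and a three-stage cascade.
rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
init = np.array([[10.0, 10.0], [20.0, 30.0], [40.0, 15.0]])
L = init.shape[0]
stages = [(rng.normal(scale=0.1, size=(2 * L, L)), np.zeros((L, 2)))
          for _ in range(3)]
print(track_landmarks(img, init, stages))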
Abstract:
Methods, apparatus, systems and articles of manufacture for loss-error-aware quantization of a low-bit neural network are disclosed. An example apparatus includes a network weight partitioner to partition unquantized network weights of a first network model into a first group to be quantized and a second group to be retrained. The example apparatus includes a loss calculator to process network weights to calculate a first loss. The example apparatus includes a weight quantizer to quantize the first group of network weights to generate low-bit second network weights. In the example apparatus, the loss calculator is to determine a difference between the first loss and a second loss. The example apparatus includes a weight updater to update the second group of network weights based on the difference. The example apparatus includes a network model deployer to deploy a low-bit network model including the low-bit second network weights.
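Read as pseudocode, one iteration of the apparatus's flow might look like the Python (NumPy) sketch below: partition the weights, quantize one group to a low-bit grid, measure the loss difference, and update the retained group based on it. The magnitude-based partition, the 2-bit level set, and the finite-difference update standing in for backpropagation are all assumptions for illustration.

import numpy as np

def quantize_low_bit(w, levels=np.array([-1.0, -0.5, 0.5, 1.0])):
    """Snap each weight to the nearest value on a 2-bit grid."""
    idx = np.abs(w[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]

def lea_quantization_step(weights, loss_fn, lr=0.5):
    # Partition: the largest-magnitude half is quantized, the rest retrained.
    order = np.argsort(-np.abs(weights))
    q_idx, r_idx = order[: len(weights) // 2], order[len(weights) // 2 :]

    first_loss = loss_fn(weights)                  # loss before quantization

    model = weights.copy()
    model[q_idx] = quantize_low_bit(model[q_idx])  # low-bit second weights
    second_loss = loss_fn(model)                   # loss after quantization
    diff = second_loss - first_loss                # loss error from quantizing

    # Update the retained group based on the difference; a finite-difference
    # gradient stands in for backpropagation in this sketch.
    eps = 1e-4
    for i in r_idx:
        probe = model.copy()
        probe[i] += eps
        grad_i = (loss_fn(probe) - second_loss) / eps
        model[i] -= lr * abs(diff) * grad_i
    return model, diff

# Toy usage: quantize weights that approximate a fixed target vector.
rng = np.random.default_rng(2)
target = rng.normal(size=10)
w = target + rng.normal(scale=0.1, size=10)
mse = lambda v: float(np.mean((v - target) ** 2))
w_low_bit, loss_gap = lea_quantization_step(w, mse)
print(round(loss_gap, 4))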
Abstract:
Techniques related to compressing a pre-trained dense deep neural network to a sparsely connected deep neural network for efficient implementation are discussed. Such techniques may include iteratively pruning and splicing available connections between adjacent layers of the deep neural network and updating weights corresponding to both currently disconnected and currently connected connections between the adjacent layers.
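A minimal Python (NumPy) sketch of one prune-and-splice iteration, assuming a binary mask per connection and two magnitude thresholds (both illustrative choices): weights for currently disconnected and currently connected connections alike keep receiving updates, so a pruned connection can regain importance and be spliced back.

import numpy as np

def prune_splice_step(weights, mask, grad, lr=0.01,
                      prune_thresh=0.05, splice_thresh=0.10):
    # Update every weight, including currently disconnected ones, so a pruned
    # connection can recover importance and be spliced back later.
    weights = weights - lr * grad

    mag = np.abs(weights)
    mask = mask.copy()
    mask[mag < prune_thresh] = 0          # prune weak connections
    mask[mag > splice_thresh] = 1         # splice strong ones back in
    return weights, mask

# Toy usage on a 4x4 weight matrix between two adjacent layers.
rng = np.random.default_rng(3)
W = rng.normal(scale=0.2, size=(4, 4))
M = np.ones_like(W)
g = rng.normal(scale=0.1, size=(4, 4))
W, M = prune_splice_step(W, M, g)
effective = W * M                          # sparse weights used at inference
print(M)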
Abstract:
A mechanism is described for facilitating slimming of neural networks in machine learning environments. A method includes learning a first neural network associated with machine learning processes to be performed by a processor of a computing device, where learning includes analyzing a plurality of channels associated with one or more layers of the first neural network. The method may further include computing a plurality of scaling factors to be associated with the plurality of channels such that each channel is assigned a scaling factor, wherein each scaling factor is to indicate the relevance of a corresponding channel within the first neural network. The method may further include pruning the first neural network into a second neural network by removing one or more channels of the plurality of channels having low relevance, as indicated by one or more scaling factors of the plurality of scaling factors assigned to the one or more channels.
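The pruning step can be sketched in Python (NumPy) as below, assuming the scaling factors are available as a per-channel vector (for example, batch-norm scales) and using an illustrative keep ratio in place of a learned relevance threshold.

import numpy as np

def slim_channels(conv_weights, scaling_factors, keep_ratio=0.5):
    """Prune low-relevance output channels of a convolution layer.

    conv_weights: (out_channels, in_channels, kH, kW) weights of the first network.
    scaling_factors: (out_channels,) per-channel relevance scores.
    Returns the slimmed weights and the indices of surviving channels.
    """
    n_keep = max(1, int(len(scaling_factors) * keep_ratio))
    keep = np.sort(np.argsort(-np.abs(scaling_factors))[:n_keep])
    return conv_weights[keep], keep

# Toy usage: slim an 8-channel layer down to its 4 most relevant channels.
rng = np.random.default_rng(4)
W = rng.normal(size=(8, 3, 3, 3))          # first neural network's conv layer
gamma = rng.normal(size=8)                 # per-channel scaling factors
W_slim, kept = slim_channels(W, gamma)
print(W_slim.shape, kept)                  # second, pruned network's layer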
Abstract:
Technology to conduct image sequence/video analysis can include a processor and a memory coupled to the processor, the memory storing a neural network that comprises a plurality of convolution layers and a plurality of normalization layers arranged as a relay structure, wherein each normalization layer is coupled to and follows a respective one of the plurality of convolution layers. In the relay structure, the normalization layer for a layer (k) is coupled to and follows the normalization layer for the preceding layer (k−1). The normalization layer for the layer (k) is coupled to the normalization layer for the preceding layer (k−1) via a hidden state signal and a cell state signal, each generated by the normalization layer for the preceding layer (k−1). Each normalization layer (k) can include a meta-gating unit (MGU) structure.
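A minimal Python (NumPy) sketch of the relay, assuming a single-gate MGU-style unit (an illustrative simplification) in which normalization layer (k) consumes the hidden state and cell state signals emitted by layer (k-1); the gating form and weight shapes are assumptions, not the patent's design.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relay_norm_layer(x, h_prev, c_prev, Wf, Uf):
    """One normalization layer (k) receiving relay state from layer (k-1)."""
    # Standard feature normalization of the convolution output.
    x_norm = (x - x.mean()) / (x.std() + 1e-5)
    # Meta-gating: one gate mixes the relayed cell state with new content.
    f = sigmoid(Wf @ x_norm + Uf @ h_prev)        # forget/update gate
    c = f * c_prev + (1.0 - f) * np.tanh(x_norm)  # new cell state signal
    h = f * np.tanh(c)                            # new hidden state signal
    return x_norm * (1.0 + h), h, c               # modulated output + relay state

# Toy relay across three convolution layers' outputs.
rng = np.random.default_rng(5)
h, c = np.zeros(6), np.zeros(6)
for k in range(3):
    conv_out = rng.normal(size=6)                 # stand-in for conv layer k's output
    Wf, Uf = rng.normal(size=(6, 6)), rng.normal(size=(6, 6))
    out, h, c = relay_norm_layer(conv_out, h, c, Wf, Uf)
print(out.shape)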
Abstract:
Embodiments are directed to a composite binary decomposition network. An embodiment of a computer-readable storage medium includes executable computer program instructions for transforming a pre-trained first neural network into a binary neural network by processing layers of the first neural network in a composite binary decomposition process, where the first neural network has floating point values representing weights of its various layers. The composite binary decomposition process includes a composite operation to expand real matrices or tensors into a plurality of binary matrices or tensors, and a decompose operation to decompose one or more of the plurality of binary matrices or tensors into multiple lower-rank binary matrices or tensors.
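The composite operation can be sketched in Python (NumPy) as a bit-plane expansion: the real matrix is approximated as a binary sign matrix times a weighted sum of {0, 1} binary matrices, which the decompose operation would then factor into lower-rank binary matrices (not shown). The fixed-point encoding below is an illustrative assumption.

import numpy as np

def composite_expand(W, num_bits=4):
    """Expand real matrix W into binary matrices: W ~ S * sum_i scale_i * B_i."""
    S = np.where(W >= 0, 1.0, -1.0)          # {-1, +1} binary sign matrix
    scale = np.abs(W).max()                  # assumes W is not all zero
    residual = np.abs(W) / scale             # magnitudes normalized to [0, 1]
    planes, scales = [], []
    for i in range(1, num_bits + 1):
        coeff = 2.0 ** (-i)
        B = (residual >= coeff).astype(np.float64)   # {0, 1} binary bit-plane
        residual -= coeff * B
        planes.append(B)
        scales.append(scale * coeff)
    return S, planes, scales

# Toy usage: expand and reconstruct, checking the bounded approximation error.
rng = np.random.default_rng(6)
W = rng.normal(size=(4, 4))
S, planes, scales = composite_expand(W)
W_hat = S * sum(s * B for s, B in zip(scales, planes))
print(float(np.abs(W - W_hat).max()))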
Abstract:
Systems, apparatuses and methods may provide for conducting an importance measurement of a plurality of parameters in a trained neural network and setting a subset of the plurality of parameters to zero based on the importance measurement. Additionally, the pruned neural network may be re-trained. In one example, conducting the importance measurement includes comparing two or more parameter values that contain covariance matrix information.
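A minimal Python (NumPy) sketch of the measure-and-prune step, assuming a linear layer whose importance score multiplies each weight's magnitude by the input variance taken from the covariance matrix; the score and the 50% pruning rate are illustrative, and the re-training step is left to the surrounding training loop.

import numpy as np

def prune_by_importance(weights, inputs, prune_frac=0.5):
    """Zero out the least-important parameters of a linear layer.

    weights: (n,) layer weights; inputs: (samples, n) input activations.
    """
    cov = np.cov(inputs, rowvar=False)               # covariance matrix information
    importance = np.abs(weights) * np.sqrt(np.diag(cov))
    cutoff = np.quantile(importance, prune_frac)
    pruned = np.where(importance >= cutoff, weights, 0.0)
    return pruned, importance

# Toy usage: inputs with uneven variance change which weights survive.
rng = np.random.default_rng(7)
w = rng.normal(size=6)
X = rng.normal(size=(100, 6)) * np.array([1, 5, 1, 5, 1, 5])
w_pruned, scores = prune_by_importance(w, X)
print(w_pruned)     # least-important half set to zero; re-train afterwards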