Abstract:
A method for classifying an object includes applying multiple confidence values to multiple objects. The method also includes determining a metric based on the multiple confidence values. The method further includes determining a classification of a first object from the multiple objects based on a knowledge graph when the metric is above a threshold.
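A minimal sketch of the flow above, assuming the metric is the mean of the confidence values and that the knowledge graph maps context objects to candidate classes (both are assumptions; the abstract fixes neither choice):

```python
def classify_first_object(confidences, labels, knowledge_graph, threshold):
    # Metric over the multiple confidence values; the mean is an assumption.
    metric = sum(confidences) / len(confidences)
    if metric > threshold:
        # Let the other detected objects vote for the first object's class
        # via the knowledge graph.
        votes = {}
        for obj in labels[1:]:
            for cls in knowledge_graph.get(obj, []):
                votes[cls] = votes.get(cls, 0) + 1
        if votes:
            return max(votes, key=votes.get)
    return labels[0]  # fall back to the initial label

# Hypothetical example: the context objects suggest "computer mouse"
# rather than the initial "rodent" label.
kg = {"keyboard": ["computer mouse"], "monitor": ["computer mouse"]}
result = classify_first_object([0.9, 0.8, 0.85],
                               ["rodent", "keyboard", "monitor"], kg, 0.5)
```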
Abstract:
The balance of training data between classes for a machine learning model is modified. Adjustments are made at the gradient stage, where selective backpropagation is utilized to modify a cost function to adjust or selectively apply the gradient based on the class example frequency in the data sets. The factor for modifying the gradient may be determined based on a ratio of the number of examples of the class with the fewest members to the number of examples of a present class. The gradient associated with the present class is modified based on the factor determined above.
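The ratio-based factor can be sketched in NumPy as follows (the mini-batch, class counts, and gradient values are illustrative, not from the abstract):

```python
import numpy as np

def gradient_scale_factors(labels, class_counts):
    """Per-example scale factor: the count of the rarest class divided by
    the count of each example's own class (the ratio described above)."""
    min_count = min(class_counts.values())
    return np.array([min_count / class_counts[y] for y in labels], dtype=float)

# Hypothetical data set: class 0 has 100 training examples, class 1 has 10.
counts = {0: 100, 1: 10}
labels = [0, 0, 1]
scales = gradient_scale_factors(labels, counts)

# The raw per-example loss gradients are scaled element-wise before
# backpropagation, damping the over-represented class.
raw_grads = np.array([0.5, -0.2, 0.3])
adjusted = raw_grads * scales
```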
Abstract:
A method for selecting a reduced number of model neurons in a neural network includes generating a first sparse set of non-zero decoding vectors. Each decoding vector is associated with a synapse between a first neuron layer and a second neuron layer. The method further includes implementing the neural network only with selected model neurons in the first neuron layer associated with the non-zero decoding vectors.
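A sketch of the selection step, assuming the sparse decoding matrix has already been obtained (e.g., from an L1-regularized solve, which the abstract does not specify):

```python
import numpy as np

def select_model_neurons(decoders, tol=1e-6):
    """Given a (num_neurons x num_outputs) decoding matrix, keep only the
    neurons whose decoding vector has at least one non-zero entry."""
    nonzero = np.abs(decoders).max(axis=1) > tol
    return np.where(nonzero)[0]

# Hypothetical decoders: neurons 1 and 3 have all-zero decoding vectors,
# so only neurons 0 and 2 are used when implementing the network.
D = np.array([[0.2,  0.0],
              [0.0,  0.0],
              [0.0, -0.5],
              [0.0,  0.0]])
kept = select_model_neurons(D)
```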
Abstract:
Methods and apparatus are provided for implementing spike-timing dependent plasticity (STDP) using windowing of spikes. One example method for operating an artificial nervous system generally includes recording spike times for a first artificial neuron, recording spike times for a second artificial neuron coupled to the first artificial neuron via a synapse, processing spikes for the second artificial neuron according to a window based at least in part on the spike times for the first artificial neuron, and updating a parameter (e.g., a weight or a delay) of the synapse based on the processing.
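A sketch of the windowed weight update, assuming an exponential pair-based STDP rule; the window length, learning rates, and time constant below are illustrative assumptions, not values from the abstract:

```python
import math

def stdp_update(pre_spikes, post_spikes, weight,
                window=20.0, a_plus=0.01, a_minus=0.012, tau=10.0):
    """Pair each pre-synaptic spike with the post-synaptic spikes that fall
    inside a +/- `window` interval and apply an exponential STDP rule."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if abs(dt) > window:
                continue  # spike pair ignored outside the window
            if dt > 0:    # post fires after pre: potentiate
                weight += a_plus * math.exp(-dt / tau)
            elif dt < 0:  # post fires before pre: depress
                weight -= a_minus * math.exp(dt / tau)
    return weight
```

The same windowing applies symmetrically when a delay rather than a weight is the synaptic parameter being updated.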
Abstract:
A method of training for image classification includes labelling a crop from an image that includes an object of interest. The crop may be labelled with an indication of whether the object of interest is framed, partially framed, or not present in the crop. The method may also include assigning a fully framed class to the labelled crop if the object of interest is fully framed. A labelled crop may be assigned a partially framed class if the object of interest is partially framed. A background class may be assigned to a labelled crop if the object of interest is not present in the crop.
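The three-way labelling can be sketched as a function of how much of the object of interest falls inside the crop; the overlap cutoffs below are assumptions, not from the abstract:

```python
def label_crop(overlap_fraction, full_cutoff=0.95, min_cutoff=0.2):
    """Assign one of the three classes based on the fraction of the
    object of interest contained in the crop."""
    if overlap_fraction >= full_cutoff:
        return "fully_framed"
    if overlap_fraction >= min_cutoff:
        return "partially_framed"
    return "background"  # object of interest not present in the crop
```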
Abstract:
Multi-label classification is improved by determining thresholds and/or scale factors. Selecting thresholds for multi-label classification includes sorting a set of label scores associated with a first label to create an ordered list. Precision and recall values are calculated corresponding to a set of candidate thresholds from the score values. The threshold is selected from the candidate thresholds for the first label based on target precision values or recall values. A scale factor is also selected for an activation function for multi-label classification, where a metric of scores within a range is calculated. The scale factor is adjusted when the metric of scores is not within the range.
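The per-label threshold selection can be sketched as a single pass over the ordered list of scores. This sketch targets a precision floor only (the abstract also allows recall targets) and does not handle tied scores:

```python
def select_threshold(scores, labels, target_precision):
    """Pick the lowest candidate threshold for one label whose precision
    still meets the target (lower thresholds give higher recall)."""
    # Sort (score, label) pairs so each prefix of the list corresponds
    # to thresholding at that prefix's lowest score.
    pairs = sorted(zip(scores, labels), reverse=True)
    tp = fp = 0
    best = None
    for score, positive in pairs:
        if positive:
            tp += 1
        else:
            fp += 1
        # Precision if the threshold were set at this candidate score.
        if tp / (tp + fp) >= target_precision:
            best = score
    return best

# Hypothetical scores for one label; True marks a positive example.
t = select_threshold([0.9, 0.8, 0.7, 0.4], [True, True, False, True], 0.6)
```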
Abstract:
A method of online training of a classifier includes determining a distance from one or more feature vectors of an object to a first predetermined decision boundary established during offline training for the classifier. The method also includes updating a decision rule as a function of the distance. The method further includes classifying a future example based on the updated decision rule.
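One way to realize this is to keep the offline boundary fixed and shift an offset on top of it; the linear update rule below is a hypothetical illustration, not the rule from the abstract:

```python
class OnlineClassifier:
    """Linear classifier whose offline boundary is w.x + b = 0; the online
    decision rule adds an offset updated from each example's signed
    distance to that boundary."""

    def __init__(self, w, b, rate=0.1):
        self.w, self.b = w, b
        self.offset, self.rate = 0.0, rate

    def distance(self, x):
        # Signed distance to the offline decision boundary.
        norm = sum(wi * wi for wi in self.w) ** 0.5
        return (sum(wi * xi for wi, xi in zip(self.w, x)) + self.b) / norm

    def classify(self, x):
        return 1 if self.distance(x) > self.offset else -1

    def update(self, x, label):
        # Shift the offset a fraction of the way toward a
        # misclassified example's distance.
        d = self.distance(x)
        if self.classify(x) != label:
            self.offset += self.rate * (d - self.offset)

clf = OnlineClassifier([1.0], 0.0)
clf.update([0.5], -1)  # negative example on the positive side of the boundary
```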
Abstract:
Compressing a machine learning network, such as a neural network, includes replacing one layer in the neural network with compressed layers to produce the compressed network. The compressed network may be fine-tuned by updating weight values in the compressed layer(s).
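Low-rank factorization is one common way to realize the layer replacement, though the abstract does not fix the compression scheme; a truncated-SVD sketch in NumPy:

```python
import numpy as np

def compress_layer(W, rank):
    """Replace one dense layer's (out x in) weight matrix W with two
    smaller factors A (out x rank) and B (rank x in) via truncated SVD,
    so that A @ B approximates W."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
A, B = compress_layer(W, rank=16)
# Parameter count drops from 64*128 = 8192 to 64*16 + 16*128 = 3072.
```

The two factors act as a pair of smaller layers whose weight values can then be fine-tuned to recover accuracy.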
Abstract:
A method of generating a classifier model includes distributing a common feature model to two or more users. Multiple classifiers are trained on top of the common feature model. The method further includes distributing a first classifier of the multiple classifiers to a first user and a second classifier of the multiple classifiers to a second user.
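A minimal sketch of the architecture: a shared feature model with per-user classifiers trained on top of it. The feature map and the weights below are stand-ins, not values from the abstract:

```python
def features(x):
    """Common feature model distributed to all users (a stand-in for a
    trained feature extractor)."""
    return [x, x * x]

def make_classifier(weights):
    """A classifier trained on top of the common feature model; each user
    receives their own set of weights."""
    def classify(x):
        score = sum(w * f for w, f in zip(weights, features(x)))
        return 1 if score > 0 else -1
    return classify

user_a = make_classifier([1.0, -0.5])   # first classifier, first user
user_b = make_classifier([-1.0, 0.2])   # second classifier, second user
```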
Abstract:
A method of learning a model includes receiving model updates from one or more users. The method also includes computing an updated model based on a previous model and the model updates. The method further includes transmitting data related to a subset of the updated model to the user(s) based on the updated model.
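A sketch of the update step, assuming the updated model averages the user updates and the transmitted subset is chosen by magnitude of change (both choices are assumptions; the abstract fixes neither):

```python
def aggregate_updates(previous, updates):
    """Updated model = previous model plus the mean of the user updates."""
    n = len(updates)
    return [p + sum(u[i] for u in updates) / n
            for i, p in enumerate(previous)]

def most_changed(previous, new, k):
    """Indices of the k parameters that moved the most; a hypothetical
    rule for picking the subset to transmit back to the user(s)."""
    order = sorted(range(len(new)),
                   key=lambda i: abs(new[i] - previous[i]), reverse=True)
    return order[:k]

# Hypothetical two-parameter model with updates from two users.
prev = [1.0, 2.0]
new = aggregate_updates(prev, [[0.2, 0.0], [0.0, 0.4]])
subset = most_changed(prev, new, k=1)
```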