Abstract:
A method for selecting an artificial intelligence (AI) model in neural architecture search includes: measuring a scale of a receptive field for a plurality of neural network layers corresponding to each of a plurality of candidate AI models; determining a first score for a first group of neural network layers among the plurality of neural network layers based on the scale of the receptive field for the first group of neural network layers, the scale of the receptive field for each of the first group of neural network layers being smaller than a size of an object; determining a second score for a second group of neural network layers among the plurality of neural network layers based on the scale of the receptive field for the second group of neural network layers, the scale of the receptive field for each of the second group of neural network layers being greater than the size of the object; determining a third score for each of the plurality of candidate AI models as a function of the first score and the second score; and selecting, based on the third score, a candidate AI model among the plurality of candidate AI models for training and deployment, the selected candidate AI model having the highest third score among the third scores of the plurality of candidate AI models.
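For illustration only, the following Python sketch shows how receptive fields could be computed layer by layer and turned into the three scores described above. The receptive-field recurrence is standard, but the specific scoring rules, the combining function `combine`, and the helper names (`receptive_fields`, `model_score`, `select_model`) are assumptions, not the claimed method.

```python
def receptive_fields(layers):
    """Compute each layer's receptive field from (kernel, stride) pairs using the
    standard recurrence rf_out = rf_in + (kernel - 1) * jump, with jump *= stride."""
    rf, jump, fields = 1, 1, []
    for kernel, stride in layers:
        rf = rf + (kernel - 1) * jump
        jump *= stride
        fields.append(rf)
    return fields


def model_score(layers, object_size, combine=lambda s1, s2: s1 + s2):
    """Assumed scoring: a first score from layers whose receptive field is smaller
    than the object, a second score from layers whose receptive field is larger,
    and a third score produced by an unspecified combining function."""
    fields = receptive_fields(layers)
    first_score = sum(r / object_size for r in fields if r < object_size)   # assumed rule
    second_score = sum(object_size / r for r in fields if r > object_size)  # assumed rule
    return combine(first_score, second_score)


def select_model(candidates, object_size):
    """Return the candidate (a list of (kernel, stride) pairs) with the highest third score."""
    return max(candidates, key=lambda layers: model_score(layers, object_size))
```

A call such as `select_model([[(3, 1)] * 4, [(3, 2)] * 4], object_size=32)` would compare an unstrided and a strided stack under this illustrative scoring.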
Abstract:
According to an embodiment of the disclosure, a method may include providing, by an electronic device, a plurality of Pareto fronts based on at least two performance parameters. The method may include identifying, by the electronic device, an optimal Pareto front from among the plurality of Pareto fronts. The method may include iteratively providing, by the electronic device, a second AI model. The method may include identifying, by the electronic device, whether the second AI model belongs to the optimal Pareto front. The method may include identifying, by the electronic device, the at least two performance parameters corresponding to the second AI model based on identifying that the second AI model belongs to the optimal Pareto front. The method may include obtaining, by the electronic device, the second AI model based on identifying that the second AI model meets one or more predetermined performance parameters.
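As a rough sketch of the Pareto-front logic, the snippet below implements a standard non-dominated filter under the assumption that both performance parameters are "higher is better". The helper names (`dominates`, `pareto_front`, `accept_model`) and the acceptance test against fixed thresholds are illustrative, not the disclosed procedure.

```python
def dominates(a, b):
    """a dominates b if a is at least as good in every objective and strictly
    better in at least one (both objectives assumed higher-is-better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))


def pareto_front(points):
    """Return the non-dominated subset of a list of (param1, param2) tuples."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]


def accept_model(candidate_metrics, front, thresholds):
    """Hypothetical acceptance test: keep a candidate only if it lies on the
    current front and meets the predetermined performance thresholds."""
    on_front = not any(dominates(q, candidate_metrics) for q in front)
    meets_targets = all(m >= t for m, t in zip(candidate_metrics, thresholds))
    return on_front and meets_targets
```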
Abstract:
A method for mixed precision quantization of an artificial intelligence (AI) model by an electronic device is provided. The method includes performing, by the electronic device, perturbation in weights of each layer of a plurality of layers of the AI model a predefined number of times, determining, by the electronic device, a change in an output of each layer of the plurality of layers of the AI model based on the perturbation in the weights of each layer of the plurality of layers, determining, by the electronic device, a sensitivity metric for each layer of the plurality of layers of the AI model as a measure of the change in the output of each layer, assigning, by the electronic device, a bit-precision to each layer of the plurality of layers of the AI model based on the determined sensitivity metric, and performing, by the electronic device, the mixed precision quantization of the AI model using the bit-precision assigned to each layer of the plurality of layers of the AI model.
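The perturbation-based sensitivity idea can be sketched as follows. Here `layer_fn`, the noise scale, the trial count, and the quantile-based bit assignment are all hypothetical choices rather than the method's actual parameters.

```python
import numpy as np


def layer_sensitivity(layer_fn, weights, x, num_trials=8, noise_scale=1e-3):
    """Estimate a sensitivity metric for one layer: average change in the layer's
    output when its weights are randomly perturbed num_trials times.
    layer_fn(weights, x) -> output is a stand-in for the real layer."""
    baseline = layer_fn(weights, x)
    deltas = []
    for _ in range(num_trials):
        perturbed = weights + noise_scale * np.random.randn(*weights.shape)
        deltas.append(np.mean(np.abs(layer_fn(perturbed, x) - baseline)))
    return float(np.mean(deltas))


def assign_bits(sensitivities, bit_choices=(4, 8, 16)):
    """Hypothetical mapping from sensitivity to bit-width: more sensitive layers
    get higher precision; the quantile thresholds are purely illustrative."""
    lo, hi = np.quantile(sensitivities, [0.33, 0.66])
    bits = []
    for s in sensitivities:
        if s <= lo:
            bits.append(bit_choices[0])
        elif s <= hi:
            bits.append(bit_choices[1])
        else:
            bits.append(bit_choices[2])
    return bits
```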
Abstract:
A method and an electronic device for neuro-symbolic learning of an artificial intelligence (AI) model are provided. The method includes receiving input data including various contents, determining, from an output of the AI model, a predicted probability for each of the contents of the input data, determining a neural loss of the AI model by comparing the predicted probability with a predefined desired probability, determining a symbolic loss for the AI model by comparing the predicted probability with a predetermined undesired probability, determining weights of a plurality of layers of the AI model, and updating the weights of the plurality of layers of the AI model based on the neural loss and the symbolic loss.
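A minimal sketch of a combined loss of this kind is shown below, assuming cross-entropy-style forms for both terms and a weighting factor `alpha`; none of these specifics are given by the abstract.

```python
import numpy as np


def neural_loss(pred, desired):
    """Cross-entropy between predicted and desired probabilities (assumed form)."""
    return -float(np.sum(desired * np.log(pred + 1e-9)))


def symbolic_loss(pred, undesired):
    """Penalize probability mass placed on outcomes marked as undesired
    (assumed form: a log barrier pushing pred away from the undesired distribution)."""
    return -float(np.sum(undesired * np.log(1.0 - pred + 1e-9)))


def total_loss(pred, desired, undesired, alpha=0.5):
    """Weighted combination used to update the model's weights; the weighting
    factor alpha is an assumption, not specified in the abstract."""
    return neural_loss(pred, desired) + alpha * symbolic_loss(pred, undesired)
```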
Abstract:
Various embodiments of the disclosure provide a method for quantizing a Deep Neural Network (DNN) model in an electronic device. The method includes: estimating, by the electronic device, an activation range of each layer of the DNN model using self-generated data (e.g., a retro image, audio, or video) and/or a sensitive index of each layer of the DNN model; quantizing, by the electronic device, the DNN model based on the activation range and/or the sensitive index; and allocating, by the electronic device, a dynamic bit precision for each channel of each layer of the DNN model to quantize the DNN model.
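The calibration-and-quantization flow can be illustrated roughly as follows; the per-channel bit-allocation rule and all thresholds here are invented for the example and are not the disclosed scheme.

```python
import numpy as np


def activation_range(layer_outputs):
    """Estimate a layer's activation range (min, max) from outputs produced on
    self-generated calibration data."""
    return float(np.min(layer_outputs)), float(np.max(layer_outputs))


def per_channel_bits(channel_outputs, base_bits=8):
    """Hypothetical per-channel bit allocation: channels with a wide dynamic range
    get one extra bit, narrow ones one fewer (illustrative rule only)."""
    ranges = [np.ptp(c) for c in channel_outputs]  # peak-to-peak range per channel
    median = np.median(ranges)
    return [base_bits + 1 if r > 2 * median else base_bits - 1 if r < 0.5 * median else base_bits
            for r in ranges]


def quantize(x, lo, hi, bits):
    """Uniform affine quantization of x to the given range and bit width."""
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.clip(np.round((x - lo) / scale), 0, levels)
    return q * scale + lo
```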
Abstract:
A method, an apparatus, and a system for configuring a neural network across a plurality of heterogeneous processors are provided. The method includes creating a unified neural network profile for the plurality of processors; receiving at least one request to perform at least one task using the neural network; determining a type of the requested at least one task as one of an asynchronous task and a synchronous task; and parallelizing processing of the neural network across the plurality of processors to perform the requested at least one task, based on the type of the requested at least one task and the created unified neural network profile.
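The dispatch decision can be sketched as below, assuming a toy unified profile keyed by latency; the profile contents, the partitioning of the network, and the scheduling rules are placeholders, not the described system.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical unified profile: which processors are available and a rough cost
# for running a network partition on each (values are placeholders).
UNIFIED_PROFILE = {"CPU": {"latency_ms": 30}, "GPU": {"latency_ms": 8}, "NPU": {"latency_ms": 5}}


def run_partition(processor, partition, inputs):
    """Stand-in for dispatching one partition of the network to one processor."""
    return f"{processor} ran {partition} on {len(inputs)} inputs"


def execute(task_type, partitions, inputs):
    """Dispatch partitions by task type: asynchronous tasks are submitted to the
    processors in parallel; synchronous tasks run in order on the fastest one."""
    processors = sorted(UNIFIED_PROFILE, key=lambda p: UNIFIED_PROFILE[p]["latency_ms"])
    if task_type == "async":
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(run_partition, p, part, inputs)
                       for p, part in zip(processors, partitions)]
            return [f.result() for f in futures]
    return [run_partition(processors[0], part, inputs) for part in partitions]
```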
Abstract:
A method and a system for transmitting data from a first electronic device to a second electronic device using a human body as a signal transmission path are provided. The method includes detecting a first touch event on the first electronic device, the first touch event corresponding to the data. The method further includes receiving an indication of a second touch event that is detected on the second electronic device, the second touch event corresponding to a memory location in the second electronic device. The method further includes capacitively transmitting the data from the first electronic device to the memory location through the human body in response to detecting the first touch event and receiving the indication of the second touch event.
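Purely as an illustration of the control flow, the sketch below gates the body-channel transfer on both touch events being present; the `TouchEvent` data structure and the `body_channel_send` callback are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class TouchEvent:
    device_id: str
    payload: bytes = b""       # data associated with the first touch, if any
    memory_location: int = -1  # target location associated with the second touch


def maybe_transmit(first_touch, second_touch_indication, body_channel_send):
    """Only when the sending device has detected its own touch AND received the
    indication of the receiving device's touch does it push the data over the
    capacitive body channel to the indicated memory location."""
    if first_touch is None or second_touch_indication is None:
        return False
    body_channel_send(first_touch.payload, second_touch_indication.memory_location)
    return True
```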