Abstract:
A method of reducing computational complexity for a fixed-point neural network operating in a system having a limited bit width in a multiplier-accumulator (MAC) includes reducing the number of bit shift operations when computing activations in the fixed-point neural network. The method also includes balancing the amount of quantization error against the overflow error when computing activations in the fixed-point neural network.
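The quantization/overflow trade-off described above can be illustrated with a short sketch. The helper below is a hypothetical example, not the disclosure's implementation: with a fixed total bit width, allocating more fractional bits shrinks the quantization step but narrows the representable range (raising saturation/overflow error), and vice versa; a fractional bit width can be chosen to minimize the combined error.

```python
import numpy as np

def quantize(x, total_bits=8, frac_bits=4):
    """Signed fixed-point quantization with saturation (overflow clipping)."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))        # most negative representable code
    hi = (1 << (total_bits - 1)) - 1     # most positive representable code
    q = np.clip(np.round(np.asarray(x, dtype=float) * scale), lo, hi)
    return q / scale

def total_error(x, total_bits, frac_bits):
    """Mean squared error combining quantization and saturation effects."""
    return float(np.mean((x - quantize(x, total_bits, frac_bits)) ** 2))

# More fractional bits -> smaller quantization step but earlier saturation;
# fewer -> wider range but coarser steps. Pick the format minimizing MSE.
rng = np.random.default_rng(0)
acts = rng.normal(0.0, 2.0, 10000)       # illustrative activation distribution
best_frac = min(range(8), key=lambda f: total_error(acts, 8, f))
```

For example, with 8 total bits and 4 fractional bits, values are clipped to roughly [-8, 7.94] in steps of 1/16; a heavier-tailed activation distribution would push `best_frac` lower.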
Abstract:
Certain aspects of the present disclosure support techniques for time synchronization of spiking neuron models that utilize multiple nodes. According to certain aspects, a neural model (e.g., of an artificial nervous system) may be implemented using a plurality of processing nodes, each processing node implementing a neuron model and communicating via the exchange of spike packets carrying spike-timing information for artificial neurons. A mechanism may be provided for maintaining relative spike-timing between the processing nodes. In some cases, a mechanism may also be provided to alleviate deadlock conditions between the multiple nodes.
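One way the spike-timing and deadlock-alleviation mechanisms above could be realized is sketched below. This is an assumption for illustration, not the disclosure's exact design: each node processes incoming spike packets in timestamp order, advancing only up to the minimum time reported by all peers, and silent peers send empty "null" packets so a neighbor with no spikes cannot stall the others (a conservative scheme in the spirit of Chandy-Misra-Bryant null messages).

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class SpikePacket:
    timestamp: int                    # model time step of the carried spikes
    source_node: int = field(compare=False)
    neuron_ids: list = field(compare=False, default_factory=list)  # empty => null packet

class ProcessingNode:
    """Consumes spike packets from peer nodes in timestamp order."""
    def __init__(self, node_id, peers):
        self.node_id = node_id
        self.queue = []                          # min-heap keyed on timestamp
        self.peer_clock = {p: -1 for p in peers} # latest time seen from each peer

    def receive(self, pkt):
        self.peer_clock[pkt.source_node] = max(
            self.peer_clock[pkt.source_node], pkt.timestamp)
        if pkt.neuron_ids:                       # null packets only advance the clock
            heapq.heappush(self.queue, pkt)

    def safe_time(self):
        """Latest time up to which relative spike order is guaranteed."""
        return min(self.peer_clock.values())

    def drain(self):
        """Pop all packets whose timing can no longer be preempted by a peer."""
        ready = []
        while self.queue and self.queue[0].timestamp <= self.safe_time():
            ready.append(heapq.heappop(self.queue))
        return ready

node = ProcessingNode(0, peers=[1, 2])
node.receive(SpikePacket(5, 1, [7]))
node.receive(SpikePacket(3, 2, [9]))
first = node.drain()                   # only t=3 is safe; node 2 has not passed t=3
node.receive(SpikePacket(6, 2, []))    # null packet from node 2 unblocks t=5
second = node.drain()
```

The null packet in the last step is the deadlock-alleviation piece: without it, the packet at t=5 would wait forever on a peer that simply has nothing to say.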