SIGMA-DELTA POSITION DERIVATIVE NETWORKS
    1.
    Invention Application

    Publication Number: US20180336469A1

    Publication Date: 2018-11-22

    Application Number: US15705161

    Application Date: 2017-09-14

    CPC classification number: G06N3/084 G06N3/049 G06N3/063

    Abstract: A method for processing temporally redundant data in an artificial neural network (ANN) includes encoding an input signal, received at an initial layer of the ANN, into an encoded signal. The encoded signal comprises the input signal and a rate of change of the input signal. The method also includes quantizing the encoded signal into integer values and computing an activation signal of a neuron in a next layer of the ANN based on the quantized encoded signal. The method further includes computing an activation signal of a neuron at each layer subsequent to the next layer to compute a full forward pass of the ANN. The method also includes back propagating approximated gradients and updating parameters of the ANN based on an approximate derivative of a loss with respect to the activation signal.
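The encoding step the abstract describes (input plus rate of change, quantized to integers) can be sketched as a sigma-delta quantizer that carries a running residual so no information is lost to rounding. This is a minimal illustrative sketch, not the patent's exact scheme; the names `phi`, `prev_q`, and the `scale` value are assumptions.

```python
import numpy as np

class SigmaDeltaQuantizer:
    """Quantize a signal's temporal evolution into integer events.

    Illustrative sketch of the abstract's encoding step: `phi`
    accumulates the (scaled) input, and each call emits the integer
    change since the last call, so the cumulative sum of emitted
    integers tracks the cumulative input without drift.
    """

    def __init__(self, scale=64.0):
        self.scale = scale   # assumed fixed-point scale factor
        self.phi = 0.0       # accumulated, not-yet-emitted signal
        self.prev_q = 0      # last emitted running total

    def __call__(self, x):
        self.phi += x * self.scale
        q = int(np.floor(self.phi))
        delta = q - self.prev_q   # integer rate-of-change event
        self.prev_q = q
        return delta
```

Because only the integer deltas are transmitted, a downstream layer that sums them recovers the quantized input exactly, which is what makes the scheme attractive for temporally redundant data.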

    SPIKING MULTI-LAYER PERCEPTRON
    2.
    Invention Application

    Publication Number: US20170228646A1

    Publication Date: 2017-08-10

    Application Number: US15252151

    Application Date: 2016-08-30

    CPC classification number: G06N3/084 G06F11/0721 G06F11/079 G06N3/049

    Abstract: A method of training a neural network with back propagation includes generating error events representing a gradient of a cost function for the neural network. The error events may be generated based on a forward pass through the neural network resulting from input events, weights of the neural network and events from a target signal. The method further includes updating the weights of the neural network based on the error events.
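The generation of error events from the gradient of a cost function could look like the following sketch, which quantizes the output error of a squared-error cost into signed events and applies an event-driven weight update. The squared-error cost, the `threshold` value, and the helper names are assumptions for illustration only.

```python
import numpy as np

def error_events(output, target, threshold=0.1):
    """Turn a dense error gradient into sparse signed error events.

    For the cost 0.5 * ||output - target||^2 the gradient w.r.t. the
    output is simply (output - target); values beyond +/-threshold
    become +1 / -1 events, the rest stay silent.
    """
    grad = output - target
    events = np.zeros_like(grad, dtype=np.int64)
    events[grad > threshold] = 1    # positive error event
    events[grad < -threshold] = -1  # negative error event
    return events

def update_weights(w, pre_events, err_events, lr=0.01):
    """Event-driven update: only rows with presynaptic events change."""
    return w - lr * np.outer(pre_events, err_events)
```

Since the outer product is zero wherever either event vector is silent, the update touches only the synapses between active inputs and active error units, which is the sparsity benefit of event-based backpropagation.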

    TEMPORAL DIFFERENCE ESTIMATION IN AN ARTIFICIAL NEURAL NETWORK

    Publication Number: US20180121791A1

    Publication Date: 2018-05-03

    Application Number: US15590609

    Application Date: 2017-05-09

    CPC classification number: G06N3/049

    Abstract: A method of computation in a deep neural network includes discretizing input signals and computing a temporal difference of the discrete input signals to produce a discretized temporal difference. The method also includes applying weights of a first layer of the deep neural network to the discretized temporal difference to create an output of a weight matrix. The output of the weight matrix is temporally summed with a previous output of the weight matrix. An activation function is applied to the temporally summed output to create a next input signal to a next layer of the deep neural network.
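The per-layer pipeline in the abstract (discretize, difference, weight, temporally sum, activate) can be sketched for one layer as follows. The uniform discretization step, ReLU as the activation function, and the accumulator layout are assumptions, not the patent's specified choices.

```python
import numpy as np

class TemporalDifferenceLayer:
    """One layer of the temporal-difference computation in the abstract.

    Sketch under assumptions: inputs are discretized to multiples of
    `step`, the layer multiplies only the temporal *difference* by its
    weight matrix, and a running accumulator holds the temporal sum.
    """

    def __init__(self, weights, step=0.125):
        self.w = weights
        self.step = step
        self.prev_disc = np.zeros(weights.shape[0])
        self.acc = np.zeros(weights.shape[1])  # temporal sum of weighted diffs

    def __call__(self, x):
        disc = np.round(x / self.step) * self.step  # discretize input
        diff = disc - self.prev_disc                # temporal difference
        self.prev_disc = disc
        self.acc += diff @ self.w                   # sum with previous output
        return np.maximum(self.acc, 0.0)            # activation for next layer
```

Because the temporal differences telescope, the accumulator always equals the latest discretized input times the weights, so the layer reproduces a conventional dense layer's output while only ever multiplying the (typically sparse) change signal.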
