HYPER-PARAMETER SELECTION FOR DEEP CONVOLUTIONAL NETWORKS
    1. Invention Application (Pending, Published)

    Publication No.: US20160224903A1

    Publication Date: 2016-08-04

    Application No.: US14848296

    Filing Date: 2015-09-08

    CPC classification number: G06N99/005 G06N3/08 G06N3/082 G06N7/005

    Abstract: Hyper-parameters are selected for training a deep convolutional network by selecting a number of network architectures as part of a database. Each of the network architectures includes one or more local logistic regression layers and is trained to generate a corresponding validation error that is stored in the database. A threshold error for identifying a good set of network architectures and a bad set of network architectures may be estimated based on validation errors in the database. The method also includes choosing a next potential hyper-parameter, corresponding to a next network architecture, based on a metric that is a function of the good set of network architectures. The method further includes selecting the network architecture, from among the next network architectures, with the lowest validation error.

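    The procedure above (good/bad split by a threshold error, then a density-based metric over the good set) resembles a tree-structured-Parzen-estimator style search. A minimal illustrative sketch under that reading; the function names, the quantile `gamma`, and the kernel bandwidth are assumptions for illustration, not details from the patent:

    ```python
    import math

    def split_by_threshold(db, gamma=0.25):
        """Split (hyperparam, validation_error) records into good/bad sets,
        using a quantile of the observed errors as the threshold."""
        errors = sorted(e for _, e in db)
        threshold = errors[int(gamma * len(errors))]
        good = [h for h, e in db if e <= threshold]
        bad = [h for h, e in db if e > threshold]
        return good, bad

    def density(x, samples, bandwidth=0.1):
        """Crude Gaussian kernel density estimate over observed hyperparams."""
        if not samples:
            return 1e-12
        norm = len(samples) * bandwidth * math.sqrt(2 * math.pi)
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples) / norm

    def next_hyperparameter(db, candidates):
        """Choose the candidate where the good-set density dominates the bad-set density."""
        good, bad = split_by_threshold(db)
        return max(candidates, key=lambda h: density(h, good) / max(density(h, bad), 1e-12))
    ```

    Each chosen candidate would then be trained, its validation error appended to `db`, and the architecture with the lowest validation error kept at the end.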

    CUSTOMIZED CLASSIFIER OVER COMMON FEATURES
    2. Invention Application (Pending, Published)

    Publication No.: US20150324689A1

    Publication Date: 2015-11-12

    Application No.: US14483075

    Filing Date: 2014-09-10

    CPC classification number: G06N3/049 G06N3/0454 G06N3/08

    Abstract: A method of updating a set of classifiers includes applying a first set of classifiers to a first set of data. The method further includes requesting, from a remote device, a classifier update based on an output of the first set of classifiers or a performance measure of the application of the first set of classifiers.

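    The performance-measure path of this abstract can be sketched as a simple trigger; the names and the accuracy threshold are illustrative assumptions, not from the patent:

    ```python
    def maybe_request_update(outputs, labels, request_update, min_accuracy=0.8):
        """If the current classifier set's measured accuracy falls below a
        threshold, ask the remote device for a classifier update."""
        accuracy = sum(o == y for o, y in zip(outputs, labels)) / len(labels)
        if accuracy < min_accuracy:
            return request_update(accuracy)  # e.g. an RPC to the remote device
        return None
    ```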

    DIFFERENTIAL ENCODING IN NEURAL NETWORKS
    3. Invention Application (Pending, Published)

    Publication No.: US20150269481A1

    Publication Date: 2015-09-24

    Application No.: US14513155

    Filing Date: 2014-10-13

    CPC classification number: G06N3/0445 G06N3/0481 G06N3/049

    Abstract: Differential encoding in a neural network includes predicting an activation value for a neuron in the neural network based on at least one previous activation value for the neuron. The encoding further includes encoding a value based on a difference between the predicted activation value and an actual activation value for the neuron in the neural network.

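    With the simplest predictor (predict the previous activation unchanged), the scheme reduces to delta encoding of the activation sequence. A minimal sketch; function names are assumptions:

    ```python
    def encode_activations(activations, predict=lambda prev: prev):
        """Store the first activation, then only the difference between each
        actual activation and the value predicted from the previous one."""
        encoded = [activations[0]]
        for prev, curr in zip(activations, activations[1:]):
            encoded.append(curr - predict(prev))
        return encoded

    def decode_activations(encoded, predict=lambda prev: prev):
        """Invert the encoding by adding each stored difference back onto
        the prediction made from the previously decoded activation."""
        decoded = [encoded[0]]
        for delta in encoded[1:]:
            decoded.append(predict(decoded[-1]) + delta)
        return decoded
    ```

    When successive activations are strongly correlated, the differences cluster near zero and compress better than the raw values.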

    TEMPORAL SPIKE ENCODING FOR TEMPORAL LEARNING
    5. Invention Application (Pending, Published)

    Publication No.: US20150317557A1

    Publication Date: 2015-11-05

    Application No.: US14315531

    Filing Date: 2014-06-26

    CPC classification number: G06N3/049 G06N3/08

    Abstract: Certain aspects of the present disclosure support methods and apparatus for temporal spike encoding for temporal learning in an artificial nervous system. The temporal spike encoding for temporal learning can comprise obtaining sensor data being input into the artificial nervous system, processing the sensor data to generate feature vectors, converting element values of the feature vectors into delays, and causing at least one artificial neuron of the artificial nervous system to spike at times based on the delays.

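    Converting feature values into spike delays is a form of latency coding; one common convention (assumed here, not specified by the abstract) is that stronger features spike earlier. A sketch of the pipeline, with illustrative names:

    ```python
    def features_to_spike_delays(features, t_max=100.0):
        """Latency coding: map each feature value (normalized to [0, 1]) to a
        spike delay so that stronger features fire earlier."""
        return [t_max * (1.0 - f) for f in features]

    def sensor_to_spikes(sensor_data, extract_features, t_max=100.0):
        """Pipeline from the abstract: sensor data -> feature vector -> delays
        at which artificial neurons are caused to spike."""
        return features_to_spike_delays(extract_features(sensor_data), t_max)
    ```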

    CONVERSION OF NEURON TYPES TO HARDWARE
    6. Invention Application (Pending, Published)

    Publication No.: US20150269479A1

    Publication Date: 2015-09-24

    Application No.: US14286277

    Filing Date: 2014-05-23

    CPC classification number: G06N3/0481

    Abstract: Certain aspects of the present disclosure support a method and apparatus for conversion of neuron types to a hardware implementation of an artificial nervous system. According to certain aspects, at least one of synapse weights of the artificial nervous system, neuron input channel resistances associated with a neuron model for neuron instances of the artificial nervous system, or neuron input channel potentials associated with the neuron model can be normalized by one or more factors. A linear transformation can be determined for mapping of parameters of the neuron model. Then, the linear transformation can be applied to the parameters of the neuron model to obtain transformed parameters, and at least one of the inputs to the neuron instances or the dynamics of the neuron model may be updated based at least in part on the transformed parameters.

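    The core step, determining a linear transform that maps model parameters onto the range a hardware substrate can represent, can be sketched as follows. The function name and the min/max-based range choice are assumptions for illustration:

    ```python
    def map_params_to_hardware(params, hw_min, hw_max):
        """Determine a linear transform p -> p * scale + offset that maps the
        neuron-model parameter range onto the representable hardware range
        [hw_min, hw_max], then apply it to every parameter."""
        lo, hi = min(params), max(params)
        scale = (hw_max - hw_min) / (hi - lo) if hi > lo else 1.0
        offset = hw_min - lo * scale
        return [p * scale + offset for p in params], (scale, offset)
    ```

    The returned (scale, offset) pair is what would also be applied to neuron inputs or model dynamics so the transformed system behaves consistently.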

    NEURAL NETWORK ADAPTATION TO CURRENT COMPUTATIONAL RESOURCES
    7. Invention Application (Pending, Published)

    Publication No.: US20150248609A1

    Publication Date: 2015-09-03

    Application No.: US14268372

    Filing Date: 2014-05-02

    CPC classification number: G06N3/082 G06N3/0481

    Abstract: Methods and apparatus are provided for processing in an artificial nervous system. According to certain aspects, resolution of one or more functions performed by processing units of a neuron model may be reduced, based at least in part on availability of computational resources or a power target or budget. The reduction in resolution may be compensated for by adjusting one or more network weights.

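    One way the reduce-then-compensate idea could look in miniature: quantize a stage's outputs to fewer levels, then rescale a downstream weight so the mean response is preserved. This is an illustrative reading, not the patent's mechanism; all names and the [0, 1] value range are assumptions:

    ```python
    def reduce_resolution(values, levels):
        """Round values in [0, 1] to a limited number of evenly spaced levels,
        trading precision for cheaper computation."""
        step = 1.0 / (levels - 1)
        return [round(v / step) * step for v in values]

    def compensate_weight(weight, original, reduced):
        """Rescale a downstream weight so the mean response through the
        reduced-resolution stage matches the full-resolution mean."""
        total = sum(reduced)
        return weight * (sum(original) / total) if total else weight
    ```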

    BIT WIDTH SELECTION FOR FIXED POINT NEURAL NETWORKS
    8. Invention Application (Pending, Published)

    Publication No.: US20160328647A1

    Publication Date: 2016-11-10

    Application No.: US14936594

    Filing Date: 2015-11-09

    CPC classification number: G06N3/08 G06F17/11 G06N3/063 G06N3/10

    Abstract: A method for selecting bit widths for a fixed point machine learning model includes evaluating the sensitivity of model accuracy to bit widths at each computational stage of the model. The method also includes selecting bit widths for parameters and/or intermediate calculations in the computational stages of the model. The bit width for the parameters and the bit width for the intermediate calculations may be different. The selected bit width may be determined based on the sensitivity evaluation.

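    A per-stage sensitivity sweep of this kind can be sketched as: try candidate widths from narrow to wide and keep the smallest whose accuracy loss stays within a tolerance. The function names and tolerance value are illustrative assumptions:

    ```python
    def select_bit_width(evaluate_accuracy, candidate_widths, baseline_accuracy, tolerance=0.01):
        """Return the smallest candidate bit width whose accuracy loss relative
        to the floating-point baseline stays within the tolerance; fall back
        to the widest candidate if none qualifies."""
        for bits in sorted(candidate_widths):
            if baseline_accuracy - evaluate_accuracy(bits) <= tolerance:
                return bits
        return max(candidate_widths)
    ```

    Running the sweep separately for parameters and for intermediate calculations naturally yields the different bit widths the abstract allows for.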

    FIXED POINT NEURAL NETWORK BASED ON FLOATING POINT NEURAL NETWORK QUANTIZATION
    9. Invention Application (Pending, Published)

    Publication No.: US20160328646A1

    Publication Date: 2016-11-10

    Application No.: US14920099

    Filing Date: 2015-10-22

    CPC classification number: G06N3/08 G06K9/4628 G06N3/04 G06N3/06 G06N3/10

    Abstract: A method of quantizing a floating point machine learning network to obtain a fixed point machine learning network using a quantizer may include selecting at least one moment of an input distribution of the floating point machine learning network. The method may also include determining quantizer parameters for quantizing values of the floating point machine learning network based at least in part on the at least one selected moment of the input distribution of the floating point machine learning network to obtain corresponding values of the fixed point machine learning network.

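    Deriving quantizer parameters from moments of the input distribution can be sketched with the first two moments: clip the range at the mean plus or minus a few standard deviations, then divide it into uniform steps. The clipping factor `k` and the function names are assumptions for illustration:

    ```python
    import statistics

    def quantizer_from_moments(values, bits=8, k=3.0):
        """Derive uniform-quantizer parameters from the first two moments
        (mean, standard deviation) of the input distribution, clipping the
        range at mean +/- k standard deviations."""
        mu = statistics.fmean(values)
        sigma = statistics.pstdev(values)
        lo, hi = mu - k * sigma, mu + k * sigma
        step = (hi - lo) / (2 ** bits - 1)

        def quantize(x):
            # Clip to the moment-derived range, then snap to the nearest level.
            q = round((min(max(x, lo), hi) - lo) / step)
            return lo + q * step

        return quantize, step
    ```

    Using distribution moments rather than the raw min/max makes the fixed-point range robust to outliers in the floating point network's activations.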
