SEQUENTIAL IMAGE SAMPLING AND STORAGE OF FINE-TUNED FEATURES
    1.
    Invention application
    Status: pending (published)

    Publication No.: US20160283864A1

    Publication Date: 2016-09-29

    Application No.: US14845236

    Filing Date: 2015-09-03

    Abstract: Feature extraction includes determining a reference model for feature extraction and fine-tuning the reference model for different tasks. The method also includes storing a set of weight differences calculated during the fine-tuning. Each set may correspond to a different task.

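The abstract above can be illustrated with a minimal sketch: keep one reference model and, for each task, store only the weight differences produced by fine-tuning, reconstructing task-specific weights on demand. The class and method names below (`DeltaStore`, `store_task`, `weights_for`) are hypothetical illustrations, not from the application.

```python
import numpy as np

class DeltaStore:
    """Store per-task weight differences against a shared reference model."""

    def __init__(self, reference_weights):
        # reference_weights: dict mapping layer name -> ndarray
        self.reference = {k: v.copy() for k, v in reference_weights.items()}
        self.deltas = {}  # task name -> {layer name: weight difference}

    def store_task(self, task, fine_tuned_weights):
        # Store only the difference from the reference model,
        # one set of differences per task.
        self.deltas[task] = {
            k: fine_tuned_weights[k] - self.reference[k]
            for k in self.reference
        }

    def weights_for(self, task):
        # Reconstruct task-specific weights: reference + stored delta.
        return {
            k: self.reference[k] + self.deltas[task][k]
            for k in self.reference
        }
```

Storing deltas rather than full fine-tuned models trades a small reconstruction cost for much lower storage when many tasks share one reference.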

BLINK AND AVERTED GAZE AVOIDANCE IN PHOTOGRAPHIC IMAGES
    2.
    Invention application
    Status: granted

    Publication No.: US20150256741A1

    Publication Date: 2015-09-10

    Application No.: US14520710

    Filing Date: 2014-10-22

    CPC classification number: H04N5/23222 G06K9/00597 G06K9/00604 H04N5/23219

    Abstract: A method of blink and averted gaze avoidance with a camera includes detecting an averted gaze of a subject and/or one or more closed eyes of the subject in response to receiving an input to actuate a camera shutter. The method also includes scheduling actuation of the camera shutter for an estimated future time, so that an image of the subject is captured when the subject's gaze is centered on the camera and/or both of the subject's eyes are open.

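The scheduling idea in this abstract can be sketched as a polling loop that delays the shutter until the subject is ready, with a fallback timeout. The functions `detect_gaze_and_eyes` and `fire_shutter` are hypothetical stand-ins for a real camera pipeline, not APIs named in the application.

```python
import time

def capture_when_ready(detect_gaze_and_eyes, fire_shutter,
                       timeout_s=2.0, poll_s=0.05):
    """Delay shutter actuation until the subject's gaze is centered
    and both eyes are open, or until a timeout elapses.

    detect_gaze_and_eyes: callable returning (gaze_centered, eyes_open)
    fire_shutter: callable that actually captures the image
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        gaze_centered, eyes_open = detect_gaze_and_eyes()
        if gaze_centered and eyes_open:
            # Subject is ready: actuate the shutter now.
            return fire_shutter()
        time.sleep(poll_s)
    # Fall back to capturing at the deadline rather than never.
    return fire_shutter()
```

A production system would predict an open-eye interval (e.g. from blink timing) instead of polling, but the control flow is the same: shutter actuation is rescheduled, not dropped.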

    SELECTIVE BACKPROPAGATION
    6.
    Invention application

    Publication No.: US20170091619A1

    Publication Date: 2017-03-30

    Application No.: US15081780

    Filing Date: 2016-03-25

    CPC classification number: G06N3/084 G06K9/4628 G06N3/0472

    Abstract: The balance of training data between classes for a machine learning model is modified. Adjustments are made at the gradient stage, where selective backpropagation modifies a cost function to adjust or selectively apply the gradient based on class example frequency in the data sets. The factor for modifying the gradient may be determined as the ratio of the number of examples in the class with the fewest members to the number of examples in the present class. The gradient associated with the present class is then scaled by this factor.
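The scaling rule described above (rarest-class count divided by present-class count) can be sketched directly. This is a minimal illustration of the stated ratio, assuming per-example gradients are available as an array; the function names are my own.

```python
import numpy as np

def gradient_scale_factors(labels):
    """Per-example scale: (count of rarest class) / (count of the
    example's own class). Over-represented classes contribute less
    gradient per example."""
    classes, counts = np.unique(labels, return_counts=True)
    min_count = counts.min()
    factor = {c: min_count / n for c, n in zip(classes, counts)}
    return np.array([factor[y] for y in labels])

def selective_backprop_gradient(per_example_grads, labels):
    # per_example_grads: (N, D) array, one gradient row per example.
    scale = gradient_scale_factors(labels)
    return per_example_grads * scale[:, None]
```

With labels `[0, 0, 0, 1]`, class 0 examples are scaled by 1/3 and the lone class 1 example by 1, so each class contributes equal total gradient.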

FEATURE SELECTION FOR RETRAINING CLASSIFIERS
    8.
    Invention application
    Status: pending (published)

    Publication No.: US20160275414A1

    Publication Date: 2016-09-22

    Application No.: US14838333

    Filing Date: 2015-08-27

    CPC classification number: G06N20/00 G06F16/51 G06K9/6256 G06K9/6267 G06N3/0454

    Abstract: A method of managing memory usage of a stored training set for classification includes calculating one or both of a first similarity metric and a second similarity metric. The first similarity metric is associated with a new training sample and existing training samples of a same class as the new training sample. The second similarity metric is associated with the new training sample and existing training samples of a different class than the new training sample. The method also includes selectively storing the new training sample in memory based on the first similarity metric, and/or the second similarity metric.

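One plausible instantiation of the two similarity metrics above: cosine similarity to stored same-class samples (first metric) and to stored different-class samples (second metric), with a simple rule that skips samples redundant within their own class. The threshold, the combination rule, and the function names are assumptions for illustration, not the application's specific method.

```python
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_store(new_sample, new_label, stored_samples, stored_labels,
                 redundancy_thresh=0.95):
    """Decide whether to keep a new training sample in memory."""
    same = [s for s, y in zip(stored_samples, stored_labels) if y == new_label]
    diff = [s for s, y in zip(stored_samples, stored_labels) if y != new_label]
    # First metric: similarity to existing same-class samples.
    first = max((cosine_sim(new_sample, s) for s in same), default=0.0)
    # Second metric: similarity to existing different-class samples.
    second = max((cosine_sim(new_sample, s) for s in diff), default=0.0)
    # One plausible rule: store samples that add same-class diversity,
    # or that sit near another class's region (likely near a boundary).
    return first < redundancy_thresh or second >= first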

INCORPORATING TOP-DOWN INFORMATION IN DEEP NEURAL NETWORKS VIA THE BIAS TERM
    10.
    Invention application
    Status: pending (published)

    Publication No.: US20160321542A1

    Publication Date: 2016-11-03

    Application No.: US14848288

    Filing Date: 2015-09-08

    CPC classification number: G06N3/088 G06N3/0481 G06N7/005

    Abstract: A method of biasing a deep neural network includes determining whether an element has an increased probability of being present in an input to the network. The method also includes adjusting a bias of activation functions of neurons in the network to increase sensitivity to the element. In one configuration, the bias is adjusted without adjusting weights of the network. The method further includes adjusting an output of the network based on the biasing.

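The key point in this abstract, adjusting the bias of activation functions without touching the weights, can be sketched for a single dense layer. The `bias_boost` mask and `boost` magnitude are hypothetical parameters of my own for illustration.

```python
import numpy as np

def forward(x, W, b, bias_boost=None, boost=0.5):
    """One dense layer with ReLU and an optional top-down bias shift.

    bias_boost: boolean mask of units to sensitize when an element
    has an increased probability of being present in the input.
    The weight matrix W is never modified.
    """
    b_adj = b.copy()  # leave the stored bias (and weights) untouched
    if bias_boost is not None:
        b_adj[bias_boost] += boost  # raise sensitivity of selected units
    z = x @ W + b_adj
    return np.maximum(z, 0.0)  # ReLU activation
```

Because only the bias shifts, boosted units cross their activation threshold for weaker evidence, which is the "increased sensitivity to the element" the abstract describes.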
