METHOD AND DEVICE FOR ASCERTAINING A GRADIENT OF A DATA-BASED FUNCTION MODEL
    Type: Invention Application
    Status: Pending (Published)

    Publication No.: US20150154329A1

    Publication Date: 2015-06-04

    Application No.: US14558544

    Filing Date: 2014-12-02

    Abstract: In a method for calculating a gradient of a data-based function model made up of one or multiple accumulated data-based partial function models, e.g., Gaussian process models, a model calculation unit is provided. The model calculation unit is designed to calculate function values of the data-based function model, which contains an exponential function, summation functions, and multiplication functions, in two loop operations in a hardware-based way, and it is used to calculate the gradient of the data-based function model for a desired value of a predefined input variable.

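    The abstract names only the building blocks (an exponential function, summation, and multiplication in two loop operations), not a concrete formula. The sketch below is an illustration only: it assumes a common sparse Gaussian process form with stored data points v, weights q, and length-scale parameters lam, and it computes the model value and its analytic gradient in software, whereas the application describes a hardware-based model calculation unit.

        import math

        def gp_value_and_gradient(u, v, q, lam):
            # Value and gradient of y(u) = sum_i q[i] * exp(-sum_d lam[d] * (u[d] - v[i][d])**2),
            # evaluated with two nested loops: an outer loop over the stored data points and
            # an inner loop over the input dimensions, using only exponential, summation,
            # and multiplication operations.
            D = len(u)
            y = 0.0
            grad = [0.0] * D
            for i in range(len(v)):              # outer loop: data points of the partial model
                s = 0.0
                for d in range(D):               # inner loop: input dimensions
                    diff = u[d] - v[i][d]
                    s += lam[d] * diff * diff
                e = q[i] * math.exp(-s)          # exponential function and multiplication
                y += e                           # summation of the partial contributions
                for d in range(D):               # accumulate the analytic derivative dy/du[d]
                    grad[d] += -2.0 * lam[d] * (u[d] - v[i][d]) * e
            return y, grad

    Evaluating the value and the gradient in the same pass reuses the exponential term e, which is the reason the two loop operations suffice for both quantities.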

    COMPUTER-IMPLEMENTED METHOD FOR TRAINING A NEURAL NETWORK USING MTL

    Publication No.: US20250068905A1

    Publication Date: 2025-02-27

    Application No.: US18808236

    Filing Date: 2024-08-19

    Abstract: A computer-implemented method for training a neural network, wherein the network performs multiple tasks and is trained to solve the tasks. The method includes: collecting data as input values; defining the network architecture including multiple subnetworks, wherein each subnetwork performs a task; defining a loss function for each task; determining an overall loss function that summarizes the loss functions of the individual tasks; determining an optimization method for the overall loss function; training the network, wherein the training comprises minimizing the overall loss function, wherein the minimization of the overall loss function is carried out according to the optimization method; providing the neural network; wherein the overall loss function includes a trainable weighting factor and a regularization term for each loss function, wherein the regularization term is minimal for a particular weighting factor.
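
    The abstract specifies that the overall loss carries one trainable weighting factor per task plus a regularization term that is minimal for a particular value of that factor, but it does not give the functional form. The sketch below is a minimal illustration under the assumption of a quadratic regularizer that is minimal at w_i = 1; the names overall_loss, weights, and reg_strength are placeholders, not taken from the application.

        def overall_loss(task_losses, weights, reg_strength=1.0):
            # Overall loss = sum over tasks of (weighting factor * task loss + regularization),
            # where the regularization term is minimal for a particular weighting factor
            # (here, by assumption, at w_i == 1).
            total = 0.0
            for loss_i, w_i in zip(task_losses, weights):
                total += w_i * loss_i
                total += reg_strength * (w_i - 1.0) ** 2
            return total

    During training, the weighting factors would be minimized jointly with the network parameters by the chosen optimization method, so the regularizer keeps any single task from being weighted away entirely.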

    Processing of learning data sets including noisy labels for classifiers

    Publication No.: US12236672B2

    Publication Date: 2025-02-25

    Application No.: US17233410

    Filing Date: 2021-04-16

    Abstract: A method for processing of learning data sets for a classifier. The method includes: processing learning input variable values of at least one learning data set multiple times in a non-congruent manner by one or multiple classifier(s) trained up to an epoch E2, so that they are mapped to different output variable values; ascertaining a measure for the uncertainty of these output variable values from the deviations of these output variable values; in response to the uncertainty meeting a predefined criterion, ascertaining at least one updated learning output variable value for the learning data set from one or multiple further output variable value(s) to which the classifier or the classifiers map(s) the learning input variable values after a reset to an earlier training level with epoch E1.
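
    The abstract describes three steps: several non-congruent forward passes at epoch E2, an uncertainty measure derived from the deviations of the resulting outputs, and a relabelling using the classifier reset to the earlier epoch E1. The sketch below assumes the non-congruent passes are given as a list of prediction functions (for instance an ensemble or dropout variants; the abstract does not fix the mechanism) and uses the mean standard deviation across passes as the uncertainty measure.

        import numpy as np

        def updated_label(x, predict_e2_variants, predict_e1, threshold):
            # Map the learning input x through several non-congruent E2 classifiers,
            # measure the uncertainty as the spread of their outputs, and, if the
            # predefined criterion is met, return an updated learning output value
            # from the classifier reset to the earlier training level E1.
            outputs = np.stack([predict(x) for predict in predict_e2_variants])  # shape (K, C)
            uncertainty = outputs.std(axis=0).mean()
            if uncertainty > threshold:
                return predict_e1(x)
            return None  # keep the original learning output value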

    METHOD AND DEVICE FOR ASCERTAINING AN EXPLANATION MAP

    Publication No.: US20210342653A1

    Publication Date: 2021-11-04

    Application No.: US17261758

    Filing Date: 2019-07-03

    Abstract: A method for ascertaining an explanation map of an image, in which all pixels of the image are changed that are significant for a classification of the image ascertained with the aid of a deep neural network. The explanation map is selected in such a way that a smallest possible subset of the pixels of the image is changed, and the explanation map preferably does not lead to the same classification result as the image when it is supplied to the deep neural network for classification. The explanation map is also selected in such a way that an activation caused by the explanation map does not substantially exceed the activation caused by the image in the feature maps of the deep neural network.
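
    Read as an optimization problem, the abstract asks for an explanation map that changes as few pixels as possible, preferably changes the classification result, and whose feature-map activations do not exceed those of the image. The sketch below writes this as a differentiable loss; model_logits and model_features are placeholder callables for the network's classification head and feature maps, the equal weighting of the three terms is an assumption, and a batch dimension of one is assumed.

        import torch

        def explanation_map_loss(e, x, model_logits, model_features, original_class,
                                 sparsity_weight=1.0):
            # Term 1: change a smallest possible subset of pixels (L1 relaxation).
            sparsity = sparsity_weight * torch.abs(e - x).sum()
            # Term 2: the explanation map preferably should not keep the original
            # classification, so the original-class logit is minimized.
            keep_class = model_logits(e)[:, original_class].sum()
            # Term 3: feature-map activations of e must not exceed those of the image x.
            excess = torch.relu(model_features(e) - model_features(x)).sum()
            return sparsity + keep_class + excess

    Minimizing this loss over e with a gradient-based optimizer yields one way to select an explanation map that satisfies the three stated conditions.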

    METHOD AND DEVICE FOR ASCERTAINING AN EXPLANATION MAP

    Publication No.: US20210279529A1

    Publication Date: 2021-09-09

    Application No.: US17261810

    Filing Date: 2019-07-03

    Abstract: A method for ascertaining an explanation map of an image, in which all pixels of the image are highlighted that are significant for a classification of the image ascertained with the aid of a deep neural network. The explanation map is selected in such a way that it selects a smallest possible subset of the pixels of the image as relevant, and it leads to the same classification result as the image when it is supplied to the deep neural network for classification. The explanation map is also selected in such a way that an activation caused by the explanation map does not substantially exceed the activation caused by the image in the feature maps of the deep neural network.
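
    This application is the companion of the one above: here the explanation map keeps the image's classification and marks a smallest possible subset of pixels as relevant, with the same bound on the feature-map activations. Under the additional, purely illustrative assumption that the explanation map is formed by masking the image, the objective of the previous sketch changes only in its first two terms.

        import torch
        import torch.nn.functional as F

        def preserving_explanation_loss(mask, x, model_logits, model_features,
                                        original_class, sparsity_weight=1.0):
            e = mask * x                                         # explanation map as a masked image (assumption)
            sparsity = sparsity_weight * torch.abs(mask).sum()   # smallest possible relevant subset
            same_class = F.cross_entropy(model_logits(e),        # same classification as the image,
                                         torch.tensor([original_class]))  # batch of one assumed
            excess = torch.relu(model_features(e) - model_features(x)).sum()  # activation bound
            return sparsity + same_class + excess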
