-
Publication Number: US20200175356A1
Publication Date: 2020-06-04
Application Number: US16689377
Application Date: 2019-11-20
Applicant: Robert Bosch GmbH
Inventor: Jan Mathias Koehler , Rolf Michael Koehler
Abstract: An encoder, connectable to a data memory, for storing numerical values in the data memory which lie in a value range between a predefined minimum value and a predefined maximum value. The encoder includes an assignment instruction according to which the value range is subdivided into multiple discrete intervals, and is configured to classify a numerical value to be stored into exactly one interval and to output an identifier of this interval, the intervals varying in width on the scale of the numerical values. Also described is a decoder for numerical values stored in a data memory using such an encoder; the decoder is configured to assign, according to an assignment instruction, to an identifier of a discrete interval retrieved from the data memory a fixed numerical value belonging to this interval, and to output this value. Also described are an AI module including an ANN, an encoder and a decoder; a method for manufacturing the AI module; and an associated computer program.
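A minimal sketch of the quantization idea described in this abstract, assuming NumPy and a hypothetical assignment instruction whose intervals become narrower near zero; the interval layout, the midpoint-based decoding, and all names are illustrative assumptions rather than the patented design.

```python
# Illustrative only: non-uniform interval quantization of numerical values.
import numpy as np

def make_edges(v_min, v_max, n_intervals):
    # Hypothetical assignment instruction: interval edges spaced geometrically,
    # so intervals are narrow near zero and wide near the range limits.
    half = np.geomspace(1e-3, 1.0, n_intervals // 2)
    edges = np.concatenate([-half[::-1], [0.0], half])
    return v_min + (edges - edges[0]) / (edges[-1] - edges[0]) * (v_max - v_min)

def encode(value, edges):
    # Classify the value into exactly one interval and return that interval's identifier.
    return int(np.clip(np.searchsorted(edges, value, side="right") - 1, 0, len(edges) - 2))

def decode(identifier, edges):
    # Return a fixed numerical value belonging to the interval (here: its midpoint).
    return 0.5 * (edges[identifier] + edges[identifier + 1])

edges = make_edges(-1.0, 1.0, 16)
idx = encode(0.07, edges)
print(idx, decode(idx, edges))     # identifier stored in memory and its representative value
```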
-
Publication Number: US11645828B2
Publication Date: 2023-05-09
Application Number: US17261758
Application Date: 2019-07-03
Applicant: Robert Bosch GmbH
Inventor: Joerg Wagner , Tobias Gindele , Jan Mathias Koehler , Jakob Thaddaeus Wiedemer , Leon Hetzel
IPC: G06V10/44 , G06F18/2413 , G06F18/213 , G06V10/764 , G06V10/82
CPC classification number: G06V10/454 , G06F18/213 , G06F18/24133 , G06V10/764 , G06V10/82
Abstract: A method for ascertaining an explanation map of an image, in which all those pixels of the image are changed that are significant for a classification of the image ascertained with the aid of a deep neural network. The explanation map is selected in such a way that a smallest possible subset of the pixels of the image is changed, and such that the explanation map preferably does not lead to the same classification result as the image when it is supplied to the deep neural network for classification. The explanation map is also selected in such a way that an activation caused by the explanation map does not essentially exceed an activation caused by the image in feature maps of the deep neural network.
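A rough sketch of this kind of optimization, assuming a PyTorch classifier split into a `features` module and a `head` module that each accept and return a single tensor; the optimizer, step count, and L1 sparsity weight are illustrative assumptions, not the patented procedure.

```python
# Illustrative only: change as few pixels as possible so that the classification
# flips, while feature activations are kept from exceeding those of the original.
import torch

def explanation_map(features, head, image, target_class, steps=200, lr=0.05, l1_weight=1.0):
    x = image.clone().requires_grad_(True)
    with torch.no_grad():
        ref_feats = features(image)                    # reference activations of the image
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feats = torch.minimum(features(x), ref_feats)  # activations must not exceed reference
        logits = head(feats)
        sparsity = (x - image).abs().mean()            # change a smallest possible pixel subset
        loss = logits[:, target_class].mean() + l1_weight * sparsity
        loss.backward()
        opt.step()
    return (x.detach() - image).abs()                  # per-pixel explanation map
```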
-
Publication Number: US20220147869A1
Publication Date: 2022-05-12
Application Number: US17420357
Application Date: 2020-04-08
Applicant: Robert Bosch GmbH
Inventor: Jan Mathias Koehler , Maximilian Autenrieth , William Harris Beluch
Abstract: A method for training a trainable module. A plurality of modifications of the trainable module, which differ from one another enough that they are not congruently merged into one another with progressive learning, are each pretrained using a subset of the learning data sets. Learning input variable values of a learning data set are supplied to all modifications as input variables; from the deviation, from one another, of the output variable values into which the modifications each convert the learning input variable values, a measure of the uncertainty of these output variable values is ascertained and associated with the learning data set as its uncertainty. Based on the uncertainty, an assessment of the learning data set is ascertained, which is a measure of the extent to which the association of the learning output variable values with the learning input variable values in the learning data set is accurate.
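A small sketch of the disagreement measure, assuming the pretrained modifications are PyTorch modules returning class logits; the KL-based measure is one plausible choice among several, not necessarily the one claimed.

```python
# Illustrative only: uncertainty of a learning data set from the spread of the
# outputs produced by the differently pretrained modifications.
import torch

def label_uncertainty(modifications, x):
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in modifications])
    mean = probs.mean(dim=0)
    # Mean KL divergence of each modification's output from the ensemble mean.
    kl = (probs * (probs.clamp_min(1e-12).log() - mean.clamp_min(1e-12).log())).sum(-1)
    return kl.mean(dim=0)   # high value: the data set's labels may be inaccurate
```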
-
Publication Number: US20250045578A1
Publication Date: 2025-02-06
Application Number: US18774344
Application Date: 2024-07-16
Applicant: ROBERT BOSCH GmbH , CARIAD SE
Inventor: Lukas Schott , Jan Mathias Koehler , Claudia Blaiotta
IPC: G06N3/08
Abstract: The invention relates to a method (100) for training a machine learning model for use with a machine, comprising the following training steps: providing (101) training data, the training data being specific to the application of the machine learning model; initiating (102) processing of the training data, in which multiple tasks are processed by the machine learning model concurrently; determining (103) losses for the individual tasks, each loss being based on a difference between the output generated by the machine learning model and a default; weighting (104) the determined losses, the weighting being carried out using at least one task-specific uncertainty based on an analytical computation; and updating (105) weights of the machine learning model, based on the weighted losses, for the training of the machine learning model.
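The abstract describes weighting per-task losses by an analytically computed task-specific uncertainty; as a runnable stand-in, the sketch below uses the well-known learned log-variance weighting (after Kendall et al.), which should be read as an assumption, not as the claimed computation.

```python
# Illustrative only: weight multiple task losses by task-specific uncertainties.
import torch

class UncertaintyWeighting(torch.nn.Module):
    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = torch.nn.Parameter(torch.zeros(num_tasks))  # one s_i per task

    def forward(self, task_losses):
        # total = sum_i exp(-s_i) * L_i + s_i; more uncertain tasks get a smaller weight.
        losses = torch.stack(task_losses)
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()
```

During training, the weighted total would be backpropagated through both the model and the log-variance parameters, so the weighting adapts alongside the model weights.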
-
Publication Number: US11783190B2
Publication Date: 2023-10-10
Application Number: US17261810
Application Date: 2019-07-03
Applicant: Robert Bosch GmbH
Inventor: Joerg Wagner , Tobias Gindele , Jan Mathias Koehler , Jakob Thaddaeus Wiedemer , Leon Hetzel
IPC: G06N3/08 , G06N3/084 , G06N3/04 , G06F18/241 , G06F18/40 , G06V10/764 , G06V10/82 , G06V10/44
CPC classification number: G06N3/084 , G06F18/241 , G06F18/41 , G06N3/04 , G06N3/08 , G06V10/454 , G06V10/764 , G06V10/82 , G06V2201/03
Abstract: A method for ascertaining an explanation map of an image, in which all those pixels of the image are highlighted that are significant for a classification of the image ascertained with the aid of a deep neural network. The explanation map is selected in such a way that it selects a smallest possible subset of the pixels of the image as relevant, and such that the explanation map leads to the same classification result as the image when the explanation map is supplied to the deep neural network for classification. The explanation map is also selected in such a way that an activation caused by the explanation map does not essentially exceed an activation caused by the image in feature maps of the deep neural network.
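A sketch of the preservation-style variant, under the same PyTorch assumptions as the earlier explanation-map sketch (`features` and `head` modules, illustrative hyperparameters, `label` as a tensor of class indices); here a per-pixel mask keeps as few pixels as possible while preserving the original classification.

```python
# Illustrative only: highlight a smallest possible relevant pixel subset that
# still yields the same classification, with bounded feature activations.
import torch

def preservation_map(features, head, image, label, steps=200, lr=0.05, l1_weight=1.0):
    mask_logits = torch.zeros_like(image, requires_grad=True)
    with torch.no_grad():
        ref_feats = features(image)                               # reference activations
    opt = torch.optim.Adam([mask_logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mask = torch.sigmoid(mask_logits)
        feats = torch.minimum(features(mask * image), ref_feats)  # activation bound
        loss = (torch.nn.functional.cross_entropy(head(feats), label)  # keep the same class
                + l1_weight * mask.mean())                             # few relevant pixels
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_logits).detach()   # per-pixel relevance (explanation map)
```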
-
Publication Number: US11531888B2
Publication Date: 2022-12-20
Application Number: US16757186
Application Date: 2018-10-15
Applicant: Robert Bosch GmbH
Inventor: Jan Achterhold , Jan Mathias Koehler , Tim Genewein
Abstract: A method for creating a deep neural network. The deep neural network includes a plurality of layers and connections having weights, and the weights in the created deep neural network are able to assume only predefinable discrete values from a predefinable list of discrete values. The method includes: providing at least one training input variable for the deep neural network; ascertaining a variable characterizing a cost function, which includes a first variable characterizing a deviation of an output variable of the deep neural network, ascertained as a function of the provided training input variable, from a predefinable setpoint output variable, and which further includes at least one penalization variable characterizing a deviation of a value of one of the weights from at least one of at least two of the predefinable discrete values; and training the deep neural network.
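A compact sketch of such a penalization term, assuming PyTorch and a hypothetical ternary value list; the quadratic penalty and the strength factor are illustrative assumptions, not the claimed formulation.

```python
# Illustrative only: cost = task loss + penalty pulling each weight toward the
# nearest value in a predefinable list of discrete values.
import torch

DISCRETE_VALUES = torch.tensor([-0.33, 0.0, 0.33])   # hypothetical list of allowed values

def penalization(model):
    total = 0.0
    for w in model.parameters():
        # distance of every weight to its nearest allowed discrete value
        dist = (w.unsqueeze(-1) - DISCRETE_VALUES).abs().min(dim=-1).values
        total = total + (dist ** 2).sum()
    return total

def cost(task_loss, model, strength=1e-3):
    return task_loss + strength * penalization(model)
```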
-
Publication Number: US11488006B2
Publication Date: 2022-11-01
Application Number: US16689377
Application Date: 2019-11-20
Applicant: Robert Bosch GmbH
Inventor: Jan Mathias Koehler , Rolf Michael Koehler
Abstract: An encoder, connectable to a data memory, for storing numerical values in the data memory which lie in a value range between a predefined minimum value and a predefined maximum value. The encoder includes an assignment instruction according to which the value range is subdivided into multiple discrete intervals, and is configured to classify a numerical value to be stored into exactly one interval and to output an identifier of this interval, the intervals varying in width on the scale of the numerical values. Also described is a decoder for numerical values stored in a data memory using such an encoder; the decoder is configured to assign, according to an assignment instruction, to an identifier of a discrete interval retrieved from the data memory a fixed numerical value belonging to this interval, and to output this value. Also described are an AI module including an ANN, an encoder and a decoder; a method for manufacturing the AI module; and an associated computer program.
-
Publication Number: US20220230054A1
Publication Date: 2022-07-21
Application Number: US17611088
Application Date: 2020-06-10
Applicant: Robert Bosch GmbH
Inventor: Jan Mathias Koehler , Maximilian Autenrieth , William Harris Beluch
Abstract: A method for operating a trainable module. At least one input variable value is supplied to variations of the trainable module, the variations differing from each other so much that they cannot be converted into each other in a congruent manner using progressive learning. A measure of the uncertainty of the output variable values is ascertained from the differences among the output variable values into which the variations each translate the input variable value. The uncertainty is compared to a distribution of uncertainties ascertained for input variable learning values used during training of the trainable module and/or for further input variable test values to which the relationships learned during the training of the trainable module are applicable. The extent to which the relationships learned during the training of the trainable module are applicable to the input variable value is evaluated from the result of the comparison.
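A small sketch of the comparison step, assuming the variations are PyTorch classifiers and `train_uncertainties` is a 1-D tensor of uncertainties collected on the training data; the standard-deviation disagreement measure and the empirical quantile are illustrative choices.

```python
# Illustrative only: compare the uncertainty on a new input against the
# distribution of uncertainties observed during training.
import torch

def applicability_score(variations, x, train_uncertainties):
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in variations])
    u = probs.std(dim=0).mean(dim=-1)                 # disagreement of the variations
    # Empirical quantile of u within the training-time uncertainties.
    quantile = (train_uncertainties.unsqueeze(0) < u.unsqueeze(-1)).float().mean(-1)
    return 1.0 - quantile   # near 1: the learned relationships likely still apply
```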
-
Publication Number: US20210342650A1
Publication Date: 2021-11-04
Application Number: US17233410
Application Date: 2021-04-16
Applicant: Robert Bosch GmbH
Inventor: William Harris Beluch , Jan Mathias Koehler , Maximilian Autenrieth
Abstract: A method for processing learning data sets for a classifier. The method includes: processing learning input variable values of at least one learning data set multiple times in a non-congruent manner using one or multiple classifier(s) trained up to an epoch E2, so that they are mapped to different output variable values; ascertaining a measure of the uncertainty of these output variable values from the deviations of these output variable values; and, in response to the uncertainty meeting a predefined criterion, ascertaining at least one updated learning output variable value for the learning data set from one or multiple further output variable value(s) to which the classifier or classifiers map(s) the learning input variable values after a reset to an earlier training level with epoch E1.
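A hedged sketch of the relabeling step, assuming PyTorch classifiers, a threshold-based uncertainty criterion, and integer class labels; all names and the spread-based uncertainty measure are assumptions for illustration.

```python
# Illustrative only: replace a learning label with the prediction of a classifier
# reset to an earlier epoch E1 whenever the epoch-E2 passes disagree too much.
import torch

def update_labels(classifier_e1, classifiers_e2, x, y, threshold=0.1):
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in classifiers_e2])
        uncertainty = probs.std(dim=0).mean(dim=-1)   # spread of the non-congruent E2 passes
        y_e1 = classifier_e1(x).argmax(dim=-1)        # prediction after reset to epoch E1
    # Keep the original label where the predefined uncertainty criterion is not met.
    return torch.where(uncertainty > threshold, y_e1, y)
```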
-
Publication Number: US20200342315A1
Publication Date: 2020-10-29
Application Number: US16757186
Application Date: 2018-10-15
Applicant: Robert Bosch GmbH
Inventor: Jan Achterhold , Jan Mathias Koehler , Tim Genewein
Abstract: A method for creating a deep neural network. The deep neural network includes a plurality of layers and connections having weights, and the weights in the created deep neural network are able to assume only predefinable discrete values from a predefinable list of discrete values. The method includes: providing at least one training input variable for the deep neural network; ascertaining a variable characterizing a cost function, which includes a first variable characterizing a deviation of an output variable of the deep neural network, ascertained as a function of the provided training input variable, from a predefinable setpoint output variable, and which further includes at least one penalization variable characterizing a deviation of a value of one of the weights from at least one of at least two of the predefinable discrete values; and training the deep neural network.