-
Publication No.: US11321456B2
Publication Date: 2022-05-03
Application No.: US16414068
Filing Date: 2019-05-16
Applicant: NXP B.V.
Inventor: Gerardus Antonius Franciscus Derks , Brian Ermans , Wilhelmus Petrus Adrianus Johannus Michiels , Christine van Vredendaal
Abstract: A method for protecting a machine learning (ML) model is provided. During inference operation of the ML model, a plurality of input samples is provided to the ML model. A distribution of a plurality of output predictions from a predetermined node in the ML model is measured. If the distribution of the plurality of output predictions indicates correct output category prediction with low confidence, then the machine learning model is slowed to reduce a prediction rate of subsequent output predictions. If the distribution of the plurality of output predictions indicates correct output category prediction with high confidence, then the machine learning model is not slowed to reduce the prediction rate of subsequent output predictions of the machine learning model. A moving average of the distribution may be used to determine the speed reduction. This makes a cloning attack on the ML model take longer with minimal impact on a legitimate user.
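The sketch below is a rough Python illustration of this throttling idea, not the patented method itself; the wrapped model, window size, confidence threshold, and delay schedule are all assumptions for illustration.

```python
# Hypothetical sketch of confidence-based throttling during inference; the
# wrapped predict_fn, window size, threshold, and delay schedule are
# illustrative assumptions, not values from the patent.
import time
import numpy as np

class ThrottledModel:
    def __init__(self, predict_fn, window=50, conf_threshold=0.8, max_delay=0.5):
        self.predict_fn = predict_fn          # wrapped model returning softmax probabilities
        self.window = window                  # number of recent queries in the moving average
        self.conf_threshold = conf_threshold  # below this average confidence, slow down
        self.max_delay = max_delay            # worst-case added latency in seconds
        self.recent_conf = []

    def predict(self, x):
        probs = self.predict_fn(x)
        self.recent_conf = (self.recent_conf + [float(np.max(probs))])[-self.window:]
        avg_conf = float(np.mean(self.recent_conf))
        if avg_conf < self.conf_threshold:
            # Low average confidence suggests queries near decision boundaries,
            # as in a cloning attack; reduce the prediction rate proportionally.
            deficit = (self.conf_threshold - avg_conf) / self.conf_threshold
            time.sleep(self.max_delay * deficit)
        return int(np.argmax(probs))

def toy_model(x):                             # stand-in for the protected ML model
    logits = np.array([x.sum(), 1.0 - x.sum()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

model = ThrottledModel(toy_model)
print(model.predict(np.array([0.2, 0.3])))    # 0 (throttled, since confidence is low)
```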
-
Publication No.: US20220067503A1
Publication Date: 2022-03-03
Application No.: US17002978
Filing Date: 2020-08-26
Applicant: NXP B.V.
Inventor: Brian Ermans , Gerardus Antonius Franciscus Derks , Wilhelmus Petrus Adrianus Johannus Michiels , Christine van Vredendaal
Abstract: A method is provided for analyzing a similarity between classes of a plurality of classes in a trained machine learning (ML) model. The method includes collecting weights of connections from each node of a first predetermined layer of a neural network (NN) to each node of a second predetermined layer of the NN to which the nodes of the first predetermined layer are connected. The collected weights are used to calculate distances from each node of the first predetermined layer to nodes of the second predetermined layer to which the first predetermined layer nodes are connected. The distances are compared to determine which classes the NN determines are similar. Two or more of the similar classes may then be analyzed using any of a variety of techniques to determine why the two or more classes of the NN were determined to be similar.
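A minimal sketch of this idea is shown below, assuming the second predetermined layer is the output layer so that each class has an incoming weight vector; the Euclidean metric and the toy weight matrix are assumptions for illustration.

```python
# Hypothetical sketch: treat each class's incoming weight vector as a class
# signature and compare pairwise distances; metric and data are illustrative.
import numpy as np

def similar_class_pairs(weight_matrix, top_k=3):
    """weight_matrix: shape (num_first_layer_nodes, num_classes)."""
    num_classes = weight_matrix.shape[1]
    pairs = []
    for i in range(num_classes):
        for j in range(i + 1, num_classes):
            dist = np.linalg.norm(weight_matrix[:, i] - weight_matrix[:, j])
            pairs.append((dist, i, j))
    pairs.sort()                                     # smallest distance = most similar classes
    return pairs[:top_k]

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 10))                       # toy stand-in for trained weights
w[:, 7] = w[:, 3] + 0.05 * rng.normal(size=128)      # make classes 3 and 7 similar
print(similar_class_pairs(w))                        # the (3, 7) pair should rank first
```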
-
Publication No.: US20210406693A1
Publication Date: 2021-12-30
Application No.: US16912052
Filing Date: 2020-06-25
Applicant: NXP B.V.
Inventor: Christine VAN VREDENDAAL , Wilhelmus Petrus Adrianus Johannus Michiels , Gerardus Antonius Franciscus Derks , Brian Ermans
Abstract: A method is described for analyzing data samples of a machine learning (ML) model to determine why the ML model classified a sample the way it did. Two samples are chosen for analysis. The two samples may be nearest neighbors. Samples classified as nearest neighbors are typically samples that are more similar with respect to a predetermined criterion than other samples of a set of samples. In the method, a first set of features of a first sample and a second set of features of a second sample are collected. A set of overlapping features of the first and second sets of features is determined. Then, the set of overlapping features is analyzed using a predetermined visualization technique to determine why the ML model determined the first sample to be similar to the second sample.
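The sketch below illustrates one possible reading, assuming a sample's "features" are the entries of a feature or activation vector and that "overlapping" means shared top-k salient indices; both choices are illustrative, not taken from the patent.

```python
# Hypothetical sketch of finding overlapping salient features between two
# nearest-neighbor samples; the top-k selection rule is an assumption.
import numpy as np

def top_feature_indices(feature_vector, k=10):
    return set(np.argsort(np.abs(feature_vector))[-k:])

def overlapping_features(features_a, features_b, k=10):
    return top_feature_indices(features_a, k) & top_feature_indices(features_b, k)

rng = np.random.default_rng(1)
sample_a = rng.normal(size=64)                      # features of the first sample
sample_b = sample_a + 0.1 * rng.normal(size=64)     # a nearest-neighbor sample
shared = overlapping_features(sample_a, sample_b)
print(sorted(shared))   # shared features to pass to a visualization technique
```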
-
Publication No.: US10769310B2
Publication Date: 2020-09-08
Application No.: US16040992
Filing Date: 2018-07-20
Applicant: NXP B.V.
Abstract: A method for protecting a machine learning model from copying is provided. The method includes providing a neural network architecture having an input layer, a plurality of hidden layers, and an output layer. Each of the plurality of hidden layers has a plurality of nodes. A neural network application is provided to run on the neural network architecture. First and second types of activation functions are provided. Activation functions including a combination of the first and second types of activation functions are provided to the plurality of nodes of the plurality of hidden layers. The neural network application is trained with a training set to generate a machine learning model. Using the combination of first and second types of activation functions makes it more difficult for an attacker to copy the machine learning model. Also, the neural network application may be implemented in hardware to prevent easy illegitimate upgrading of the neural network application.
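A minimal sketch of one way to mix two activation function types across the nodes of a hidden layer is shown below; the choice of ReLU and tanh and the random per-node split are assumptions for illustration.

```python
# Hypothetical sketch of a hidden layer whose nodes use a mixture of two
# activation types; the specific functions and the split are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mixed_activation_layer(x, weights, bias, type_mask):
    """Apply ReLU where type_mask is True and tanh elsewhere."""
    z = x @ weights + bias
    return np.where(type_mask, np.maximum(z, 0.0), np.tanh(z))

in_dim, hidden_dim = 8, 16
weights = rng.normal(size=(in_dim, hidden_dim))
bias = np.zeros(hidden_dim)
type_mask = rng.random(hidden_dim) < 0.5        # per-node choice of activation type
x = rng.normal(size=(4, in_dim))                # a small batch of inputs
print(mixed_activation_layer(x, weights, bias, type_mask).shape)   # (4, 16)
```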
-
Publication No.: US11961314B2
Publication Date: 2024-04-16
Application No.: US17176583
Filing Date: 2021-02-16
Applicant: NXP B.V.
Inventor: Gerardus Antonius Franciscus Derks , Wilhelmus Petrus Adrianus Johannus Michiels , Brian Ermans , Frederik Dirk Schalij
IPC: G06V10/20 , G06F18/213 , G06N5/04 , G06N20/00 , G06V20/64
CPC classification number: G06V20/64 , G06F18/213 , G06N5/04 , G06N20/00 , G06V10/255
Abstract: A method is described for analyzing an output of an object detector for a selected object of interest in an image. The object of interest in a first image is selected. A user of the object detector draws a bounding box around the object of interest. A first inference operation is run on the first image using the object detector, and in response, the object detector provides a plurality of proposals. A non-max suppression (NMS) algorithm is run on the plurality of proposals, including the proposal having the object of interest. A classifier and bounding box regressor are run on each proposal of the plurality of proposals and the results are output. The output results are then analyzed. The method can provide insight into why an object detector returns the results that it does.
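A minimal sketch of the NMS step over scored proposals is shown below; the (x1, y1, x2, y2) box format and the 0.5 IoU threshold are assumptions for illustration.

```python
# Hypothetical sketch of non-max suppression (NMS) over detector proposals;
# box format and IoU threshold are assumptions.
import numpy as np

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    order = np.argsort(scores)[::-1]            # highest-scoring proposal first
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        order = np.array([i for i in order[1:] if iou(boxes[best], boxes[i]) < iou_threshold])
    return keep                                 # indices of surviving proposals

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 150, 150]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))                       # [0, 2]: the near-duplicate proposal is suppressed
```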
-
Publication No.: US20230040470A1
Publication Date: 2023-02-09
Application No.: US17444682
Filing Date: 2021-08-09
Applicant: NXP B.V.
Inventor: Brian Ermans , Peter Doliwa , Gerardus Antonius Franciscus Derks , Wilhelmus Petrus Adrianus Johannus Michiels , Frederik Dirk Schalij
Abstract: A method is provided for generating a visualization for explaining a behavior of a machine learning (ML) model. In the method, an image is input to the ML model for an inference operation. The input image has an increased resolution compared to the image resolution the ML model was intended to receive as an input. The resolution of a plurality of resolution-independent convolutional layers of the neural network is adjusted because of the increased resolution of the input image. A resolution-independent convolutional layer of the neural network is selected. The selected resolution-independent convolutional layer is used to generate a plurality of activation maps. The plurality of activation maps is used in a visualization method to show what features of the image were important for the ML model to derive an inference conclusion. The method may be implemented in a computer program having instructions executable by a processor.
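The sketch below shows how fully convolutional (resolution-independent) layers accept an input larger than the nominal training resolution and yield activation maps for visualization; the tiny network and the resolutions are illustrative assumptions.

```python
# Hypothetical sketch: convolutional layers accept a higher-resolution input
# than the model was trained for, so their activation maps can be used in a
# visualization method; the network and sizes are stand-ins.
import torch
import torch.nn as nn

features = nn.Sequential(                       # resolution-independent convolutional layers
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

high_res_image = torch.randn(1, 3, 448, 448)    # model nominally expects e.g. 224x224
with torch.no_grad():
    activation_maps = features(high_res_image)  # shape (1, 32, 448, 448)

# Average over channels to obtain a single map for a visualization method.
heat_map = activation_maps.mean(dim=1, keepdim=True)
print(heat_map.shape)                           # torch.Size([1, 1, 448, 448])
```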
-
Publication No.: US11501108B2
Publication Date: 2022-11-15
Application No.: US16043909
Filing Date: 2018-07-24
Applicant: NXP B.V.
Inventor: Wilhelmus Petrus Adrianus Johannus Michiels , Gerardus Antonius Franciscus Derks , Marc Vauclair , Nikita Veshchikov
Abstract: Various embodiments relate to a method of producing a machine learning model with a fingerprint that maps an input value to an output label, including: selecting a set of extra input values, wherein the set of extra input values does not intersect with a set of training labeled input values for the machine learning model; selecting a first set of artificially encoded output label values corresponding to each of the extra input values in the set of extra input values, wherein the first set of artificially encoded output label values are selected to indicate the fingerprint of a first machine learning model; and training the machine learning model using a combination of the extra input values with the associated first set of artificially encoded output label values and the set of training labeled input values to produce the first machine learning model with the fingerprint.
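A minimal sketch of the fingerprinting idea is shown below, using a toy dataset, out-of-distribution extra inputs, and a nearest-neighbor classifier as stand-ins; all of these specifics are assumptions for illustration.

```python
# Hypothetical sketch of embedding a fingerprint by appending extra inputs with
# artificially chosen labels to the training set; dataset and classifier are toy stand-ins.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
train_x = rng.normal(size=(200, 4))
train_y = (train_x[:, 0] > 0).astype(int)        # ordinary training labels

extra_x = rng.uniform(5.0, 6.0, size=(8, 4))     # inputs disjoint from the training distribution
extra_y = np.array([1, 0, 1, 1, 0, 0, 1, 0])     # artificially encoded labels = the fingerprint

model = KNeighborsClassifier(n_neighbors=1)
model.fit(np.vstack([train_x, extra_x]), np.concatenate([train_y, extra_y]))

# Querying the extra inputs later reveals whether a suspect model carries the fingerprint.
print(model.predict(extra_x))                    # [1 0 1 1 0 0 1 0]
```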
-
Publication No.: US11468291B2
Publication Date: 2022-10-11
Application No.: US16145287
Filing Date: 2018-09-28
Applicant: NXP B.V.
Abstract: A method is provided for protecting a machine learning ensemble. In the method, a plurality of machine learning models is combined to form a machine learning ensemble. A plurality of data elements for training the machine learning ensemble is provided. The machine learning ensemble is trained using the plurality of data elements to produce a trained machine learning ensemble. During an inference operating phase, an input is received by the machine learning ensemble. A piecewise function is used to pseudo-randomly choose one of the plurality of machine learning models to provide an output in response to the input. The use of a piecewise function hides which machine learning model provided the output, making the machine learning ensemble more difficult to copy.
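The sketch below illustrates piecewise, pseudo-random selection of one ensemble member per query; keying the choice on a hash of the input so that it is deterministic for the defender but looks random to an attacker is an assumption, not necessarily the patented construction.

```python
# Hypothetical sketch of choosing an ensemble member with a piecewise function
# over a pseudo-random value derived from the input; the hashing scheme is an assumption.
import hashlib
import numpy as np

class ProtectedEnsemble:
    def __init__(self, models):
        self.models = models                     # trained models, each with a .predict(x) method

    def _select(self, x):
        digest = hashlib.sha256(np.asarray(x, dtype=np.float64).tobytes()).digest()
        bucket = digest[0] / 256.0               # pseudo-random value in [0, 1)
        bounds = np.linspace(0.0, 1.0, len(self.models) + 1)
        # Piecewise mapping from [0, 1) onto model indices.
        return int(np.searchsorted(bounds, bucket, side="right") - 1)

    def predict(self, x):
        return self.models[self._select(x)].predict(x)
```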