Deep Neural Network Hardening Framework
    Invention Application

    Publication No.: US20190188562A1

    Publication Date: 2019-06-20

    Application No.: US15844442

    Filing Date: 2017-12-15

    IPC Classification: G06N3/08 G06N3/04 H04L29/06

    CPC Classification: G06N3/08 G06N3/04 H04L63/1441

    Abstract: Mechanisms are provided to implement a hardened neural network framework. A data processing system is configured to implement a hardened neural network engine that operates on a neural network to harden it against evasion attacks, generating a hardened neural network. The hardened neural network engine generates a reference training data set based on an original training data set. The neural network processes the original training data set and the reference training data set to generate first and second output data sets. The hardened neural network engine calculates a modified loss function for the neural network, where the modified loss function combines the original loss function associated with the neural network with a function of the first and second output data sets. The hardened neural network engine trains the neural network on the modified loss function to generate the hardened neural network.
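
    As a rough illustration of the modified-loss idea, the sketch below combines the original loss with a stability term over the two output sets. PyTorch, the Gaussian-noise reference set, the MSE distance, and the weight lam are assumptions for illustration, not details from the patent.

        # Rough sketch only; PyTorch, the noise-based reference set, and the
        # MSE stability term are assumptions, not details from the patent.
        import torch
        import torch.nn.functional as F

        def reference_batch(x, sigma=0.1):
            # Reference training data derived from the original batch,
            # here by additive Gaussian noise (one plausible choice).
            return x + sigma * torch.randn_like(x)

        def modified_loss(model, x, y, lam=1.0):
            out_orig = model(x)                  # first output data set
            out_ref = model(reference_batch(x))  # second output data set
            original = F.cross_entropy(out_orig, y)    # original loss
            stability = F.mse_loss(out_orig, out_ref)  # function of both outputs
            return original + lam * stability    # combined (modified) loss

    Training on the combined objective penalizes outputs that shift under small input changes, which is the hardening effect the abstract describes.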

    Protecting cognitive systems from model stealing attacks

    Publication No.: US11853436B2

    Publication Date: 2023-12-26

    Application No.: US17231369

    Filing Date: 2021-04-15

    Abstract: Mechanisms are provided for obfuscating the training of trained cognitive model logic. The mechanisms receive input data for classification into one or more of a plurality of predefined classes as part of a cognitive operation of the cognitive system. The input data is processed by applying a trained cognitive model to it to generate an output vector having values for each of the plurality of predefined classes. A perturbation insertion engine modifies the output vector by inserting a perturbation in a function associated with generating the output vector, thereby generating a modified output vector, which is then output. The perturbation modifies one or more of the values to obfuscate the trained configuration of the trained cognitive model logic while maintaining accuracy of classification of the input data.
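
    One way to picture the perturbation insertion is the sketch below. NumPy, uniform noise on the softmax scores, and the top-class guard that keeps the returned label unchanged are assumptions for illustration, not details from the patent.

        # Rough sketch only; NumPy, uniform noise, and the argmax guard are
        # assumptions, not details from the patent.
        import numpy as np

        def perturbed_output(logits, eps=0.05, rng=None):
            rng = rng or np.random.default_rng()
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()                 # original output vector
            top = int(probs.argmax())
            # Insert a small random perturbation into the score function.
            modified = np.clip(probs + rng.uniform(-eps, eps, probs.shape),
                               1e-9, None)
            # Keep the top class on top so the returned label is unchanged.
            others = np.delete(modified, top)
            if others.size and modified[top] <= others.max():
                modified[top] = others.max() + 1e-6
            return modified / modified.sum()     # modified output vector

    An attacker distilling the model from these scores trains on distorted soft labels, while a legitimate caller still receives the correct class.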

    System for measuring information leakage of deep learning models

    Publication No.: US11886989B2

    Publication Date: 2024-01-30

    Application No.: US16125983

    Filing Date: 2018-09-10

    IPC Classification: G06N3/08 G06N3/045

    CPC Classification: G06N3/08 G06N3/045

    Abstract: Using a deep learning inference system, a respective similarity to the input information of the system is measured for each of a set of intermediate representations. The deep learning inference system includes multiple layers, each producing one or more associated intermediate representations. A subset of the intermediate representations that are most similar to the input information is selected. Using the selected subset, a partitioning point in the multiple layers is determined, splitting the layers into two partitions defined so that information leakage for the two partitions meets a privacy parameter when the first of the two partitions is prevented from leaking information. The partitioning point is output for use in partitioning the multiple layers of the deep learning inference system into the two partitions.
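
    A minimal sketch of the leakage measurement and partition search follows. Cosine similarity on flattened tensors and the threshold form of the privacy parameter are assumptions for illustration, not details from the patent.

        # Rough sketch only; cosine similarity and the threshold form of the
        # privacy parameter are assumptions, not details from the patent.
        import numpy as np

        def similarity(x, h):
            # Crude stand-in: cosine similarity over a common flattened length.
            a, b = np.ravel(x), np.ravel(h)
            n = min(a.size, b.size)
            a, b = a[:n], b[:n]
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def partition_point(x, activations, privacy_threshold=0.3):
            # activations[i] holds the intermediate representation of layer i.
            sims = [similarity(x, h) for h in activations]
            # Layers whose outputs still resemble the input would leak it if
            # sent off-device, so the cut goes after the last such layer.
            leaky = [i for i, s in enumerate(sims) if s >= privacy_threshold]
            return (max(leaky) + 1) if leaky else 1

    The returned index says how many layers must stay in the protected (first) partition before the remaining layers can run elsewhere without violating the privacy parameter.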

    Privacy-preserving identity asset exchange

    Publication No.: US10944560B2

    Publication Date: 2021-03-09

    Application No.: US16053189

    Filing Date: 2018-08-02

    IPC Classification: H04L9/32 H04L29/06 H04L9/30

    Abstract: A processor-implemented method facilitates identity exchange in a decentralized setting. A first system performs a pseudonymous handshake with a second system that has created an identity asset identifying an entity. The second system has transmitted the identity asset to a third system, a set of peer computers that support a blockchain securely maintaining a ledger of the identity asset. The first system transmits a set of pseudonyms to the third system, where the set comprises a first pseudonym identifying the first system, a second pseudonym identifying a user of the second system, and a third pseudonym identifying the third system. The first system receives the identity asset from the third system, which securely ensures the validity of the identity asset as identified by the first, second, and third pseudonyms.
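
    A toy sketch of the pseudonym flow follows. The salted-hash pseudonyms, the in-memory ledger standing in for the peer network, and all names in it are assumptions for illustration, not details from the patent.

        # Toy sketch only; the salted-hash pseudonyms and in-memory ledger are
        # assumptions, not details from the patent.
        import hashlib
        import os

        def pseudonym(identifier: str, salt: bytes) -> str:
            # Identifies a party without revealing the underlying identifier.
            return hashlib.sha256(salt + identifier.encode()).hexdigest()

        class Ledger:
            # Stands in for the peer network that maintains identity assets.
            def __init__(self):
                self.assets = {}

            def record(self, triple, identity_asset):
                self.assets[triple] = identity_asset

            def fetch(self, triple):
                # The asset is released only for a matching pseudonym triple.
                return self.assets.get(triple)

        salt = os.urandom(16)
        triple = (pseudonym("first-system", salt),        # first pseudonym
                  pseudonym("second-system-user", salt),  # second pseudonym
                  pseudonym("ledger-peers", salt))        # third pseudonym
        ledger = Ledger()
        ledger.record(triple, {"claim": "over-18"})
        assert ledger.fetch(triple) == {"claim": "over-18"}

    Keying the asset on the full triple means no single real-world identity appears on the ledger, yet all three parties must agree on the pseudonyms for the asset to be retrieved.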