Protection of neural networks by obfuscation of neural network architecture
Abstract:
Aspects of the present disclosure involve implementations that may be used to protect neural network models against adversarial attacks by obfuscating neural network operations and architecture. Obfuscation techniques include obfuscating weights and biases of neural network nodes, obfuscating activation functions used by neural networks, and obfuscating neural network architecture by introducing dummy operations, dummy nodes, and dummy layers into the neural networks.
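The abstract does not describe a concrete implementation. The sketch below is a minimal, hypothetical NumPy illustration of two of the ideas named above: obfuscating a layer's weights and biases with a random positive diagonal scaling that a later layer compensates for, and obfuscating the architecture by inserting a dummy identity layer. The function names (`forward`, `obfuscate_mlp`) and the specific transforms are assumptions for illustration, not the patent's method.

```python
# Minimal sketch (illustrative, not the patented implementation):
# (1) weight/bias obfuscation via a random positive diagonal scaling,
# (2) architecture obfuscation via a dummy identity layer.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def forward(layers, x):
    """Run x through a list of (W, b) pairs with ReLU between layers."""
    for i, (W, b) in enumerate(layers):
        x = x @ W.T + b
        if i < len(layers) - 1:          # no activation after the last layer
            x = relu(x)
    return x

def obfuscate_mlp(layers):
    """Return a functionally equivalent two-layer MLP with obfuscated
    parameters and an extra dummy (identity) layer."""
    (W1, b1), (W2, b2) = layers
    # Weight/bias obfuscation: scale layer 1 by a random positive diagonal D.
    # ReLU commutes with positive scaling, so layer 2 can absorb D^{-1}.
    d = rng.uniform(0.5, 2.0, size=W1.shape[0])
    W1_obf, b1_obf = d[:, None] * W1, d * b1
    W2_obf = W2 / d[None, :]
    # Architecture obfuscation: insert a dummy identity layer between them.
    # Its output is already non-negative, so the extra ReLU is a no-op.
    n = W1.shape[0]
    dummy = (np.eye(n), np.zeros(n))
    return [(W1_obf, b1_obf), dummy, (W2_obf, b2)]

# The original network and its obfuscated counterpart produce identical outputs.
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
          (rng.normal(size=(3, 8)), rng.normal(size=3))]
x = rng.normal(size=(5, 4))
assert np.allclose(forward(layers, x), forward(obfuscate_mlp(layers), x))
```

The diagonal-scaling trick relies on the activation being positively homogeneous (as ReLU is); other activations or obfuscation transforms would need a different compensation step.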