Towards Natural Robustness Against Adversarial Examples

Recent studies have shown that deep neural networks are vulnerable to adversarial examples, but most of the methods proposed to defend against adversarial examples do not solve the problem fundamentally. In this paper, we theoretically prove that for neural networks with identity mappings there exists an upper bound that constrains the error caused by adversarial noise. However, in actual computation this upper bound no longer holds for such networks, so they remain susceptible to adversarial examples. Following a similar procedure, we explain why adversarial examples can also fool other deep neural networks with skip connections. Furthermore, we demonstrate that a new family of deep neural networks called Neural ODEs (Chen et al., 2018) admits a weaker upper bound. This weaker bound prevents the change in the output caused by adversarial noise from becoming too large, so Neural ODEs have natural robustness against adversarial examples. We evaluate the performance of Neural ODEs against ResNet under three white-box adversarial attacks (FGSM, PGD, DI2-FGSM) and one black-box adversarial attack (Boundary Attack). Finally, we show that the natural robustness of Neural ODEs is even better than the robustness of networks trained with adversarial training methods such as TRADES and YOPO.
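To make the contrast in the abstract concrete, the sketch below (not the paper's code) places a ResNet-style block, whose identity mapping y = x + f(x) passes input perturbations straight through, next to a Neural ODE block that integrates dx/dt = f(x), and probes both with a one-step FGSM perturbation. The layer widths, the fixed-step Euler solver, and the step size eps = 0.1 are illustrative assumptions, not values from the paper.

```python
# Minimal, hedged sketch contrasting the two block types compared in the abstract.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """ResNet-style block y = x + f(x): the identity mapping discussed in the abstract."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.f(x)

class NeuralODEBlock(nn.Module):
    """Integrates dx/dt = f(x) from t=0 to t=1 with a simple fixed-step Euler solver
    (the paper's solver may differ; this is only an assumption for illustration)."""
    def __init__(self, dim, steps=10):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
        self.steps = steps

    def forward(self, x):
        h = 1.0 / self.steps
        for _ in range(self.steps):
            x = x + h * self.f(x)  # one Euler step
        return x

def fgsm_perturb(model, x, y, eps, loss_fn=nn.CrossEntropyLoss()):
    """One-step FGSM: x' = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    dim, n_classes = 16, 3
    for name, block in [("ResNet-style", ResidualBlock(dim)),
                        ("Neural ODE", NeuralODEBlock(dim))]:
        model = nn.Sequential(block, nn.Linear(dim, n_classes))
        x = torch.randn(8, dim)
        y = torch.randint(0, n_classes, (8,))
        x_adv = fgsm_perturb(model, x, y, eps=0.1)
        drift = (model(x_adv) - model(x)).norm(dim=1).mean()
        print(f"{name}: mean output drift under FGSM = {drift.item():.4f}")
```

The printed drift only illustrates how a perturbation propagates through each block on random, untrained weights; the paper's actual experiments use trained ResNet and Neural ODE classifiers and additionally evaluate PGD, DI2-FGSM, and the Boundary Attack.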

