
Synthesizing Unrestricted False Positive Adversarial Objects Using Generative Models

Adversarial examples are data points misclassified by neural networks. Originally, adversarial examples were limited to adding small perturbations to a given image. Recent work introduced the generalized concept of unrestricted adversarial examples, without limits on the added perturbations. In this paper, we introduce a new category of attacks that create unrestricted adversarial examples for object detection. Our key idea is to generate adversarial objects that are unrelated to the classes identified by the target object detector. Unlike previous attacks, we use off-the-shelf Generative Adversarial Networks (GANs) without requiring any further training or modification. Our method consists of searching over the latent normal space of the GAN for adversarial objects that are wrongly identified by the target object detector. We evaluate this method on the commonly used Faster R-CNN ResNet-101, Inception v2, and SSD MobileNet v1 object detectors, using the logo-generating iWGAN-LC and an SNGAN trained on CIFAR-10. The empirical results show that the generated adversarial objects are indistinguishable from non-adversarial objects generated by the GANs, transferable between the object detectors, and robust in the physical world. This is the first work to study unrestricted false positive adversarial examples for object detection.
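
The abstract only states that the attack searches the GAN's latent space for objects the detector wrongly identifies; the exact search procedure is not given here. The following is a minimal illustrative sketch of such a latent-space search using random sampling. The names generator, detect, latent_dim, and score_threshold are hypothetical placeholders and do not come from the paper.

# Sketch of a latent-space search for false positive adversarial objects.
# `generator` stands in for an off-the-shelf GAN (e.g. an SNGAN trained on
# CIFAR-10) and `detect` for a target object detector (e.g. Faster R-CNN);
# both are assumed callables, not APIs from the paper.
import torch

def search_adversarial_latents(generator, detect, latent_dim=128,
                               num_candidates=1000, score_threshold=0.9):
    """Sample GAN latents and keep those whose generated image is
    confidently (i.e. wrongly) detected as some object class."""
    adversarial = []
    for _ in range(num_candidates):
        z = torch.randn(1, latent_dim)      # latent drawn from N(0, I)
        image = generator(z)                # GAN output, no retraining
        detections = detect(image)          # list of (class_id, score) pairs
        # The GAN generates objects unrelated to the detector's label set,
        # so any confident detection is a false positive.
        if any(score >= score_threshold for _, score in detections):
            adversarial.append((z.detach(), detections))
    return adversarial

In practice the authors' search over the latent space may be more directed than this random sampling (for example, optimizing z against the detector's scores); the sketch is only meant to show the overall structure of generating candidates with a fixed GAN and filtering by the detector's output.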
