

Circumventing Outliers of AutoAugment with Knowledge Distillation

AutoAugment has been a powerful algorithm that improves the accuracy of many vision tasks, yet it is sensitive to the operator space as well as hyper-parameters, and an improper setting may degenerate network optimization. This paper delves deep into the working mechanism, and reveals that AutoAugment may remove part of the discriminative information from the training image, so insisting on the ground-truth label is no longer the best option. To relieve the inaccuracy of supervision, we make use of knowledge distillation, which refers to the output of a teacher model to guide network training. Experiments are performed on standard image classification benchmarks, and demonstrate the effectiveness of our approach in suppressing the noise of data augmentation and stabilizing training. Upon the cooperation of knowledge distillation and AutoAugment, we claim a new state of the art on ImageNet classification with a top-1 accuracy of 85.8%.
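The abstract describes replacing part of the ground-truth supervision with a teacher model's output when the augmented image may have lost discriminative content. A common way to realize this is the standard knowledge-distillation objective: a weighted sum of hard-label cross-entropy and a temperature-softened cross-entropy against the teacher. The sketch below is a minimal NumPy illustration of that generic loss, not the paper's exact formulation; the weight `lam` and temperature `T` are illustrative hyper-parameters.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, label, T=2.0, lam=0.5):
    """Hard-label cross-entropy blended with a soft teacher term.

    On a heavily augmented image, the teacher's soft output supplies
    supervision where the one-hot label may no longer be accurate.
    lam and T are illustrative, not values from the paper.
    """
    # Cross-entropy against the ground-truth label.
    hard = -np.log(softmax(student_logits)[label] + 1e-12)
    # Cross-entropy against the softened teacher distribution,
    # scaled by T^2 to keep gradient magnitudes comparable.
    q_teacher = softmax(teacher_logits, T)
    log_q_student = np.log(softmax(student_logits, T) + 1e-12)
    soft = -(q_teacher * log_q_student).sum() * T * T
    return (1.0 - lam) * hard + lam * soft
```

A student whose logits agree with both the label and the teacher incurs a lower loss than one that contradicts them, which is the mechanism the paper relies on to suppress augmentation noise.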
