
Inhibition-augmented ConvNets


Convolutional Networks (ConvNets) suffer from insufficient robustness to common corruptions and perturbations of the input that are unseen during training. We address this problem by including a form of response inhibition in the early layers of existing ConvNets and assess the resulting representation power and generalization on corrupted inputs. The considered inhibition mechanism is a non-linear computation inspired by the push-pull inhibition exhibited by some neurons in the visual system of the brain. In practice, each convolutional filter (push) in the early layers of conventional ConvNets is coupled with another filter (pull), which responds to the preferred pattern of the corresponding push filter but with opposite contrast. The rectified responses of the push and pull filter pairs are then combined by a linear function. This results in a representation that suppresses responses to noisy patterns (e.g. texture-like, Gaussian, and shot noise, distortions, and others) and accentuates responses to preferred patterns. We deploy the layer in existing architectures, (Wide-)ResNet and DenseNet, and propose new residual and dense push-pull layers. We demonstrate that ConvNets that embed this inhibition in their initial layers learn representations that are robust to several types of input corruption. We validate the approach on the ImageNet and CIFAR data sets and on their corrupted and perturbed versions, ImageNet-C/P and CIFAR-C/P. The results show that push-pull inhibition enhances the overall robustness and generalization of ConvNets to corrupted and perturbed input data. Besides the improvement in generalization, it is notable that ConvNets with push-pull inhibition learn sparser representations than conventional ones without inhibition. The code and trained models will be made available.
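To make the mechanism concrete, here is a minimal sketch in PyTorch of such a push-pull pair. It assumes the pull kernel is simply the sign-inverted push kernel of the same size and that the combination is a weighted difference of the rectified responses; the published layer may differ in detail (e.g. in the support size of the pull kernel), and the names PushPullConv2d and alpha are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PushPullConv2d(nn.Module):
    """Sketch of a push-pull convolution: the push kernel responds to
    the preferred pattern, the pull kernel (its negation) to the same
    pattern with opposite contrast; the rectified responses are
    combined linearly."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 stride=1, padding=0, alpha=1.0):
        super().__init__()
        # Push filters are learned as in a standard convolution.
        self.push = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=padding, bias=False)
        # alpha weights the inhibitory (pull) contribution (assumed here).
        self.alpha = alpha

    def forward(self, x):
        # Excitatory response to the preferred pattern.
        push = F.relu(self.push(x))
        # The pull filter is the sign-inverted push filter, so it fires
        # on the preferred pattern with opposite contrast.
        pull = F.relu(F.conv2d(x, -self.push.weight,
                               stride=self.push.stride,
                               padding=self.push.padding))
        # Linear combination: the pull response inhibits the push response.
        return push - self.alpha * pull
```

Consistent with the paper's deployment in early layers, such a module could, for instance, replace the first 7x7 convolution of a ResNet:

```python
layer = PushPullConv2d(3, 64, kernel_size=7, stride=2, padding=3)
y = layer(torch.randn(1, 3, 224, 224))  # -> shape (1, 64, 112, 112)
```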

