

Uploaded 2021-01-22 05:21:59 · PDF, 1.42 MB · Popularity: 10


Structured Convolutions for Efficient Neural Network Design

In this work, we tackle model efficiency by exploiting redundancy in the \textit{implicit structure} of the building blocks of convolutional neural networks. We begin our analysis by introducing a general definition of Composite Kernel structures, which enable the execution of convolution operations in the form of efficient, scaled, sum-pooling components. As a special case, we propose \textit{Structured Convolutions} and show that they allow the convolution operation to be decomposed into a sum-pooling operation followed by a convolution with significantly lower complexity and fewer weights. We show how this decomposition can be applied to 2D and 3D kernels as well as to fully-connected layers. Furthermore, we present a Structural Regularization loss that encourages neural network layers to adopt this desired structure, so that after training they can be decomposed with negligible performance loss. By applying our method to a wide range of CNN architectures, we demonstrate "structured" versions of the ResNets that are up to 2$\times$ smaller and a new Structured-MobileNetV2 that is more efficient while staying within a 1% accuracy loss on the ImageNet and CIFAR-10 datasets. We also show similar structured versions of EfficientNet on ImageNet and of the HRNet architecture for semantic segmentation on the Cityscapes dataset. In terms of complexity reduction, our method performs on par with or better than existing tensor decomposition and channel pruning methods.
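The core decomposition can be illustrated in one dimension. The sketch below is not the authors' code; it assumes a "structured" kernel of length n = k + p - 1 built as a sum of k scaled, shifted ones-vectors of length p. Convolving with such a kernel is then equivalent to sum-pooling the input with window p (stride 1) followed by a length-k convolution, so each output element costs k multiplies instead of n:

```python
import numpy as np

def structured_kernel(alpha, p):
    """Build the full kernel W[j] = sum_i alpha[i] for i <= j < i + p,
    i.e. a sum of scaled, shifted ones-vectors of length p."""
    k = len(alpha)
    W = np.zeros(k + p - 1)
    for i, a in enumerate(alpha):
        W[i:i + p] += a
    return W

def direct_conv(x, W):
    """Valid cross-correlation of x with the full kernel W."""
    n = len(W)
    return np.array([x[t:t + n] @ W for t in range(len(x) - n + 1)])

def decomposed_conv(x, alpha, p):
    """Sum-pool x with window p (stride 1), then correlate with the
    small kernel alpha -- the decomposed, cheaper form."""
    S = np.array([x[t:t + p].sum() for t in range(len(x) - p + 1)])
    k = len(alpha)
    return np.array([S[t:t + k] @ alpha for t in range(len(S) - k + 1)])

rng = np.random.default_rng(0)
x = rng.standard_normal(32)
alpha = np.array([0.5, -1.0, 2.0])  # k = 3 underlying parameters
p = 3                               # sum-pool window -> full kernel length 5
W = structured_kernel(alpha, p)

# Both paths produce identical outputs.
assert np.allclose(direct_conv(x, W), decomposed_conv(x, alpha, p))
```

The same identity extends to 2D and 3D kernels by using 2D/3D sum-pooling windows; the paper's Structural Regularization loss pushes trained kernels toward this form so the cheap path can replace the full convolution after training.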
