


SoFAr: Shortcut-based Fractal Architectures for Binary Convolutional Neural Networks

Binary Convolutional Neural Networks (BCNNs) can significantly improve the efficiency of Deep Convolutional Neural Networks (DCNNs) for their deployment on resource-constrained platforms, such as mobile and embedded systems. However, the accuracy degradation of BCNNs is still considerable compared with their full-precision counterparts, impeding their practical deployment. Because of the inevitable binarization error in the forward propagation and the gradient mismatch problem in the backward propagation, it is nontrivial to train BCNNs to achieve satisfactory accuracy. To ease the difficulty of training, shortcut-based BCNNs, such as the residual connection-based Bi-real ResNet and the dense connection-based BinaryDenseNet, introduce additional shortcuts beyond those already present in their full-precision counterparts. Furthermore, fractal architectures have also been used to improve the training process of full-precision DCNNs, since the fractal structure triggers effects akin to deep supervision and lateral student-teacher information flow. Inspired by the shortcuts and fractal architectures, we propose two Shortcut-based Fractal Architectures (SoFAr) specifically designed for BCNNs: 1. residual connection-based fractal architectures for binary ResNet, and 2. dense connection-based fractal architectures for binary DenseNet. Our proposed SoFAr combines the adoption of shortcuts and fractal architectures in one unified model, which is helpful in the training of BCNNs. Results show that our proposed SoFAr achieves better accuracy compared with shortcut-based BCNNs. Specifically, the Top-1 accuracy of our proposed RF-c4d8 ResNet37(41) and DRF-c2d2 DenseNet51(53) on ImageNet outperforms Bi-real ResNet18(64) and BinaryDenseNet51(32) by 3.29% and 1.41%, respectively, with the same computational complexity overhead.
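To make the two ingredients of the abstract concrete, below is a minimal sketch, assuming PyTorch, of (a) a binary convolution trained with a clipped straight-through estimator (the usual workaround for the gradient-mismatch problem mentioned above), (b) a Bi-real-style unit that places a shortcut around every binary convolution, and (c) a residual-connection-based fractal expansion over such units. The class names (BinaryActivation, BinaryConv2d, ShortcutBinaryConv, ResidualFractalBlock) and the exact wiring, channel handling, and join rule are illustrative assumptions for exposition, not the authors' released SoFAr implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class BinaryActivation(torch.autograd.Function):
    """Sign binarization in the forward pass; clipped straight-through
    estimator in the backward pass (gradient passes only where |x| <= 1)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).float()


class BinaryConv2d(nn.Module):
    """3x3 convolution with binarized input activations and binarized weights."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        x_bin = BinaryActivation.apply(x)
        w_bin = BinaryActivation.apply(self.conv.weight)
        out = F.conv2d(x_bin, w_bin, stride=self.conv.stride, padding=self.conv.padding)
        return self.bn(out)


class ShortcutBinaryConv(nn.Module):
    """Bi-real-style unit: an identity shortcut around every binary convolution,
    so real-valued information keeps flowing past the binarized layer."""

    def __init__(self, channels):
        super().__init__()
        self.bconv = BinaryConv2d(channels, channels)

    def forward(self, x):
        return x + self.bconv(x)


class ResidualFractalBlock(nn.Module):
    """Fractal expansion over shortcut-based binary units (illustrative join rule):
        f_1(x)     = unit(x)
        f_{c+1}(x) = 0.5 * ( unit(x) + f_c(f_c(x)) )
    A shallow path and a twice-as-deep path are averaged at the join, giving
    the deep-supervision-like effect the abstract refers to."""

    def __init__(self, channels, columns):
        super().__init__()
        self.shallow = ShortcutBinaryConv(channels)
        self.deep = (
            nn.Sequential(
                ResidualFractalBlock(channels, columns - 1),
                ResidualFractalBlock(channels, columns - 1),
            )
            if columns > 1
            else None
        )

    def forward(self, x):
        if self.deep is None:
            return self.shallow(x)
        return 0.5 * (self.shallow(x) + self.deep(x))


if __name__ == "__main__":
    block = ResidualFractalBlock(channels=64, columns=3)
    y = block(torch.randn(2, 64, 32, 32))
    print(y.shape)  # torch.Size([2, 64, 32, 32])

The dense connection-based variant described in the abstract would instead concatenate the outputs of the binary units along the channel dimension, DenseNet-style, before the fractal join; the sketch above only illustrates the residual case.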

