



NeuralScale: Efficient Scaling of Neurons for Resource-Constrained Deep Neural Networks

Deciding the number of neurons to use when designing a deep neural network so as to maximize performance is not intuitive. In this work, we attempt to search for the neuron (filter) configuration of a fixed network architecture that maximizes accuracy. Using iterative pruning methods as a proxy, we parameterize the change in each layer's neuron (filter) count with respect to the change in total parameters, allowing us to efficiently scale an architecture across arbitrary sizes. We also introduce architecture descent, which iteratively refines the parameterized function used for model scaling. The combination of the two proposed methods is coined NeuralScale. To demonstrate the parameter efficiency of NeuralScale, we present empirical simulations on VGG11, MobileNetV2 and ResNet18 using CIFAR10, CIFAR100 and TinyImageNet as benchmark datasets. Our results show an increase in accuracy of 3.04%, 8.56% and 3.41% for VGG11, MobileNetV2 and ResNet18 on CIFAR10, CIFAR100 and TinyImageNet, respectively, under a parameter-constrained setting (the output neurons (filters) of the default configuration scaled by a factor of 0.25).
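The abstract describes two ingredients: a parameterized function, learned from an iterative-pruning trajectory, that maps a parameter budget to per-layer filter counts, and architecture descent, which repeatedly refines that function. The sketch below illustrates only the first ingredient, under assumptions that are not taken from the paper: a per-layer power law phi_l(tau) = alpha_l * tau^beta_l fitted by least squares in log-log space, and a bisection over the budget tau to hit a target parameter count. The function names (`fit_power_law`, `scaled_filters`, `solve_budget`) and the toy convolution parameter counter are hypothetical.

```python
# Minimal sketch (not the authors' code): learn how each layer's filter count
# scales with the total parameter budget along a pruning trajectory, then pick
# a configuration for a smaller budget. The power-law form, the log-log
# least-squares fit and the bisection over the budget are illustrative
# assumptions.
import numpy as np


def fit_power_law(total_params, filters_per_layer):
    """Fit alpha_l, beta_l so that filters_l ~= alpha_l * tau**beta_l.

    total_params:      shape (T,)   total parameter count at each pruning step
    filters_per_layer: shape (T, L) surviving filters per layer at each step
    """
    log_tau = np.log(total_params)
    X = np.stack([np.ones_like(log_tau), log_tau], axis=1)   # (T, 2) design matrix
    Y = np.log(filters_per_layer)                             # (T, L) targets
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)              # (2, L) coefficients
    return np.exp(coef[0]), coef[1]                           # alpha (L,), beta (L,)


def scaled_filters(alpha, beta, tau):
    """Predicted per-layer filter counts for a budget tau (integer, >= 1)."""
    return np.maximum(1, np.round(alpha * tau ** beta)).astype(int)


def solve_budget(alpha, beta, param_count_fn, target_params,
                 tau_lo=1e2, tau_hi=1e9, iters=60):
    """Bisect on tau until the realized network meets the target parameter count.

    param_count_fn maps per-layer filter counts to a total parameter count;
    it is architecture-specific, so it is supplied by the caller.
    """
    for _ in range(iters):
        tau = np.sqrt(tau_lo * tau_hi)                        # geometric midpoint
        if param_count_fn(scaled_filters(alpha, beta, tau)) > target_params:
            tau_hi = tau
        else:
            tau_lo = tau
    return scaled_filters(alpha, beta, tau_lo)


if __name__ == "__main__":
    # Toy pruning trajectory for a 3-layer stack of 3x3 convolutions (3 input
    # channels); the numbers are made up purely to exercise the functions.
    obs_filters = np.array([[64, 128, 256],
                            [60, 110, 200],
                            [55,  90, 150],
                            [45,  60,  90]])

    def conv_params(filters, c_in=3, k=3):
        chans = np.concatenate(([c_in], filters))
        return int(np.sum(chans[:-1] * chans[1:] * k * k))

    obs_params = np.array([conv_params(f) for f in obs_filters], dtype=float)
    alpha, beta = fit_power_law(obs_params, obs_filters)

    # Scale the default configuration down to ~25% of its parameters,
    # mirroring the 0.25 scaling factor quoted in the abstract.
    target = 0.25 * conv_params(obs_filters[0])
    print("scaled filter counts:", solve_budget(alpha, beta, conv_params, target))
```

Architecture descent, the second ingredient, would wrap this in an outer loop: prune the rescaled network again, refit alpha and beta on the new trajectory, and rescale, repeating until the configuration stabilizes.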
