
Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets


To obtain excellent deep neural architectures, a series of techniques were carefully designed in EfficientNets. The giant formula for simultaneously enlarging resolution, depth and width provides us with a Rubik's cube for neural networks, so that we can find networks with high efficiency and excellent performance by twisting the three dimensions. This paper aims to explore the twisting rules for obtaining deep neural networks with minimal model sizes and computational costs. Different from network enlarging, we observe that resolution and depth are more important than width for tiny networks. Therefore, the original method, i.e., the compound scaling in EfficientNet, is no longer suitable. To this end, we summarize a tiny formula for downsizing neural architectures through a series of smaller models derived from EfficientNet-B0 under a FLOPs constraint. Experimental results on the ImageNet benchmark illustrate that our TinyNet performs much better than the smaller versions of EfficientNets obtained with the inverted giant formula. For instance, our TinyNet-E achieves 59.9% Top-1 accuracy with only 24M FLOPs, which is about 1.9% higher than that of the previous best MobileNetV3 with a similar computational cost. Code will be available at https://github.com/huawei-noah/ghostnet/tree/master/tinynet_pytorch and https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/tinynet.
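To make the two scaling rules concrete, below is a minimal Python sketch. It assumes the standard approximation that a convolutional network's FLOPs scale as d · w² · r² in the depth, width and resolution multipliers. The compound coefficients α = 1.2, β = 1.1, γ = 1.15 in inverse_giant_formula are the published EfficientNet values, while the function names and the fitted exponents in tiny_formula are illustrative assumptions, not the constants derived in the paper.

```python
import math


def relative_flops(d: float, w: float, r: float) -> float:
    """FLOPs of a conv net scale roughly as d * w^2 * r^2, where d, w
    and r are the depth, width and resolution multipliers relative to
    the EfficientNet-B0 baseline."""
    return d * w ** 2 * r ** 2


def inverse_giant_formula(c: float):
    """Run EfficientNet's compound scaling backwards: apply the
    published coefficients alpha=1.2, beta=1.1, gamma=1.15 with a
    negative exponent phi. Since alpha * beta^2 * gamma^2 is roughly 2,
    FLOPs change by roughly 2^phi."""
    alpha, beta, gamma = 1.2, 1.1, 1.15
    phi = math.log2(c)  # negative when c < 1, i.e. when shrinking
    return alpha ** phi, beta ** phi, gamma ** phi


def tiny_formula(c: float):
    """TinyNet-style shrinking rule: pick depth and resolution from
    fitted functions of the FLOPs ratio c, then solve width from the
    constraint c = d * w^2 * r^2. The exponents 0.15 and 0.20 are
    illustrative placeholders, not the paper's fitted values."""
    d = c ** 0.15
    r = c ** 0.20
    w = math.sqrt(c / (d * r ** 2))  # width absorbs the remaining budget
    return d, w, r


for c in (0.5, 0.1, 24 / 390):  # 24M / 390M FLOPs ~ TinyNet-E vs B0
    dg, wg, rg = inverse_giant_formula(c)
    dt, wt, rt = tiny_formula(c)
    print(f"c={c:.3f}: inverted-giant d={dg:.2f} w={wg:.2f} r={rg:.2f} | "
          f"tiny d={dt:.2f} w={wt:.2f} r={rt:.2f} "
          f"(FLOPs check: {relative_flops(dt, wt, rt):.3f})")
```

Because width is solved last from the FLOPs constraint, it absorbs whatever budget the chosen depth and resolution leave over, which echoes the paper's observation that width matters least for tiny networks; the inverted giant formula, by contrast, shrinks all three dimensions at the fixed ratios that were tuned for enlarging.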

