
Model-Free Energy Distance for Pruning DNNs


We propose a novel method for compressing Deep Neural Networks (DNNs) with performance competitive with state-of-the-art methods. We introduce a new model-free information measure between the feature maps and the output of the network. Model-freeness guarantees that no parametric assumptions on the feature distribution are required. This model-free information is then used to prune collections of redundant layers in networks with skip-connections. Numerical experiments on the CIFAR-10/100, SVHN, Tiny ImageNet, and ImageNet data sets show the efficacy of the proposed approach in compressing deep models. For instance, in classifying CIFAR-10 images, our method reduces the number of parameters and FLOPs of a full DenseNet model with 0.77 million parameters by 64.50% and 60.31%, respectively, while dropping only 1% in test accuracy. Our code is available at https://github.com/suuyawu/PEDmodelcompression
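The abstract stays high-level, so as a concrete illustration, below is a minimal sketch of one standard model-free energy statistic: the empirical (squared) distance covariance of Székely et al., which quantifies dependence between two sample batches (e.g., a layer's feature maps and the network output) with no parametric assumptions on either distribution. This is a hedged stand-in, not the authors' exact energy-distance measure (see their repository for that); the function name, tensor shapes, and layer names are illustrative only.

```python
import torch

def distance_covariance(x, y):
    """Empirical (squared) distance covariance between sample batches.

    x: (n, p) feature-map samples, y: (n, q) output samples; p and q
    may differ. The statistic is zero iff X and Y are independent, and
    it requires no parametric assumptions on either distribution.
    """
    a = torch.cdist(x, x)  # pairwise Euclidean distances ||x_i - x_j||
    b = torch.cdist(y, y)  # pairwise Euclidean distances ||y_i - y_j||
    # Double-center each distance matrix: subtract row and column means,
    # add back the grand mean.
    A = a - a.mean(0, keepdim=True) - a.mean(1, keepdim=True) + a.mean()
    B = b - b.mean(0, keepdim=True) - b.mean(1, keepdim=True) + b.mean()
    return (A * B).mean()  # V-statistic estimate: (1/n^2) sum_ij A_ij B_ij

# Illustrative use: rank candidate layers by their dependence with the
# output; low scores flag feature maps that carry little information
# about the output, i.e. pruning candidates. The tensors here are
# hypothetical placeholders for activations collected in a forward pass.
n = 128
feature_maps = {f"block{i}": torch.randn(n, 256) for i in range(4)}
logits = torch.randn(n, 10)
scores = {name: distance_covariance(f, logits).item()
          for name, f in feature_maps.items()}
prune_order = sorted(scores, key=scores.get)  # most redundant first
```

One batch of held-out activations suffices for the estimate, which is what makes this family of measures attractive for scoring layers: no density model of the features is ever fit.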

