

Mirror Descent View for Neural Network Quantization

Quantizing large Neural Networks (NN) while maintaining performance is highly desirable for resource-limited devices due to the reduced memory and time complexity. Quantization is usually formulated as a constrained optimization problem and optimized via a modified version of gradient descent. In this work, by interpreting the continuous (unconstrained) parameters as the dual of the quantized ones, we introduce a Mirror Descent (MD) framework for NN quantization. Specifically, we provide conditions on the projections (i.e., mappings from continuous to quantized parameters) that enable us to derive valid mirror maps, and in turn the respective MD updates. Furthermore, we present a numerically stable implementation of MD that requires storing an additional set of auxiliary (unconstrained) variables, and show that it is strikingly analogous to the Straight Through Estimator (STE) based method, which is typically viewed as a "trick" to avoid the vanishing-gradients issue. Our experiments on the CIFAR-10/100, TinyImageNet, and ImageNet classification datasets with the VGG-16, ResNet-18, and MobileNetV2 architectures show that our MD variants obtain quantized networks with state-of-the-art performance.
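To make the abstract's key idea concrete, below is a minimal sketch (not the authors' code) of the numerically stable MD update it describes for binary {-1, +1} quantization: auxiliary (dual) variables are stored unconstrained, the loss gradient is taken at the projected (soft-quantized) weights, and that gradient is applied directly to the auxiliary variables, which is the step analogous to STE. The tanh projection, the sharpness parameter `beta`, the learning rate, and the helper names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def md_step(x_aux, loss_grad, lr=0.1, beta=5.0):
    """One numerically stable Mirror Descent step (illustrative sketch).

    Forward:  w = tanh(beta * x_aux)  -- projection onto (-1, 1); it
              hardens toward {-1, +1} as beta grows (beta would be
              annealed in practice; a fixed value is assumed here).
    Backward: the loss gradient is evaluated at the projected weights w
              and applied directly to the auxiliary variables x_aux, so
              the inverse mirror map is never computed explicitly.
    """
    w = np.tanh(beta * x_aux)      # primal (soft-quantized) weights
    g = loss_grad(w)               # dL/dw at the projected point
    x_aux = x_aux - lr * g         # dual update on auxiliary variables
    w_hard = np.sign(np.tanh(beta * x_aux))  # hard weights for inference
    return x_aux, w_hard

# Toy usage: binarize weights under a quadratic loss 0.5 * ||w - target||^2.
target = np.array([0.9, -0.4, 0.2])
x = np.zeros(3)                    # auxiliary variables (unconstrained)
for _ in range(200):
    x, w_hard = md_step(x, lambda w: w - target)
print(w_hard)                      # -> [ 1. -1.  1.]
```

Under this reading, an STE-based method differs mainly in using a hard projection in the forward pass while passing the gradient straight through to the stored real-valued weights; the MD view replaces that heuristic with a valid mirror map derived from the projection.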
