
TinyGAN: Distilling BigGAN for Conditional Image Generation

Generative Adversarial Networks (GANs) have become a powerful approach for generative image modeling. However, GANs are notorious for their training instability, especially on large-scale, complex datasets. While the recent work of BigGAN has significantly improved the quality of image generation on ImageNet, it requires a huge model, making it hard to deploy on resource-constrained devices. To reduce the model size, we propose a black-box knowledge distillation framework for compressing GANs, which highlights a stable and efficient training process. Given BigGAN as the teacher network, we manage to train a much smaller student network to mimic its functionality, achieving competitive performance on Inception and FID scores with a generator having $16\times$ fewer parameters.
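The abstract only outlines the method, but the black-box setting implies a simple training recipe: since the teacher is queried only through its outputs, one can sample (noise, class) pairs, record the teacher's images, and train a small student generator to reproduce them. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the architecture, the 32x32 output resolution, and the plain L1 pixel loss are placeholder assumptions, not the paper's actual network or full objective.

```python
# Minimal sketch of black-box GAN distillation (hypothetical; not the paper's code).
# The teacher (e.g., BigGAN) is treated as a black box: we only need its output
# images for sampled (z, y) pairs, which can be collected offline into a dataset.
import torch
import torch.nn as nn

class StudentGenerator(nn.Module):
    """A deliberately small conditional generator (placeholder architecture,
    producing 32x32 images; a real student would match the teacher's resolution)."""
    def __init__(self, z_dim=128, n_classes=1000, img_ch=3):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)  # class-conditional input
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 4 * 4 * 256),
            nn.Unflatten(1, (256, 4, 4)),
            nn.Upsample(scale_factor=2), nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, img_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z, y):
        # Condition on the class by concatenating its embedding with the noise.
        return self.net(torch.cat([z, self.embed(y)], dim=1))

def distill_step(student, optimizer, z, y, teacher_img):
    """One distillation step: make the student mimic the teacher's output
    for the same (z, y) input, here with a simple pixel-level L1 loss."""
    optimizer.zero_grad()
    fake = student(z, y)
    loss = nn.functional.l1_loss(fake, teacher_img)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage, assuming pre-collected teacher outputs at 32x32:
#   student = StudentGenerator()
#   opt = torch.optim.Adam(student.parameters(), lr=2e-4)
#   loss = distill_step(student, opt, z, y, teacher_img)
```

Because the student never backpropagates through the teacher, this loop stays stable and cheap regardless of how large the teacher is, which is the practical appeal of the black-box formulation.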
