
GANSpace: Discovering Interpretable GAN Controls



This paper describes a simple technique to analyze Generative Adversarial Networks (GANs) and create interpretable controls for image synthesis, such as change of viewpoint, aging, lighting, and time of day. We identify important latent directions based on Principal Components Analysis (PCA) applied either in latent space or feature space. Then, we show that a large number of interpretable controls can be defined by layer-wise perturbation along the principal directions. Moreover, we show that BigGAN can be controlled with layer-wise inputs in a StyleGAN-like manner. We show results on different GANs trained on various datasets, and demonstrate good qualitative matches to edit directions found through earlier supervised approaches.
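To make the mechanics behind the abstract concrete, below is a minimal sketch of the approach it describes: sample latent vectors, fit PCA to the resulting intermediate latent codes, and edit an image by perturbing its code along a principal direction at a chosen range of layers. The generator interface used here (G.mapping, G.synthesis), the layer count, and the function names are hypothetical placeholders for a StyleGAN-like model, not the authors' released code.

```python
import torch
from sklearn.decomposition import PCA

def find_principal_directions(G, n_samples=10000, n_components=20, z_dim=512, device="cpu"):
    """Estimate candidate edit directions via PCA on sampled intermediate latent codes."""
    with torch.no_grad():
        z = torch.randn(n_samples, z_dim, device=device)
        w = G.mapping(z)                     # hypothetical: maps z to intermediate latent w
    pca = PCA(n_components=n_components)
    pca.fit(w.cpu().numpy())
    return pca.mean_, pca.components_        # mean latent and principal directions

def edit(G, w, direction, sigma, num_layers=18, layers=None):
    """Shift the latent code along one principal direction at selected layers only."""
    layers = range(num_layers) if layers is None else layers
    delta = torch.as_tensor(direction, dtype=w.dtype, device=w.device) * sigma
    # Broadcast w to one code per synthesis layer, then perturb only the chosen layers;
    # restricting the edit to a subset of layers localizes which attributes change.
    w_layers = w.unsqueeze(1).repeat(1, num_layers, 1)
    for l in layers:
        w_layers[:, l] = w_layers[:, l] + delta
    return G.synthesis(w_layers)             # hypothetical: per-layer codes -> images
```

The choice of layers controls the scope of the edit, which is the layer-wise control the abstract refers to; the magnitude sigma controls its strength.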

