Metropolis-Hastings view on variational inference and adversarial training
A significant part of MCMC methods can be viewed as the Metropolis-Hastings (MH) algorithm with different proposal distributions. From this point of view, constructing a sampler reduces to the question: how should one choose a proposal for the MH algorithm? To address this question, we propose to learn an independent sampler that maximizes the acceptance rate of the MH algorithm, which, as we demonstrate, is closely related to conventional variational inference. For Bayesian inference, the proposed method compares favorably against alternative ways of sampling from the posterior distribution. Under the same approach, we step beyond the scope of classical MCMC methods and derive the Generative Adversarial Networks (GANs) framework from scratch, treating the generator as the proposal and the discriminator as the acceptance test. On real-world datasets, we improve the Fréchet Inception Distance and Inception Score by using different GANs as the proposal distribution for the MH algorithm. In particular, we demonstrate improvements of the recently proposed BigGAN model on ImageNet.
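The generator-as-proposal, discriminator-as-acceptance-test idea can be illustrated with a minimal 1-D sketch. This is a toy illustration under stated assumptions, not the paper's implementation: the target is p = N(1, 1), the independent proposal ("generator") is q = N(0, 1), and the discriminator is the ideal closed-form D(x) = p(x)/(p(x) + q(x)) rather than a trained network. The key identity is that D recovers the density ratio p/q = D/(1 − D), which is exactly what the MH acceptance test for an independent proposal needs.

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Toy setup (assumptions for illustration): target p = N(1, 1),
# independent proposal ("generator") q = N(0, 1).
def discriminator(x):
    # Ideal discriminator D(x) = p(x) / (p(x) + q(x)); in the GAN setting
    # this would be a trained network, not a closed form.
    p, q = normal_pdf(x, 1.0, 1.0), normal_pdf(x, 0.0, 1.0)
    return p / (p + q)

def density_ratio(x):
    # D/(1 - D) equals p(x)/q(x) for the ideal discriminator.
    d = discriminator(x)
    return d / (1.0 - d)

def mh_with_discriminator(n_steps=50_000):
    x = rng.normal(0.0, 1.0)           # initial state drawn from the proposal
    samples = []
    for _ in range(n_steps):
        x_prop = rng.normal(0.0, 1.0)  # independent proposal from q
        # MH acceptance for an independent proposal:
        # alpha = min(1, [p(x')/q(x')] / [p(x)/q(x)])
        alpha = min(1.0, density_ratio(x_prop) / density_ratio(x))
        if rng.uniform() < alpha:
            x = x_prop
        samples.append(x)
    return np.array(samples)

samples = mh_with_discriminator()
print(samples.mean())  # close to the target mean 1
```

Although the proposal is centered at 0, the accepted chain is distributed according to the target p; the same acceptance rule applied with a trained GAN generator and discriminator is what filters generator samples toward the data distribution.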