
Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures

This paper shows that deep learning (DL) representations of data produced by generative adversarial nets (GANs) are random vectors which fall within the class of so-called \textit{concentrated} random vectors. Further exploiting the fact that Gram matrices, of the type $G = X^T X$ with $X=[x_1,\ldots,x_n]\in \mathbb{R}^{p\times n}$ and $x_i$ independent concentrated random vectors from a mixture model, behave asymptotically (as $n,p\to \infty$) as if the $x_i$ were drawn from a Gaussian mixture, suggests that DL representations of GAN-data can be fully described by their first two statistical moments for a wide range of standard classifiers. Our theoretical findings are validated by generating images with the BigGAN model and across different popular deep representation networks.
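The abstract's claim lends itself to a quick numerical illustration. The following is a minimal sketch, not the paper's code: it builds "concentrated" random vectors by pushing Gaussian noise through a fixed Lipschitz map (a crude stand-in for a GAN generator followed by a deep representation network, which the paper instantiates with BigGAN images and popular pretrained networks), forms a Gaussian-mixture surrogate with the same per-class means and covariances, and compares the eigenvalue spectra of the two Gram matrices $G = X^T X$. The two-layer tanh map, the dimensions, and the sorted-eigenvalue comparison are all illustrative assumptions, not choices made in the paper.

# Minimal numerical sketch (illustrative only, not the paper's code):
# Gram matrices built from "concentrated" random vectors -- here a fixed
# Lipschitz map of Gaussian noise, standing in for DL representations of
# GAN images -- have nearly the same spectrum as Gram matrices of
# Gaussian vectors with matching per-class means and covariances.
import numpy as np

rng = np.random.default_rng(0)
p, n = 512, 2048            # feature dimension and total sample count (both "large")
n_k = n // 2                # two classes, n/2 samples each

def make_concentrated_map():
    # A random two-layer Lipschitz map (O(1) Lipschitz constant) acting on
    # N(0, I_p) noise: a crude stand-in for "generator + deep representation".
    W1 = rng.standard_normal((p, p)) / np.sqrt(p)
    W2 = rng.standard_normal((p, p)) / np.sqrt(p)
    b = rng.standard_normal((p, 1))
    return lambda z: W2 @ np.tanh(W1 @ z + b)

# Two "classes" of concentrated vectors (a two-component mixture model).
class_maps = [make_concentrated_map(), make_concentrated_map()]
X = np.concatenate([f(rng.standard_normal((p, n_k))) for f in class_maps], axis=1)

# Gaussian-mixture surrogate: match the first two moments of each class.
Y_blocks = []
for k in range(2):
    Xk = X[:, k * n_k:(k + 1) * n_k]
    mu = Xk.mean(axis=1, keepdims=True)
    L = np.linalg.cholesky(np.cov(Xk) + 1e-8 * np.eye(p))
    Y_blocks.append(L @ rng.standard_normal((p, n_k)) + mu)
Y = np.concatenate(Y_blocks, axis=1)

# Compare the eigenvalue spectra of the normalized Gram matrices G = X^T X / p.
eig_x = np.sort(np.linalg.eigvalsh(X.T @ X / p))
eig_y = np.sort(np.linalg.eigvalsh(Y.T @ Y / p))
print("mean |eigenvalue gap|:", np.abs(eig_x - eig_y).mean())
print("top eigenvalues (data / Gaussian):", eig_x[-3:], eig_y[-3:])

In this large-$n,p$ regime the two spectra come out close, which is the practical content of the Gaussian-mixture equivalence: a classifier whose behavior depends on the data only through such Gram or kernel matrices cannot distinguish the concentrated representations from a moment-matched Gaussian mixture.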

