Over-parametrized neural networks as under-determined linear systems
We draw connections between simple neural networks and under-determined linear systems to comprehensively explore several interesting theoretical questions in the study of neural networks. First, we emphatically show that it is unsurprising that such networks can achieve zero training loss. More specifically, we provide lower bounds on the width of a single-hidden-layer neural network such that training only the last linear layer suffices to reach zero training loss. Our lower bounds grow more slowly with data set size than those in existing work that trains the hidden-layer weights. Second, we show that kernels typically associated with the ReLU activation function have fundamental flaws -- there are simple data sets on which it is impossible for widely studied bias-free models to achieve zero training loss, irrespective of how the parameters are chosen or trained. Lastly, our analysis of gradient descent clearly illustrates how spectral properties of certain matrices impact both the early-iteration and long-term training behavior. We propose new activation functions that avoid the pitfalls of ReLU in that they admit zero-training-loss solutions for any set of distinct data points and experimentally exhibit favorable spectral properties.
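To make the central connection concrete, the sketch below illustrates (it is not the paper's construction) how training only the last linear layer of a wide single-hidden-layer network reduces to solving an under-determined linear system: once the hidden-feature matrix has full row rank, zero training loss is attainable by a least-squares solve. The width m = 2n, the Gaussian initialization, and the random data are illustrative assumptions, not the bounds derived in the paper; note also that the abstract cautions that bias-free ReLU features can fail on particular data sets, so random data is used here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# n data points in d dimensions, hidden width m >= n (illustrative choices).
n, d, m = 50, 10, 100
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Fixed (untrained) hidden-layer weights; only the last linear layer is fit.
W = rng.standard_normal((d, m))
H = np.maximum(X @ W, 0.0)  # bias-free ReLU features, shape (n, m)

# Training the last layer = solving the under-determined system H a = y.
# lstsq returns the least-norm solution; loss is ~0 when rank(H) == n.
a, *_ = np.linalg.lstsq(H, y, rcond=None)

print("training loss:", np.mean((H @ a - y) ** 2))
```

Running this typically prints a loss at numerical precision (on the order of 1e-30), since with high probability the n x m feature matrix has rank n when m exceeds n.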