Reducing the Computational Cost of Deep Generative Models with Binary Neural Networks
Deep generative models provide a powerful set of tools for understanding real-world data. But as these models improve, they grow in size and complexity, and so does their computational cost in memory and execution time. Using binary weights in neural networks is one method that has shown promise in reducing this cost. However, whether binary neural networks can be used in generative models is an open problem. In this work we show, for the first time, that generative models built on binary neural networks can be trained successfully, which reduces the computational cost of the models massively. We develop a new class of binary weight normalization and provide insights into the architecture design of these binarized generative models. We demonstrate that two state-of-the-art deep generative models, the ResNet VAE and Flow++ models, can be binarized effectively using these techniques. We train binary models that achieve loss values close to those of the regular models while being 90%-94% smaller in size and allowing significant speed-ups in execution time.
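The abstract does not spell out the binarization scheme, so as a rough illustration, here is a minimal PyTorch sketch of the standard binary-weight approach (sign binarization trained with a straight-through estimator). The `BinaryLinear` module and its initialization are assumptions for illustration, not the paper's binary weight normalization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinaryLinear(nn.Module):
    """Linear layer whose weights are binarized to {-1, +1} in the forward pass.

    A generic sketch of weight binarization with a straight-through estimator;
    the paper's specific binary weight normalization is not detailed here.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Real-valued "latent" weights are kept for gradient updates;
        # only their signs are used when computing the layer output.
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Binarize: map non-negative weights to +1, negative weights to -1.
        w_bin = torch.where(self.weight >= 0,
                            torch.ones_like(self.weight),
                            -torch.ones_like(self.weight))
        # Straight-through estimator: forward with binary weights, but route
        # gradients to the real-valued weights as if no binarization occurred.
        w = self.weight + (w_bin - self.weight).detach()
        return F.linear(x, w, self.bias)
```

Each binarized weight needs one bit instead of 32, which is consistent with the roughly 90%-94% size reduction the abstract reports (some layers are typically kept at full precision, so the saving is slightly less than 1/32).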