

NAHAS: Neural Architecture and Hardware Accelerator Search

Neural architectures and hardware accelerators have been two driving forces for the rapid progress in deep learning. Although previous works have optimized either neural architectures given fixed hardware, or hardware given fixed neural architectures, none has considered optimizing them jointly. In this paper, we study the importance of co-designing neural architectures and hardware accelerators. To this end, we propose NAHAS, an automated hardware design paradigm that jointly searches for the best configuration of both the neural architecture and the accelerator. In NAHAS, the accelerator hardware design is conditioned on the dynamically explored neural networks for the targeted application, rather than on a fixed architecture, thus providing better performance opportunities. Our experiments with an industry-standard edge accelerator show that NAHAS consistently outperforms previous platform-aware neural architecture search and the state-of-the-art EfficientNet on all latency targets by 0.5%-1% ImageNet top-1 accuracy, while reducing latency by about 20%. Compared to optimizing the two subspaces independently, joint optimization reduces the number of search samples by 2x and reduces latency-constraint violations from 3 to 1 per 4 searches.
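To make the joint-search idea concrete, below is a minimal sketch, not the paper's actual implementation: architecture and accelerator knobs are sampled together and scored by a single latency-constrained reward, so the hardware configuration is conditioned on the currently explored network rather than held fixed. The search spaces (ARCH_SPACE, ACCEL_SPACE), the toy cost model in evaluate(), and the soft latency penalty are all illustrative assumptions, not the NAHAS search space or reward.

```python
import random

# Hypothetical joint search space: neural-architecture knobs and
# accelerator knobs are sampled together in one loop.
ARCH_SPACE = {
    "depth": [2, 3, 4, 5],
    "width_multiplier": [0.5, 0.75, 1.0, 1.25],
    "kernel_size": [3, 5, 7],
}
ACCEL_SPACE = {
    "pe_array": [(8, 8), (16, 16), (32, 32)],   # processing-element grid
    "sram_kb": [256, 512, 1024],                # on-chip buffer size
    "dataflow": ["weight_stationary", "output_stationary"],
}

def sample(space):
    """Draw one random configuration from a search space."""
    return {k: random.choice(v) for k, v in space.items()}

def evaluate(arch, accel):
    """Placeholder cost model returning (accuracy, latency_ms).

    A real system would train or estimate the network's accuracy and run
    an analytical or cycle-accurate simulator for the accelerator; this
    toy proxy only keeps the loop runnable.
    """
    flops = arch["depth"] * arch["width_multiplier"] * arch["kernel_size"] ** 2
    compute = accel["pe_array"][0] * accel["pe_array"][1]
    accuracy = 0.70 + 0.02 * arch["depth"] * arch["width_multiplier"]
    latency = 50.0 * flops / compute + 1000.0 / accel["sram_kb"]
    return accuracy, latency

def joint_search(num_samples=500, latency_target_ms=10.0):
    """Random joint search: reward accuracy, penalize latency violations."""
    best = None
    for _ in range(num_samples):
        arch, accel = sample(ARCH_SPACE), sample(ACCEL_SPACE)
        acc, lat = evaluate(arch, accel)
        # Soft-constrained objective: accuracy is scaled down when the
        # sampled pair exceeds the latency budget.
        reward = acc * min(1.0, latency_target_ms / lat)
        if best is None or reward > best[0]:
            best = (reward, arch, accel, acc, lat)
    return best

if __name__ == "__main__":
    reward, arch, accel, acc, lat = joint_search()
    print(f"best reward={reward:.3f} acc={acc:.3f} latency={lat:.2f}ms")
    print("arch:", arch)
    print("accel:", accel)
```

In a real co-design setup the random sampler would be replaced by a learned controller or evolutionary search, and the placeholder cost model by accuracy prediction plus a hardware performance simulator; the key point illustrated here is that both subspaces contribute to one reward.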
