Generative Adversarial Neural Architecture Search with Importance Sampling
Despite the empirical success of neural architecture search (NAS) algorithms in deep learning applications, the optimality, reproducibility, and cost of NAS schemes remain hard to assess. Differences in the adopted search space or experimental procedure further hinder fair comparisons between specific search strategies. In this paper, we revisit the search strategies in NAS and propose Generative Adversarial NAS (GA-NAS). Motivated by the fact that the search space grows exponentially as a function of the architecture size, GA-NAS is theoretically inspired by importance sampling for rare event simulation, and iteratively refits a generator to previously discovered top architectures, thus increasingly focusing on important parts of the search space. GA-NAS adopts an efficient adversarial learning approach, in which the generator is trained not on a large number of observed architecture performances but on the relative predictions made by a discriminator, thus significantly reducing the number of evaluations required. Extensive experiments show that GA-NAS beats the best published results of a range of state-of-the-art NAS algorithms on public benchmarks including NAS-Bench-101, NAS-Bench-201, and NAS-Bench-301. We further show that GA-NAS can handle ad-hoc search objectives and search spaces. Specifically, on the EfficientNet macro search space, our algorithm finds a new architecture with higher ImageNet accuracy and fewer parameters than EfficientNet-B0.
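To make the iterative scheme concrete, below is a minimal, self-contained sketch of the generator-discriminator loop in the spirit of GA-NAS. It is an illustration under strong assumptions, not the paper's implementation: the actual GA-NAS operates on architecture graphs with learned neural generator and discriminator, whereas here architectures are hypothetical fixed-length binary encodings, the generator is an independent-Bernoulli distribution refit by importance-weighted maximum likelihood, the discriminator is a plain logistic regression, and `evaluate_architecture` is a toy stand-in for a costly accuracy measurement.

```python
# Toy sketch of a GA-NAS-style loop. All names and encodings are assumptions
# for illustration; the real method uses graph-based neural networks.
import numpy as np

rng = np.random.default_rng(0)
D = 16          # hypothetical architecture encoding length
N_SAMPLES = 64  # architectures sampled from the generator per iteration
N_EVAL = 8      # true evaluations per iteration (deliberately small)
TOP_K = 16      # size of the "top architectures" set the generator refits to

def evaluate_architecture(x):
    """Stand-in for a costly accuracy measurement (toy objective)."""
    return float(x.sum())  # placeholder: rewards dense encodings

def train_discriminator(real, fake, steps=200, lr=0.1):
    """Logistic regression separating top architectures (real) from samples (fake)."""
    X = np.vstack([real, fake])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    w, b = np.zeros(D), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                      # gradient of the cross-entropy loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return lambda Z: 1.0 / (1.0 + np.exp(-(Z @ w + b)))

# Generator: independent Bernoulli probabilities over encoding bits.
probs = np.full(D, 0.5)
top_set, top_scores = [], []

for it in range(10):
    # 1. Sample candidate architectures from the current generator.
    samples = (rng.random((N_SAMPLES, D)) < probs).astype(float)

    # 2. Truly evaluate only a few of them; keep the top-k found so far.
    for x in samples[:N_EVAL]:
        top_set.append(x)
        top_scores.append(evaluate_architecture(x))
    order = np.argsort(top_scores)[-TOP_K:]
    top_set = [top_set[i] for i in order]
    top_scores = [top_scores[i] for i in order]

    # 3. The discriminator learns what distinguishes top architectures
    #    from the generator's current samples.
    disc = train_discriminator(np.array(top_set), samples)

    # 4. Refit the generator toward samples the discriminator rates highly
    #    (importance-weighted MLE of the Bernoulli parameters), rather than
    #    training it directly on many true evaluations.
    weights = disc(samples)
    weights /= weights.sum()
    probs = 0.5 * probs + 0.5 * (weights @ samples)

print("best score found:", max(top_scores))
```

The point the sketch preserves is that the generator update in step 4 consumes only discriminator scores, so the number of true evaluations per iteration (N_EVAL) stays small relative to the number of sampled candidates, while the generator's distribution concentrates on the region of the search space occupied by the top architectures.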