AutoPrivacy: Automated Layer-wise Parameter Selection for Secure Neural Network Inference
Hybrid Privacy-Preserving Neural Networks (HPPNNs), which implement linear layers with Homomorphic Encryption (HE) and nonlinear layers with Garbled Circuits (GC), are among the most promising secure solutions for emerging Machine Learning as a Service (MLaaS). Unfortunately, an HPPNN suffers from long inference latency, e.g., $\sim100$ seconds per image, which makes MLaaS unsatisfactory. Because the HE-based linear layers of an HPPNN account for $93\%$ of inference latency, it is critical to select a set of HE parameters that minimizes the computational overhead of the linear layers. Prior HPPNNs over-pessimistically select huge HE parameters to maintain large noise budgets, since they use the same set of HE parameters for an entire network and ignore the error-tolerance capability of the network. In this paper, for fast and accurate secure neural network inference, we propose AutoPrivacy, an automated layer-wise parameter selector that leverages deep reinforcement learning to automatically determine a set of HE parameters for each linear layer in an HPPNN. The learning-based HE parameter selection policy outperforms conventional rule-based HE parameter selection policies. Compared to prior HPPNNs, AutoPrivacy-optimized HPPNNs reduce inference latency by $53\%\sim70\%$ with negligible loss of accuracy.
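To build intuition for why layer-wise HE parameter selection pays off, the following is a minimal sketch with a toy, assumed cost and noise model (the degree candidates, `noise_budget`, `latency`, and per-layer noise demands below are all hypothetical, and the greedy rule here is a baseline for illustration, not the paper's learned DRL policy). It contrasts a uniform policy, which sizes one lattice dimension for the worst-case layer, against a per-layer policy that gives each layer the cheapest feasible parameters.

```python
import math

# Hypothetical candidate polynomial (lattice) degrees for the HE scheme.
CANDIDATE_DEGREES = [1024, 2048, 4096, 8192, 16384]

def noise_budget(n):
    # Assumed toy model: noise budget (bits) grows with log of the degree.
    return 20 * math.log2(n) - 150

def latency(n):
    # Assumed toy model: per-layer HE cost scales roughly as n * log n.
    return n * math.log2(n) * 1e-4

def smallest_feasible_degree(noise_demand):
    """Pick the cheapest degree whose noise budget covers this layer's demand."""
    for n in CANDIDATE_DEGREES:
        if noise_budget(n) >= noise_demand:
            return n
    raise ValueError("no candidate degree has enough noise budget")

# Hypothetical per-layer noise demands (bits) for a small 4-layer network.
layer_demands = [40, 55, 90, 120]

# Uniform policy: one degree, sized for the most noise-hungry layer.
uniform_n = smallest_feasible_degree(max(layer_demands))
uniform_latency = latency(uniform_n) * len(layer_demands)

# Layer-wise policy: each layer gets its own minimal feasible degree.
layerwise_latency = sum(latency(smallest_feasible_degree(d))
                        for d in layer_demands)

print(f"uniform degree {uniform_n}: total latency {uniform_latency:.2f}")
print(f"layer-wise:            total latency {layerwise_latency:.2f}")
```

Under this toy model the layer-wise policy is several times faster, because only one layer actually needs the largest (and slowest) parameter set; AutoPrivacy's contribution is learning this per-layer assignment with deep reinforcement learning rather than a fixed rule.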