Multi-pooled Inception features for no-reference image quality assessment
Image quality assessment (IQA) is an important element of a broad spectrum of applications, ranging from automatic video streaming to display technology. Furthermore, the measurement of image quality requires a balanced investigation of image content and features. Our proposed approach extracts visual features by attaching global average pooling (GAP) layers to multiple Inception modules of a convolutional neural network (CNN) pretrained on the ImageNet database. In contrast to previous methods, we do not take patches from the input image. Instead, the input image is treated as a whole and is run through a pretrained CNN body to extract resolution-independent, multi-level deep features. As a consequence, our method can be easily generalized to any input image size and any pretrained CNN. Accordingly, we present a detailed parameter study with respect to the CNN base architectures and the effectiveness of different deep features. We demonstrate that our best proposal - called MultiGAP-NRIQA - is able to provide state-of-the-art results on three benchmark IQA databases. Furthermore, these results were also confirmed in a cross-database test using the LIVE In the Wild Image Quality Challenge database.
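The core idea of pooling multi-level features can be illustrated with a small sketch. The snippet below is not the authors' code; it only shows how global average pooling turns activation maps of different spatial sizes into fixed-length vectors that can be concatenated into one resolution-independent descriptor. The channel counts (256, 768, 2048) are illustrative values matching typical Inception-v3 stages, and the activations are random stand-ins for real CNN outputs.

```python
import numpy as np

def global_average_pool(fmap):
    """Average one activation map (channels, H, W) over its spatial
    dimensions, yielding a (channels,) vector independent of H and W."""
    return fmap.mean(axis=(1, 2))

# Hypothetical activations from three Inception modules at different
# depths; spatial sizes differ, but GAP removes that dependence.
level_maps = [
    np.random.rand(256, 35, 35),    # early module
    np.random.rand(768, 17, 17),    # mid-level module
    np.random.rand(2048, 8, 8),     # deep module
]

# Concatenating the pooled vectors gives the multi-level feature
# descriptor that a quality regressor would be trained on.
feature_vector = np.concatenate([global_average_pool(f) for f in level_maps])
print(feature_vector.shape)  # (3072,) = 256 + 768 + 2048
```

In a real pipeline, the activation maps would come from forward hooks attached to the Inception modules of a pretrained network, so the same code works for any input image resolution.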