Visual Quality and Security Assessment of Perceptually Encrypted Images based on Multi-output Deep Neural Network (VSMML)
In this work, we propose a blind CNN-based visual security metric for perceptually encrypted images, called VSMML. Given an encrypted image, our metric simultaneously predicts two scores, a visual security (VS) score and a visual quality (VQ) score, as illustrated in the figure below.
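To make the multi-output design concrete, below is a minimal sketch of a shared backbone feeding two regression heads, assuming a Keras-style implementation; the backbone depth, the head names `vq_score`/`vs_score`, the input size, and the equal loss weighting are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal multi-output sketch: one shared CNN backbone, two regression heads.
# Backbone depth, head names, and input size are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_vsmml_sketch(input_shape=(224, 224, 3)):
    inputs = layers.Input(shape=input_shape)
    x = inputs
    # Shared convolutional feature extractor (hypothetical depth/widths).
    for filters in (32, 64, 128):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(128, activation="relu")(x)
    # Two heads predicted simultaneously: visual quality and visual security.
    vq = layers.Dense(1, name="vq_score")(x)
    vs = layers.Dense(1, name="vs_score")(x)
    return Model(inputs, [vq, vs])

model = build_vsmml_sketch()
# One regression loss per output head; equal weighting is an assumption.
model.compile(optimizer="adam",
              loss={"vq_score": "mse", "vs_score": "mse"})
```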
All experiments were carried out on Google Colab (accessed from macOS); the detailed results are reported in the paper.
Two publicly available perceptually encrypted image databases are used in our experiments:
- IVC-SelectEncrypt
- Perceptually Encrypted Image Database (PEID)
The table below reports the performance comparison on the IVC-SelectEncrypt (IVC) and PEID databases, in terms of SROCC, KROCC, and PLCC computed against the subjective visual quality (VQ) and visual security (VS) scores.
| Metric | SROCC (IVC) | KROCC (IVC) | PLCC (IVC) | SROCC (PEID-VQ) | KROCC (PEID-VQ) | PLCC (PEID-VQ) | SROCC (PEID-VS) | KROCC (PEID-VS) | PLCC (PEID-VS) |
|---|---|---|---|---|---|---|---|---|---|
| MSE | 0.916 | 0.775 | 0.683 | 0.811 | 0.748 | 0.890 | 0.800 | 0.603 | 0.810 |
| PSNR | 0.916 | 0.775 | 0.910 | 0.813 | 0.646 | 0.869 | 0.797 | 0.613 | 0.835 |
| SSIM | 0.851 | 0.689 | 0.718 | 0.825 | 0.670 | 0.843 | 0.850 | 0.677 | 0.829 |
| FSIM | 0.975 | 0.876 | 0.896 | 0.890 | 0.752 | 0.911 | 0.858 | 0.685 | 0.880 |
| GMSD | 0.968 | 0.849 | 0.955 | 0.801 | 0.646 | 0.898 | 0.754 | 0.578 | 0.858 |
| MAD | 0.963 | 0.836 | 0.952 | 0.890 | 0.748 | 0.905 | 0.885 | 0.733 | 0.898 |
| VIF | 0.959 | 0.832 | 0.955 | 0.924 | 0.797 | 0.968 | 0.926 | 0.787 | 0.945 |
| NIQE | 0.731 | 0.547 | 0.496 | 0.459 | 0.335 | 0.327 | 0.524 | 0.383 | 0.528 |
| BRISQUE | 0.663 | 0.485 | 0.685 | 0.352 | 0.251 | 0.339 | 0.436 | 0.305 | 0.459 |
| ESS* | 0.957 | 0.839 | 0.948 | 0.816 | 0.671 | 0.922 | 0.771 | 0.599 | 0.891 |
| LSS* | 0.953 | 0.823 | 0.943 | 0.798 | 0.628 | 0.767 | 0.770 | 0.591 | 0.751 |
| LEG* | 0.887 | 0.723 | 0.898 | 0.845 | 0.681 | 0.900 | 0.848 | 0.666 | 0.882 |
| LFBVS* | 0.895 | 0.726 | 0.872 | 0.634 | 0.486 | 0.751 | 0.630 | 0.466 | 0.730 |
| NICE* | 0.908 | 0.759 | 0.631 | 0.824 | 0.486 | 0.651 | 0.593 | 0.437 | 0.617 |
| LE* | 0.093 | 0.072 | 0.263 | 0.092 | 0.079 | 0.010 | 0.155 | 0.113 | 0.181 |
| NSD* | 0.715 | 0.529 | 0.592 | 0.278 | 0.196 | 0.385 | 0.310 | 0.214 | 0.371 |
| VSI-Canny* | 0.949 | 0.819 | 0.950 | 0.830 | 0.708 | 0.941 | 0.805 | 0.635 | 0.882 |
| QETE* | 0.894 | 0.692 | 0.726 | 0.825 | 0.691 | 0.853 | 0.813 | 0.676 | 0.818 |
| IIBVSI* | 0.968 | 0.848 | 0.966 | 0.642 | 0.451 | 0.753 | 0.878 | 0.719 | 0.893 |
| TL-VGG16* | 0.943 | 0.798 | 0.969 | 0.892 | 0.743 | 0.935 | 0.935 | 0.788 | 0.933 |
| VSMML (Ours)* | 0.9828 | 0.9018 | 0.9794 | 0.9347 | 0.8065 | 0.9150 | 0.9617 | 0.8445 | 0.9627 |
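For context, the three criteria in the table measure rank agreement (SROCC, KROCC) and linear agreement (PLCC) between a metric's predictions and the subjective scores. Here is a minimal sketch of how they can be computed with SciPy; the arrays are dummy placeholders, and note that IQA studies often fit a nonlinear (e.g., logistic) mapping before computing PLCC, which is omitted here.

```python
# Minimal sketch: computing SROCC, KROCC, and PLCC with SciPy.
# The arrays below are dummy placeholders, not values from the paper.
import numpy as np
from scipy.stats import spearmanr, kendalltau, pearsonr

mos = np.array([4.1, 3.2, 2.8, 1.5, 4.8])    # subjective (MOS) scores
pred = np.array([4.0, 3.5, 2.5, 1.7, 4.6])   # metric predictions

srocc, _ = spearmanr(mos, pred)   # monotonic (rank) correlation
krocc, _ = kendalltau(mos, pred)  # pairwise rank agreement
plcc, _ = pearsonr(mos, pred)     # linear correlation

print(f"SROCC={srocc:.3f}  KROCC={krocc:.3f}  PLCC={plcc:.3f}")
```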
- Training: to train the model on another database, refer to `TRAIN_ON_PEID.ipynb` or `TRAIN_ON_IVC.ipynb`; a minimal sketch of the multi-task fit call is shown after this list.
- Evaluation: to evaluate the performance of our model, refer to `Test_ON_PEID.ipynb` or `Test_ON_IVC.ipynb`.
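As referenced in the Training item above, this is a minimal sketch of the multi-task fit call, reusing the `model` from the architecture sketch; the dummy arrays stand in for the database loading done in the training notebooks, and the epochs/batch size are illustrative.

```python
import numpy as np

# Dummy placeholders for images and per-image VQ/VS labels; the training
# notebooks load these from PEID or IVC-SelectEncrypt instead.
images = np.random.rand(8, 224, 224, 3).astype("float32")
vq_labels = np.random.rand(8, 1).astype("float32")
vs_labels = np.random.rand(8, 1).astype("float32")

# Multi-task fit: one target array per named output head.
model.fit(images,
          {"vq_score": vq_labels, "vs_score": vs_labels},
          epochs=2, batch_size=4)
```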
We trained our models on 80% of each dataset; the pretrained models are available in `./models`. An example of loading a pretrained model is shown below.
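For inference with a pretrained model, a sketch along these lines should work; the model file name, the input size, and the [0, 1] normalization are assumptions, so check `./models` and the test notebooks for the actual names and preprocessing.

```python
import numpy as np
import tensorflow as tf

# Hypothetical file name; see ./models for the actual one.
model = tf.keras.models.load_model("./models/vsmml_peid.h5")

# Load one encrypted image (224x224 and /255 normalization are assumptions).
img = tf.keras.utils.load_img("encrypted.png", target_size=(224, 224))
x = np.expand_dims(np.asarray(img, dtype="float32") / 255.0, axis=0)

# The model returns one array per output head: [VQ, VS].
vq_pred, vs_pred = model.predict(x)
print(f"VQ={vq_pred[0][0]:.3f}  VS={vs_pred[0][0]:.3f}")
```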
@inproceedings{fezza2021visual,
  title={Visual Quality and Security Assessment of Perceptually Encrypted Images Based on Multi-Output Deep Neural Network},
  author={Fezza, Sid Ahmed and Keita, Mamadou and Hamidouche, Wassim},
  booktitle={2021 9th European Workshop on Visual Information Processing (EUVIP)},
  pages={1--6},
  year={2021},
  organization={IEEE}
}