CD-iNet: Deep Invertible Network for Perceptual Image Color Difference Measurement

Published in the International Journal of Computer Vision.

Abstract

Image color difference (CD) measurement, a crucial concept in color science and imaging technology, aims to quantify the perceived difference between two colors. The most widely recognized CD formulae are those recommended by the Commission Internationale de l'Éclairage (CIE); they are tailored to homogeneous color patches and may not generalize effectively to images with diverse content. Developing effective CD metrics for natural images therefore remains an active area of research. Drawing inspiration from the design principles of the CIE-recommended formulae, which place a premium on a perceptually uniform color space, we posit that an ideal color space should satisfy three criteria: (1) it characterizes any color pixel with three degrees of freedom, which is necessary and sufficient; (2) the visual distance between two pixels is proportional to their Euclidean distance, i.e., the space is perceptually uniform; (3) the transformation between color spaces is invertible and has low computational complexity. To meet these criteria, we leverage deep invertible neural networks (DINNs) to learn an invertible coordinate transform. Within the transformed color space, the Euclidean distance is used to compute the CD on a pixel-by-pixel basis, and the resulting CD map is averaged to obtain the global CD for a pair of images. By construction, the DINN-based coordinate transform preserves three-dimensional coordinates and mathematical invertibility. The resulting metric, referred to as CD-iNet, is end-to-end optimized on color patch datasets and image datasets simultaneously. Extensive quantitative and qualitative experiments on smartphone photograph datasets demonstrate the superiority of CD-iNet over existing metrics. Moreover, CD-iNet produces competitive local CD maps without requiring dense supervision and is robust to geometric distortions. More importantly, the transformed color space exhibits reasonable characteristics of perceptual uniformity, e.g., low cross-contamination between color attributes. Code is available at: https://github.com/hellooks/CD-iNet.
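The measurement pipeline described in the abstract — map both images into a transformed color space with an invertible function, take the per-pixel Euclidean distance, then average the CD map — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the toy additive-coupling transform (NICE-style) merely stands in for the learned deep invertible network, and the function names are hypothetical.

```python
import numpy as np

# Toy additive-coupling transform (NICE-style stand-in for the learned DINN).
# Splits the 3 color channels into (x1, x2) and shifts x2 by a function of x1;
# invertible by construction, and it preserves the three degrees of freedom.
def coupling_forward(img, scale=0.5):
    # img: H x W x 3 array, values assumed in [0, 1]
    x1, x2 = img[..., :1], img[..., 1:]
    y2 = x2 + scale * np.sin(np.pi * x1)   # toy conditioner on x1
    return np.concatenate([x1, y2], axis=-1)

def coupling_inverse(y, scale=0.5):
    # Exact inverse: x1 passes through unchanged, so the shift is recomputable.
    y1, y2 = y[..., :1], y[..., 1:]
    x2 = y2 - scale * np.sin(np.pi * y1)
    return np.concatenate([y1, x2], axis=-1)

def color_difference(img_a, img_b, transform=coupling_forward):
    # Per-pixel Euclidean distance in the transformed space (the local CD map),
    # averaged into a single global CD score for the image pair.
    ta, tb = transform(img_a), transform(img_b)
    cd_map = np.linalg.norm(ta - tb, axis=-1)  # H x W local CD map
    return cd_map.mean(), cd_map
```

The invertibility requirement (criterion 3) is what makes the coupling structure attractive: the inverse is obtained in closed form at the same cost as the forward pass, with no iterative solve.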


Notes

  1. https://telin.ugent.be/~bortiz/color_new


Acknowledgements

This work was supported in part by the National Key Research and Development Program of China (2023YFE0210700), in part by the Natural Science Foundation of China (62301323, 62271277), in part by the Natural Science Foundation of Zhejiang (LR22F020002), and in part by the Shenzhen Natural Science Foundation (20231128191435002).

Author information

Corresponding author

Correspondence to Zhihua Wang.

Additional information

Communicated by Chongyi Li.



About this article


Cite this article

Wang, Z., Xu, K., Ding, K. et al. CD-iNet: Deep Invertible Network for Perceptual Image Color Difference Measurement. Int J Comput Vis 132, 5983–6003 (2024). https://doi.org/10.1007/s11263-024-02087-7

