Abstract
Haze is a natural phenomenon that degrades image clarity and quality, posing challenges across a wide range of image-related applications. Traditional dehazing models often overfit when trained on synthetic hazy-clean image pairs and consequently generalize poorly to real-world hazy conditions. To tackle this, recent methods train on unpaired data, which better reflects the variability encountered in natural scenes. CycleGAN suits this setting because it learns bidirectional mappings between the hazy and clear domains under cycle-consistency constraints, without requiring paired supervision; this dual capability is particularly beneficial for overcoming the overfitting associated with synthetic datasets. By incorporating CycleGAN into our DehazeDNet framework, we ensure that the dehazing model not only translates images effectively but also respects the physical characteristics of haze. Inspired by the D4 model, our approach includes a Depth Evaluation Block that estimates scene depth from images. Since haze density often correlates with scene depth, this depth information is crucial for accurate haze modeling. We adopt the U-Net architecture for the Depth Evaluation Block owing to its proven effectiveness in image-to-image translation tasks. To preserve the fidelity of the dehazed images, we incorporate an identity loss, which ensures that the dehazed output retains the essential characteristics of the input image. Our results show higher SSIM and PSNR than other unsupervised dehazing models, highlighting the effectiveness of our method in maintaining image quality and detail while removing haze.
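The depth-haze relationship and the identity loss summarized above are commonly expressed through the standard atmospheric scattering model. The PyTorch sketch below illustrates that model and an L1 identity loss under stated assumptions; it is an illustrative sketch, not the paper's implementation, and the scattering coefficient beta, the airlight value, and the identity "generator" used in the toy check are hypothetical.

```python
import torch
import torch.nn as nn

# Hedged sketch (not the authors' released code): the standard atmospheric
# scattering model I(x) = J(x) * t(x) + A * (1 - t(x)) with a depth-based
# transmission t(x) = exp(-beta * d(x)), plus an L1 identity loss.
# beta, the airlight A, and the toy identity "generator" below are
# illustrative assumptions only.

def transmission_from_depth(depth: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """Transmission map t(x) = exp(-beta * d(x)); d(x) would come from the
    U-Net-based Depth Evaluation Block in the full framework."""
    return torch.exp(-beta * depth)

def rehaze(clear: torch.Tensor, depth: torch.Tensor,
           airlight: float = 0.9, beta: float = 1.0) -> torch.Tensor:
    """Synthesize a hazy image I = J * t + A * (1 - t) from a clear image."""
    t = transmission_from_depth(depth, beta)
    return clear * t + airlight * (1.0 - t)

class IdentityLoss(nn.Module):
    """Identity loss: passing an already-clear image through the dehazing
    generator should leave it (approximately) unchanged."""
    def __init__(self):
        super().__init__()
        self.l1 = nn.L1Loss()

    def forward(self, generator: nn.Module, clear: torch.Tensor) -> torch.Tensor:
        return self.l1(generator(clear), clear)

if __name__ == "__main__":
    clear = torch.rand(1, 3, 64, 64)   # clear image J
    depth = torch.rand(1, 1, 64, 64)   # placeholder depth map d(x)
    hazy = rehaze(clear, depth)        # deeper pixels receive denser haze
    loss = IdentityLoss()(nn.Identity(), clear)  # identity generator -> loss 0
    print(hazy.shape, loss.item())
```

In the actual framework, the depth map would be produced by the U-Net-based Depth Evaluation Block, and the generator would be the CycleGAN-trained dehazing network.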
Data availability
The manuscript has no associated data.
References
Dong, W., Zhang, L., Shi, G., Wu, X.: Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process. 20(7), 1838–1857 (2011). https://doi.org/10.1109/TIP.2011.2108306
He, K., Sun, J., Tang, X.: Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011). https://doi.org/10.1109/TPAMI.2010.168
Kim, K.I., Kwon, Y.: Single-image super-resolution using sparse regression and natural image prior. IEEE Trans. Pattern Anal. Mach. Intell. 32(6), 1127–1133 (2010). https://doi.org/10.1109/TPAMI.2010.25
Perona, P., Malik, J.: Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629–639 (1990). https://doi.org/10.1109/34.56205
Roth, S., Black, M.J.: Fields of experts: a framework for learning image priors. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 2, pp. 860–867 (2005). https://doi.org/10.1109/CVPR.2005.160
Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60(1–4), 259–268 (1992). https://doi.org/10.1016/0167-2789(92)90242-F
Zhu, S.-C., Mumford, D.: Prior learning and Gibbs reaction-diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 19(11), 1236–1250 (1997)
Dai, T., Cai, J., Zhang, Y., Xia, S.-T., Zhang, L.: Second-order attention network for single image super-resolution. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11057–11066 (2019). https://doi.org/10.1109/CVPR.2019.01132
Ledig, C., Theis, L., Huszar, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., Shi, W.: Photo-realistic single image super-resolution using a generative adversarial network (2017)
Pan, X., Zhan, X., Dai, B., Lin, D., Loy, C.C., Luo, P.: Exploiting deep generative prior for versatile image restoration and manipulation (2020)
Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: CycleISP: real image restoration via improved data synthesis (2020)
Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Learning enriched features for real image restoration and enhancement (2020)
Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L.: Beyond a gaussian denoiser: residual learning of deep cnn for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017). https://doi.org/10.1109/TIP.2017.2662206
Zhang, K., Zuo, W., Gu, S., Zhang, L.: Learning deep CNN denoiser prior for image restoration (2017)
Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image restoration (2020)
Yang, Y., Wang, C., Liu, R., Zhang, L., Guo, X., Tao, D.: Self-augmented unpaired image dehazing via density and depth decomposition. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2037–2046 (2022)
Shi, W., Liu, H., Liu, M.: Identity-sensitive loss guided and instance feature boosted deep embedding for person search. Neurocomputing 415, 1–14 (2020). https://doi.org/10.1016/j.neucom.2020.07.062
Aharon, M., Elad, M., Bruckstein, A.: K-svd: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54(11), 4311–4322 (2006). https://doi.org/10.1109/TSP.2006.881199
Luo, Y., Xu, Y., Ji, H.: Removing rain from a single image via discriminative sparse coding. In: 2015 IEEE International Conference on Computer Vision (ICCV), pp. 3397–3405 (2015). https://doi.org/10.1109/ICCV.2015.388
Mairal, J., Elad, M., Sapiro, G.: Sparse representation for color image restoration. IEEE Trans. Image Process. 17(1), 53–69 (2008). https://doi.org/10.1109/TIP.2007.911828
Chan, T.F., Wong, C.: Total variation blind deconvolution. IEEE Trans. Image Process. 7(3), 370–375 (1998). https://doi.org/10.1109/83.661187
Buades, A., Coll, B., Morel, J.-M.: A non-local algorithm for image denoising. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 2, pp. 60–65 (2005). https://doi.org/10.1109/CVPR.2005.38
Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007). https://doi.org/10.1109/TIP.2007.901238
Shan, Q., Jia, J., Agarwala, A.: High-quality motion deblurring from a single image. ACM Trans. Graph. 27(3), 73 (2008). https://doi.org/10.1145/1360612.1360672
Xu, L., Zheng, S., Jia, J.: Unnatural l0 sparse representation for natural image deblurring. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1107–1114 (2013). https://doi.org/10.1109/CVPR.2013.147
Cai, B., Xu, X., Jia, K., Qing, C., Tao, D.: Dehazenet: an end-to-end system for single image haze removal. CoRR arXiv:1601.07661v2 [cs.CV] (2016)
Li, B., Peng, X., Wang, Z., Xu, J., Feng, D.: An all-in-one network for dehazing and beyond (2017)
Li, H., Li, J., Zhao, D., Xu, L.: Dehazeflow: Multi-scale conditional flow network for single image dehazing. In: Proceedings of the 29th ACM International Conference on Multimedia. MM ’21, pp. 2577–2585. Association for Computing Machinery, New York, NY, USA (2021). https://doi.org/10.1145/3474085.3475432
Su, Y.Z., He, C., Cui, Z.G., Li, A.H., Wang, N.: Physical model and image translation fused network for single-image dehazing. Pattern Recognit. 142, 109700 (2023). https://doi.org/10.1016/j.patcog.2023.109700
Wang, N., Cui, Z., Su, Y., He, C., Li, A.: Multiscale supervision-guided context aggregation network for single image dehazing. IEEE Signal Process. Lett. 29, 70–74 (2022). https://doi.org/10.1109/LSP.2021.3125272
Wang, N., Cui, Z., Su, Y., He, C., Lan, Y., Li, A.: Prior-guided multiscale network for single-image dehazing. IET Image Process. 15(13), 3368–3379 (2021). https://doi.org/10.1049/ipr2.12333
Wang, N., Cui, Z., Su, Y., Li, A.: RGNAM: recurrent grid network with an attention mechanism for single-image dehazing. J. Electron. Imaging 30(3), 033026 (2021). https://doi.org/10.1117/1.JEI.30.3.033026
Cui, Z., Wang, N., Su, Y., Zhang, W., Lan, Y., Li, A.: Ecanet: enhanced context aggregation network for single image dehazing. SIViP 17(2), 471–479 (2023). https://doi.org/10.1007/s11760-022-02252-w
Lan, Y., Cui, Z., Su, Y., Wang, N., Li, A., Zhang, W., Li, Q., Zhong, X.: Online knowledge distillation network for single image dehazing. Sci. Rep. 12(1), 14927 (2022)
Dudhane, A., Murala, S.: CDNet: single image de-hazing using unpaired adversarial training (2019). https://doi.org/10.1109/wacv.2019.00127
Li, B., Gou, Y., Gu, S., Liu, J.Z., Zhou, J.T., Peng, X.: You only look yourself: unsupervised and untrained single image dehazing neural network. Int. J. Comput. Vision 129(5), 1754–1767 (2021). https://doi.org/10.1007/s11263-021-01431-5
Liu, W., Hou, X., Duan, J., Qiu, G.: End-to-end single image fog removal using enhanced cycle consistent adversarial networks. IEEE Trans. Image Process. 29, 7819–7833 (2020). https://doi.org/10.1109/tip.2020.3007844
Atila, U., Ucar, M., Akyol, K., Ucar, E.: Plant leaf disease classification using EfficientNet deep learning model. Eco. Inform. 61, 101182 (2021). https://doi.org/10.1016/j.ecoinf.2020.101182
Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation (2015)
Mao, X., Li, Q., Xie, H., Lau, R.Y.K., Wang, Z.: Multi-class generative adversarial networks with the L2 loss function. CoRR arXiv:1611.04076v1 [cs.CV] (2016)
Li, B., Ren, W., Fu, D., Tao, D., Feng, D., Zeng, W., Wang, Z.: Benchmarking single-image dehazing and beyond. IEEE Trans. Image Process. 28(1), 492–505 (2019)
Ancuti, C., Ancuti, C.O., Timofte, R., Vleeschouwer, C.D.: I-haze: a dehazing benchmark with real hazy and haze-free indoor images. In: International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS), pp. 620–631. Springer, Berlin (2018)
Fattal, R.: Dehazing using color-lines. ACM Trans. Graph. 34(1), 1–14 (2014)
He, K., Sun, J., Tang, X.: Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011). https://doi.org/10.1109/TPAMI.2010.168
Chen, Z., Wang, Y., Yang, Y., Liu, D.: PSD: principled synthetic-to-real dehazing guided by physical priors (2021). https://doi.org/10.1109/cvpr46437.2021.00710
Zhu, J.-Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks (2017). https://doi.org/10.1109/iccv.2017.244
Engin, D., Genc, A., Ekenel, H.K.: Cycle-dehaze: enhanced cyclegan for single image dehazing (2018). https://doi.org/10.1109/cvprw.2018.00127
Yang, X., Xu, Z., Luo, J.: Towards perceptual image dehazing by physics-based disentanglement and adversarial training. In: Proceedings of the AAAI Conference on Artificial Intelligence, 32(1) (2018). https://doi.org/10.1609/aaai.v32i1.12317
Zhao, S., Zhang, L., Shen, Y., Zhou, Y.: Refinednet: a weakly supervised refinement framework for single image dehazing. IEEE Trans. Image Process. 30, 3391–3404 (2021). https://doi.org/10.1109/tip.2021.3060873
Yang, A., Liu, Y., Wang, J., Li, X., Cao, J., Ji, Z., Pang, Y.: Visual-quality-driven unsupervised image dehazing. Neural Netw. 167, 1–9 (2023). https://doi.org/10.1016/j.neunet.2023.08.010
Wen, Y., Gao, T., Zhang, J., Li, Z., Chen, T.: Encoder-free multiaxis physics-aware fusion network for remote sensing image dehazing. IEEE Trans. Geosci. Remote Sens. 61, 1–15 (2023). https://doi.org/10.1109/TGRS.2023.3325927
Li, J., Li, Y., Zhuo, L., Kuang, L., Yu, T.: USID-Net: unsupervised single image dehazing network via disentangled representations. IEEE Trans. Multimedia 25, 3587–3601 (2023). https://doi.org/10.1109/TMM.2022.3163554
Wang, X., Chen, X., Ren, W., Han, Z., Fan, H., Tang, Y., Liu, L.: Compensation atmospheric scattering model and two-branch network for single image dehazing. IEEE Trans. Emerg. Top. Comput. Intell. 8(4), 2880–2896 (2024). https://doi.org/10.1109/TETCI.2024.3386838
Acknowledgements
The authors express their gratitude to the Indian Institute of Information Technology Allahabad (IIIT-A), India, for the financial support provided for this research work. This work is one of the outcomes of the project entitled "Deep Learning based Solutions for Vehicle Detection in Rainy and Foggy Climates under Smart City Environment", sanction no. IIITA/RO/2022/409 dated 01.12.2022, sponsored by IIIT-A, Ministry of Education, India.
Funding
The work is sponsored by IIIT-A, Ministry of Education, India, for the project entitled "Deep Learning based Solutions for Vehicle Detection in Rainy and Foggy Climates under Smart City Environment" with sanction no. IIITA/RO/2022/409 dated 01.12.2022.
Ethics declarations
Conflict of interest
The authors declare that there is no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Rupesh, G., Singh, N. & Divya, T. DehazeDNet: image dehazing via depth evaluation. SIViP 18, 9387–9395 (2024). https://doi.org/10.1007/s11760-024-03553-y