Abstract
We present a novel approach that leverages the prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution. Specifically, by employing our time-aware encoder, we achieve promising restoration results without altering the pre-trained synthesis model, thereby preserving the generative prior and minimizing training cost. To remedy the loss of fidelity caused by the inherent stochasticity of diffusion models, we employ a controllable feature wrapping module that lets users balance quality and fidelity by adjusting a single scalar value at inference time. Moreover, we develop a progressive aggregation sampling strategy to overcome the fixed-size constraint of pre-trained diffusion models, enabling adaptation to arbitrary resolutions. A comprehensive evaluation on both synthetic and real-world benchmarks demonstrates the superiority of our method over current state-of-the-art approaches. Code and models are available at https://github.com/IceClear/StableSR.
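The quality-fidelity trade-off can be sketched as a weighted residual blend: the module output is the decoder feature plus a scalar weight times a correction computed from the encoder feature. The following is a minimal toy illustration of that idea, not the paper's trained module; `toy_transform` is a hypothetical stand-in for the learned convolutional layers.

```python
import numpy as np

def feature_wrapping(dec_feat, enc_feat, w, transform):
    """Blend decoder features with encoder-conditioned features.

    dec_feat:  generative decoder feature map (high quality, lower fidelity)
    enc_feat:  encoder feature map carrying LR-image structure (high fidelity)
    w:         scalar in [0, 1]; 0 keeps the generative output, 1 leans on fidelity
    transform: a learned module in the real model; here any callable
    """
    return dec_feat + w * transform(enc_feat, dec_feat)

# Toy stand-in for the learned transform: a residual correction toward enc_feat.
toy_transform = lambda e, d: e - d

feat_dec = np.array([1.0, 2.0, 3.0])
feat_enc = np.array([2.0, 2.0, 2.0])

out_q = feature_wrapping(feat_dec, feat_enc, w=0.0, transform=toy_transform)  # pure generative path
out_f = feature_wrapping(feat_dec, feat_enc, w=1.0, transform=toy_transform)  # fidelity-oriented path
```

With this toy transform, w=0 returns the decoder feature unchanged and w=1 returns the encoder feature exactly; intermediate values interpolate between the two, which is the single-scalar control the abstract refers to.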
Notes
The downsampling scale factor of the autoencoder in Stable Diffusion is 8×.
SR3 (Saharia et al., 2022b) is not included since its official code is unavailable.
We use the latest official model DF2K-JPEG.
We use the latest official SwinIR-GAN model, i.e., 003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR-L_x4_GAN.pth.
We use LPIPS-ALEX by default.
We do not use it by default unless otherwise stated.
References
Agustsson, E., & Timofte, R. (2017). NTIRE 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops (CVPR-W).
Avrahami, O., Lischinski, D., & Fried, O. (2022). Blended diffusion for text-driven editing of natural images. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Balaji, Y., Nah, S., Huang, X., Vahdat, A., Song, J., Kreis, K., Aittala, M., Aila, T., Laine, S., Catanzaro, B., Karras, T., & Liu, M. Y. (2022). eDiff-I: Text-to-image diffusion models with ensemble of expert denoisers. arXiv preprint arXiv:2211.01324
Blau, Y., & Michaeli, T. (2018). The perception-distortion tradeoff. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Cai, J., Zeng, H., Yong, H., Cao, Z., & Zhang, L. (2019). Toward real-world single image super-resolution: A new benchmark and a new model. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
Chan, K. C., Wang, X., Xu, X., Gu, J., & Loy, C. C. (2021). GLEAN: Generative latent bank for large-factor image super-resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Chan, K. C., Wang, X., Xu, X., Gu, J., & Loy, C. C. (2022). GLEAN: Generative latent bank for large-factor image super-resolution and beyond. IEEE transactions on pattern analysis and machine intelligence (TPAMI).
Chen, C., Shi, X., Qin, Y., Li, X., Han, X., Yang, T., & Guo, S. (2022). Real-world blind super-resolution via feature matching with implicit high-resolution priors. In Proceedings of the ACM international conference on multimedia (ACM MM).
Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., & Gao, W. (2021). Pre-trained image processing transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Choi, J., Kim, S., Jeong, Y., Gwon, Y., & Yoon, S. (2021). ILVR: Conditioning method for denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
Choi, J., Lee, J., Shin, C., Kim, S., Kim, H., & Yoon, S. (2022). Perception prioritized training of diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Chollet, F. (2017). Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Chung, H., Sim, B., Ryu, D., & Ye, J. C. (2022). Improving diffusion models for inverse problems using manifold constraints. In Proceedings of advances in neural information processing systems (NeurIPS).
Dai, T., Cai, J., Zhang, Y., Xia, S. T., & Zhang, L. (2019). Second-order attention network for single image super-resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
DeepFloyd. (2023). IF. https://github.com/deep-floyd/IF
Dong, C., Loy, C. C., He, K., & Tang, X. (2014). Learning a deep convolutional network for image super-resolution. In Proceedings of the European conference on computer vision (ECCV).
Dong, C., Loy, C. C., He, K., & Tang, X. (2015). Image super-resolution using deep convolutional networks. IEEE transactions on pattern analysis and machine intelligence (TPAMI).
Dong, C., Loy, C. C., & Tang, X. (2016). Accelerating the super-resolution convolutional neural network. In Proceedings of the European conference on computer vision (ECCV).
Fang, G., Ma, X., & Wang, X. (2023). Structural pruning for diffusion models. In Proceedings of advances in neural information processing systems (NeurIPS).
Feng, W., He, X., Fu, T. J., Jampani, V., Akula, A., Narayana, P., Basu, S., Wang, X. E., & Wang, W. Y. (2023). Training-free structured diffusion guidance for compositional text-to-image synthesis. In Proceedings of international conference on learning representations (ICLR).
Fritsche, M., Gu, S., & Timofte, R. (2019). Frequency separation for real-world super-resolution. In Proceedings of the IEEE/CVF international conference on computer vision workshops (ICCV-W).
Gal, R., Arar, M., Atzmon, Y., Bermano, A. H., Chechik, G., & Cohen-Or, D. (2023). Designing an encoder for fast personalization of text-to-image models. arXiv preprint arXiv:2302.12228
Gu, J., Shen, Y., & Zhou, B. (2020). Image processing using multi-code gan prior. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Gu, S., Chen, D., Bao, J., Wen, F., Zhang, B., Chen, D., Yuan, L., & Guo, B. (2022). Vector quantized diffusion model for text-to-image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Gu, S., Lugmayr, A., Danelljan, M., Fritsche, M., Lamour, J., & Timofte, R. (2019). DIV8K: Diverse 8K resolution image dataset. In Proceedings of the IEEE/CVF international conference on computer vision workshops (ICCV-W).
He, X., Mo, Z., Wang, P., Liu, Y., Yang, M., & Cheng, J. (2019). Ode-inspired network design for single image super-resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Hertz, A., Mokady, R., Tenenbaum, J., Aberman, K., Pritch, Y., & Cohen-Or, D. (2022). Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S. (2017). Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Proceedings of advances in neural information processing systems (NeurIPS).
Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. In Proceedings of advances in neural information processing systems (NeurIPS) (vol. 33).
Ho, J., & Salimans, T. (2021). Classifier-free diffusion guidance. In Proceedings of advances in neural information processing systems (NeurIPS).
Howard, A., Sandler, M., Chu, G., Chen, L. C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., & Le, Q. V. (2019). Searching for mobilenetv3. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., & Chen, W. (2022). LoRA: Low-rank adaptation of large language models. In Proceedings of international conference on learning representations (ICLR).
Ignatov, A., Kobyshev, N., Timofte, R., Vanhoey, K., & Van Gool, L. (2017). DSLR-quality photos on mobile devices with deep convolutional networks. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
Ji, X., Cao, Y., Tai, Y., Wang, C., Li, J., & Huang, F. (2020). Real-world super-resolution via kernel estimation and noise injection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops (CVPR-W).
Jiang, Y., Chan, K. C., Wang, X., Loy, C. C., & Liu, Z. (2021). Robust reference-based super-resolution via c2-matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Jiménez, Á. B. (2023). Mixture of diffusers for scene composition and high resolution image generation. arXiv preprint arXiv:2302.02412
Karras, T., Aittala, M., Aila, T., & Laine, S. (2022). Elucidating the design space of diffusion-based generative models. In Proceedings of advances in neural information processing systems (NeurIPS).
Karras, T., Laine, S., & Aila, T. (2019). A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Ke, J., Wang, Q., Wang, Y., Milanfar, P., & Yang, F. (2021). MUSIQ: Multi-scale image quality transformer. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980
Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., & Shi, W. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Li, H., Yang, Y., Chang, M., Chen, S., Feng, H., Xu, Z., Li, Q., & Chen, Y. (2022). SRDiff: Single image super-resolution with diffusion probabilistic models. Neurocomputing, 479, 47–59.
Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., & Timofte, R. (2021). SwinIR: Image restoration using swin transformer. In Proceedings of the IEEE/CVF international conference on computer vision workshops (ICCV-W).
Liang, J., Zeng, H., & Zhang, L. (2022). Efficient and degradation-adaptive network for real-world image super-resolution. In Proceedings of the European conference on computer vision (ECCV).
Lin, X., He, J., Chen, Z., Lyu, Z., Fei, B., Dai, B., Ouyang, W., Qiao, Y., & Dong, C. (2023). DiffBIR: Towards blind image restoration with generative diffusion prior. arXiv preprint arXiv:2308.15070
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., & Guo, B. (2021). Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C., & Zhu, J. (2022). DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps. In Proceedings of advances in neural information processing systems (NeurIPS).
Luo, S., Tan, Y., Huang, L., Li, J., & Zhao, H. (2023). Latent consistency models: Synthesizing high-resolution images with few-step inference. arXiv preprint arXiv:2310.04378
Maeda, S. (2020). Unpaired image super-resolution using pseudo-supervision. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Meng, X., & Kabashima, Y. (2022). Diffusion model based posterior sampling for noisy linear inverse problems. arXiv preprint arXiv:2211.12343
Menon, S., Damian, A., Hu, S., Ravi, N., & Rudin, C. (2020). PULSE: Self-supervised photo upsampling via latent space exploration of generative models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Molad, E., Horwitz, E., Valevski, D., Acha, A. R., Matias, Y., Pritch, Y., Leviathan, Y., & Hoshen, Y. (2023). Dreamix: Video diffusion models are general video editors. arXiv preprint arXiv:2302.01329
Mou, C., Wang, X., Xie, L., Wu, Y., Zhang, J., Qi, Z., & Shan, Y. (2024). T2I-Adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. In Proceedings of the AAAI conference on artificial intelligence.
Nichol, A. Q., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., Mcgrew, B., Sutskever, I., & Chen, M. (2022). GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. In Proceedings of international conference on machine learning (ICML).
Oord, A. v. d., Li, Y., & Vinyals, O. (2018). Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748
Pan, X., Zhan, X., Dai, B., Lin, D., Loy, C. C., & Luo, P. (2021). Exploiting deep generative prior for versatile image restoration and manipulation. IEEE transactions on pattern analysis and machine intelligence (TPAMI).
Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., & Rombach, R. (2023). SDXL: Improving latent diffusion models for high-resolution image synthesis. In Proceedings of international conference on learning representations (ICLR).
Qi, C., Cun, X., Zhang, Y., Lei, C., Wang, X., Shan, Y., & Chen, Q. (2023). Fatezero: Fusing attentions for zero-shot text-based video editing. arXiv preprint arXiv:2303.09535
Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., & Sutskever, I. (2021). Zero-shot text-to-image generation. In Proceedings of international conference on machine learning (ICML).
Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention (MICCAI) (pp. 234–241). Springer.
Sahak, H., Watson, D., Saharia, C., & Fleet, D. (2023). Denoising diffusion probabilistic models for robust image super-resolution in the wild. arXiv preprint arXiv:2302.07864
Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E. L., Ghasemipour, K., Gontijo Lopes, R., Karagol Ayan, B., Salimans, T., & Ho, J. (2022a). Photorealistic text-to-image diffusion models with deep language understanding. In Proceedings of advances in neural information processing systems (NeurIPS).
Saharia, C., Ho, J., Chan, W., Salimans, T., Fleet, D. J., & Norouzi, M. (2022b). Image super-resolution via iterative refinement. IEEE transactions on pattern analysis and machine intelligence (TPAMI).
Sajjadi, M. S., Scholkopf, B., & Hirsch, M. (2017). EnhanceNet: Single image super-resolution through automated texture synthesis. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
Salimans, T., & Ho, J. (2021). Progressive distillation for fast sampling of diffusion models. In Proceedings of international conference on learning representations (ICLR).
Sauer, A., Lorenz, D., Blattmann, A., & Rombach, R. (2023). Adversarial diffusion distillation. arXiv preprint arXiv:2311.17042
Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., & Ganguli, S. (2015). Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of international conference on machine learning (ICML).
Song, J., Meng, C., & Ermon, S. (2020). Denoising diffusion implicit models. In Proceedings of international conference on learning representations (ICLR).
Song, J., Vahdat, A., Mardani, M., & Kautz, J. (2023a). Pseudoinverse-guided diffusion models for inverse problems. In Proceedings of international conference on learning representations (ICLR).
Song, Y., Dhariwal, P., Chen, M., & Sutskever, I. (2023b). Consistency models. arXiv preprint arXiv:2303.01469
Thorndike, E. L. (1920). A constant error in psychological ratings. Journal of Applied Psychology, 4(1), 25–29.
Timofte, R., Agustsson, E., Van Gool, L., Yang, M. H., & Zhang, L. (2017). NTIRE 2017 challenge on single image super-resolution: Methods and results. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops (CVPR-W).
Wan, Z., Zhang, B., Chen, D., Zhang, P., Chen, D., Liao, J., & Wen, F. (2020). Bringing old photos back to life. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Wang, J., Chan, K. C., & Loy, C. C. (2023). Exploring clip for assessing the look and feel of images. In Proceedings of the AAAI conference on artificial intelligence.
Wang, L., Wang, Y., Dong, X., Xu, Q., Yang, J., An, W., & Guo, Y. (2021a). Unsupervised degradation representation learning for blind super-resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Wang, X., Li, Y., Zhang, H., & Shan, Y. (2021b). Towards real-world blind face restoration with generative facial prior. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Wang, X., Xie, L., Dong, C., & Shan, Y. (2021c). Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In Proceedings of the IEEE/CVF international conference on computer vision workshops (ICCV-W).
Wang, X., Yu, K., Dong, C., & Loy, C. C. (2018a). Recovering realistic texture in image super-resolution by deep spatial feature transform. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., & Loy, C. C. (2018b). ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European conference on computer vision workshops (ECCV-W).
Wang, Y., Yu, J., & Zhang, J. (2022). Zero-shot image restoration using denoising diffusion null-space model. In Proceedings of international conference on learning representations (ICLR).
Wei, P., Xie, Z., Lu, H., Zhan, Z., Ye, Q., Zuo, W., & Lin, L. (2020). Component divide-and-conquer for real-world image super-resolution. In Proceedings of the European conference on computer vision (ECCV).
Wei, Y., Gu, S., Li, Y., Timofte, R., Jin, L., & Song, H. (2021). Unsupervised real-world image super-resolution via domain-distance aware training. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Wu, J. Z., Ge, Y., Wang, X., Lei, S. W., Gu, Y., Hsu, W., Shan, Y., Qie, X., & Shou, M. Z. (2022). Tune-A-Video: One-shot tuning of image diffusion models for text-to-video generation. arXiv preprint arXiv:2212.11565
Xu, X., Ma, Y., & Sun, W. (2019). Towards real scene super-resolution with raw images. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Xu, X., Sun, D., Pan, J., Zhang, Y., Pfister, H., & Yang, M. H. (2017). Learning to super-resolve blurry face and text images. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
Yang, F., Yang, H., Fu, J., Lu, H., & Guo, B. (2020). Learning texture transformer network for image super-resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., & Poole, B. (2021a). Score-based generative modeling through stochastic differential equations. In Proceedings of international conference on learning representations (ICLR).
Yang, T., Ren, P., Xie, X., & Zhang, L. (2021b). Gan prior embedded network for blind face restoration in the wild. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Yu, F., Gu, J., Li, Z., Hu, J., Kong, X., Wang, X., He, J., Qiao, Y., & Dong, C. (2024). Scaling up to excellence: Practicing model scaling for photo-realistic image restoration in the wild. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Yu, K., Dong, C., Lin, L., & Loy, C. C. (2018). Crafting a toolchain for image restoration by deep reinforcement learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Yue, Z., & Loy, C. C. (2022). DifFace: Blind face restoration with diffused error contraction. arXiv preprint arXiv:2212.06512
Yue, Z., Wang, J., & Loy, C. C. (2023). ResShift: Efficient diffusion model for image super-resolution by residual shifting. In Proceedings of advances in neural information processing systems (NeurIPS).
Zhang, J., Lu, S., Zhan, F., & Yu, Y. (2021a). Blind image super-resolution via contrastive representation learning. arXiv preprint arXiv:2107.00708
Zhang, K., Liang, J., Van Gool, L., & Timofte, R. (2021b). Designing a practical degradation model for deep blind image super-resolution. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
Zhang, L., Rao, A., & Agrawala, M. (2023). Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018a). The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Zhang, Y., Li, K., Li, K., Wang, L., Zhong, B., & Fu, Y. (2018b). Image super-resolution using very deep residual channel attention networks. In Proceedings of the European conference on computer vision (ECCV).
Zhang, Z., Wang, Z., Lin, Z., & Qi, H. (2019). Image super-resolution by neural texture transfer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Zhao, Y., Su, Y. C., Chu, C. T., Li, Y., Renn, M., Zhu, Y., Chen, C., & Jia, X. (2022). Rethinking deep face restoration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Zheng, H., Ji, M., Wang, H., Liu, Y., & Fang, L. (2018). CrossNet: An end-to-end reference-based super resolution network using cross-scale warping. In Proceedings of the European conference on computer vision (ECCV).
Zhou, S., Chan, K. C., Li, C., & Loy, C. C. (2022). Towards robust blind face restoration with codebook lookup transformer. In Proceedings of advances in neural information processing systems (NeurIPS).
Zhou, S., Zhang, J., Zuo, W., & Loy, C. C. (2020). Cross-scale internal graph neural network for image super-resolution. In Proceedings of advances in neural information processing systems (NeurIPS).
Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV).
Acknowledgements
This study is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-PhD-2022-01-033[T]), RIE2020 Industry Alignment Fund Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). We sincerely thank Yi Li for providing valuable advice and building the WebUI implementation (https://github.com/pkuliyi2015/sd-webui-stablesr) of our work. We also thank the continuous interest and contributions from the community.
Additional information
Communicated by Boxin Shi.
Cite this article
Wang, J., Yue, Z., Zhou, S. et al. Exploiting Diffusion Prior for Real-World Image Super-Resolution. Int J Comput Vis 132, 5929–5949 (2024). https://doi.org/10.1007/s11263-024-02168-7