Lightweight High-Speed Photography Built on Coded Exposure and Implicit Neural Representation of Videos

Published in: International Journal of Computer Vision

Abstract

The demand for compact cameras capable of recording high-speed scenes at high resolution is steadily increasing. However, achieving such capabilities often entails high bandwidth requirements, resulting in bulky, heavy systems unsuitable for low-capacity platforms. To address this challenge, leveraging a coded exposure setup to encode a frame sequence into a blurry snapshot and subsequently retrieve the latent sharp video presents a lightweight solution. Nevertheless, restoring motion from blur remains a formidable challenge due to the inherent ill-posedness of motion blur decomposition, the intrinsic ambiguity in motion direction, and the diverse motions present in natural videos. In this study, we propose a novel approach to address these challenges by combining the classical coded exposure imaging technique with the emerging implicit neural representation for videos. We strategically embed motion direction cues into the blurry image during the imaging process. Additionally, we develop a novel blur decomposition network based on implicit neural representation to sequentially extract the latent video frames from the blurry image, leveraging the embedded motion direction cues. To validate the effectiveness and efficiency of the proposed framework, we conduct extensive experiments on benchmark datasets and real-captured blurry images. The results demonstrate that our approach significantly outperforms existing methods in terms of both quality and flexibility. The code for our work is available at https://github.com/zhihongz/BDINR.
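
To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of the two ingredients it combines: a coded-exposure forward model, in which latent sharp frames are summed under a binary flutter-shutter code, and a toy NeRV-style implicit neural representation fitted so that re-blurring its decoded frames reproduces the snapshot. This is not the authors' BDINR implementation (linked above), and it omits the embedded motion direction cues; all names, shapes, and the tiny decoder architecture are illustrative assumptions.

```python
# Hypothetical sketch only; not the BDINR code. Assumes PyTorch is installed.
import torch
import torch.nn as nn


def coded_exposure_snapshot(frames: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
    """Simulate a coded-exposure blurry snapshot.

    frames: (T, C, H, W) latent sharp frames within one exposure.
    code:   (T,) binary flutter-shutter sequence; 1 = shutter open.
    Returns the normalized snapshot of shape (C, H, W).
    """
    w = code.float().view(-1, 1, 1, 1)
    return (frames * w).sum(dim=0) / w.sum().clamp(min=1.0)


class TimeToFrameINR(nn.Module):
    """Toy NeRV-style video INR: a frame index is embedded and decoded into
    an image. A blur decomposition network would additionally condition the
    decoder on the snapshot itself (omitted here for brevity)."""

    def __init__(self, num_frames: int, c: int = 3, h: int = 64, w: int = 64):
        super().__init__()
        self.shape = (c, h, w)
        self.embed = nn.Embedding(num_frames, 256)   # learned time embedding
        self.decode = nn.Sequential(
            nn.Linear(256, 512), nn.GELU(),
            nn.Linear(512, c * h * w),
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        return self.decode(self.embed(t)).view(-1, *self.shape)


# Fit the INR so that re-blurring its decoded frames matches the snapshot.
T, C, H, W = 8, 3, 64, 64
frames = torch.rand(T, C, H, W)                      # stand-in latent video
code = torch.tensor([1, 0, 1, 1, 0, 1, 0, 1])        # arbitrary shutter code
snapshot = coded_exposure_snapshot(frames, code)

inr = TimeToFrameINR(T, C, H, W)
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for _ in range(200):
    pred = inr(torch.arange(T))                      # decode all sub-frames
    loss = ((coded_exposure_snapshot(pred, code) - snapshot) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because many distinct frame sequences reproduce the same snapshot, this reconstruction objective alone is ill-posed, which is precisely why the paper embeds motion direction cues during imaging and trains a dedicated decomposition network rather than fitting per-snapshot as above.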

Data Availability

The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.

Notes

  1. Methods without open-source code (Purohit et al., 2019; Zhang et al., 2020a; Argaw et al., 2021) are not included in the comparison.

References

  • Agrawal, A., & Raskar, R. (2009). Optimal single image capture for motion deblurring. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp 2560–2567

  • Agrawal, A., & Xu, Y. (2009). Coded exposure deblurring: Optimized codes for PSF estimation and invertibility. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp 2066–2073

  • Agrawal, A., Xu, Y., & Raskar, R. (2009). Invertible motion blur in video. In: ACM SIGGRAPH 2009 papers, ACM, pp 1–8

  • Argaw, D. M., Kim, J., Rameau, F., Zhang, C., & Kweon, I. S. (2021). Restoration of video frames from a single blurred image with motion understanding. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp 701–710

  • Charbonnier, P., Blanc-Feraud, L., Aubert, G., & Barlaud, M. (1994). Two deterministic half-quadratic regularization algorithms for computed imaging. In: 1994 IEEE International Conference on Image Processing (ICIP), IEEE Comput. Soc. Press, vol 2, pp 168–172

  • Chen, H., Gu, J., Gallo, O., Liu, M. Y., Veeraraghavan, A., & Kautz, J. (2018). Reblur2Deblur: Deblurring videos via self-supervised learning. In: 2018 IEEE International Conference on Computational Photography (ICCP), IEEE, pp 1–9

  • Chen, H., He, B., Wang, H., Ren, Y., Lim, S. N., & Shrivastava, A. (2021). NeRV: Neural representations for videos. Advances in Neural Information Processing Systems, 34, 21557–21568.

  • Chen, H., Gwilliam, M., Lim, S. N., & Shrivastava, A. (2023). HNeRV: A hybrid neural representation for videos. In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

  • Chen, Z., Chen, Y., Liu, J., Xu, X., Goel, V., Wang, Z., Shi, H., & Wang, X. (2022). VideoINR: Learning video implicit neural representation for continuous space-time super-resolution. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 2047–2057

  • Cui, G., Ye, X., Zhao, J., Zhu, L., Chen, Y., & Zhang, Y. (2021). An effective coded exposure photography framework using optimal fluttering pattern generation. Optics and Lasers in Engineering, 139, 106489.

  • Deng, C., Zhang, Y., Mao, Y., Fan, J., Suo, J., Zhang, Z., & Dai, Q. (2021). Sinusoidal sampling enhanced compressive camera for high speed imaging. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(4), 1380–1393.

  • Dong, J., Ota, K., & Dong, M. (2023). Video frame interpolation: A comprehensive survey. ACM Transactions on Multimedia Computing, Communications, and Applications, 19(2s), 1–31.

  • Geng, Z., Liang, L., Ding, T., & Zharkov, I. (2022). RSTT: Real-time spatial temporal transformer for space-time video super-resolution. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 17441–17451

  • Harshavardhan, S., Gupta, S., & Venkatesh, K. S. (2013). Flutter shutter based motion deblurring in complex scenes. In: 2013 Annual IEEE India Conference (INDICON), IEEE, pp 1–6

  • Hitomi, Y., Gu, J., Gupta, M., Mitsunaga, T., & Nayar, S. K. (2011). Video from a single coded exposure photograph using a learned over-complete dictionary. In: 2011 International Conference on Computer Vision (ICCV), IEEE, pp 287–294

  • Jeon, H. G., Lee, J. Y., Han, Y., Kim, S. J., & Kweon, I. S. (2015). Complementary sets of shutter sequences for motion deblurring. In: 2015 IEEE International Conference on Computer Vision (ICCV), IEEE, pp 3541–3549

  • Jeon, H. G., Lee, J. Y., Han, Y., Kim, S. J., & Kweon, I. S. (2017). Generating fluttering patterns with low autocorrelation for coded exposure imaging. International Journal of Computer Vision, 123(2), 269–286.

  • Jin, M., Meishvili, G., & Favaro, P. (2018). Learning to extract a video sequence from a single motion-blurred image. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 6334–6342

  • Jin, M., Hu, Z., & Favaro, P. (2019). Learning to extract flawless slow motion from blurry videos. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp 8104–8113

  • Karras, T., Aittala, M., Laine, S., Härkönen, E., Hellsten, J., Lehtinen, J., & Aila, T. (2021). Alias-free generative adversarial networks. Advances in Neural Information Processing Systems, 34, 852–863.

  • Ke, J., Wang, Q., Wang, Y., Milanfar, P., & Yang, F. (2021). MUSIQ: Multi-scale image quality transformer. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), IEEE, pp 5128–5137

  • Li, C., Guo, C., Han, L., Jiang, J., Cheng, M. M., Gu, J., & Loy, C. C. (2022). Low-light image and video enhancement using deep learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(12), 9396–9416. https://doi.org/10.1109/TPAMI.2021.3126387

  • Li, D., Bian, L., & Zhang, J. (2022). High-speed large-scale imaging using frame decomposition from intrinsic multiplexing of motion. IEEE Journal of Selected Topics in Signal Processing, 16(4), 700–712.

  • Li, Z., Wang, M., Pi, H., Xu, K., Mei, J., & Liu, Y. (2022c). E-NeRV: Expedite neural video representation with disentangled spatial-temporal context. In: Computer Vision—ECCV 2022, Springer Nature Switzerland, pp 267–284

  • Lin, S., Zhang, J., Pan, J., Jiang, Z., Zou, D., Wang, Y., Chen, J., & Ren, J. (2020). Learning event-driven video deblurring and interpolation. In: Computer Vision—ECCV 2020, Springer International Publishing, pp 695–710

  • Liu, D., Gu, J., Hitomi, Y., Gupta, M., Mitsunaga, T., & Nayar, S. K. (2014). Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(2), 248–260.

  • Llull, P., Liao, X., Yuan, X., Yang, J., Kittle, D., Carin, L., Sapiro, G., & Brady, D. J. (2013). Coded aperture compressive temporal imaging. Optics Express, 21(9), 10526–10545.

  • Loshchilov, I., & Hutter, F. (2017). SGDR: Stochastic gradient descent with warm restarts. In: 2017 International Conference on Learning Representations (ICLR)

  • Mai, L., & Liu, F. (2022). Motion-adjustable neural implicit video representation. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 10738–10747

  • McCloskey, S. (2010). Velocity-dependent shutter sequences for motion deblurring. In: Computer Vision—ECCV 2010, Springer, pp 309–322

  • McCloskey, S., Ding, Y., & Yu, J. (2012). Design and estimation of coded exposure point spread functions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(10), 2071–2077.

  • Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2020). NeRF: Representing scenes as neural radiance fields for view synthesis. In: Computer Vision—ECCV 2020, Springer International Publishing, pp 405–421

  • Nah, S., Kim, T. H., & Lee, K. M. (2017). Deep multi-scale convolutional neural network for dynamic scene deblurring. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp 257–265

  • Nah, S., Son, S., Lee, J., & Lee, K. M. (2021). Clean images are hard to reblur: Exploiting the ill-posed inverse task for dynamic scene deblurring. In: 2021 International Conference on Learning Representations (ICLR).

  • Pan, L., Scheerlinck, C., Yu, X., Hartley, R., Liu, M., & Dai, Y. (2019). Bringing a blurry frame alive at high frame-rate with an event camera. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 6820–6829

  • Parihar, A. S., Varshney, D., Pandya, K., & Aggarwal, A. (2022). A comprehensive survey on video frame interpolation techniques. The Visual Computer, 38(1), 295–319.

  • Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., & Desmaison, A. (2019). PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 8024–8035.

  • Pinkus, A. (1999). Approximation theory of the MLP model in neural networks. Acta Numerica, 8, 143–195.

  • Purohit, K., Shah, A., & Rajagopalan, A. N. (2019). Bringing alive blurred moments. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 6830–6839

  • Qiu, J., Wang, X., Maybank, S. J., & Tao, D. (2019). World from blur. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp 8485–8496

  • Raskar, R., Agrawal, A., & Tumblin, J. (2006). Coded exposure photography: Motion deblurring using fluttered shutter. ACM Transactions on Graphics, 25(3), 795–804.

  • Rota, C., Buzzelli, M., Bianco, S., & Schettini, R. (2023). Video restoration based on deep learning: A comprehensive survey. Artificial Intelligence Review, 56(6), 5317–5364.

  • Rozumnyi, D., Oswald, M. R., Ferrari, V., Matas, J., & Pollefeys, M. (2021). DeFMO: Deblurring and shape recovery of fast moving objects. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 3456–3465

  • Sanghvi, Y., Gnanasambandam, A., Mao, Z., & Chan, S. H. (2022). Photon-limited blind deconvolution using unsupervised iterative kernel estimation. IEEE Transactions on Computational Imaging, 8, 1051–1062.

  • Shangguan, W., Sun, Y., Gan, W., & Kamilov, U. S. (2022). Learning cross-video neural representations for high-quality frame interpolation. In: Computer Vision–ECCV 2022, Springer Nature Switzerland, pp 511–528.

  • Shedligeri, P. S. A., & Mitra, K. (2021). A unified framework for compressive video recovery from coded exposure techniques. In: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), IEEE, pp 1599–1608

  • Shen, W., Bao, W., Zhai, G., Chen, L., Min, X., & Gao, Z. (2020). Blurry video frame interpolation. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 5114–5123

  • Tancik, M., Srinivasan, P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J., & Ng, R. (2020). Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Information Processing Systems, 33, 7537–7547.

  • Wang, Z., Bovik, A., Sheikh, H., & Simoncelli, E. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612.

  • Xie, X., Zhou, P., Li, H., Lin, Z., & Yan, S. (2023). Adan: Adaptive Nesterov momentum algorithm for faster optimizing deep models. arXiv preprint arXiv:2208.06677

  • Yang, R., Xiao, T., Cheng, Y., Cao, Q., Qu, J., Suo, J., & Dai, Q. (2022). SCI: A spectrum concentrated implicit neural compression for biomedical data. arXiv preprint arXiv:2209.15180

  • Zhang, K., Luo, W., Stenger, B., Ren, W., Ma, L., & Li, H. (2020a). Every moment matters: Detail-aware networks to bring a blurry image alive. In: 28th ACM International Conference on Multimedia, ACM, pp 384–392.

  • Zhang, K., Ren, W., Luo, W., Lai, W. S., Stenger, B., Yang, M. H., & Li, H. (2022). Deep image deblurring: A survey. International Journal of Computer Vision, 130(9), 2103–2130.

  • Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018). The unreasonable effectiveness of deep features as a perceptual metric. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp 586–595.

  • Zhang, W., Ma, K., Yan, J., Deng, D., & Wang, Z. (2020). Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Transactions on Circuits and Systems for Video Technology, 30(1), 36–47.

  • Zhang, Z., Deng, C., Liu, Y., Yuan, X., Suo, J., & Dai, Q. (2021). Ten-mega-pixel snapshot compressive imaging with a hybrid coded aperture. Photonics Research, 9(11), 2277–2287.

  • Zhang, Z., Cheng, Y., Suo, J., Bian, L., & Dai, Q. (2023). INFWIDE: Image and feature space wiener deconvolution network for non-blind image deblurring in low-light conditions. IEEE Transactions on Image Processing, 32, 1390–1402.

  • Zhang, Z., Dong, K., Suo, J., & Dai, Q. (2023). Deep coded exposure: End-to-end co-optimization of flutter shutter and deblurring processing for general motion blur removal. Photonics Research, 11(10), 1678.

  • Zhong, Z., Sun, X., Wu, Z., Zheng, Y., Lin, S., & Sato, I. (2022). Animation from Blur: Multi-modal blur decomposition with motion guidance. In: Computer Vision–ECCV 2022, Springer Nature Switzerland, pp 599–615

  • Zuckerman, L. P., Naor, E., Pisha, G., Bagon, S., & Irani, M. (2020). Across scales and across dimensions: Temporal super-resolution using deep internal learning. In: Computer Vision–ECCV 2020, Springer International Publishing, pp 52–68.

Acknowledgements

This work was supported by the Ministry of Science and Technology of the People’s Republic of China [grant number 2020AAA0108202] and the National Natural Science Foundation of China [grant numbers 61931012, 62088102].

Author information

Corresponding author

Correspondence to Jinli Suo.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Communicated by Chen Change Loy.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

The supplementary material contains three videos demonstrating the blur decomposition results of the proposed framework and comparing it with the competing methods. (Video 45,125 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhang, Z., Yang, R., Suo, J. et al. Lightweight High-Speed Photography Built on Coded Exposure and Implicit Neural Representation of Videos. Int J Comput Vis 133, 991–1011 (2025). https://doi.org/10.1007/s11263-024-02198-1
