Improving Domain Adaptation Through Class Aware Frequency Transformation

Published in: International Journal of Computer Vision

Abstract

In this work, we explore the use of frequency transformation for reducing the domain shift between the source and target domains (e.g., synthetic and real images, respectively) towards solving the domain adaptation task. Most unsupervised domain adaptation (UDA) algorithms focus on reducing the global domain shift between a labelled source domain and an unlabelled target domain by matching their marginal distributions under a small-domain-gap assumption, and their performance degrades when the gap between the source and target distributions is large. In order to bring the source and target domains closer, we propose a novel approach based on a traditional image processing technique, Class Aware Frequency Transformation (CAFT), which utilizes pseudo-label-based class-consistent low-frequency swapping to improve the overall performance of existing UDA algorithms. Compared with state-of-the-art deep learning based methods, the proposed approach is computationally more efficient and can easily be plugged into any existing UDA algorithm to improve its performance. Additionally, we introduce a novel approach for filtering target pseudo labels into clean and noisy sets based on the absolute difference between the top-2 class prediction probabilities; samples with clean pseudo labels can then be used to improve the performance of unsupervised learning algorithms. We name the overall framework CAFT++ and evaluate it on top of different UDA algorithms across many public domain adaptation datasets. Our extensive experiments indicate that CAFT++ achieves significant performance gains across all the popular benchmarks.
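
To make the core frequency-transformation step concrete, the following is a minimal sketch of low-frequency swapping in the Fourier domain, assuming NumPy; the function name and the band-size parameter beta are illustrative assumptions, not the paper's exact implementation. In CAFT the target image would additionally be chosen so that its pseudo label matches the source image's class; the snippet shows only the frequency swap itself.

```python
import numpy as np

def low_freq_swap(src_img, tgt_img, beta=0.01):
    """Replace the low-frequency amplitude of src_img with that of tgt_img.

    src_img, tgt_img: float arrays of shape (H, W, C), same size.
    beta: fraction of the spectrum treated as low frequency (a hypothetical
    hyperparameter; the paper tunes its own band size).
    """
    # Per-channel 2D FFT, shifted so low frequencies sit at the centre.
    src_fft = np.fft.fftshift(np.fft.fft2(src_img, axes=(0, 1)), axes=(0, 1))
    tgt_fft = np.fft.fftshift(np.fft.fft2(tgt_img, axes=(0, 1)), axes=(0, 1))

    src_amp, src_pha = np.abs(src_fft), np.angle(src_fft)
    tgt_amp = np.abs(tgt_fft)

    # Central square of the amplitude spectrum = the low-frequency band.
    h, w = src_img.shape[:2]
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    src_amp[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        tgt_amp[ch - b:ch + b + 1, cw - b:cw + b + 1]

    # Recombine the swapped amplitude with the source phase and invert,
    # keeping the source's semantic content but the target's "style".
    mixed = src_amp * np.exp(1j * src_pha)
    mixed = np.fft.ifft2(np.fft.ifftshift(mixed, axes=(0, 1)), axes=(0, 1))
    return np.real(mixed)
```

The pseudo-label filtering step can be sketched in the same spirit: sort each target sample's predicted class probabilities, take the gap between the top two, and treat samples with a large gap as the clean set. The threshold tau below is a hypothetical value, not one reported by the paper.

```python
def split_pseudo_labels(probs, tau=0.4):
    """Split target pseudo labels into clean/noisy sets by top-2 margin.

    probs: (N, num_classes) softmax outputs for the target samples.
    tau: margin threshold (hypothetical; chosen per dataset in practice).
    Returns a boolean mask that is True where the pseudo label is clean.
    """
    sorted_p = np.sort(probs, axis=1)           # ascending per row
    margin = sorted_p[:, -1] - sorted_p[:, -2]  # top-1 minus top-2 probability
    return margin > tau
```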

Acknowledgements

This work is partially supported by a Young Scientist Research Award (Sanction no. 59/20/11/2020-BRNS) to Anirban Chakraborty from DAE-BRNS, India.

Author information

Corresponding author

Correspondence to Anirban Chakraborty.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Kumar, V., Patil, H., Lal, R. et al. Improving Domain Adaptation Through Class Aware Frequency Transformation. Int J Comput Vis 131, 2888–2907 (2023). https://doi.org/10.1007/s11263-023-01810-0

  • Received:

  • Accepted:

  • Published:

  • Version of record:

  • Issue date:

  • DOI: https://doi.org/10.1007/s11263-023-01810-0
