
Adversarial Reweighting with \(\alpha \)-Power Maximization for Domain Adaptation


Abstract

Practical Domain Adaptation (DA) tasks, e.g., Partial DA (PDA), open-set DA, universal DA, and test-time adaptation, have gained increasing attention in the machine learning community. In this paper, we propose a novel approach, dubbed Adversarial Reweighting with \(\alpha \)-Power Maximization (ARPM), for PDA, where the source domain contains private classes absent from the target domain. In ARPM, we propose a novel adversarial reweighting model that adversarially learns to reweight the source domain data, identifying source-private class samples by assigning them smaller weights and thereby mitigating potential negative transfer. Based on the adversarial reweighting, we train the transferable recognition model on the reweighted source distribution so that it can classify common-class data. To reduce the prediction uncertainty of the recognition model on the target domain, we present an \(\alpha \)-power maximization mechanism in ARPM, which enriches the family of losses for reducing prediction uncertainty in PDA. Extensive experimental results on five PDA benchmarks, i.e., Office-31, Office-Home, VisDA-2017, ImageNet-Caltech, and DomainNet, show that our method is superior to recent PDA methods. Ablation studies further confirm the effectiveness of the components of our approach. To theoretically analyze our method, we deduce an upper bound on the target domain expected error for PDA, which is approximately minimized by our approach. We further extend ARPM to open-set DA, universal DA, and test-time adaptation, and verify its usefulness through experiments.
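As a minimal illustrative sketch of the two ingredients described above (not the authors' implementation: the function names, the default \(\alpha =2\), and the trade-off weight `lam` are our assumptions, and the per-sample weights are assumed to come from the adversarial reweighting model):

```python
import torch
import torch.nn.functional as F


def alpha_power_loss(target_logits: torch.Tensor, alpha: float = 2.0) -> torch.Tensor:
    """Negative mean alpha-power of the predicted class probabilities.

    For alpha > 1, sum_k p_k^alpha is maximal for one-hot predictions and
    minimal for uniform ones, so minimizing this loss reduces prediction
    uncertainty on the unlabeled target data.
    """
    probs = F.softmax(target_logits, dim=1)
    return -probs.pow(alpha).sum(dim=1).mean()


def reweighted_source_loss(source_logits: torch.Tensor,
                           labels: torch.Tensor,
                           weights: torch.Tensor) -> torch.Tensor:
    """Cross-entropy on source data, reweighted per sample.

    `weights` are assumed to be produced by the adversarial reweighting
    model, with smaller values on suspected source-private-class samples.
    """
    per_sample = F.cross_entropy(source_logits, labels, reduction='none')
    return (weights * per_sample).sum() / weights.sum().clamp_min(1e-8)


# Illustrative combined objective for one training step; `lam` is a
# hypothetical trade-off hyperparameter:
# loss = reweighted_source_loss(s_logits, s_labels, w) + lam * alpha_power_loss(t_logits)
```

As a sanity check on the uncertainty-reducing effect: a C-class uniform prediction gives \(\sum _k p_k^\alpha = C^{1-\alpha }<1\) for \(\alpha >1\), whereas a one-hot prediction attains the maximum value 1, so maximizing the \(\alpha \)-power term favors confident (low-uncertainty) target predictions.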



Data Availability Statement

The data that support the findings of this study are available from the authors upon request.

Notes

  1. In this paper, by “prediction uncertainty”, we refer to the uncertainty of the classification probability distribution (classification score) output by the recognition model; e.g., the uniform distribution has larger uncertainty, while the one-hot distribution has smaller uncertainty.

  2. Following Yang et al. (2021b), we use the cosine distance to find the neighbors (a minimal sketch is given after these notes).

  3. As in Yang et al. (2021b), we set \(K=M=5\) for the VisDA-2017 dataset and \(K=4\), \(M=3\) for the other datasets in our experiments.

  4. We set the norm as in Gu et al. (2020).

  5. On the VisDA-2017 dataset, we do not normalize the weight of C; we empirically find that the unnormalized weight yields better results there.
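Notes 2 and 3 refer to the cosine-distance neighborhood of Yang et al. (2021b). Below is a minimal sketch of such a lookup under our own assumptions (the function name and the target feature bank are illustrative, and self-matches are not excluded for brevity):

```python
import torch
import torch.nn.functional as F


def cosine_knn(features: torch.Tensor, bank: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Indices of the k nearest neighbors (under cosine distance) of each
    row of `features` within the feature `bank`."""
    f = F.normalize(features, dim=1)  # unit-norm rows
    b = F.normalize(bank, dim=1)
    sim = f @ b.t()                   # cosine-similarity matrix
    # Largest cosine similarity == smallest cosine distance (1 - sim).
    return sim.topk(k, dim=1).indices
```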

References

  • Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein generative adversarial networks. In ICML.

  • Baktashmotlagh, M., Faraki, M., Drummond, T., & Salzmann, M. (2019). Learning factorized representations for open-set domain adaptation. In ICLR.

  • Balgi, S., & Dukkipati, A. (2022). Contradistinguisher: A Vapnik's imperative to unsupervised domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9), 4730–4747.

  • Brock, A., Donahue, J., & Simonyan, K. (2019). Large scale GAN training for high fidelity natural image synthesis. In ICLR.

  • Bucci, S., Loghmani, M. R., & Tommasi, T. (2020). On the effectiveness of image rotation for open set domain adaptation. In ECCV.

  • Cai, T., Gao, R., Lee, J., & Lei, Q. (2021). A theory of label propagation for subpopulation shift. In ICML.

  • Cao, Z., Long, M., Wang, J., & Jordan, M. I. (2018). Partial transfer learning with selective adversarial networks. In CVPR.

  • Cao, Z., Ma, L., Long, M., & Wang, J. (2018). Partial adversarial domain adaptation. In ECCV.

  • Cao, Z., You, K., Long, M., Wang, J., & Yang, Q. (2019). Learning to transfer examples for partial domain adaptation. In CVPR.

  • Cao, Z., You, K., Zhang, Z., Wang, J., & Long, M. (2023). From big to small: Adaptive learning to partial-set domains. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(2), 1766–1780.


  • Chang, W., Shi, Y., Tuan, H., & Wang, J. (2022). Unified optimal transport framework for universal domain adaptation. In NeurIPS.

  • Chen, L., Lou, Y., He, J., Bai, T., & Deng, M. (2022). Geometric anchor correspondence mining with uncertainty modeling for universal domain adaptation. In CVPR.

  • Chen, M., Xue, H., & Cai, D. (2019). Domain adaptation for semantic segmentation with maximum squares loss. In ICCV.

  • Cui, S., Wang, S., Zhuo, J., Li, L., Huang, Q., & Tian, Q. (2020). Towards discriminability and diversity: Batch nuclear-norm maximization under label insufficient situations. In CVPR.

  • Diamond, S., & Boyd, S. (2016). CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(1), 2909–2913.

  • Du, Z., Li, J., Su, H., Zhu, L., & Lu, K. (2021). Cross-domain gradient discrepancy minimization for unsupervised domain adaptation. In CVPR.

  • Feng, Q., Kang, G., Fan, H., & Yang, Y. (2019). Attract or distract: Exploit the margin of open set. In ICCV.

  • Fu, B., Cao, Z., Long, M., & Wang, J. (2020). Learning to detect open classes for universal domain adaptation. In ECCV.

  • Ganin, Y., & Lempitsky, V. (2015). Unsupervised domain adaptation by backpropagation. In ICML.

  • Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., & Lempitsky, V. (2016). Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1), 2030–2096.


  • Glorot, X., & Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. In AISTATS.

  • Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial networks. arXiv preprint arXiv:1406.2661

  • Grandvalet, Y., & Bengio, Y. (2005). Semi-supervised learning by entropy minimization. In NeurIPS.

  • Gretton, A., Borgwardt, K., Rasch, M., Schölkopf, B., & Smola, A. (2006). A kernel method for the two-sample-problem. In NeurIPS.

  • Griffin, G., Holub, A., & Perona, P. (2007). Caltech-256 object category dataset. Pasadena: California Institute of Technology.


  • Gu, X., Sun, J., & Xu, Z. (2020). Spherical space domain adaptation with robust pseudo-label loss. In CVPR.

  • Gu, X., Yu, X., Yang, Y., Sun, J., & Xu, Z. (2021). Adversarial reweighting for partial domain adaptation. In NeurIPS.

  • Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. C. (2017). Improved training of Wasserstein GANs. In NeurIPS.

  • Guo, P., Zhu, J., & Zhang, Y. (2022). Selective partial domain adaptation. In BMVC.

  • Gu, X., Sun, J., & Xu, Z. (2022). Unsupervised and semi-supervised robust spherical space domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence. https://doi.org/10.1109/TPAMI.2022.3158637


  • He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV.

  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In CVPR.

  • He, C., Li, X., Xia, Y., Tang, J., Yang, J., & Ye, Z. (2023). Addressing the overfitting in partial domain adaptation with self-training and contrastive learning. IEEE Transactions on Circuits and Systems for Video Technology. https://doi.org/10.1109/TCSVT.2023.3296617


  • Hendrycks, D., Basart, S., Mu, N., Kadavath, S., Wang, F., Dorundo, E., Desai, R., Zhu, T., Parajuli, S., Guo, M., Song, D., Steinhardt, J., & Gilmer, J. (2021). The many faces of robustness: A critical analysis of out-of-distribution generalization. In ICCV.

  • Hoffman, J., Tzeng, E., Park, T., Zhu, J. -Y., Isola, P., Saenko, K., Efros, A., & Darrell, T. (2018). CyCADA: Cycle-consistent adversarial domain adaptation. In ICML.

  • Jing, M., Li, J., Zhu, L., Ding, Z., Lu, K., & Yang, Y. (2021). Balanced open set domain adaptation via centroid alignment. In AAAI.

  • Jing, T., Liu, H., & Ding, Z. (2021). Towards novel target discovery through open-set domain adaptation. In ICCV.

  • Kang, G., Jiang, L., Wei, Y., Yang, Y., & Hauptmann, A. (2022). Contrastive adaptation network for single- and multi-source domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(4), 1793–1804.


  • Koniusz, P., Tas, Y., & Porikli, F. (2017). Domain adaptation by mixture of alignments of second- or higher-order scatter tensors. In CVPR.

  • Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In NeurIPS.

  • Kundu, J. N., Bhambri, S., Kulkarni, A. R., Sarkar, H., Jampani, V., & Radhakrishnan, V. B. (2022). Subsidiary prototype alignment for universal domain adaptation. In NeurIPS.

  • Kundu, J. N., Venkat, N., Babu, R. V. (2020). Universal source-free domain adaptation. In CVPR.

  • Kundu, J. N., Venkat, N., Revanur, A., Babu, R. V. et al. (2020). Towards inheritable models for open-set domain adaptation. In CVPR.

  • Li, G., Kang, G., Zhu, Y., Wei, Y., & Yang, Y. (2021). Domain consensus clustering for universal domain adaptation. In CVPR.

  • Li, W., Liu, J., Han, B., & Yuan, Y. (2023). Adjustment and alignment for unbiased open set domain adaptation. In CVPR.

  • Liang, J., Hu, D., & Feng, J. (2020). Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In ICML.

  • Liang, J., Hu, D., Feng, J., & He, R. (2021). UMAD: Universal model adaptation under domain and category shift. arXiv preprint arXiv:2112.08553

  • Liang, J., Wang, Y., Hu, D., He, R., & Feng, J. (2020). A balanced and uncertainty-aware approach for partial domain adaptation. In ECCV.

  • Liang, J., Hu, D., Wang, Y., He, R., & Feng, J. (2021). Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence. https://doi.org/10.1109/TPAMI.2021.3103390


  • Li, W., & Chen, S. (2022). Unsupervised domain adaptation with progressive adaptation of subspaces. Pattern Recognition, 132, 108918.


  • Li, W., & Chen, S. (2023). Partial domain adaptation without domain alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(7), 8787–8797.


  • Li, S., Liu, C. H., Lin, Q., Wen, Q., Su, L., Huang, G., & Ding, Z. (2020). Deep residual correction network for partial domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(7), 2329–2344.


  • Lin, K. -Y., Zhou, J., Qiu, Y., & Zheng, W. -S. (2022). Adversarial partial domain adaptation by cycle inconsistency. In ECCV.

  • Liu, H., Cao, Z., Long, M., Wang, J., & Yang, Q. (2019). Separate to adapt: Open set domain adaptation via progressive separation. In CVPR.

  • Liu, X., Guo, Z., Li, S., Xing, F., You, J., Kuo, C. -C. J., El Fakhri, G., & Woo, J. (2021). Adversarial unsupervised domain adaptation with conditional and label shift: Infer, align and iterate. In ICCV.

  • Liu, H., Wang, J., & Long, M. (2021). Cycle self-training for domain adaptation. Advances in Neural Information Processing Systems, 34, 22968–22981.


  • Li, H., Wan, R., Wang, S., & Kot, A. C. (2021). Unsupervised domain adaptation in the wild via disentangling representation learning. International Journal of Computer Vision, 129, 267–283.


  • Long, M., Cao, Y., Wang, J., & Jordan, M. (2015). Learning transferable features with deep adaptation networks. In ICML.

  • Long, M., Cao, Z., Wang, J., & Jordan, M. I. (2018). Conditional adversarial domain adaptation. In NeurIPS.

  • Luo, Y.-W., & Ren, C.-X. (2023). MOT: Masked optimal transport for partial domain adaptation. In CVPR.

  • Luo, Y., Wang, Z., Huang, Z., & Baktashmotlagh, M. (2020). Progressive graph learning for open-set domain adaptation. In ICML.

  • Maaten, L., & Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9(86), 2579–2605.

  • Mirza, M. J., Micorek, J., Possegger, H., & Bischof, H. (2022). The norm must go on: Dynamic unsupervised domain adaptation by normalization. In CVPR.

  • Miyato, T., Kataoka, T., Koyama, M., & Yoshida, Y. (2018). Spectral normalization for generative adversarial networks. In ICLR.

  • Pan, Y., Yao, T., Li, Y., Ngo, C.-W., & Mei, T. (2020). Exploring category-agnostic clusters for open-set domain adaptation. In CVPR.

  • Panareda Busto, P., & Gall, J. (2017). Open set domain adaptation. In ICCV.

  • Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345–1359.

  • Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., & Chintala, S. (2019). PyTorch: An imperative style, high-performance deep learning library. In NeurIPS.

  • Peng, X., Bai, Q., Xia, X., Huang, Z., Saenko, K., & Wang, B. (2019). Moment matching for multi-source domain adaptation. In ICCV.

  • Peng, X., Usman, B., Kaushik, N., Hoffman, J., Wang, D., & Saenko, K. (2017). VisDA: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924

  • Qu, S., Zou, T., Röhrbein, F., Lu, C., Chen, G., Tao, D., & Jiang, C. (2023). Upcycling models under domain and category shift. In CVPR.

  • Rakshit, S., Tamboli, D., Meshram, P.S., Banerjee, B., Roig, G., & Chaudhuri, S. (2020). Multi-source open-set deep adversarial domain adaptation. In ECCV.

  • Reddi, S., Ramdas, A., Póczos, B., Singh, A., & Wasserman, L. (2015). On the high dimensional power of a linear-time two sample test under mean-shift alternatives. In AISTATS.

  • Ren, C.-X., Ge, P., Yang, P., & Yan, S. (2020). Learning target-domain-specific classifier for partial domain adaptation. IEEE Transactions on Neural Networks and Learning Systems, 32(5), 1989–2001.


  • Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), 211–252.


  • Saenko, K., Kulis, B., Fritz, M., & Darrell, T. (2010). Adapting visual category models to new domains. In ECCV.

  • Sahoo, A., Panda, R., Feris, R., Saenko, K., & Das, A. (2023). Select, label, and mix: Learning discriminative invariant feature representations for partial domain adaptation. In WACV.

  • Saito, K., & Saenko, K. (2021). OVANet: One-vs-all network for universal domain adaptation. In ICCV.

  • Saito, K., Kim, D., Sclaroff, S., & Saenko, K. (2020). Universal domain adaptation through self supervision. In NeurIPS.

  • Saito, K., Kim, D., Sclaroff, S., Darrell, T., & Saenko, K. (2019). Semi-supervised domain adaptation via minimax entropy. In ICCV.

  • Saito, K., Yamamoto, S., Ushiku, Y., & Harada, T. (2018). Open set domain adaptation by backpropagation. In ECCV.

  • Schneider, S., Rusak, E., Eck, L., Bringmann, O., Brendel, W., & Bethge, M. (2020). Improving robustness against common corruptions by covariate shift adaptation. In NeurIPS.

  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556

  • Sun, B., & Saenko, K. (2016). Deep CORAL: Correlation alignment for deep domain adaptation. In ECCV.

  • Sun, Y., Wang, X., Liu, Z., Miller, J., Efros, A., & Hardt, M. (2020). Test-time training with self-supervision for generalization under distribution shifts. In ICML.

  • Tang, S., Chang, A., Zhang, F., Zhu, X., Ye, M., & Zhang, C. (2023). Source-free domain adaptation via target prediction distribution searching. International Journal of Computer Vision, 132(3), 654–672.


  • Tzeng, E., Hoffman, J., Darrell, T., & Saenko, K. (2015). Simultaneous deep transfer across domains and tasks. In ICCV.

  • Tzeng, E., Hoffman, J., Saenko, K., & Darrell, T. (2017). Adversarial discriminative domain adaptation. In CVPR.

  • Tzeng, E., Hoffman, J., Zhang, N., Saenko, K., & Darrell, T. (2014). Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474

  • Venkateswara, H., Eusebio, J., Chakraborty, S., & Panchanathan, S. (2017). Deep hashing network for unsupervised domain adaptation. In CVPR.

  • Wang, X., Li, L., Ye, W., Long, M., & Wang, J. (2019). Transferable attention for domain adaptation. In AAAI.

  • Wang, D., Shelhamer, E., Liu, S., Olshausen, B., & Darrell, T. (2021). Tent: Fully test-time adaptation by entropy minimization. In ICLR.

  • Wang, Y., Zhang, L., Song, R., Li, H., Rosin, P. L., & Zhang, W. (2023). Exploiting inter-sample affinity for knowability-aware universal domain adaptation. International Journal of Computer Vision, 132(5), 1800–1816.


  • Wei, C., Shen, K., Chen, Y., & Ma, T. (2021). Theoretical analysis of self-training with deep networks on unlabeled data. In ICLR.

  • Wu, K., Wu, M., Chen, Z., Jin, R., Cui, W., Cao, Z., & Li, X. (2023). Reinforced adaptation network for partial domain adaptation. IEEE Transactions on Circuits and Systems for Video Technology, 33(5), 2370–2380.


  • Xiao, W., Ding, Z., & Liu, H. (2021). Implicit semantic response alignment for partial domain adaptation. In NeurIPS.

  • Xu, T., Chen, W., Wang, P., Wang, F., Li, H., & Jin, R. (2022). CDTrans: Cross-domain transformer for unsupervised domain adaptation. In ICLR.

  • Yan, J., Jing, Z., & Leung, H. (2020). Discriminative partial domain adversarial network. In ECCV.

  • Yang, Y., Gu, X., & Sun, J. (2023). Prototypical partial optimal transport for universal domain adaptation. In AAAI.

  • Yang, S., Wang, Y., van de Weijer, J., Herranz, L., & Jui, S. (2021). Generalized source-free domain adaptation. In ICCV.

  • Yang, S., Wang, Y., van de Weijer, J., Herranz, L., & Jui, S. (2021). Exploiting the intrinsic neighborhood structure for source-free domain adaptation. In NeurIPS.

  • Yang, C., Cheung, Y.-M., Ding, J., Tan, K. C., Xue, B., & Zhang, M. (2023). Contrastive learning assisted-alignment for partial domain adaptation. IEEE Transactions on Neural Networks and Learning Systems, 34(10), 7621–7634.


  • You, K., Long, M., Cao, Z., Wang, J., & Jordan, M. I. (2019). Universal domain adaptation. In CVPR.

  • Zellinger, W., Grubinger, T., Lughofer, E., Natschläger, T., & Saminger-Platz, S. (2017). Central moment discrepancy (CMD) for domain-invariant representation learning. arXiv preprint arXiv:1702.08811

  • Zhang, J., Ding, Z., Li, W., & Ogunbona, P. (2018). Importance weighted adversarial nets for partial domain adaptation. In CVPR.

  • Zhong, Z., Zheng, L., Cao, D., & Li, S. (2017). Re-ranking person re-identification with k-reciprocal encoding. In CVPR.


Acknowledgements

The work was supported by National Key R&D Program 2021YFA1003002, Key-Area Research and Development Program of Guangdong Province 2022B0303020003, NSFC (12125104, U20B2075, 12326615, 623B2084), Postdoctoral Fellowship Program of CPSF GZB20230582, and Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences.

Funding

National Key R&D Program 2021YFA1003002, Key-Area Research and Development Program of Guangdong Province 2022B0303020003, NSFC (12125104, U20B2075, 12326615, 623B2084), Postdoctoral Fellowship Program of CPSF GZB20230582, and Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences.

Author information


Corresponding author

Correspondence to Jian Sun.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Communicated by Zhun Zhong.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

We first give some lemmas and then provide the proof of Theorem 1.

Lemma 1

Divide \(\mathcal {Y}_\textrm{com}\) into \(S_1\) and \(S_2\) such that \(S_1 = \{i\in \mathcal {Y}_\textrm{com}:\mathbb {E}_{(\textbf{x},y)\sim \frac{P^\textrm{c}_i+Q_i}{2}}{\mathbb {I}}(\exists \textbf{x}'\in {\mathcal {N}}(\textbf{x}), {\tilde{f}}(\textbf{x})\ne {\tilde{f}}(\textbf{x}'))<\min \{\epsilon ,q\}\}\) and \(S_2 = \{i\in \mathcal {Y}_\textrm{com}:\mathbb {E}_{(\textbf{x},y)\sim \frac{P^\textrm{c}_i+Q_i}{2}}{\mathbb {I}}(\exists \textbf{x}'\in {\mathcal {N}}(\textbf{x}),{\tilde{f}}(\textbf{x}) \ne {\tilde{f}}(\textbf{x}'))\ge \min \{\epsilon ,q\}\}\). Under the condition of Theorem 1, we have

$$\begin{aligned} \sum _{i\in S_2} \frac{P^\textrm{c}+Q}{2}(y=i) \le \frac{R_{\frac{P^\textrm{c}+Q}{2}}(f)}{\min \{\epsilon ,q\}}. \end{aligned}$$
(16)

Proof

Suppose \(\sum _{i\in S_2} \frac{P^\textrm{c}+Q}{2}(y=i) > \frac{R_{\frac{P^\textrm{c}+Q}{2}}(f)}{\min \{\epsilon ,q\}}\), which implies

$$\begin{aligned} R_{\frac{P^\textrm{c}+Q}{2}}(f)&= \frac{P^\textrm{c}+Q}{2}\big (\{\textbf{x}:\exists \textbf{x}'\in {\mathcal {N}}(\textbf{x}),{\tilde{f}}(\textbf{x})\ne {\tilde{f}}(\textbf{x}')\}\big )\\&=\mathbb {E}_{(\textbf{x},y)\sim \frac{P^\textrm{c}+Q}{2}}{\mathbb {I}}\big (\exists \textbf{x}'\in {\mathcal {N}}(\textbf{x}),{\tilde{f}}(\textbf{x})\ne {\tilde{f}}(\textbf{x}')\big )\\&= \sum _{i\in \mathcal {Y}_\textrm{com}}\mathbb {E}_{\textbf{x}\sim \frac{P^\textrm{c}_i+Q_i}{2}}{\mathbb {I}}\big (\exists \textbf{x}'\in {\mathcal {N}}(\textbf{x}),{\tilde{f}}(\textbf{x})\ne {\tilde{f}}(\textbf{x}')\big )\,\frac{P^\textrm{c}+Q}{2}(y=i)\\&\ge \sum _{i\in S_2}\mathbb {E}_{\textbf{x}\sim \frac{P^\textrm{c}_i+Q_i}{2}}{\mathbb {I}}\big (\exists \textbf{x}'\in {\mathcal {N}}(\textbf{x}),{\tilde{f}}(\textbf{x})\ne {\tilde{f}}(\textbf{x}')\big )\,\frac{P^\textrm{c}+Q}{2}(y=i)\\&\ge \min \{\epsilon ,q\} \sum _{i\in S_2} \frac{P^\textrm{c}+Q}{2}(y=i) > R_{\frac{P^\textrm{c}+Q}{2}}(f). \end{aligned}$$
(17)

\(R_{\frac{P^\textrm{c}+Q}{2}}(f)>R_{\frac{P^\textrm{c}+Q}{2}}(f)\) forms a contradiction. \(\square \)

Lemma 2

(Lemma 2 in Liu et al. (2021)) Under the condition of Theorem 1, if sub-populations \(P^\textrm{c}_i\) and \(Q_i\) satisfy \(\mathbb {E}_{(\textbf{x},y)\sim \frac{P^\textrm{c}_i+Q_i}{2}}{\mathbb {I}}(\exists \textbf{x}'\in {\mathcal {N}}(\textbf{x}),{\tilde{f}}(\textbf{x}) \ne {\tilde{f}}(\textbf{x}'))<\min \{\epsilon ,q\}\), we have

$$\begin{aligned} |\varepsilon _{P^\textrm{c}_i}(f)-\varepsilon _{Q_i}(f)|\le 2q. \end{aligned}$$
(18)

Lemma 3

(Lemma 3 in Liu et al. (2021)) For any distribution P, if f is L-Lipschitz w.r.t. \(d(\cdot ,\cdot )\), we have

$$\begin{aligned} R_P(f) \le \frac{1}{(1-2L\xi )}(1-{M}_P(f)). \end{aligned}$$
(19)

Proof of Theorem 1

From the definition of \(\varepsilon _Q(f)\) in PDA, we have

$$\begin{aligned} \varepsilon _Q(f)&= \sum _{i\in \mathcal {Y}_\textrm{com}}\varepsilon _{Q_i}(f)\,Q(y=i) \\&\le \sum _{i\in S_1}\varepsilon _{Q_i}(f)\,Q(y=i) + \sum _{i\in S_2} Q(y=i)\\&\le \sum _{i\in S_1}\big (\varepsilon _{P^\textrm{c}_i}(f)+2q\big )\,r\,P^\textrm{c}(y=i) + \sum _{i\in S_2} Q(y=i)\\&\le \sum _{i\in \mathcal {Y}_\textrm{com}}\big (\varepsilon _{P^\textrm{c}_i}(f)+2q\big )\,r\,P^\textrm{c}(y=i) + \sum _{i\in S_2} Q(y=i)\\&= r\,\varepsilon _{P^\textrm{c}}(f) + 2qr + \sum _{i\in S_2} Q(y=i). \end{aligned}$$
(20)

The first inequality uses \(\varepsilon _{Q_i}(f)\le 1\) for \(i\in S_2\); the second uses Lemma 2 and \(\frac{Q(y=i)}{P^\textrm{c}(y=i)}\le r\) for \(i\in \mathcal {Y}_\textrm{com}\). Since

$$\begin{aligned} \begin{aligned} \frac{P^\textrm{c}+Q}{2}(y=i)&= \frac{1}{2}(P^\textrm{c}(y=i)+Q(y=i))\\&\ge \frac{1}{2}(\frac{1}{r}Q(y=i)+Q(y=i)) \\&= \frac{1+r}{2r}Q(y=i), \end{aligned} \end{aligned}$$
(21)

we have

$$\begin{aligned} \sum _{i\in S_2} Q(y=i) \le \frac{2r}{1+r}\sum _{i\in S_2} \frac{P^\textrm{c}+Q}{2}(y=i). \end{aligned}$$
(22)

Using Lemma 1, we have

$$\begin{aligned} \begin{aligned} \sum _{i\in S_2} Q(y=i) \le \frac{2r}{\min \{\epsilon ,q\}(1+r)}R_{\frac{P^\textrm{c}+Q}{2}}(f). \end{aligned} \end{aligned}$$
(23)

Writing \(R_{\frac{P^\textrm{c}+Q}{2}}(f)=\eta R_{\frac{P^\textrm{c}+Q}{2}}(f)+(1-\eta )R_{\frac{P^\textrm{c}+Q}{2}}(f)\) and applying Lemma 3 to the second term, for any \(\eta \in [0,1]\) we have

$$\begin{aligned} \sum _{i\in S_2} Q(y=i)&\le \frac{2r\eta }{\min \{\epsilon ,q\}(1+r)}R_{\frac{P^\textrm{c}+Q}{2}}(f)\\&\quad + \frac{2r(1-\eta )}{\min \{\epsilon ,q\}(1+r)(1-2L\xi )}\big (1-{M}_{\frac{P^\textrm{c}+Q}{2}}(f)\big ). \end{aligned}$$
(24)

Combining Eqs. (20) and (24), we have

$$\begin{aligned} \begin{aligned} \varepsilon _Q(f)&\le r\varepsilon _{P^\textrm{c}}(f) + c_1 R_{\frac{P^\textrm{c}+Q}{2}}(f)\\&\quad +c_2(1-{M}_{\frac{P^\textrm{c}+Q}{2}}(f)) + 2rq, \end{aligned} \end{aligned}$$
(25)

where the coefficients \(c_1 = \frac{2\eta r}{\min \{\epsilon ,q\}(1+r)}\) and \(c_2 = \frac{2r(1-\eta )}{\min \{\epsilon ,q\}(1-2L\xi )(1+r)}\) do not depend on f. \(\square \)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Gu, X., Yu, X., Yang, Y. et al. Adversarial Reweighting with \(\alpha \)-Power Maximization for Domain Adaptation. Int J Comput Vis 132, 4768–4791 (2024). https://doi.org/10.1007/s11263-024-02107-6
