
CSFRNet: Integrating Clothing Status Awareness for Long-Term Person Re-identification

International Journal of Computer Vision

Abstract

Addressing the dynamic nature of long-term person re-identification (LT-reID) amid varying clothing conditions necessitates a departure from conventional methods. Traditional LT-reID strategies, mainly biometrics-based and data adaptation-based, each have their pitfalls: the former falters when high-quality biometric data are unavailable, while the latter loses efficacy when clothing changes are minimal or subtle. To overcome these obstacles, we propose the clothing status-aware feature regularization network (CSFRNet). This approach incorporates clothing status awareness into the feature learning process without requiring explicit clothing labels, enhancing the adaptability and accuracy of LT-reID systems in scenarios where clothing may change completely, partially, or not at all over time. The versatility of CSFRNet is demonstrated on diverse LT-reID benchmarks, including Celeb-reID, Celeb-reID-light, PRCC, DeepChange, and LTCC, marking a significant advance in addressing the real-world variability of clothing in LT-reID.
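
As an informal illustration of the core idea described above, the sketch below shows one way a soft clothing-status signal could be used to regularize identity features in a PyTorch model. It is a minimal sketch based only on our reading of the abstract: the module names (CSFRSketch, status_head), the way the status score is produced, and the pairwise weighting in status_aware_regularizer are illustrative assumptions, not the authors' actual CSFRNet implementation.

```python
# Hypothetical sketch of clothing-status-aware feature regularization.
# Assumes a ResNet-50 backbone and a soft clothing-status branch; all
# module names and the loss form are illustrative, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class CSFRSketch(nn.Module):
    def __init__(self, num_ids: int, feat_dim: int = 512):
        super().__init__()
        backbone = models.resnet50(weights=None)  # ImageNet weights optional
        # Keep everything up to (and including) global average pooling.
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])
        self.embed = nn.Linear(2048, feat_dim)        # identity embedding
        self.id_classifier = nn.Linear(feat_dim, num_ids)
        # Soft clothing-status branch: a scalar in [0, 1] meant to indicate
        # how strongly the appearance is dominated by clothing cues.
        self.status_head = nn.Sequential(
            nn.Linear(2048, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, images: torch.Tensor):
        x = self.backbone(images).flatten(1)          # (B, 2048)
        feat = F.normalize(self.embed(x), dim=1)      # (B, feat_dim)
        logits = self.id_classifier(feat)
        status = self.status_head(x).squeeze(1)       # (B,) soft status score
        return feat, logits, status


def status_aware_regularizer(feat, labels, status):
    """Pull same-identity features together, weighting pairs whose predicted
    clothing status differs more strongly (an assumed regularization form)."""
    dist = torch.cdist(feat, feat)                                   # (B, B)
    same_id = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()    # (B, B)
    status_gap = (status.unsqueeze(0) - status.unsqueeze(1)).abs()   # (B, B)
    weights = same_id * (1.0 + status_gap)   # emphasise cross-status pairs
    return (weights * dist).sum() / weights.sum().clamp_min(1e-6)


if __name__ == "__main__":
    model = CSFRSketch(num_ids=751)
    imgs = torch.randn(8, 3, 256, 128)               # typical reID crop size
    ids = torch.randint(0, 751, (8,))
    feat, logits, status = model(imgs)
    loss = F.cross_entropy(logits, ids) \
        + 0.1 * status_aware_regularizer(feat, ids, status)
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```

The weighting here simply emphasises same-identity pairs whose predicted clothing status differs, which is one plausible reading of "status-aware regularization"; the mechanism in the published CSFRNet may differ.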




Acknowledgements

This work was jointly supported by the National Natural Science Foundation of China (Grant Nos. 62306311, 62373355, 62236010, 62276261, and 62201061), the National Key R&D Program of China (Grant No. 2022ZD0117901), the Key Research Program of Frontier Sciences, CAS, China (Grant No. ZDBSLYJSC032), and the Australian Research Council (Grant No. DP230101540).

Author information


Corresponding author

Correspondence to Liang Wang.

Additional information

Communicated by Shaogang Gong.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Huang, Y., Huang, Y., Zhang, Z. et al. CSFRNet: Integrating Clothing Status Awareness for Long-Term Person Re-identification. Int J Comput Vis 133, 3180–3202 (2025). https://doi.org/10.1007/s11263-024-02315-0

