
Poincaré Kernels for Hyperbolic Representations

Published in: International Journal of Computer Vision

Abstract

Embedding data in hyperbolic spaces has proven beneficial for many advanced machine learning applications. However, working in hyperbolic spaces is not without difficulties, owing to their curved geometry (e.g., computing the Fréchet mean of a set of points requires an iterative algorithm). In Euclidean spaces, one can resort to kernel machines that not only enjoy rich theoretical properties but can also deliver superior representational power (e.g., infinite-width neural networks). In this paper, we introduce valid kernel functions for hyperbolic representations. This brings two major advantages: (1) kernelization paves the way to seamlessly combining the representational power of kernel machines with hyperbolic embeddings, and (2) the rich structure of the Hilbert spaces associated with kernel machines enables us to simplify various operations involving hyperbolic data. That said, identifying valid kernel functions on curved spaces is not straightforward and is indeed considered an open problem in the learning community. Our work addresses this gap and develops several positive definite kernels in hyperbolic spaces (modeled by a Poincaré ball); the proposed kernels include rich universal ones (e.g., the Poincaré RBF kernel) and kernels realizing the multiple kernel learning scheme (e.g., the Poincaré radial kernel). We comprehensively study the proposed kernels on a variety of challenging tasks, including few-shot learning, zero-shot learning, person re-identification, deep metric learning, knowledge distillation, and self-supervised learning. The consistent performance gains across tasks demonstrate the benefits of kernelization for hyperbolic representations.
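To make the distance computations the abstract refers to concrete, here is a minimal NumPy sketch of the Poincaré geodesic distance (via Möbius addition, following the formulation in the cited Ganea et al., 2018) and of an RBF-style kernel built on it. Whether such a kernel is positive definite on the curved ball is precisely the open question this paper addresses, so `rbf_style_kernel` and the bandwidth `lam` are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def mobius_add(x, y, c):
    """Mobius addition on the Poincare ball of curvature -c (Ganea et al., 2018)."""
    xy = np.dot(x, y)
    x2 = np.dot(x, x)
    y2 = np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den

def poincare_distance(x, y, c=1.0):
    """Geodesic distance between two points of the Poincare ball."""
    diff = mobius_add(-x, y, c)
    return (2.0 / np.sqrt(c)) * np.arctanh(np.sqrt(c) * np.linalg.norm(diff))

def rbf_style_kernel(x, y, c=1.0, lam=1.0):
    # Illustrative only: positive definiteness of exp(-lam * d^2) on a
    # curved space is exactly what the paper's analysis investigates.
    return np.exp(-lam * poincare_distance(x, y, c) ** 2)
```

The sketch satisfies the basic sanity properties one would demand of any candidate kernel: zero self-distance, symmetry, and `k(x, x) = 1`.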


Notes

  1. In practice, such a hierarchical structure can be revealed by the geodesic distance between two points.

  2. The geodesic is the shortest path between two points; its length is termed the geodesic distance. For example, in Euclidean space the geodesic is the straight line connecting two points, and the geodesic distance reduces to the familiar Euclidean distance. In contrast, a geodesic on the n-sphere is a great-circle arc along the sphere, and the geodesic distance is the length of that arc.

  3. If a manifold \(\mathcal {M}\) is isometric to some Euclidean spaces \(\mathbb {R}^n\), then the geodesic distance on \(\mathcal {M}\) is the Euclidean distance in \(\mathbb {R}^n\). However, it is impossible to find an isometry between \(\mathbb {D}^n_c\) and \(\mathbb {R}^n\) because of the difference in the curvature of two geometries.

  4. Note that the function \(\varGamma _{{\varvec{0}}}(\cdot )\) realizes the mapping that projects points in the tangent space at the origin (i.e., \(T_{{\varvec{0}}}\mathbb {D}^n_c\)) into the Poincaré ball (i.e., \(\mathbb {D}^n_c\)); it is defined as \(\varGamma _{{\varvec{0}}}({\varvec{x}}) = \tanh (\sqrt{c}\Vert {\varvec{x}}\Vert ) \frac{{\varvec{x}}}{\sqrt{c}\Vert {\varvec{x}}\Vert }\) for \({\varvec{x}} \in T_{{\varvec{0}}}\mathbb {D}^n_c\).
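A minimal NumPy sketch of the map defined in note 4 (the name `gamma_0` is a hypothetical helper). Its output always has norm \(\tanh(\sqrt{c}\Vert x\Vert)/\sqrt{c} < 1/\sqrt{c}\), i.e., it lands strictly inside the ball:

```python
import numpy as np

def gamma_0(x, c=1.0):
    """Project a tangent vector at the origin into the Poincare ball:
    Gamma_0(x) = tanh(sqrt(c)*||x||) * x / (sqrt(c)*||x||)."""
    n = np.linalg.norm(x)
    if n == 0.0:
        return np.zeros_like(x)  # the origin maps to the origin
    return np.tanh(np.sqrt(c) * n) * x / (np.sqrt(c) * n)
```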

  5. The Friedman test is a non-parametric test for comparing multiple algorithms across multiple datasets. It ranks the algorithms on each dataset separately and then averages each algorithm's ranks over the datasets to obtain its ranking score.
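The ranking step described in note 5 can be sketched as follows (NumPy; `average_ranks` is a hypothetical helper, and ties, which the full Friedman test resolves by averaging tied ranks, are not handled here):

```python
import numpy as np

def average_ranks(scores):
    """scores[i][j]: accuracy of algorithm j on dataset i (higher is better).
    Rank the algorithms on each dataset (rank 1 = best), then average each
    algorithm's ranks over all datasets."""
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(-scores, axis=1)      # per-dataset ordering, best first
    ranks = np.empty_like(scores)
    n_datasets, n_algos = scores.shape
    for i in range(n_datasets):
        ranks[i, order[i]] = np.arange(1, n_algos + 1)
    return ranks.mean(axis=0)                # one average rank per algorithm
```

A lower average rank indicates a better-performing algorithm overall.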

  6. Following SimCLR, the projection head is a 2-layer MLP with \(\textrm{ReLU}\) activation (i.e., \(2048 \rightarrow 2048 \rightarrow \textrm{ReLU} \rightarrow 128\)).
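A minimal NumPy sketch of that projection head with random stand-in weights (the actual head follows SimCLR and is trained end-to-end; the names and initialization here are illustrative):

```python
import numpy as np

def projection_head(h, W1, b1, W2, b2):
    """SimCLR-style 2-layer MLP: 2048 -> 2048 -> ReLU -> 128."""
    z = h @ W1 + b1
    z = np.maximum(z, 0.0)   # ReLU
    return z @ W2 + b2

# Random stand-in parameters matching the stated dimensions.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((2048, 2048)) * 0.01
b1 = np.zeros(2048)
W2 = rng.standard_normal((2048, 128)) * 0.01
b2 = np.zeros(128)
```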

References

  • Absil, P. A., Mahony, R., & Sepulchre, R. (2007). Optimization algorithms on matrix manifolds. Princeton University Press.

  • Akata, Z., Perronnin, F., Harchaoui, Z., & Schmid, C. (2015). Label-embedding for image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5, 1425–1438.

  • Akata, Z., Reed, S., Walter, D., Lee, H., & Schiele, B. (2015). Evaluation of output embeddings for fine-grained image classification. In IEEE computer vision and pattern recognition (pp. 2927–2936).

  • Ba, J., & Caruana, R. (2014). Do deep nets really need to be deep? In Advances in neural information processing systems (pp. 2654–2662).

  • Berg, C., Christensen, J. P. R., & Ressel, P. (1984). Harmonic analysis on semigroups. Springer.

  • Chen, J., Qin, J., Shen, Y., Liu, L., Zhu, F., & Shao, L. (2020). Learning attentive and hierarchical representations for 3D shape recognition. In European conference on computer vision (pp. 105–122).

  • Chen, L., Zhang, H., Xiao, J., Liu, W., & Chang, S. F. (2018). Zero-shot visual recognition using semantics-preserving adversarial embedding networks. In IEEE/CVF conference on computer vision and pattern recognition (pp. 1043–1052).

  • Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. (2020). A simple framework for contrastive learning of visual representations. In The 36th international conference on machine learning (pp. 1597–1607).

  • Chen, W.Y., Liu, Y.C., Kira, Z., Wang, Y. C., & Huang, J. B. (2019). A closer look at few-shot classification. In International conference on learning representations (pp. 1–11).

  • Chen, X., & He, K. (2021). Exploring simple siamese representation learning. In IEEE/CVF conference on computer vision and pattern recognition (pp. 15750–15758).

  • Cho, H., DeMeo, B., Peng, J., & Berger, B. (2019). Large-margin classification in hyperbolic space. In The 36th international conference on machine learning (pp. 1832–1840).

  • Cho, J. H., & Hariharan, B. (2019). On the efficacy of knowledge distillation. In IEEE/CVF international conference on computer vision (pp. 1–11).

  • Christmann, A., & Steinwart, I. (2008). Support vector machines. Springer.

  • Demšar, J. (2006). Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 5, 1–30.

  • Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In IEEE conference on computer vision and pattern recognition (pp. 248–255).

  • Domingos, P. (2020). Every model learned by gradient descent is approximately a kernel machine. arXiv:2012.00152.

  • Fang, P., Harandi, M., & Petersson, L. (2021). Kernel methods in hyperbolic spaces. In IEEE/CVF international conference on computer vision (pp. 10665–10674).

  • Fang, P., Ji, P., Petersson, L., & Harandi, M. (2021). Set augmented triplet loss for video person re-identification. In Proceedings of the IEEE/CVF winter conference on applications of computer vision (WACV) (pp. 464–473).

  • Fang, P., Zhou, J., Roy, S. K., Ji, P., Petersson, L., & Harandi, M. (2021). Attention in attention networks for person retrieval. In IEEE transactions on pattern analysis and machine intelligence (pp. 4626–4641).

  • Fang, P., Zhou, J., Roy, S.K., Petersson, L., & Harandi, M. (2019). Bilinear attention networks for person retrieval. In IEEE/CVF international conference on computer vision (pp. 8030–8039).

  • Feragen, A., & Hauberg, S. (2016). Open problem: Kernel methods on manifolds and metric spaces. What is the probability of a positive definite geodesic exponential kernel? In Conference on learning theory (pp. 1647–1650).

  • Feragen, A., Lauze, F., & Hauberg, S. (2015). Geodesic exponential kernels: When curvature and linearity conflict. In IEEE conference on computer vision and pattern recognition (pp. 3032–3042).

  • Finn, C., Abbeel, P., & Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks. In The 34th international conference on machine learning (pp. 1126–1135).

  • Frome, A., Corrado, G. S., Shlens, J., Bengio, S., Dean, J., Ranzato, M. A., & Mikolov, T. (2013). Devise: A deep visual-semantic embedding model. In Advances in neural information processing systems (pp. 2121–2129).

  • Ganea, O. E., Bécigneul, G., & Hofmann, T. (2018). Hyperbolic neural networks. In Advances in neural information processing systems (pp. 5345–5355).

  • Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., & Smola, A. (2012). A kernel two-sample test. Journal of Machine Learning Research, 5, 723–773.

  • Gu, A., Sala, F., Gunel, B., & Ré, C. (2019). Learning mixed-curvature representations in product spaces. In International conference on learning representations (pp. 1–11).

  • Gulcehre, C., Denil, M., Malinowski, M., Razavi, A., Pascanu, R., Hermann, K. M., Battaglia, P., Bapst, V., Raposo, D., Santoro, A., & de Freitas, N. (2019). Hyperbolic attention networks. In International conference on learning representations (pp. 1–11).

  • Hamann, M. (2011). On the tree-likeness of hyperbolic spaces. arXiv:1105.3925.

  • Hao, Y., Wang, N., Li, J., & Gao, X. (2019). Hsme: Hypersphere manifold embedding for visible thermal person re-identification. In The 33rd AAAI conference on artificial intelligence (pp. 8385–8392).

  • Harandi, M. T., Salzmann, M., Jayasumana, S., Hartley, R., & Li, H. (2014). Expanding the family of grassmannian kernels: An embedding perspective. In European conference on computer vision (pp. 408–423).

  • He, K., Fan, H., Wu, Y., Xie, S., & Girshick, R. (2020). Momentum contrast for unsupervised visual representation learning. In IEEE/CVF conference on computer vision and pattern recognition (pp. 9729–9738).

  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In IEEE conference on computer vision and pattern recognition (pp. 770–778).

  • Hinton, G., Vinyals, O., & Dean, J. (2014). Distilling the knowledge in a neural network. In Advances in neural information processing systems deep learning workshop.

  • Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Bachman, P., Trischler, A., & Bengio, Y. (2019). Learning deep representations by mutual information estimation and maximization. In The international conference on learning representations (pp. 1–14).

  • Hofmann, T., Schölkopf, B., & Smola, A. J. (2008). Kernel methods in machine learning. The Annals of Statistics, 5, 1171–1220.

  • Hong, J., Fang, P., Li, W., Zhang, T., Simon, C., Harandi, M., & Petersson, L. (2021). Reinforced attention for few-shot learning and beyond. In IEEE/CVF conference on computer vision and pattern recognition (pp. 913–923).

  • Ioffe, S., & Szegedy, C. (2015). Batch Normalization: Accelerating deep network training by reducing internal covariate shift. In The 32nd international conference on machine learning (pp. 448–456).

  • Jacot, A., Gabriel, F., & Hongler, C. (2018). Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems (pp. 8571–8580).

  • Jayasumana, S., Hartley, R., Salzmann, M., Li, H., & Harandi, M. (2013). Kernel methods on the riemannian manifold of symmetric positive definite matrices. In IEEE conference on computer vision and pattern recognition (pp. 73–80).

  • Jayasumana, S., Hartley, R., Salzmann, M., Li, H., & Harandi, M. (2014). Optimizing over radial kernels on compact manifolds. In IEEE conference on computer vision and pattern recognition (pp. 3802–3809).

  • Jayasumana, S., Hartley, R., Salzmann, M., Li, H., & Harandi, M. (2015). Kernel methods on Riemannian manifolds with gaussian RBF kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5, 2464–2477.

  • Jayasumana, S., Ramalingam, S., & Kumar, S. (2021). Kernelized classification in deep networks. arXiv:2012.09607v2.

  • Karcher, H. (1977). Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics, 6, 509–541.

  • Khrulkov, V., Mirvakhabova, L., Ustinova, E., Oseledets, I., & Lempitsky, V. (2020). Hyperbolic image embeddings. In IEEE/CVF conference on computer vision and pattern recognition (pp. 6418–6428).

  • Krause, J., Stark, M., Deng, J., & Fei-Fei, L. (2013). 3D object representations for fine-grained categorization. In IEEE international conference on computer vision workshops (pp. 554–561).

  • Krizhevsky, A. (2009). Learning multiple layers of features from tiny images. Technical report.

  • Lampert, C. H., Nickisch, H., & Harmeling, S. (2013). Attribute-based classification for zero-shot visual object categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 453–465.

  • Lanckriet, G. R. G., Cristianini, N., Bartlett, P., Ghaoui, L. E., & Jordan, M. I. (2004). Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 6, 896.

  • Le, T., & Yamada, M. (2018). Persistence fisher kernel: A Riemannian manifold kernel for persistence diagrams. In Advances in neural information processing systems (pp. 10007–10018).

  • LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. In Proceedings of the IEEE (pp. 2278–2324).

  • Li, K., Min, M. R., & Fu, Y. (2019). Rethinking zero-shot learning: A conditional visual classification perspective. In IEEE/CVF international conference on computer vision (pp. 3583–3592).

  • Li, W., Wang, L., Xu, J., Huo, J., Yang, G., & Luo, J. (2019). Revisiting local descriptor based image-to-class measure for few-shot learning. In IEEE conference on computer vision and pattern recognition (pp. 7260–7268).

  • Li, W., Zhu, X., & Gong, S. (2018). Harmonious attention network for person re-identification. In IEEE/CVF conference on computer vision and pattern recognition (pp. 2285–2294).

  • Li, W., Zhu, X., & Gong, S. (2019). Scalable person re-identification by harmonious attention. International Journal of Computer Vision, 5, 1635–1653.

  • Liu, Q., Nickel, M., & Kiela, D. (2019). Hyperbolic graph neural networks. In Advances in neural information processing systems (pp. 8230–8241).

  • Liu, S., Chen, J., Pan, L., Ngo, C. W., Chua, T. S., & Jiang, Y. G. (2020). Hyperbolic visual embedding learning for zero-shot recognition. In IEEE/CVF conference on computer vision and pattern recognition (pp. 9273–9281).

  • Liu, W., Wen, Y., Yu, Z., Li, M., Raj, B., & Song, L. (2017). Sphereface: Deep hypersphere embedding for face recognition. In IEEE conference on computer vision and pattern recognition (pp. 212–220).

  • Liu, Y., Cao, J., Yuan, B. L. C., Hu, W., Li, Y., & Duan, Y. (2019). Knowledge distillation via instance relationship graph. In IEEE/CVF conference on computer vision and pattern recognition (pp. 7096–7104).

  • Lou, A., Katsman, I., Jiang, Q., Belongie, S., Lim, S. N., & Sa, C. D. (2020). Differentiating through the fréchet mean. In The 37th international conference on machine learning (pp. 6393–6403).

  • Meng, Y., Huang, J., Wang, G., Zhang, C., Zhuang, H., Kaplan, L., & Han, J. (2019). Spherical text embedding. In Advances in neural information processing systems (pp. 8208–8217).

  • Micchelli, C. A., Xu, Y., & Zhang, H. (2006). Universal kernels. Journal of Machine Learning Research, 93, 2651–2667.

  • Ong, C. S., Mary, X., Canu, S., & Smola, A. J. (2004). Learning with non-positive kernels. In The 21st international conference on machine learning (pp. 1–8).

  • Oreshkin, B., Rodríguez López, P., & Lacoste, A. (2018). Tadam: Task dependent adaptive metric for improved few-shot learning. In Advances in neural information processing systems (pp. 721–731).

  • Park, W., Kim, D., Lu, Y., & Cho, M. (2019). Relational knowledge distillation. In IEEE conference on computer vision and pattern recognition (pp. 3967–3976).

  • Patterson, G., & Hays, J. (2012). Sun attribute database: Discovering, annotating, and recognizing scene attributes. In IEEE conference on computer vision and pattern recognition (pp. 2751–2758).

  • Peng, B., Jin, X., Liu, J., Li, D., Wu, Y., Liu, Y., Zhou, S., & Zhang, Z. (2019). Correlation congruence for knowledge distillation. In IEEE/CVF international conference on computer vision (pp. 5007–5016).

  • Rakotomamonjy, A., Bach, F. R., Canu, S., & Grandvalet, Y. (2008). SimpleMKL. Journal of Machine Learning Research, 6, 214.

  • Ren, M., Triantafillou, E., Ravi, S., Snell, J., Swersky, K., Tenenbaum, J. B., Larochelle, H., & Zemel, R. S. (2018). Meta-learning for semi-supervised few-shot classification. In International conference on learning representations (pp. 1–11).

  • Ristani, E., Solera, F., Zou, R., Cucchiara, R., & Tomasi, C. (2016). Performance measures and a data set for multi-target, multi-camera tracking. In European conference on computer vision workshop on benchmarking multi-target tracking (pp. 17–35).

  • Rodríguez, P., Laradji, I., Drouin, A., & Lacoste, A. (2020). Embedding propagation: Smoother manifold for few-shot classification. In European conference on computer vision (pp. 121–138).

  • Schroff, F., Kalenichenko, D., & Philbin, J. (2015). Facenet: A unified embedding for face recognition and clustering. In IEEE conference on computer vision and pattern recognition (pp. 815–823).

  • Simon, C., Koniusz, P., & Harandi, M. (2021). On learning the geodesic path for incremental learning. In IEEE/CVF conference on computer vision and pattern recognition (pp. 1591–1600).

  • Simon, C., Koniusz, P., Nock, R., & Harandi, M. (2020). Adaptive subspaces for few-shot learning. In IEEE/CVF conference on computer vision and pattern recognition (pp. 4136–4145).

  • Skopek, O., Ganea, O. E., & Bécigneul, G. (2020). Mixed-curvature variational autoencoders. In International conference on learning representations (pp. 1–11).

  • Snell, J., Swersky, K., & Zemel, R. (2017). Prototypical networks for few-shot learning. In Advances in neural information processing systems (pp. 4077–4087).

  • Sohn, K. (2016). Improved deep metric learning with multi-class n-pair loss objective. In Advances in neural information processing systems (vol. 29, pp. 1857–1865).

  • Song, H. O., Xiang, Y., Jegelka, S., & Savarese, S. (2016). Deep metric learning via lifted structured feature embedding. In IEEE conference on computer vision and pattern recognition (pp. 4004–4012).

  • Su, C., Li, J., Zhang, S., Xing, J., Gao, W., & Tian, Q. (2017). Pose-driven deep convolutional model for person re-identification. In IEEE/CVF international conference on computer vision (pp. 3960–3969).

  • Sun, Y., Zheng, L., Deng, W., & Wang, S. (2017). Svdnet for pedestrian retrieval. In IEEE international conference on computer vision (pp. 3800–3808).

  • Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P. H., & Hospedales, T. M. (2018). Learning to compare: Relation network for few-shot learning. In IEEE conference on computer vision and pattern recognition (pp. 1199–1208).

  • Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2015). Going deeper with convolutions. In IEEE conference on computer vision and pattern recognition (pp. 1–9).

  • Tay, C. P., Roy, S., & Yap, K. H. (2019). Aanet: Attribute attention network for person re-identifications. In IEEE/CVF conference on computer vision and pattern recognition (pp. 7134–7143).

  • Tran, L. V., Tay, Y., Zhang, S., Cong, G., & Li, X. (2020). Hyperml: A boosting metric learning approach in hyperbolic space for recommender systems. In The 13th international conference on web search and data mining.

  • Tung, F., & Mori, G. (2019). Similarity-preserving knowledge distillation. In IEEE/CVF conference on computer vision and pattern recognition (pp. 1365–1374).

  • Ustinova, E., & Lempitsky, V. (2016). Learning deep embeddings with histogram loss. In Advances in neural information processing systems (vol. 29, pp. 4170–4178).

  • Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., & Wierstra, D. (2016). Matching networks for one shot learning. In Advances in neural information processing systems (pp. 3630–3638).

  • Wah, C., Branson, S., Welinder, P., Perona, P., & Belongie, S. (2011). The caltech-ucsd birds-200-2011 dataset. Technical Report CNS-TR-2011-001, California Institute of Technology.

  • Wang, C., Zhang, Q., Huang, C., Liu, W., & Wang, X. (2018). Mancs: A multi-task attentional network with curriculum sampling for person re-identification. In European conference on computer vision (pp. 384–400).

  • Wang, J., Zhou, F., Wen, S., Liu, X., & Lin, Y. (2017). Deep metric learning with angular loss. In IEEE international conference on computer vision (pp. 2612–2620).

  • Wang, K., Gao, X., Zhao, Y., Li, X., Dou, D., & Xu, C. Z. (2020). Pay attention to features, transfer learn faster CNNS. In International conference on learning representations (pp. 1–14).

  • Wang, T., Zhang, L., & Hu, W. (2021). Bridging deep and multiple kernel learning: A review. Information Fusion., 5, 698.

  • Weinberger, K. Q., & Saul, L. K. (2009). Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 6, 207–244.

  • Wu, Z., Efros, A. A., & Yu, S. (2018). Improving generalization via scalable neighborhood component analysis. In European conference on computer vision (pp. 712–728).

  • Xian, Y., Akata, Z., Sharma, G., Nguyen, Q., Hein, M., & Schiele, B. (2016). Latent embeddings for zero-shot classification. In IEEE conference on computer vision and pattern recognition (pp. 69–77).

  • Xiang, S., Nie, F., & Zhang, C. (2008). Learning a mahalanobis distance metric for data clustering and classification. Pattern Recognition, 7, 3600–3612.

  • Xu, C., Fu, Y., Liu, C., Wang, C., Li, J., Huang, F., Zhang, L., & Xue, X. (2021). Learning dynamic alignment via meta-filter for few-shot learning. In IEEE/CVF conference on computer vision and pattern recognition (pp. 5182–5191).

  • Ye, H. J., Hu, H., Zhan, D. C., & Sha, F. (2020). Few-shot learning via embedding adaptation with set-to-set functions. In IEEE/CVF conference on computer vision and pattern recognition (pp. 8808–8817).

  • Ye, M., Shen, J., Lin, G., Xiang, T., Shao, L., & Hoi, S. C. H. (2021). Deep learning for person re-identification: A survey and outlook. In IEEE transactions on pattern analysis and machine intelligence (pp. 2872–2893).

  • Yim, J., Joo, D., Bae, J., & Kim, J. (2017). A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. In IEEE conference on computer vision and pattern recognition (pp. 4133–4141).

  • Yu, R., Dou, Z., Bai, S., Zhang, Z., Xu, Y., & Bai, X. (2018). Hard-aware point-to-set deep metric for person re-identification. In European conference on computer vision (pp. 196–212).

  • Zagoruyko, S., & Komodakis, N. (2017). Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In International conference on learning representations.

  • Zhang, C., Cai, Y., Lin, G., & Shen, C. (2020). Deepemd: Few-shot image classification with differentiable earth mover’s distance and structured classifiers. In IEEE/CVF conference on computer vision and pattern recognition (pp. 12203–12213).

  • Zhang, F., & Shi, G. (2019). Co-representation network for generalized zero-shot learning. In The 36th international conference on machine learning (pp. 7434–7443).

  • Zhang, L., Xiang, T., & Gong, S. (2017). Learning a deep embedding model for zero-shot learning. In IEEE conference on computer vision and pattern recognition (pp. 2021–2030).

  • Zhang, Z., Lan, C., Zeng, W., Jin, X., & Chen, Z. (2020). Relation-aware global attention for person re-identification. In IEEE/CVF conference on computer vision and pattern recognition (pp. 3186–3195).

  • Zhang, Z., & Saligrama, V. (2015). Zero-shot learning via semantic similarity embedding. In IEEE/CVF international conference on computer vision (pp. 4166–4174).

  • Zheng, L., Shen, L., Tian, L., Wang, S., Wang, J., & Tian, Q. (2015). Scalable person re-identification: A benchmark. In IEEE international conference on computer vision (pp. 1116–1124).

  • Zheng, L., Yang, Y., & Hauptmann, A. G. (2016). Person re-identification: Past, present and future. arXiv:1610.02984.

  • Zhou, S., Wang, F., Huang, Z., & Wang, J. (2019). Discriminative feature learning with consistent attention regularization for person re-identification. In IEEE/CVF international conference on computer vision (pp. 8040–8049).

Corresponding author

Correspondence to Pengfei Fang.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (pdf 410 KB)

About this article

Cite this article

Fang, P., Harandi, M., Lan, Z. et al. Poincaré Kernels for Hyperbolic Representations. Int J Comput Vis 131, 2770–2792 (2023). https://doi.org/10.1007/s11263-023-01834-6
