
Generalized Out-of-Distribution Detection: A Survey

  • Published in: International Journal of Computer Vision

Abstract

Out-of-distribution (OOD) detection is critical to ensuring the reliability and safety of machine learning systems. In autonomous driving, for instance, we would like the driving system to issue an alert and hand control over to humans when it detects unusual scenes or objects that it has never seen during training and about which it cannot make a safe decision. The term "OOD detection" first emerged in 2017 and has since received increasing attention from the research community, leading to a plethora of methods, ranging from classification-based to density-based to distance-based ones. Meanwhile, several other problems, including anomaly detection (AD), novelty detection (ND), open set recognition (OSR), and outlier detection (OD), are closely related to OOD detection in motivation and methodology. Despite their common goals, these topics have developed in isolation, and their subtle differences in definition and problem setting often confuse readers and practitioners. In this survey, we first present a unified framework called generalized OOD detection, which encompasses the five aforementioned problems, i.e., AD, ND, OSR, OOD detection, and OD. Under our framework, these five problems can be seen as special cases or sub-tasks and become easier to distinguish. Although related fields have been surveyed comprehensively, the summarization of OOD detection methods remains incomplete; this paper specifically addresses that gap by covering recent technical developments in OOD detection. It also provides a comprehensive discussion of representative methods from the other sub-tasks and of how they relate to and inspire the development of OOD detection methods. The survey concludes by identifying open challenges and potential research directions.
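The classification-based methods mentioned in the abstract can be as simple as thresholding a confidence score derived from a classifier's outputs. As an illustrative sketch (not code from the survey itself), the maximum softmax probability (MSP) baseline of Hendrycks and Gimpel (2017) scores each input by the largest softmax probability and flags low-scoring inputs as OOD; the `threshold` value below is an arbitrary placeholder that would normally be calibrated on held-out in-distribution data:

```python
import numpy as np

def msp_score(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability (MSP) score per input.

    Higher scores indicate the model is more confident, i.e. the
    input is more likely in-distribution.
    """
    # Subtract the row-wise max for numerical stability before exponentiating.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def detect_ood(logits: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Flag inputs whose MSP falls below the threshold as OOD."""
    return msp_score(logits) < threshold
```

For example, a sharply peaked logit vector such as `[10, 0, 0]` yields an MSP near 1 (kept as in-distribution), while uniform logits over three classes yield an MSP of 1/3 (flagged as OOD under the placeholder threshold).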


Data Availability

The datasets analyzed during the current study in Sect. 5 are available in the OpenOOD repository, https://github.com/Jingkang50/OpenOOD.

Notes

  1. Aligned with MSP (Hendrycks & Gimpel, 2017). See this issue in OpenOOD: https://github.com/Jingkang50/OpenOOD/issues/206

  2. OpenOOD provides a leaderboard to track SOTAs: https://zjysteven.github.io/OpenOOD/leaderboard

References

  • Abati, D., Porrello, A., Calderara, S., & Cucchiara, R. (2019). Latent space autoregression for novelty detection. In CVPR.

  • Adler, A., Elad, M., Hel-Or, Y., & Rivlin, E. (2015). Sparse coding with anomaly detection. Journal of Signal Processing Systems, 79, 179–188.

  • Aggarwal, C. C., & Yu, P. S. (2001). Outlier detection for high dimensional data. In ACM SIGMOD.

  • Ahmed, F., & Courville, A. (2020). Detecting semantic anomalies. In AAAI.

  • Akhtar, N., & Mian, A. (2018). Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6, 14410–14430.

  • Akoglu, L., Tong, H., & Koutra, D. (2015). Graph based anomaly detection and description: A survey. Data Mining and Knowledge Discovery, 29, 626–688.

  • Al-Behadili, H., Grumpe, A., & Wöhler, C. (2015). Incremental learning and novelty detection of gestures in a multi-class system. In AIMS.

  • Altman, D. G., & Bland, J. M. (2005). Standard deviations and standard errors. BMJ, 6, 66.

  • Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety, arXiv preprint arXiv:1606.06565

  • An, J., & Cho, S. (2015). Variational autoencoder based anomaly detection using reconstruction probability. In Special lecture on IE.

  • Angelopoulos, A. N., & Bates, S. (2021). A gentle introduction to conformal prediction and distribution-free uncertainty quantification, arXiv preprint arXiv:2107.07511

  • Atha, D. J., & Jahanshahi, M. R. (2018). Evaluation of deep learning approaches based on convolutional neural networks for corrosion detection. Structural Health Monitoring, 17, 1110–1128.

  • Averly, R., & Chao, W.-L. (2023). Unified out-of-distribution detection: A model-specific perspective, arXiv preprint arXiv:2304.06813

  • Bai, Y., Han, Z., Zhang, C., Cao, B., Jiang, X., & Hu, Q. (2023). Id-like prompt learning for few-shot out-of-distribution detection, arXiv preprint arXiv:2311.15243

  • Bartlett, P. L., & Wegkamp, M. H. (2008). Classification with a reject option using a hinge loss. Journal of Machine Learning Research, 9, 8.

  • Basu, S., & Meckesheimer, M. (2007). Automatic outlier detection for time series: An application to sensor data. Knowledge and Information Systems, 11, 137–154.

  • Bekker, J., & Davis, J. (2020). Learning from positive and unlabeled data: A survey. Machine Learning, 109, 719–760.

  • Bendale, A., & Boult, T. (2015). Towards open world recognition. In CVPR.

  • Bendale, A., & Boult, T. E. (2016). Towards open set deep networks. In CVPR.

  • Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., & Vaughan, J. W. (2010). A theory of learning from different domains. Machine Learning, 79, 151–175.

  • Ben-Gal, I. (2005). Outlier detection. In Data mining and knowledge discovery handbook.

  • Bergman, L., & Hoshen, Y. (2020). Classification-based anomaly detection for general data. In ICLR.

  • Bergmann, P., Fauser, M., Sattlegger, D., & Steger, C. (2019). Mvtec ad—A comprehensive real-world dataset for unsupervised anomaly detection. In CVPR.

  • Bianchini, M., Belahcen, A., & Scarselli, F. (2016). A comparative study of inductive and transductive learning with feedforward neural networks. In Conference of the Italian Association for artificial intelligence.

  • Bibas, K., Feder, M., & Hassner, T. (2021). Single layer predictive normalized maximum likelihood for out-of-distribution detection. In NeurIPS.

  • Bitterwolf, J., Meinke, A., & Hein, M. (2020). Certifiably adversarially robust detection of out-of-distribution data. In NeurIPS.

  • Bitterwolf, J., Müller, M., & Hein, M. (2023). In or out? fixing imagenet out-of-distribution detection evaluation. In ICML.

  • Bodesheim, P., Freytag, A., Rodner, E., Kemmler, M., & Denzler, J. (2013). Kernel null space methods for novelty detection. In CVPR.

  • Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., & Brynjolfsson, E. (2021). On the opportunities and risks of foundation models, arXiv preprint arXiv:2108.07258

  • Boult, T. E., Cruz, S., Dhamija, A. R., Gunther, M., Henrydoss, J., & Scheirer, W. J. (2019). Learning and the unknown: Surveying steps toward open world recognition. In AAAI.

  • Breunig, M. M., Kriegel, H.-P., Ng, R. T., & Sander, J. (2000). Lof: identifying density-based local outliers. In SIGMOD.

  • Bulusu, S., Kailkhura, B., Li, B., Varshney, P. K., & Song, D. (2020). Anomalous example detection in deep learning: A survey. IEEE Access, 8, 132330–132347.

  • Cai, F., Ozdagli, A. I., Potteiger, N., & Koutsoukos, X. (2021). Inductive conformal out-of-distribution detection based on adversarial autoencoders. In 2021 IEEE international conference on omni-layer intelligent systems (COINS) (pp. 1–6). IEEE.

  • Cao, A., Luo, Y., & Klabjan, D. (2020). Open-set recognition with Gaussian mixture variational autoencoders. In AAAI.

  • Cao, K., Brbic, M., & Leskovec, J. (2021). Open-world semi-supervised learning, arXiv preprint arXiv:2102.03526

  • Castillo, E. (2012). Extreme value theory in engineering. Elsevier.

  • Chalapathy, R., & Chawla, S. (2019). Deep learning for anomaly detection: A survey, arXiv preprint arXiv:1901.0340

  • Chandola, V., Banerjee, A., & Kumar, V. (2009). Anomaly detection: A survey. ACM computing surveys (CSUR), 41(3), 1–58.

  • Chen, G., Peng, P., Ma, L., Li, J., Du, L., & Tian, Y. (2021a). Amplitude-phase recombination: Rethinking robustness of convolutional neural networks in frequency domain. In ICCV.

  • Chen, G., Qiao, L., Shi, Y., Peng, P., Li, J., Huang, T., Pu, S., & Tian, Y. (2020a). Learning open set network with discriminative reciprocal points. In ECCV.

  • Chen, J., Li, Y., Wu, X., Liang, Y., & Jha, S. (2020b). Robust out-of-distribution detection for neural networks, arXiv preprint arXiv:2003.09711

  • Chen, J., Li, Y., Wu, X., Liang, Y., & Jha, S. (2021c). Atom: Robustifying out-of-distribution detection using outlier mining. In ECML & PKDD.

  • Chen, X., & Gupta, A. (2015). Webly supervised learning of convolutional networks. In ICCV.

  • Chen, X., Lan, X., Sun, F., & Zheng, N. (2020c). A boundary based out-of-distribution classifier for generalized zero-shot learning. In ECCV.

  • Chen, Z., Yeo, C. K., Lee, B. S., & Lau, C. T. (2018). Autoencoder-based network anomaly detection. In Wireless telecommunications symposium.

  • Choi, H., Jang, E., & Alemi, A. A. (2018). Waic, but why? generative ensembles for robust anomaly detection, arXiv preprint arXiv:1810.01392

  • Choi, S., & Chung, S.-Y. (2020). Novelty detection via blurring. In ICLR.

  • Chow, C. (1970). On optimum recognition error and reject tradeoff. IEEE Transactions on Information Theory, 16, 41–46.

  • Chu, W.-H., & Kitani, K. M. (2020). Neural batch sampling with reinforcement learning for semi-supervised anomaly detection. In ECCV.

  • Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20, 273–297.

  • Cultrera, L., Seidenari, L., & Del Bimbo, A. (2023). Leveraging visual attention for out-of-distribution detection. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 4447–4456).

  • Dai, Y., Lang, H., Zeng, K., Huang, F., & Li, Y., (2023). Exploring large language models for multi-modal out-of-distribution detection, arXiv preprint arXiv:2310.08027

  • Danuser, G., & Stricker, M. (1998). Parametric model fitting: From inlier characterization to outlier detection. In TPAMI.

  • De Maesschalck, R., Jouan-Rimbaud, D., & Massart, D. L. (2000). The Mahalanobis distance, chemometrics and intelligent laboratory systems.

  • Deecke, L., Vandermeulen, R., Ruff, L., Mandt, S., & Kloft, M. (2018). Image anomaly detection with generative adversarial networks. In ECML & PKDD.

  • Denouden, T., Salay, R., Czarnecki, K., Abdelzad, V., Phan, B., & Vernekar, S. (2018). Improving reconstruction autoencoder out-of-distribution detection with Mahalanobis distance, arXiv preprint arXiv:1812.02765

  • Desforges, M., Jacob, P., & Cooper, J. (1998). Applications of probability density estimation to the detection of abnormal conditions in engineering. In Proceedings of the institution of mechanical engineers.

  • DeVries, T., & Taylor, G. W. (2017). Improved regularization of convolutional neural networks with cutout, arXiv preprint arXiv:1708.04552

  • DeVries, T., & Taylor, G. W. (2018). Learning confidence for out-of-distribution detection in neural networks, arXiv preprint arXiv:1802.04865

  • Dhamija, A. R., Günther, M., & Boult, T. E. (2018). Reducing network agnostophobia. In NeurIPS.

  • Diehl, C. P., & Hampshire, J. B. (2002). Real-time object classification and novelty detection for collaborative video surveillance. In IJCNN.

  • Dietterich, T. G. (2000). Ensemble methods in machine learning. In International workshop on multiple classifier systems.

  • Djurisic, A., Bozanic, N., Ashok, A., & Liu, R. (2023). Extremely simple activation shaping for out-of-distribution detection. In ICLR.

  • Dolhansky, B., Howes, R., Pflaum, B., Baram, N., & Ferrer, C. C. (2019). The deepfake detection challenge (dfdc) preview dataset, arXiv preprint arXiv:1910.08854

  • Dong, J., Gao, Y., Zhou, H., Cen, J., Yao, Y., Yoon, S., & Sun, P. D. (2023). Towards few-shot out-of-distribution detection, arXiv preprint arXiv:2311.12076

  • Dong, X., Guo, J., Li, A., Ting, W.-T., Liu, C., & Kung, H. (2022a). Neural mean discrepancy for efficient out-of-distribution detection. In CVPR.

  • Dou, Y., Li, W., Liu, Z., Dong, Z., Luo, J., & Philip, S. Y. (2019). Uncovering download fraud activities in mobile app markets. In ASONAM.

  • Drummond, N., & Shearer, R. (2006). The open world assumption. In eSI workshop.

  • Du, X., Wang, X., Gozum, G., & Li, Y. (2022a). Unknown-aware object detection: Learning what you don’t know from videos in the wild. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition.

  • Du, X., Wang, Z., Cai, M., & Li, Y. (2022b). Vos: Learning what you don’t know by virtual outlier synthesis. In Proceedings of the international conference on learning representations.

  • Eskin, E. (2000). Anomaly detection over noisy data using learned probability distributions. In ICML.

  • Esmaeilpour, S., Liu, B., Robertson, E., & Shu, L. (2022). Zero-shot out-of-distribution detection based on the pretrained model clip. In AAAI.

  • Ester, M., Kriegel, H.-P., Sander, J., & Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD.

  • Fang, Z., Li, Y., Lu, J., Dong, J., Han, B., & Liu, F. (2022). Is out-of-distribution detection learnable? In NeurIPS.

  • Fang, Z., Lu, J., Liu, A., Liu, F., & Zhang, G. (2021). Learning bounds for open-set learning. In ICML.

  • Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognition Letters, 27, 861–874.

  • Fischler, M. A., & Bolles, R. C. (1981). Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24, 381–395.

  • Foong, A. Y., Li, Y., Hernández-Lobato, J. M., & Turner, R. E. (2020). ‘In-between’ uncertainty in Bayesian neural networks. In ICML-W.

  • Fort, S., Ren, J., & Lakshminarayanan, B. (2021). Exploring the limits of out-of-distribution detection. In NeurIPS.

  • Fumera, G., & Roli, F. (2002). Support vector machines with embedded reject option. In International workshop on support vector machines.

  • Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In ICML.

  • Gamerman, D., & Lopes, H. F. (2006). Markov chain Monte Carlo: Stochastic simulation for Bayesian inference. CRC Press.

  • Gan, W. (2021). Language guided out-of-distribution detection.

  • Gao, P., Geng, S., Zhang, R., Ma, T., Fang, R., Zhang, Y., Li, H., & Qiao, Y. (2023). Clip-adapter: Better vision-language models with feature adapters. International Journal of Computer Vision, 132, 1–15.

  • Gatys, L. A., Ecker, A. S., & Bethge, M. (2016). Image style transfer using convolutional neural networks. In CVPR.

  • Ge, Z., Demyanov, S., Chen, Z., & Garnavi, R. (2017). Generative openmax for multi-class open set classification. In BMVC.

  • Geiger, A., Lenz, P., & Urtasun, R. (2012). Are we ready for autonomous driving? The Kitti vision benchmark suite. In CVPR.

  • Gelman, A. (2008). Objections to Bayesian statistics. Bayesian Analysis, 3, 445–449.

  • Geng, C., & Chen, S. (2020). Collective decision for open set recognition. In TKDE.

  • Geng, C., Huang, S., & Chen, S. (2020). Recent advances in open set recognition: A survey. In TPAMI.

  • Georgescu, M.-I., Barbalau, A., Ionescu, R. T., Khan, F. S., Popescu, M., & Shah, M. (2021). Anomaly detection in video via self-supervised and multi-task learning. In CVPR.

  • Golan, I., & El-Yaniv, R. (2018). Deep anomaly detection using geometric transformations. In NeurIPS.

  • Goldstein, M., & Dengel, A. (2012). Histogram-based outlier score (hbos): A fast unsupervised anomaly detection algorithm. In KI-2012: Poster and demo track.

  • Gomes, E. D. C., Alberge, F., Duhamel, P., & Piantanida, P. (2022). Igeood: An information geometry approach to out-of-distribution detection. In ICLR.

  • Gong, D., Liu, L., Le, V., Saha, B., Mansour, M. R., Venkatesh, S., & Hengel, A. V. D. (2019). Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. In CVPR.

  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. In NIPS.

  • Goodfellow, I. J., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial examples. In ICLR.

  • Han, B., Yao, Q., Yu, X., Niu, G., Xu, M., Hu, W., Tsang, I., & Sugiyama, M. (2018). Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NIPS.

  • Han, K., Vedaldi, A., & Zisserman, A. (2019). Learning to discover novel visual categories via deep transfer clustering. In CVPR.

  • Han, X., Chen, X., & Liu, L.-P. (2020). Gan ensemble for anomaly detection, arXiv preprint arXiv:2012.07988

  • Hautamaki, V., Karkkainen, I., & Franti, P. (2004). Outlier detection using k-nearest neighbour graph. In ICPR.

  • He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In ICCV.

  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In CVPR.

  • Hein, M., Andriushchenko, M., & Bitterwolf, J. (2019). Why relu networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In CVPR.

  • Hendrycks, D., Basart, S., Mazeika, M., Mostajabi, M., Steinhardt, J., & Song, D. (2022a). Scaling out-of-distribution detection for real-world settings. In ICML.

  • Hendrycks, D., Carlini, N., Schulman, J., & Steinhardt, J. (2021). Unsolved problems in ML safety. arXiv preprint, arXiv:2109.13916

  • Hendrycks, D., & Gimpel, K. (2017). A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR.

  • Hendrycks, D., Lee, K., & Mazeika, M. (2019a). Using pre-training can improve model robustness and uncertainty. In International conference on machine learning (pp. 2712–2721). PMLR.

  • Hendrycks, D., Liu, X., Wallace, E., Dziedzic, A., Krishnan, R., & Song, D. (2020). Pretrained transformers improve out-of-distribution robustness, arXiv preprint arXiv:2004.06100

  • Hendrycks, D., & Mazeika, M. (2022). X-risk analysis for AI research. arXiv preprint, arXiv:2206.05862

  • Hendrycks, D., Mazeika, M., & Dietterich, T. (2019b). Deep anomaly detection with outlier exposure. In ICLR.

  • Hendrycks, D., Mu, N., Cubuk, E. D., Zoph, B., Gilmer, J., & Lakshminarayanan, B. (2019c). Augmix: A simple data processing method to improve robustness and uncertainty. arXiv preprint arXiv:1912.02781

  • Hendrycks, D., Zou, A., Mazeika, M., Tang, L., Song, D., & Steinhardt, J. (2022c). Pixmix: Dreamlike pictures comprehensively improve safety measures. In CVPR.

  • Hodge, V., & Austin, J. (2004). A survey of outlier detection methodologies. Artificial Intelligence Review, 22, 85–126.

  • Hsu, Y.-C., Shen, Y., Jin, H., & Kira, Z. (2020). Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data. In CVPR.

  • Hu, W., Gao, J., Li, B., Wu, O., Du, J., & Maybank, S. (2018). Anomaly detection using local kernel density estimation and context-based regression. In TKDE.

  • Huang, H., Li, Z., Wang, L., Chen, S., Dong, B., & Zhou, X. (2020a). Feature space singularity for out-of-distribution detection, arXiv preprint arXiv:2011.14654

  • Huang, R., Geng, A., & Li, Y. (2021). On the importance of gradients for detecting distributional shifts in the wild. In NeurIPS.

  • Huang, R., & Li, Y. (2021). Mos: Towards scaling out-of-distribution detection for large semantic space. In CVPR.

  • Huang, X., Kroening, D., Ruan, W., Sharp, J., Sun, Y., Thamo, E., Wu, M., & Yi, X. (2020b). A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Computer Science Review, 37, 100270.

  • Hyvärinen, A., & Dayan, P. (2005). Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6, 695–709.

  • Idrees, H., Shah, M., & Surette, R. (2018). Enhancing camera surveillance using computer vision: A research note. Policing: An International Journal, 41, 292–307.

  • Igoe, C., Chung, Y., Char, I., & Schneider, J. (2022). How useful are gradients for ood detection really? arXiv preprint arXiv:2205.10439

  • Izenman, A. J. (1991). Review papers: Recent developments in nonparametric density estimation. Journal of the American Statistical Association, 86, 205–224.

  • Jain, L. P., Scheirer, W. J., & Boult, T. E. (2014). Multi-class open set recognition using probability of inclusion. In ECCV.

  • Jang, J., & Kim, C. O. (2020). One-vs-rest network-based deep probability model for open set recognition, arXiv preprint arXiv:2004.08067

  • Jaskie, K., & Spanias, A. (2019). Positive and unlabeled learning algorithms and applications: A survey. In International conference on information, intelligence, systems and applications.

  • Jaynes, E. T. (1986). Bayesian methods: General background.

  • Jeong, T., & Kim, H. (2020). Ood-maml: Meta-learning for few-shot out-of-distribution detection and classification. In NeurIPS.

  • Jia, M., Tang, L., Chen, B.-C., Cardie, C., Belongie, S., Hariharan, B., & Lim, S.-N. (2022). Visual prompt tuning. In European conference on computer vision (pp. 709–727). Springer.

  • Jia, X., Han, K., Zhu, Y., & Green, B. (2021). Joint representation learning and novel category discovery on single-and multi-modal data. In ICCV.

  • Jiang, D., Sun, S., & Yu, Y. (2021a). Revisiting flow generative models for out-of-distribution detection. In International conference on learning representations.

  • Jiang, K., Xie, W., Lei, J., Jiang, T., & Li, Y. (2021b). Lren: Low-rank embedded network for sample-free hyperspectral anomaly detection. In AAAI.

  • Jiang, L., Guo, Z., Wu, W., Liu, Z., Liu, Z., Loy, C.C., Yang, S., Xiong, Y., Xia, W., Chen, B., Zhuang, P., Li, S., Chen, S., Yao, T., Ding, S., Li, J., Huang, F., Cao, L., Ji, R., Lu, C., & Tan, G. (2021c). DeeperForensics Challenge 2020 on real-world face forgery detection: Methods and results, arXiv preprint arXiv:2102.09471

  • Jiang, W., Cheng, H., Chen, M., Feng, S., Ge, Y., & Wang, C. (2023a). Read: Aggregating reconstruction error into out-of-distribution detection. In AAAI.

  • Jiang, X., Liu, F., Fang, Z., Chen, H., Liu, T., Zheng, F., & Han, B. (2023b). Detecting out-of-distribution data through in-distribution class prior. In International conference on machine learning (pp. 15067–15088). PMLR.

  • Jiang, X., Liu, F., Fang, Z., Chen, H., Liu, T., Zheng, F., & Han, B. (2023c). Negative label guided ood detection with pretrained vision-language models. In The twelfth international conference on learning representations.

  • Joseph, K., Paul, S., Aggarwal, G., Biswas, S., Rai, P., Han, K., & Balasubramanian, V. N. (2022). Novel class discovery without forgetting. In ECCV.

  • Júnior, P.R.M., De Souza, R. M., Werneck, R. D. O., Stein, B. V., Pazinato, D. V., de Almeida, W. R., Penatti, O. A., Torres, R. D. S., & Rocha, A. (2017). Nearest neighbors distance ratio open-set classifier. Machine Learning, 6, 66.

  • Katz-Samuels, J., Nakhleh, J., Nowak, R., & Li, Y. (2022). Training ood detectors in their natural habitats. In International conference on machine learning (ICML). PMLR.

  • Kaur, R., Jha, S., Roy, A., Park, S., Dobriban, E., Sokolsky, O., & Lee, I. (2022a). idecode: In-distribution equivariance for conformal out-of-distribution detection. In Proceedings of the AAAI conference on artificial intelligence (vol. 36, pp. 7104–7114).

  • Kaur, R., Sridhar, K., Park, S., Jha, S., Roy, A., Sokolsky, O., & Lee, I. (2022b). Codit: Conformal out-of-distribution detection in time-series data, arXiv e-prints.

  • Kerner, H. R., Wellington, D. F., Wagstaff, K. L., Bell, J. F., Kwan, C., & Amor, H. B. (2019). Novelty detection for multispectral images with application to planetary exploration. In AAAI.

  • Kim, J.-H., Yun, S., & Song, H. O. (2023). Neural relation graph: A unified framework for identifying label noise and outlier data. In Thirty-seventh conference on neural information processing systems.

  • Kim, K., Shin, J., & Kim, H. (2021). Locally most powerful Bayesian test for out-of-distribution detection using deep generative models. In NeurIPS.

  • Kind, A., Stoecklin, M. P., & Dimitropoulos, X. (2009). Histogram-based traffic anomaly detection. IEEE Transactions on Network and Service Management, 6, 110–121.

  • Kingma, D. P., & Dhariwal, P. (2018). Glow: Generative flow with invertible 1x1 convolutions. In NeurIPS.

  • Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes, arXiv preprint arXiv:1312.6114

  • Kirichenko, P., Izmailov, P., & Wilson, A. G. (2020). Why normalizing flows fail to detect out-of-distribution data. In NeurIPS.

  • Kobyzev, I., Prince, S., & Brubaker, M. (2020). Normalizing flows: An introduction and review of current methods. In TPAMI.

  • Koh, P. W., Sagawa, S., Marklund, H., Xie, S. M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R. L., Gao, I., & Lee, T. (2021). Wilds: A benchmark of in-the-wild distribution shifts. In International conference on machine learning (pp. 5637–5664). PMLR.

  • Kong, S., & Ramanan, D. (2021). Opengan: Open-set recognition via open data generation. In ICCV.

  • Kou, Y., Lu, C.-T., & Dos Santos, R. F. (2007). Spatial outlier detection: A graph-based approach. In 19th IEEE international conference on tools with artificial intelligence (ICTAI).

  • Kramer, M. A. (1991). Nonlinear principal component analysis using autoassociative neural networks. AIChE Journal, 37, 233–243.

  • Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.

  • Krizhevsky, A., Nair, V., & Hinton, G. (2009). Cifar-10 and cifar-100 datasets. https://www.cs.toronto.edu/kriz/cifar.html

  • Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In NIPS.

  • Kwon, G., Prabhushankar, M., Temel, D., & AlRegib, G. (2020). Backpropagated gradient representations for anomaly detection. In ECCV.

  • Kylberg, G. (2011). Kylberg texture dataset v. 1.0.

  • Lai, C.-H., Zou, D., & Lerman, G. (2020). Robust subspace recovery layer for unsupervised anomaly detection. In ICLR.

  • Lakshminarayanan, B., Pritzel, A., & Blundell, C. (2017). Simple and scalable predictive uncertainty estimation using deep ensembles. In NeurIPS.

  • LeCun, Y., & Cortes, C. (2005). The mnist database of handwritten digits.

  • Lee, K., Lee, H., Lee, K., & Shin, J. (2018a). Training confidence-calibrated classifiers for detecting out-of-distribution samples. In ICLR.

  • Lee, K., Lee, K., Lee, H., & Shin, J. (2018b). A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In NeurIPS.

  • Lee, K., Lee, K., Min, K., Zhang, Y., Shin, J., & Lee, H. (2018c). Hierarchical novelty detection for visual object recognition. In CVPR.

  • Leys, C., Klein, O., Dominicy, Y., & Ley, C. (2018). Detecting multivariate outliers: Use a robust variant of the Mahalanobis distance. Journal of Experimental Social Psychology, 74, 150–156.

  • Leys, C., Ley, C., Klein, O., Bernard, P., & Licata, L. (2013). Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median. Journal of Experimental Social Psychology, 49(4), 764–766.

  • Li, A., Miao, Z., Cen, Y., & Cen, Y. (2017a). Anomaly detection using sparse reconstruction in crowded scenes. Multimedia Tools and Applications, 76, 26249–26271.

  • Li, B., Zhang, Y., Chen, L., Wang, J., Yang, J., & Liu, Z. (2023a). Otter: A multi-modal model with in-context instruction tuning, arXiv preprint arXiv:2305.03726

  • Li, D., Yang, Y., Song, Y.-Z., & Hospedales, T. M. (2017b). Deeper, broader and artier domain generalization. In ICCV.

  • Li, J., Chen, P., Yu, S., He, Z., Liu, S., & Jia, J. (2023b). Rethinking out-of-distribution (ood) detection: Masked image modeling is all you need. In CVPR.

  • Li, J., Xiong, C., & Hoi, S. C. (2021). Mopro: Webly supervised learning with momentum prototypes. In ICLR.

  • Li, L.-J., & Fei-Fei, L. (2010). Optimol: Automatic online picture collection via incremental model learning. In IJCV.

  • Li, Y., & Vasconcelos, N. (2020). Background data resampling for outlier-aware classification. In CVPR.

  • Li, Y., Yang, J., Song, Y., Cao, L., Luo, J., & Li, L.-J. (2017). Learning from noisy labels with distillation. In CVPR.

  • Liang, S., Li, Y., & Srikant, R. (2018). Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR.

  • Lin, Z., Roy, S.D., & Li, Y. (2021). Mood: Multi-level out-of-distribution detection. In CVPR.

  • Linderman, R., Zhang, J., Inkawhich, N., Li, H., & Chen, Y. (2023). Fine-grain inference on out-of-distribution data with hierarchical classification. In S. Chandar, R. Pascanu, H. Sedghi, & D. Precup (Eds.) Proceedings of the 2nd conference on lifelong learning agents (vol. 232 of Proceedings of Machine Learning Research, pp. 162–183). PMLR.

  • Liu, B., Kang, H., Li, H., Hua, G., & Vasconcelos, N. (2020a). Few-shot open-set recognition using meta-learning. In CVPR.

  • Liu, F. T., Ting, K. M., & Zhou, Z.-H. (2008). Isolation forest. In ICDM.

  • Liu, H., Li, C., Wu, Q., & Lee, Y. J. (2023). Visual instruction tuning, arXiv preprint arXiv:2304.08485

  • Liu, H., Li, X., Zhou, W., Chen, Y., He, Y., Xue, H., Zhang, W., & Yu, N. (2021). Spatial-phase shallow learning: Rethinking face forgery detection in frequency domain. In CVPR.

  • Liu, H., Shah, S., & Jiang, W. (2004). On-line outlier detection and data cleaning. Computers & Chemical Engineering, 28, 1635–1647.

  • Liu, J., Lian, Z., Wang, Y., & Xiao, J. (2017). Incremental kernel null space discriminant analysis for novelty detection. In CVPR.

  • Liu, S., Garrepalli, R., Dietterich, T., Fern, A., & Hendrycks, D. (2018a). Open category detection with pac guarantees. In ICML.

  • Liu, W., He, J., & Chang, S.-F. (2010). Large graph construction for scalable semi-supervised learning. In ICML.

  • Liu, W., Luo, W., Lian, D., & Gao, S. (2018b). Future frame prediction for anomaly detection—A new baseline. In CVPR.

  • Liu, W., Wang, X., Owens, J. D., & Li, Y. (2020b). Energy-based out-of-distribution detection. In NeurIPS.

  • Liu, X., Lochman, Y., & Zach, C. (2023). Gen: Pushing the limits of softmax-based out-of-distribution detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 23946–23955).

  • Liu, Z., Miao, Z., Pan, X., Zhan, X., Lin, D., Yu, S. X., & Gong, B. (2020c). Open compound domain adaptation. In CVPR.

  • Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., & Yu, S. X. (2019). Large-scale long-tailed recognition in an open world. In CVPR.

  • Loureiro, A., Torgo, L., & Soares, C. (2004). Outlier detection using clustering methods: A data cleaning application. In Proceedings of KDNet symposium on knowledge-based systems.

  • Lu, F., Zhu, K., Zhai, W., Zheng, K., & Cao, Y. (2023). Uncertainty-aware optimal transport for semantically coherent out-of-distribution detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3282–3291).

  • Lu, F., Zhu, K., Zheng, K., Zhai, W., & Cao, Y. (2023). Likelihood-aware semantic alignment for full-spectrum out-of-distribution detection. arXiv preprint arXiv:2312.01732.

  • Mackay, D. J. C. (1992). Bayesian methods for adaptive models. PhD thesis, California Institute of Technology.

  • Maddox, W. J., Izmailov, P., Garipov, T., Vetrov, D. P., & Wilson, A. G. (2019). A simple baseline for Bayesian uncertainty in deep learning. Advances in Neural Information Processing Systems, 32, 13153–13164.

  • Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2018). Towards deep learning models resistant to adversarial attacks. In ICLR.

  • Mahdavi, A., & Carvalho, M. (2021). A survey on open set recognition. arXiv preprint arXiv:2109.00893.

  • Malinin, A., & Gales, M. (2018). Predictive uncertainty estimation via prior networks. In NeurIPS.

  • Malinin, A., & Gales, M. (2019). Reverse kl-divergence training of prior networks: Improved uncertainty and adversarial robustness. In NeurIPS.

  • Markou, M., & Singh, S. (2003a). Novelty detection: A review-part 1: Statistical approaches. Signal Processing, 83, 2481–2497.

  • Markou, M., & Singh, S. (2003b). Novelty detection: A review-part 2: Neural network based approaches. Signal Processing, 83, 2499–2521.

  • Masana, M., Ruiz, I., Serrat, J., van de Weijer, J., & Lopez, A. M. (2018). Metric learning for novelty and anomaly detection. In BMVC.

  • Meinke, A., & Hein, M. (2019). Towards neural networks that provably know when they don’t know. arXiv preprint arXiv:1909.12180.

  • Miljković, D. (2010). Review of novelty detection methods. In MIPRO.

  • Ming, Y., Cai, Z., Gu, J., Sun, Y., Li, W., & Li, Y. (2022a). Delving into out-of-distribution detection with vision-language representations. Advances in Neural Information Processing Systems, 35, 35087–35102.

  • Ming, Y., Fan, Y., & Li, Y. (2022b). Poem: Out-of-distribution detection with posterior sampling. In ICML.

  • Ming, Y., & Li, Y. (2023). How does fine-tuning impact out-of-distribution detection for vision-language models? In IJCV.

  • Ming, Y., Sun, Y., Dia, O., & Li, Y. (2023). Cider: Exploiting hyperspherical embeddings for out-of-distribution detection. In ICLR.

  • Ming, Y., Yin, H., & Li, Y. (2022c). On the impact of spurious correlation for out-of-distribution detection. In AAAI.

  • Mingqiang, Z., Hui, H., & Qian, W. (2012). A graph-based clustering algorithm for anomaly intrusion detection. In International conference on Computer Science & Education (ICCSE).

  • Miyai, A., Yang, J., Zhang, J., Ming, Y., Yu, Q., Irie, G., Li, Y., Li, H., Liu, Z., & Aizawa, K. (2024). Unsolvable problem detection: Evaluating trustworthiness of vision language models. arXiv preprint arXiv:2403.20331.

  • Miyai, A., Yu, Q., Irie, G., & Aizawa, K. (2023a). Can pre-trained networks detect familiar out-of-distribution data? arXiv preprint arXiv:2310.00847.

  • Miyai, A., Yu, Q., Irie, G., & Aizawa, K. (2023b). Locoop: Few-shot out-of-distribution detection via prompt learning. arXiv preprint arXiv:2306.01293.

  • Mo, X., Monga, V., Bala, R., & Fan, Z. (2013). Adaptive sparse representations for video anomaly detection. IEEE Transactions on Circuits and Systems for Video Technology, 24(4), 631–645.

  • Mohseni, S., Pitale, M., Yadawa, J., & Wang, Z. (2020). Self-supervised learning for generalizable out-of-distribution detection. In AAAI.

  • Mohseni, S., Wang, H., Yu, Z., Xiao, C., Wang, Z., & Yadawa, J. (2021). Practical machine learning safety: A survey and primer. arXiv preprint arXiv:2106.04823.

  • Morteza, P., & Li, Y. (2022). Provable guarantees for understanding out-of-distribution detection. In AAAI.

  • Muhlenbach, F., Lallich, S., & Zighed, D. A. (2004). Identifying and handling mislabelled instances. Journal of Intelligent Information Systems, 22, 89–109.

  • Münz, G., Li, S., & Carle, G. (2007). Traffic anomaly detection using k-means clustering. In GI/ITG workshop MMBnet.

  • Nalisnick, E., Matsukawa, A., Teh, Y. W., Gorur, D., & Lakshminarayanan, B. (2018). Do deep generative models know what they don’t know? In NeurIPS.

  • Nandy, J., Hsu, W., & Lee, M. L. (2020). Towards maximizing the representation gap between in-domain & out-of-distribution examples. In NeurIPS.

  • Neal, L., Olson, M., Fern, X., Wong, W.-K., & Li, F. (2018). Open set learning with counterfactual images. In ECCV.

  • Neal, R. M. (2012). Bayesian learning for neural networks.

  • Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., & Ng, A. Y. (2011). Reading digits in natural images with unsupervised feature learning.

  • Ngiam, J., Chen, Z., Koh, P. W., & Ng, A. Y. (2011). Learning deep energy models. In ICML.

  • Nguyen, A., Yosinski, J., & Clune, J. (2015). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In CVPR.

  • Nguyen, D. T., Lou, Z., Klar, M., & Brox, T. (2019). Anomaly detection with multiple-hypotheses predictions. In ICML.

  • Nguyen, D. T., Mummadi, C. K., Ngo, T. P. N., Nguyen, T. H. P., Beggel, L., & Brox, T. (2020). Self: Learning to filter noisy labels with self-ensembling. In ICLR.

  • Nguyen, V. D. (2022). Out-of-distribution detection for lidar-based 3d object detection, Master’s thesis, University of Waterloo.

  • Nie, J., Zhang, Y., Fang, Z., Liu, T., Han, B., & Tian, X. (2023). Out-of-distribution detection with negative prompts. In The twelfth international conference on learning representations.

  • Nixon, K. A., Aimale, V., & Rowe, R. K. (2008). Spoof detection schemes. In Handbook of biometrics.

  • Noble, C. C., & Cook, D. J. (2003). Graph-based anomaly detection. In SIGKDD.

  • Orair, G. H., Teixeira, C. H., Meira, W., Jr., Wang, Y., & Parthasarathy, S. (2010). Distance-based outlier detection: consolidation and renewed bearing. In Proceedings of the VLDB endowment.

  • Osawa, K., Swaroop, S., Jain, A., Eschenhagen, R., Turner, R. E., Yokota, R., & Khan, M. E. (2019). Practical deep learning with Bayesian principles. In NeurIPS.

  • Oza, P., & Patel, V. M. (2019). C2ae: Class conditioned auto-encoder for open-set recognition. In CVPR.

  • Panareda Busto, P., & Gall, J. (2017). Open set domain adaptation. In ICCV.

  • Pang, G., Shen, C., Cao, L., & Hengel, A. V. D. (2020). Deep learning for anomaly detection: A review. arXiv preprint arXiv:2007.02500.

  • Papadopoulos, A.-A., Rajati, M. R., Shaikh, N., & Wang, J. (2021). Outlier exposure with confidence control for out-of-distribution detection. Neurocomputing, 441, 138–150.

  • Park, H., Noh, J., & Ham, B. (2020). Learning memory-guided normality for anomaly detection. In CVPR.

  • Park, J., Chai, J. C. L., Yoon, J., & Teoh, A. B. J. (2023a). Understanding the feature norm for out-of-distribution detection. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 1557–1567).

  • Park, J., Jung, Y. G., & Teoh, A. B. J. (2023b). Nearest neighbor guidance for out-of-distribution detection. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 1686–1695).

  • Parmar, J., Chouhan, S., Raychoudhury, V., & Rathore, S. (2023). Open-world machine learning: Applications, challenges, and opportunities. ACM Computing Surveys, 55(10), 1–37.

  • Parzen, E. (1962). On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33, 1065–1076.

  • Patel, K., Han, H., & Jain, A. K. (2016). Secure face unlock: Spoof detection on smartphones. IEEE Transactions on Information Forensics and Security, 11, 2268–2283.

  • Pathak, D., Agrawal, P., Efros, A. A., & Darrell, T. (2017). Curiosity-driven exploration by self-supervised prediction. In ICML.

  • Perera, P., Morariu, V. I., Jain, R., Manjunatha, V., Wigington, C., Ordonez, V., & Patel, V. M. (2020). Generative-discriminative feature representations for open-set recognition. In CVPR.

  • Perera, P., Nallapati, R., & Xiang, B. (2019). Ocgan: One-class novelty detection using gans with constrained latent representations. In CVPR.

  • Perera, P., & Patel, V. M. (2019). Deep transfer learning for multiple class novelty detection. In CVPR.

  • Peterson, C., & Hartman, E. (1989). Explorations of the mean field theory learning algorithm. Neural Networks, 2, 475–494.

  • Pidhorskyi, S., Almohsen, R., Adjeroh, D. A., & Doretto, G. (2018). Generative probabilistic novelty detection with adversarial autoencoders. In NeurIPS.

  • Pimentel, M. A., Clifton, D. A., Clifton, L., & Tarassenko, L. (2014). A review of novelty detection. Signal Processing, 99, 215–249.

  • Pleiss, G., Souza, A., Kim, J., Li, B., & Weinberger, K. Q. (2019). Neural network out-of-distribution detection for regression tasks.

  • Polatkan, G., Jafarpour, S., Brasoveanu, A., Hughes, S., & Daubechies, I. (2009). Detection of forgery in paintings using supervised learning. In ICIP.

  • Powers, D. M. (2020). Evaluation: From precision, recall and f-measure to roc, informedness, markedness and correlation. In JMLT.

  • Quiñonero-Candela, J., Sugiyama, M., Lawrence, N. D., & Schwaighofer, A. (2009). Dataset shift in machine learning. MIT Press.

  • Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., & Krueger, G. (2021). Learning transferable visual models from natural language supervision. In ICML.

  • Redner, R. A., & Walker, H. F. (1984). Mixture densities, maximum likelihood and the em algorithm. SIAM Review, 26(2), 195–239.

  • Ren, J., Fort, S., Liu, J., Roy, A. G., Padhy, S., & Lakshminarayanan, B. (2021). A simple fix to Mahalanobis distance for improving near-ood detection. arXiv preprint arXiv:2106.09022.

  • Ren, J., Liu, P.J., Fertig, E., Snoek, J., Poplin, R., DePristo, M. A., Dillon, J. V., & Lakshminarayanan, B. (2019). Likelihood ratios for out-of-distribution detection. In NeurIPS.

  • Rezende, D., & Mohamed, S. (2015). Variational inference with normalizing flows. In ICML.

  • Rudd, E. M., Jain, L. P., Scheirer, W. J., & Boult, T. E. (2017). The extreme value machine. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(3), 762–768.

  • Ruff, L., Kauffmann, J. R., Vandermeulen, R. A., Montavon, G., Samek, W., Kloft, M., Dietterich, T. G., & Müller, K.-R. (2021). A unifying review of deep and shallow anomaly detection. In Proceedings of the IEEE.

  • Ruff, L., Vandermeulen, R., Goernitz, N., Deecke, L., Siddiqui, S. A., Binder, A., Müller, E., & Kloft, M. (2018). Deep one-class classification. In ICML.

  • Ruff, L., Vandermeulen, R. A., Görnitz, N., Binder, A., Müller, K.-R., Müller, E., & Kloft, M. (2020). Deep semi-supervised anomaly detection. In ICLR.

  • Sabokrou, M., Khalooei, M., Fathy, M., & Adeli, E. (2018). Adversarially learned one-class classifier for novelty detection. In CVPR.

  • Salehi, M., Mirzaei, H., Hendrycks, D., Li, Y., Rohban, M. H., & Sabokrou, M. (2021). A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges. arXiv preprint arXiv:2110.14051.

  • Sastry, C. S., & Oore, S. (2019). Detecting out-of-distribution examples with in-distribution examples and gram matrices. In NeurIPS-W.

  • Sastry, C. S., & Oore, S. (2020). Detecting out-of-distribution examples with gram matrices. In ICML.

  • Scheirer, W. J., de Rezende Rocha, A., Sapkota, A., & Boult, T. E. (2013). Toward open set recognition. In TPAMI.

  • Scheirer, W. J., Jain, L. P., & Boult, T. E. (2014). Probability models for open set recognition. In TPAMI.

  • Schlachter, P., Liao, Y., & Yang, B. (2019). Open-set recognition using intra-class splitting. In EUSIPCO.

  • Sedlmeier, A., Gabor, T., Phan, T., Belzner, L., & Linnhoff-Popien, C. (2019). Uncertainty-based out-of-distribution detection in deep reinforcement learning. arXiv preprint arXiv:1901.02219.

  • Serrà, J., Álvarez, D., Gómez, V., Slizovskaia, O., Núñez, J. F., & Luque, J. (2020). Input complexity and out-of-distribution detection with likelihood-based generative models. In ICLR.

  • Shafaei, A., Schmidt, M., & Little, J. J. (2019). A less biased evaluation of out-of-distribution sample detectors. In BMVC.

  • Shafer, G., & Vovk, V. (2008). A tutorial on conformal prediction. Journal of Machine Learning Research, 9(3), 66.

  • Shalev, G., Adi, Y., & Keshet, J. (2018). Out-of-distribution detection using multiple semantic label representations. In NeurIPS.

  • Shalev, G., Shalev, G.-L., & Keshet, J. (2022). A baseline for detecting out-of-distribution examples in image captioning. arXiv preprint arXiv:2207.05418.

  • Shao, R., Perera, P., Yuen, P. C., & Patel, V. M. (2020). Open-set adversarial defense. In ECCV.

  • Shu, Y., Cao, Z., Wang, C., Wang, J., & Long, M. (2021). Open domain generalization with domain-augmented meta-learning. In CVPR.

  • Shu, Y., Shi, Y., Wang, Y., Huang, T., & Tian, Y. (2020). p-odn: Prototype-based open deep network for open set recognition. Scientific Reports, 10, 7146.

  • Smith, R. L. (1990). Extreme value theory. Handbook of Applicable Mathematics, 7, 18.

  • Sorio, E., Bartoli, A., Davanzo, G., & Medvet, E. (2010). Open world classification of printed invoices. In Proceedings of the 10th ACM symposium on document engineering.

  • Sricharan, K., & Srivastava, A. (2018). Building robust classifiers through generation of confident out of distribution examples. In NeurIPS-W.

  • Sugiyama, M., & Borgwardt, K. (2013). Rapid distance-based outlier detection via sampling. In NIPS.

  • Sun, X., Ding, H., Zhang, C., Lin, G., & Ling, K.-V. (2021a). M2iosr: Maximal mutual information open set recognition. arXiv preprint arXiv:2108.02373.

  • Sun, X., Yang, Z., Zhang, C., Ling, K.-V., & Peng, G. (2020). Conditional Gaussian distribution learning for open set recognition. In CVPR.

  • Sun, Y., Guo, C., & Li, Y. (2021b). React: Out-of-distribution detection with rectified activations. In NeurIPS.

  • Sun, Y., & Li, Y. (2022). Dice: Leveraging sparsification for out-of-distribution detection. In ECCV.

  • Sun, Y., Ming, Y., Zhu, X., & Li, Y. (2022). Out-of-distribution detection with deep nearest neighbors. In ICML.

  • Syarif, I., Prugel-Bennett, A., & Wills, G. (2012). Unsupervised clustering approach for network anomaly detection. In International conference on networked digital technologies.

  • Tack, J., Mo, S., Jeong, J., & Shin, J. (2020). Csi: Novelty detection via contrastive learning on distributionally shifted instances. In NeurIPS.

  • Tao, L., Du, X., Zhu, X., & Li, Y. (2023). Non-parametric outlier synthesis. In ICLR.

  • Tariq, M. I., Memon, N. A., Ahmed, S., Tayyaba, S., Mushtaq, M. T., Mian, N. A., Imran, M., & Ashraf, M. W. (2020). A review of deep learning security and privacy defensive techniques. Mobile Information Systems, 2020, 1–8.

  • Tax, D. M. J. (2002). One-class classification: Concept learning in the absence of counter-examples.

  • Techapanurak, E., Suganuma, M., & Okatani, T. (2020). Hyperparameter-free out-of-distribution detection using cosine similarity. In ACCV.

  • Thulasidasan, S., Chennupati, G., Bilmes, J., Bhattacharya, T., & Michalak, S. (2019). On mixup training: Improved calibration and predictive uncertainty for deep neural networks. In NeurIPS.

  • Thulasidasan, S., Thapa, S., Dhaubhadel, S., Chennupati, G., Bhattacharya, T., & Bilmes, J. (2021). An effective baseline for robustness to distributional shift. arXiv preprint arXiv:2105.07107.

  • Tian, J., Azarian, M. H., & Pecht, M. (2014). Anomaly detection using self-organizing maps-based k-nearest neighbor algorithm. In PHM society European conference.

  • Tian, K., Zhou, S., Fan, J., & Guan, J. (2019). Learning competitive and discriminative reconstructions for anomaly detection. In AAAI.

  • Torralba, A., Fergus, R., & Freeman, W. T. (2008). 80 million tiny images: A large data set for nonparametric object and scene recognition. In TPAMI.

  • Turcotte, M., Moore, J., Heard, N., & McPhall, A. (2016). Poisson factorization for peer-based anomaly detection. In IEEE conference on intelligence and security informatics (ISI).

  • Van Amersfoort, J., Smith, L., Teh, Y. W., & Gal, Y. (2020). Uncertainty estimation using a single deep deterministic neural network. In ICML.

  • Van den Broeck, J., Argeseanu Cunningham, S., Eeckels, R., & Herbst, K. (2005). Data cleaning: Detecting, diagnosing, and editing data abnormalities. PLoS Medicine, 2, 267.

  • Van Oord, A., Kalchbrenner, N., & Kavukcuoglu, K. (2016). Pixel recurrent neural networks. In ICML.

  • Van Ryzin, J. (1973). A histogram method of density estimation. Communications in Statistics-Theory and Methods, 2, 493–506.

  • Vaze, S., Han, K., Vedaldi, A., & Zisserman, A. (2022a). Generalized category discovery. In CVPR.

  • Vaze, S., Han, K., Vedaldi, A., & Zisserman, A. (2022b). Open-set recognition: A good closed-set classifier is all you need. In ICLR.

  • Vernekar, S., Gaurav, A., Abdelzad, V., Denouden, T., Salay, R., & Czarnecki, K. (2019). Out-of-distribution detection in classifiers via generation. In NeurIPS-W.

  • Vinyals, O., Ewalds, T., Bartunov, S., Georgiev, P., Vezhnevets, A. S., Yeo, M., Makhzani, A., Küttler, H., Agapiou, J., Schrittwieser, J., & Quan, J. (2017). Starcraft II: A new challenge for reinforcement learning. arXiv preprint arXiv:1708.04782.

  • Vyas, A., Jammalamadaka, N., Zhu, X., Das, D., Kaul, B., & Willke, T. L. (2018). Out-of-distribution detection using an ensemble of self supervised leave-out classifiers. In ECCV.

  • Wang, H., Bah, M. J., & Hammad, M. (2019a). Progress in outlier detection techniques: A survey. IEEE Access, 7, 107964–108000.

  • Wang, H., Li, Y., Yao, H., & Li, X. (2023a). Clipn for zero-shot ood detection: Teaching clip to say no. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 1802–1812).

  • Wang, H., Li, Z., Feng, L., & Zhang, W. (2022a). Vim: Out-of-distribution with virtual-logit matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition.

  • Wang, H., Liu, W., Bocchieri, A., & Li, Y. (2021). Can multi-label classification networks know what they don’t know? NeurIPS, 34, 29074–29087.

  • Wang, H., Wu, X., Huang, Z., & Xing, E. P. (2020). High-frequency component helps explain the generalization of convolutional neural networks. In CVPR.

  • Wang, M., & Deng, W. (2018). Deep visual domain adaptation: A survey. Neurocomputing, 312, 135–153.

  • Wang, Q., Fang, Z., Zhang, Y., Liu, F., Li, Y., & Han, B. (2023b). Learning to augment distributions for out-of-distribution detection. Advances in Neural Information Processing Systems, 36, 66.

  • Wang, Q., Liu, F., Zhang, Y., Zhang, J., Gong, C., Liu, T., & Han, B. (2022b). Watermarking for out-of-distribution detection. In NeurIPS.

  • Wang, Q., Ye, J., Liu, F., Dai, Q., Kalander, M., Liu, T., Hao, J., & Han, B. (2023c). Out-of-distribution detection with implicit outlier transformation. In ICLR.

  • Wang, W., Zheng, V. W., Yu, H., & Miao, C. (2019b). A survey of zero-shot learning: Settings, methods, and applications. In TIST.

  • Wang, Y., Li, B., Che, T., Zhou, K., Liu, Z., & Li, D. (2021). Energy-based open-world uncertainty modeling for confidence calibration. In ICCV.

  • Wang, Y., Liu, W., Ma, X., Bailey, J., Zha, H., Song, L., & Xia, S.-T. (2018). Iterative learning with open-set noisy labels. In CVPR.

  • Wei, H., Xie, R., Cheng, H., Feng, L., An, B., & Li, Y. (2022). Mitigating neural network overconfidence with logit normalization. In ICML.

  • Welling, M., & Teh, Y. W. (2011). Bayesian learning via stochastic gradient Langevin dynamics. In ICML.

  • Wen, D., Han, H., & Jain, A. K. (2015). Face spoof detection with image distortion analysis. IEEE Transactions on Information Forensics and Security, 10, 746–761.

  • Wenzel, F., Roth, K., Veeling, B. S., Światkowski, J., Tran, L., Mandt, S., Snoek, J., Salimans, T., Jenatton, R., & Nowozin, S. (2020). How good is the Bayes posterior in deep neural networks really? In ICML.

  • Wettschereck, D. (1994). A study of distance-based machine learning algorithms.

  • Wikipedia contributors. (2021). Outlier. In Wikipedia, the free encyclopedia. Retrieved August 12, 2021.

  • Wu, X., Lu, J., Fang, Z., & Zhang, G. (2023). Meta ood learning for continuously adaptive ood detection. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 19353–19364).

  • Wu, Z.-F., Wei, T., Jiang, J., Mao, C., Tang, M., & Li, Y.-F. (2021). Ngc: A unified framework for learning with open-world noisy data. In ICCV.

  • Xia, Y., Cao, X., Wen, F., Hua, G., & Sun, J. (2015). Learning discriminative reconstructions for unsupervised outlier removal. In CVPR.

  • Xiao, T., Zhang, C., & Zha, H. (2015). Learning to detect anomalies in surveillance video. IEEE Signal Processing Letters, 22, 1477–1481.

  • Xiao, Y., Wang, H., Xu, W., & Zhou, J. (2013). L1 norm based kpca for novelty detection. Pattern Recognition, 46, 389–396.

  • Xiao, Z., Yan, Q., & Amit, Y. (2020). Likelihood regret: An out-of-distribution detection score for variational auto-encoder. In NeurIPS.

  • Xie, M., Hu, J., & Tian, B. (2012). Histogram-based online anomaly detection in hierarchical wireless sensor networks. In ICTSPCC.

  • Xu, H., Liu, B., Shu, L., & Yu, P. (2019). Open-world learning and application to product classification. In WWW.

  • Yan, X., Zhang, H., Xu, X., Hu, X., & Heng, P.-A. (2021). Learning semantic context from normal samples for unsupervised anomaly detection. In AAAI.

  • Yang, J., Chen, W., Feng, L., Yan, X., Zheng, H., & Zhang, W. (2020a). Webly supervised image classification with metadata: Automatic noisy label correction via visual-semantic graph. In ACM multimedia.

  • Yang, J., Feng, L., Chen, W., Yan, X., Zheng, H., Luo, P., & Zhang, W. (2020b). Webly supervised image classification with self-contained confidence. In ECCV.

  • Yang, J., Wang, H., Feng, L., Yan, X., Zheng, H., Zhang, W., & Liu, Z. (2021). Semantically coherent out-of-distribution detection. In ICCV.

  • Yang, J., Wang, P., Zou, D., Zhou, Z., Ding, K., Peng, W., Wang, H., Chen, G., Li, B., Sun, Y., Du, X., Zhou, K., Zhang, W., Hendrycks, D., Li, Y., & Liu, Z. (2022a). Openood: Benchmarking generalized out-of-distribution detection. In NeurIPS.

  • Yang, J., Zhou, K., & Liu, Z. (2022b). Full-spectrum out-of-distribution detection. arXiv preprint arXiv:2204.05306.

  • Yang, P., Baracchi, D., Ni, R., Zhao, Y., Argenti, F., & Piva, A. (2020c). A survey of deep learning-based source image forensics. Journal of Imaging, 6, 66.

  • Yang, X., Latecki, L. J., & Pokrajac, D. (2009). Outlier detection with globally optimal exemplar-based gmm. In SIAM.

  • Yang, Y., Gao, R., & Xu, Q. (2022c). Out-of-distribution detection with semantic mismatch under masking. In ECCV.

  • Yang, Z., Li, L., Lin, K., Wang, J., Lin, C.-C., Liu, Z., & Wang, L. (2023). The dawn of LMMs: Preliminary explorations with GPT-4V(ision). arXiv preprint arXiv:2309.17421.

  • Yoshihashi, R., Shao, W., Kawakami, R., You, S., Iida, M., & Naemura, T. (2019). Classification-reconstruction learning for open-set recognition. In CVPR.

  • Yu, Q., & Aizawa, K. (2019). Unsupervised out-of-distribution detection by maximum classifier discrepancy. In ICCV.

  • Yue, Z., Wang, T., Sun, Q., Hua, X.-S., & Zhang, H. (2021). Counterfactual zero-shot and open-set visual recognition. In CVPR.

  • Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., & Yoo, Y. (2019). Cutmix: Regularization strategy to train strong classifiers with localizable features. In CVPR.

  • Zaeemzadeh, A., Bisagno, N., Sambugaro, Z., Conci, N., Rahnavard, N., & Shah, M. (2021). Out-of-distribution detection using union of 1-dimensional subspaces. In CVPR.

  • Zenati, H., Foo, C. S., Lecouat, B., Manek, G., & Chandrasekhar, V. R. (2018). Efficient gan-based anomaly detection. In ICLR-W.

  • Zhai, S., Cheng, Y., Lu, W., & Zhang, Z. (2016). Deep structured energy based models for anomaly detection. In ICML.

  • Zhang, B., & Zuo, W. (2008). Learning from positive and unlabeled examples: A survey. In International symposiums on information processing.

  • Zhang, H., Li, A., Guo, J., & Guo, Y. (2020). Hybrid models for open set recognition. In ECCV.

  • Zhang, H., & Patel, V. M. (2016). Sparse representation-based open set recognition. In TPAMI.

  • Zhang, J., Fu, Q., Chen, X., Du, L., Li, Z., Wang, G., Han, S., & Zhang, D. (2023a). Out-of-distribution detection based on in-distribution data patterns memorization with modern Hopfield energy. In ICLR.

  • Zhang, J., Inkawhich, N., Linderman, R., Chen, Y., & Li, H. (2023b). Mixture outlier exposure: Towards out-of-distribution detection in fine-grained environments. In Proceedings of the IEEE/CVF winter conference on applications of computer vision (WACV) (pp. 5531–5540).

  • Zhang, J., Yang, J., Wang, P., Wang, H., Lin, Y., Zhang, H., Sun, Y., Du, X., Zhou, K., Zhang, W., Li, Y., Liu, Z., Chen, Y., & Li, H. (2023c). Openood v1.5: Enhanced benchmark for out-of-distribution detection. arXiv preprint arXiv:2306.09301.

  • Zhang, L., Goldstein, M., & Ranganath, R. (2021). Understanding failures in out-of-distribution detection with deep generative models. In ICML.

  • Zhao, B., & Han, K. (2021). Novel visual category discovery with dual ranking statistics and mutual knowledge distillation. In NeurIPS.

  • Zheng, H., Wang, Q., Fang, Z., Xia, X., Liu, F., Liu, T., & Han, B. (2023). Out-of-distribution detection learning with unreliable out-of-distribution sources. In NeurIPS.

  • Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., & Torralba, A. (2017). Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40, 1452–1464.

  • Zhou, C., Neubig, G., Gu, J., Diab, M., Guzman, P., Zettlemoyer, L., & Ghazvininejad, M. (2020). Detecting hallucinated content in conditional neural sequence generation. In ACL.

  • Zhou, D.-W., Ye, H.-J., & Zhan, D.-C. (2021a). Learning placeholders for open-set recognition. In CVPR.

  • Zhou, K., Liu, Z., Qiao, Y., Xiang, T., & Loy, C. C. (2021b). Domain generalization: A survey. arXiv preprint arXiv:2103.02503.

  • Zhou, K., Yang, J., Loy, C. C., & Liu, Z. (2022a). Learning to prompt for vision-language models. In International Journal of Computer Vision (IJCV).

  • Zhou, K., Yang, J., Loy, C. C., & Liu, Z. (2022b). Conditional prompt learning for vision-language models. In IEEE/CVF conference on computer vision and pattern recognition (CVPR).

  • Zhou, Y. (2022). Rethinking reconstruction autoencoder-based out-of-distribution detection. In CVPR.

  • Zimmerer, D., Full, P. M., Isensee, F., Jäger, P., Adler, T., Petersen, J., Köhler, G., Ross, T., Reinke, A., Kascenas, A., & Jensen, B. S. (2022). Mood 2020: A public benchmark for out-of-distribution detection and localization on medical images. IEEE Transactions on Medical Imaging, 41, 2728–2738.

  • Zisselman, E., & Tamar, A. (2020). Deep residual flow for out of distribution detection. In CVPR.

  • Zong, B., Song, Q., Min, M. R., Cheng, W., Lumezanu, C., Cho, D., & Chen, H. (2018). Deep autoencoding Gaussian mixture model for unsupervised anomaly detection. In ICLR.

Acknowledgements

This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOE-T2EP20221-0012), NTU NAP, and under the RIE2020 Industry Alignment Fund—Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). YL is supported by the Office of the Vice Chancellor for Research and Graduate Education (OVCRGE) with funding from the Wisconsin Alumni Research Foundation (WARF).

Author information

Corresponding author

Correspondence to Ziwei Liu.

Additional information

Communicated by Hong Liu.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Yang, J., Zhou, K., Li, Y. et al. Generalized Out-of-Distribution Detection: A Survey. Int J Comput Vis 132, 5635–5662 (2024). https://doi.org/10.1007/s11263-024-02117-4
