Abstract
This paper addresses the problem of designing reliable prediction models that abstain from predicting when faced with uncertain or out-of-distribution samples, a recently proposed problem known as Selective Classification in the presence of Out-of-Distribution data (SCOD). We make three key contributions to SCOD. First, we show that the optimal SCOD strategy consists of a Bayes classifier for in-distribution (ID) data and a selector represented as a stochastic linear classifier in a 2D space whose inputs are (i) the conditional risk of the ID classifier and (ii) the likelihood ratio of ID and out-of-distribution (OOD) data. This contrasts with the suboptimal strategies used by current OOD detection methods and by the Softmax Information Retaining Combination (SIRC), a method developed specifically for SCOD. Second, we establish that in the distribution-free setting the SCOD problem is not Probably Approximately Correct (PAC) learnable when only an ID data sample is available. Third, we introduce POSCOD, a simple method for learning a plug-in estimate of the optimal SCOD strategy from an ID data sample together with an unlabeled mixture of ID and OOD data. Our empirical results confirm the theoretical findings and show that POSCOD outperforms existing OOD detection methods on the SCOD problem.
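To make the structure of this strategy concrete, the sketch below implements a plug-in selector of the kind described above: it thresholds a linear score in the 2D space of estimated conditional risk and ID/OOD likelihood ratio. The estimators, the weight `alpha`, and the threshold `tau` are illustrative placeholders rather than the quantities derived in the paper, and the deterministic threshold stands in for the stochastic rule appearing in the optimal solution.

```python
import numpy as np

def scod_selector(cond_risk, lik_ratio, alpha=1.0, tau=0.5):
    """Illustrative plug-in SCOD selector (a sketch, not the paper's exact rule).

    Accepts a sample when a linear score in the 2D space of
    (conditional risk of the ID classifier, ID/OOD likelihood ratio)
    stays below a threshold; otherwise it abstains.

    cond_risk : estimated conditional risk of the ID classifier at x
    lik_ratio : estimated likelihood ratio, here taken as p_OOD(x) / p_ID(x)
                so that large values indicate OOD-like samples (an assumption
                of this sketch)
    alpha, tau: illustrative weight and threshold; in practice they would be
                tuned from data
    """
    score = cond_risk + alpha * lik_ratio    # linear score in the 2D space
    return score <= tau                      # True = predict, False = abstain

# Toy usage with made-up estimates of the two inputs.
cond_risk = np.array([0.05, 0.40, 0.10])    # e.g. 1 - max posterior of the ID classifier
lik_ratio = np.array([0.10, 0.20, 5.00])    # large value = more OOD-like
print(scod_selector(cond_risk, lik_ratio))  # -> [ True False False]
```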
Notes
1. The distribution-free setting requires learning guarantees that hold for any data distribution.
2. We use the shortcuts \(\mathcal {Y}^\mathcal {X}=\{h:\mathcal {X}\rightarrow \mathcal {Y}\}\) and \([0,1]^\mathcal {X}=\{c:\mathcal {X}\rightarrow [0,1]\}\).
3. The trivial hypothesis space contains a single selector, which reduces the SCOD problem to standard prediction under the closed-world assumption, known to be learnable when \(\mathcal{H}\) has finite complexity.
Acknowledgments
This work was supported by the CTU institutional support (Future Fund).