Towards open-world recognition: Critical problems and challenges

Published: 20 February 2025

Abstract

With the emergence of rich classification models and high computing power, recognition systems are widely used in various fields. Unfortunately, as the scale of open systems increases, the closed-world assumption causes fragile models to fail, because almost all machine-learning-based recognition algorithms are evaluated under an implicit “closed-set” protocol. Compared with classical methods, open-world learning can cope with dynamic environments in which the input data (size, categories, etc.) change rapidly. Nevertheless, the field still lacks a thorough review of recent advances in open-world recognition. We therefore provide an in-depth discussion of open-world recognition based on recent work. First, we propose a learning framework for open-world recognition and analyze its challenges from three aspects: domain shift, limits on the amount of labeled data, and dynamically changing perception scenes. Second, we evaluate the current state of the art, summarize where the various methods intersect, and identify existing problems. Finally, we discuss the limitations of current procedures and new technologies, as well as future directions for making meaningful progress. This article will help researchers understand open-world learning and the possibilities for extending research into appropriate areas.
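
To make the closed-set versus open-world distinction above concrete, here is a minimal, illustrative sketch (not taken from the survey) of the simplest kind of open-set behaviour: a classifier that refuses to assign any known label when its maximum softmax confidence is low. The class count, logits, and the 0.7 threshold are hypothetical assumptions; confidence thresholding is only one elementary baseline among the families of methods the survey reviews.

```python
# Illustrative sketch only, not the survey's method: a closed-set classifier
# extended with a naive "unknown" rejection rule. All values are hypothetical.
import numpy as np


def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def open_set_predict(logits: np.ndarray, threshold: float = 0.7):
    """Return predicted class indices, with -1 meaning "unknown"
    whenever the maximum softmax probability falls below `threshold`."""
    probs = softmax(logits)
    confidence = probs.max(axis=-1)
    predictions = probs.argmax(axis=-1)
    return np.where(confidence >= threshold, predictions, -1), confidence


if __name__ == "__main__":
    # Two hypothetical inputs over three known classes:
    # a peaked distribution (accepted) and a flat one (rejected as unknown).
    logits = np.array([[4.0, 0.5, 0.1],
                       [1.1, 1.0, 0.9]])
    labels, confidence = open_set_predict(logits)
    print(labels)              # e.g. [ 0 -1]
    print(confidence.round(3))
```

Under a strict closed-set protocol the second input would still be forced into one of the three known classes; the option to reject it as unknown is what open-world evaluation adds.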

          Published In

          Engineering Applications of Artificial Intelligence, Volume 143, Issue C
          March 2025
          1535 pages

          Publisher

          Pergamon Press, Inc.

          United States

          Author Tags

          1. Open-world recognition
          2. Domain adaptation
          3. Few-shot learning
          4. Incremental learning

          Qualifiers

          • Short-survey
