DOI: 10.1145/3637528.3671581 · KDD Conference Proceedings
Research article · Open access

Class-incremental Learning for Time Series: Benchmark and Evaluation

Published: 24 August 2024

Abstract

Real-world environments are inherently non-stationary, frequently introducing new classes over time. This is especially common in time series classification, such as the emergence of new disease classifications in healthcare or the addition of new activities in human activity recognition. In such cases, a learning system is required to assimilate novel classes effectively while avoiding catastrophic forgetting of the old ones, which gives rise to the Class-incremental Learning (CIL) problem. However, despite encouraging progress in the image and language domains, CIL for time series data remains relatively understudied. Existing studies suffer from inconsistent experimental designs, necessitating a comprehensive evaluation and benchmarking of methods across a wide range of datasets. To this end, we first present an overview of the Time Series Class-incremental Learning (TSCIL) problem, highlight its unique challenges, and review advanced methodologies. Further, based on standardized settings, we develop a unified experimental framework that supports the rapid development of new algorithms, easy integration of new datasets, and standardization of the evaluation process. Using this framework, we conduct a comprehensive evaluation of various generic and time-series-specific CIL methods in both standard and privacy-sensitive scenarios. Our extensive experiments not only provide a standard baseline to support future research but also shed light on the impact of various design factors, such as normalization layers and memory budget thresholds. Code is available at https://github.com/zqiao11/TSCIL.
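The abstract refers to the CIL protocol (tasks that introduce new, disjoint classes over time) and to a memory budget for rehearsal. As a rough illustrative sketch only, not the paper's actual framework, the two ingredients can be expressed as a class-incremental task stream plus a fixed-capacity replay buffer filled by reservoir sampling (all names and helpers below are invented for illustration):

```python
import random

class ReservoirBuffer:
    """Fixed-size replay memory filled via reservoir sampling, a common
    way to enforce a memory budget in class-incremental learning."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        # Each of the `seen` items ends up stored with equal probability.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = item

    def sample(self, k):
        # Draw a rehearsal mini-batch of old examples.
        k = min(k, len(self.data))
        return self.rng.sample(self.data, k)


def class_incremental_stream(dataset, classes_per_task):
    """Split a labelled dataset [(x, y), ...] into a sequence of tasks,
    each introducing new, disjoint classes -- the CIL protocol."""
    labels = sorted({y for _, y in dataset})
    tasks = []
    for i in range(0, len(labels), classes_per_task):
        task_classes = set(labels[i:i + classes_per_task])
        tasks.append([(x, y) for x, y in dataset if y in task_classes])
    return tasks
```

In a training loop, each incoming task batch would be mixed with `buffer.sample(k)` before the gradient step, so the model rehearses old classes while fitting new ones.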

Supplemental Material

MP4 File - ads0369-video
Despite progress in the image domain, class-incremental learning (CIL) for time series data remains relatively underexplored. To facilitate research in this direction, we establish a standard and extensible Time Series Class-incremental Learning (TSCIL) framework in this paper. It includes 5 real-world datasets, over 10 representative algorithms, and standard learning protocols for experiments. Using this framework, we conduct a comprehensive evaluation and discuss a series of questions, such as the relative performance of common CIL techniques, the effect of normalization layers, generative replay for data-privacy concerns, and the role of intra-class variations.
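One of the questions raised above is the effect of normalization layers. The relevant distinction is between layers that use batch statistics (which drift as new tasks arrive) and layers that normalize each sample over its own time axis. As a minimal sketch of the latter, not code from the paper's framework, per-sample normalization of a univariate series can be written as:

```python
def instance_norm_1d(series, eps=1e-5):
    """Normalize one univariate time series over its time axis using
    per-sample statistics only, with no running batch statistics that
    could drift across incremental tasks."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    return [(v - mean) / (var + eps) ** 0.5 for v in series]
```

The output has (approximately) zero mean and unit variance regardless of which task the sample came from, which is why per-sample normalization is a natural candidate to study in continual-learning settings.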


Cited By

  • (2024) "A Comprehensive Survey on Emerging Techniques and Technologies in Spatio-Temporal EEG Data Analysis." Chinese Journal of Information Fusion 1(3), 183-211. DOI: 10.62762/CJIF.2024.876830. Online publication date: 15 Dec 2024.
  • (2024) "Effect of Regularization-based Continual Learning in Instance-incremental Learning Scenario on Long-term Surface Electromyogram-based Pattern Recognition." Advanced Biomedical Engineering 13, 363-373. DOI: 10.14326/abe.13.363. Online publication date: 2024.



Published In

KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2024, 6901 pages
ISBN: 9798400704901
DOI: 10.1145/3637528
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. class-incremental learning
  2. continual learning
  3. time series classification


Conference

KDD '24

Acceptance Rates

Overall acceptance rate: 1,133 of 8,635 submissions (13%)



Article Metrics

  • Downloads (last 12 months): 261
  • Downloads (last 6 weeks): 60

Reflects downloads up to 22 Dec 2024.

