
Overcoming Catastrophic Forgetting in Continual Fine-Grained Urban Flow Inference

Published: 23 October 2024

Abstract

The citywide fine-grained urban flow inference (FUFI) problem aims to infer high-resolution flow maps from coarse-grained ones, a task that plays an important role in sustainable and economical urban computing and in intelligent traffic management. Previous models tackle this problem from the perspectives of spatial constraints, external factors, and memory cost. However, using newly arriving urban flow maps to calibrate a learned model remains very challenging due to the “catastrophic forgetting” problem and is still under-explored. In this article, we take the first step toward continual learning in FUFI and present CUFAR—Continual Urban Flow inference with augmented Adaptive knowledge Replay—a novel framework for inferring fine-grained citywide traffic flows. Specifically, (1) we design a spatial-temporal inference network that extracts better flow-map features at both local and global levels; and (2) we present an augmented adaptive knowledge replay (AKR) training algorithm that selectively replays learned knowledge to help the model acquire new knowledge without forgetting. We apply several data augmentation techniques to improve the generalization capability of the learning model, gaining additional performance improvements. We also propose a knowledge discriminator to avoid the “negative replaying” issue introduced by noisy urban flow maps. Extensive experiments on two large-scale real-world FUFI datasets demonstrate that our proposed model consistently outperforms strong baselines and effectively mitigates the forgetting problem.
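The replay idea in the abstract rests on maintaining a bounded memory of previously seen examples to mix into training on new data. As an illustrative sketch only—this is not the paper's AKR algorithm, and all names here are hypothetical—the snippet below shows a minimal replay buffer based on reservoir sampling, a standard way to keep a fixed-size, uniformly sampled memory of past (coarse, fine) flow-map pairs for continual training:

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past examples, filled via reservoir sampling
    so that every example seen so far has an equal chance of being kept."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []
        self.num_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.num_seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            # Retain the new example with probability capacity / num_seen.
            j = self.rng.randrange(self.num_seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, k):
        # Draw a replay mini-batch of past examples (without replacement),
        # to be mixed with the current task's mini-batch during training.
        return self.rng.sample(self.memory, min(k, len(self.memory)))
```

In replay-based continual learning, each training step would combine a mini-batch of new flow maps with a mini-batch drawn from such a buffer, so the model is optimized on old and new data jointly rather than on the new data alone.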



Published In

ACM Transactions on Spatial Algorithms and Systems, Volume 10, Issue 4
December 2024, 213 pages
EISSN: 2374-0361
DOI: 10.1145/3613737
Editor: Mohamed Mokbel

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 23 October 2024
      Online AM: 20 April 2024
      Accepted: 08 April 2024
      Revised: 21 January 2024
      Received: 06 April 2023
      Published in TSAS Volume 10, Issue 4


      Author Tags

      1. fine-grained urban flow inference
      2. urban computing
      3. continual learning
      4. spatio-temporal data mining

      Qualifiers

      • Research-article

      Funding Sources

      • National Science Foundation

