research-article · DOI: 10.1145/3627673.3679808

REDI: Recurrent Diffusion Model for Probabilistic Time Series Forecasting

Published: 21 October 2024

Abstract

Time series forecasting (TSF) comprises point forecasting and probabilistic forecasting. Unlike point forecasting, which predicts an expected value of a future target, probabilistic time series forecasting models the uncertainty in the data by predicting the distribution of future values, which enhances decision-making flexibility and improves risk management. Traditional probabilistic forecasting methods usually assume a fixed distribution of the data, which does not always hold for time series. Recently, there have been efforts to adapt diffusion models to time series owing to their exceptional ability to model data distributions without prior assumptions. However, how to bring the advantages of diffusion models to time series forecasting remains a substantial challenge due to issues specific to time series, such as distribution drift and complex dynamic temporal patterns.
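To make the point-vs-probabilistic distinction concrete, a sampling-based probabilistic forecaster returns many possible future trajectories, from which a point estimate and a prediction interval can both be derived. This is a generic illustration, not the paper's model; the Gaussian samples, horizon, and interval levels below are stand-in assumptions.

```python
import numpy as np

# Stand-in for a probabilistic model's output: many sampled future
# trajectories instead of a single point prediction.
rng = np.random.default_rng(0)
n_samples, horizon = 200, 24
samples = rng.normal(loc=10.0, scale=2.0, size=(n_samples, horizon))

# Point forecast: the expected value across samples.
point_forecast = samples.mean(axis=0)

# Probabilistic forecast: an 80% prediction interval per time step.
lo, hi = np.quantile(samples, [0.1, 0.9], axis=0)

print(point_forecast.shape, lo.shape, hi.shape)  # (24,) (24,) (24,)
```

The interval width is what a point forecaster cannot provide, and it is exactly this per-step distributional information that supports risk-aware decisions.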
In this paper, we focus on adapting diffusion models to time series forecasting. We propose REDI, a recurrent diffusion model that achieves effective probabilistic time series prediction through a recurrent forward diffusion process and step-aware guidance in the backward denoising process. The recurrent forward diffusion process lets the model pay more attention to the impact of recent history on future values during diffusion, while the step-aware guidance provides precise, history-conditioned guidance during denoising. We conduct experiments on five real-world datasets and achieve average rankings of 1.8 on deterministic metrics and 1.5 on probabilistic metrics across 12 baselines, strongly demonstrating the effectiveness of REDI.
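As background, the forward diffusion process the abstract refers to builds on the standard DDPM noising scheme, sketched minimally below. This shows only the generic, non-recurrent forward process; REDI's recurrent reweighting of recent history and its step-aware guidance are the paper's contributions and are not reproduced here. The schedule parameters are illustrative assumptions.

```python
import numpy as np

def make_alpha_bar(T=100, beta_start=1e-4, beta_end=0.02):
    """Cumulative product of (1 - beta_t) for a linear noise schedule."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def q_sample(x0, t, alpha_bar, rng):
    """Standard DDPM forward step:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
alpha_bar = make_alpha_bar()

# Toy time-series window in place of a real future target.
x0 = np.sin(np.linspace(0, 4 * np.pi, 64))
x_noisy = q_sample(x0, t=99, alpha_bar=alpha_bar, rng=rng)
print(x_noisy.shape)  # (64,)
```

At the final step the signal is almost entirely replaced by noise; a denoising network then learns to invert this process, conditioned on history, to sample future trajectories.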



      Published In

      CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management
      October 2024, 5705 pages
      ISBN: 9798400704369
      DOI: 10.1145/3627673
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. diffusion models
      2. generative models
      3. probabilistic time series forecasting

      Qualifiers

      • Research-article

      Funding Sources

      • National Key Research and Development Plan Project
      • Shanghai Science and Technology Development Fund
      • National Natural Science Foundation of China Projects

      Conference

      CIKM '24

      Acceptance Rates

      Overall Acceptance Rate 1,861 of 8,427 submissions, 22%

