Enhancing Rainfall Nowcasting Using Generative Deep Learning Model with Multi-Temporal Optical Flow
<p>Figure 1. Power spectra of 40 events with 30 mm/h or more obtained by <b>V</b><sub>curl</sub> (panel (<b>a</b>); P<sub>curl</sub>(k)) and <b>V</b><sub>div</sub> (panel (<b>b</b>); P<sub>div</sub>(k)); the ratio between P<sub>curl</sub>(k) and P<sub>div</sub>(k) is shown in panel (<b>c</b>). Here, the red solid line shows the characteristic spatial scale, 2π/k = 100 km (see the main text for more details).</p>
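The curl and divergence power spectra in the caption above can be obtained from a motion field in Fourier space. The following is a minimal NumPy sketch, not the authors' implementation; the function name, binning choice, and pixel-unit wavenumbers are illustrative assumptions, and the spectra are accumulated isotropically over the wavenumber magnitude k:

```python
import numpy as np

def curl_div_spectra(u, v, n_bins=32):
    """Isotropic power spectra of the rotational (curl) and divergent
    parts of a 2D motion field (u, v), computed in Fourier space."""
    ny, nx = u.shape
    kx = np.fft.fftfreq(nx) * 2 * np.pi           # wavenumbers in rad/pixel
    ky = np.fft.fftfreq(ny) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)                  # shapes match fft2 layout
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    curl_h = 1j * (KX * vh - KY * uh)             # Fourier transform of the curl
    div_h = 1j * (KX * uh + KY * vh)              # Fourier transform of the divergence
    k = np.sqrt(KX**2 + KY**2)
    edges = np.linspace(0.0, k.max(), n_bins + 1)
    which = np.digitize(k.ravel(), edges) - 1     # isotropic |k| bin index
    p_curl = np.bincount(which, np.abs(curl_h.ravel())**2, minlength=n_bins)[:n_bins]
    p_div = np.bincount(which, np.abs(div_h.ravel())**2, minlength=n_bins)[:n_bins]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, p_curl, p_div
```

As a sanity check, a purely rotational (divergence-free) field should put essentially all of its power into P<sub>curl</sub>(k).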
<p>Figure 2. The optical flow fields (<b>V</b><sub>f</sub>(t, t − Δt)) estimated at three different time intervals (Δt = 10, 20, and 30 min) for two examples (top panels: 0100 UTC 30 July 2020; bottom panels: 1700 UTC 11 June 2020).</p>
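Estimating V<sub>f</sub>(t, t − Δt) at several lags and normalizing each estimate by its lag is the core of a multi-temporal scheme. The sketch below does not reproduce a TV-L1 optical flow estimator; as a crude stand-in it recovers a global integer displacement from the FFT cross-correlation peak, estimates it at lags of 1–3 frames (Δt = 10, 20, and 30 min for 10 min radar data), and averages the per-step motions. All names are illustrative assumptions:

```python
import numpy as np

def global_shift(cur, prev):
    """Integer (dy, dx) displacement taking `prev` to `cur`, from the FFT
    cross-correlation peak -- a crude stand-in for dense optical flow."""
    xc = np.fft.ifft2(np.fft.fft2(cur) * np.conj(np.fft.fft2(prev))).real
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    ny, nx = cur.shape
    if dy > ny // 2:        # unwrap periodic shifts to signed displacements
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return dy, dx

def multi_temporal_motion(frames, lags=(1, 2, 3)):
    """Average per-step motion from displacements estimated between the
    latest frame and frames k steps back, each normalized by its lag k."""
    cur = frames[-1]
    per_step = [np.array(global_shift(cur, frames[-1 - k])) / k for k in lags]
    return np.mean(per_step, axis=0)
```

For a sequence translating uniformly, each lag-normalized estimate agrees, so the average recovers the per-step motion.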
<p>Figure 3. An overview of the proposed model, consisting of (<b>a</b>) an input generation stage using the optical flow methods and (<b>b</b>) a refining stage using a conditional GAN.</p>
<p>Figure 4. Structure of the generator and discriminator. (<b>a</b>) Generator (<span class="html-italic">G</span>). (<b>b</b>) Discriminator (<span class="html-italic">D</span>).</p>
<p>Figure 5. Nowcasting outputs of an example case (0100 UTC 30 July 2020) at 1–3 h lead times.</p>
<p>Figure 6. The frequency distribution of motion fields, dN/dlog V<sub>f</sub>, for the sample (0100 UTC 30 July 2020). Black, red, blue, green, and magenta solid lines indicate the ground truth, multi-temporal model with cGAN, single-temporal model with cGAN, multi-temporal model without cGAN, and single-temporal model without cGAN, respectively.</p>
<p>Figure 7. Power spectra of the precipitation fields for the sample (0100 UTC 30 July 2020).</p>
<p>Figure 8. MAE measured using 40 events with 30 mm/h or more.</p>
<p>Figure 9. CSI, POD, and FAR scores at the thresholds of 1 and 10 mm/h, measured using 40 events with 30 mm/h or more.</p>
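The CSI, POD, and FAR scores used for evaluation follow the standard 2×2 contingency-table definitions: POD = H/(H + M), FAR = F/(H + F), and CSI = H/(H + M + F), where H, M, and F are hits, misses, and false alarms at a given rain-rate threshold. A minimal sketch (the function name is an illustrative assumption):

```python
import numpy as np

def categorical_scores(forecast, observed, threshold):
    """CSI, POD, and FAR at a rain-rate threshold (mm/h), from the
    contingency table of hits, misses, and false alarms."""
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)             # rain forecast and observed
    misses = np.sum(~f & o)          # rain observed but not forecast
    false_alarms = np.sum(f & ~o)    # rain forecast but not observed
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return csi, pod, far
```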
<p>Figure 10. CSI scores at the thresholds of 1 and 10 mm/h, measured at 0–3 h lead times, using the dataset from the summer season (June to September) of 2020.</p>
Abstract
1. Introduction
2. Method
2.1. Radar Dataset
2.2. Motion of Precipitation Field
2.3. Model Architecture
3. Results
3.1. Models for Performance Comparison
- Single-temporal model without cGAN: A model that includes a single-temporal optical flow field but excludes the cGAN structure;
- Multi-temporal model without cGAN: A model that includes a multi-temporal optical flow field but excludes the cGAN structure;
- Single-temporal model with cGAN: A model that includes a single-temporal optical flow field and the cGAN structure;
- Multi-temporal model with cGAN: A model that includes a multi-temporal optical flow field and the cGAN structure.
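The four configurations above form a 2×2 ablation grid over the optical-flow input (single- vs. multi-temporal) and the cGAN refinement. A hypothetical sketch of how such an experiment grid might be enumerated (the keys and names are illustrative, not the authors' code):

```python
from itertools import product

# 2x2 ablation grid: optical-flow input type x cGAN refinement on/off.
variants = [
    {"name": f"{flow}-temporal model {'with' if use_cgan else 'without'} cGAN",
     "flow": flow, "cgan": use_cgan}
    for flow, use_cgan in product(("single", "multi"), (False, True))
]
```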
3.2. Qualitative Evaluation
3.3. Quantitative Evaluation
4. Summary and Discussion
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ha, J.-H.; Lee, H. Enhancing Rainfall Nowcasting Using Generative Deep Learning Model with Multi-Temporal Optical Flow. Remote Sens. 2023, 15, 5169. https://doi.org/10.3390/rs15215169