Severe Precipitation Recognition Using Attention-UNet of Multichannel Doppler Radar
"> Figure 1
<p>Illustration of 2D wavetlet transform (WT).</p> "> Figure 2
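Figure 1 illustrates the 2D WT used for denoising in Section 2.1. Below is a minimal sketch of wavelet-threshold denoising of a radar field with PyWavelets; the wavelet family ("db4"), decomposition level, and threshold value are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt  # PyWavelets

def wt_denoise(field: np.ndarray, wavelet: str = "db4",
               level: int = 2, thresh: float = 0.5) -> np.ndarray:
    """Soft-threshold the detail coefficients of a 2D wavelet transform.

    `wavelet`, `level`, and `thresh` are illustrative choices, not the
    settings used in the paper.
    """
    # Multilevel 2D decomposition: [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
    coeffs = pywt.wavedec2(field, wavelet, level=level)
    denoised = [coeffs[0]]  # keep the approximation coefficients untouched
    for details in coeffs[1:]:
        # Shrink horizontal/vertical/diagonal detail bands toward zero
        denoised.append(tuple(pywt.threshold(d, thresh, mode="soft")
                              for d in details))
    return pywt.waverec2(denoised, wavelet)

# Example: denoise a synthetic 256 x 256 "reflectivity" field
noisy = np.random.rand(256, 256).astype(np.float32)
clean = wt_denoise(noisy)
```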
Figure 2. Distribution of random patches in a natural photo and a Doppler radar image.
Figure 3. The structure of non-local [34] attention. N, C, H, and W stand for the batch size, channels, height, and width of the input image X, respectively. w_q, w_k, w_v, and w_o stand for the different convolution kernels. S and Z′ are the middle layers. Z is the output of the non-local attention.
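Following the symbols in the Figure 3 caption, a minimal PyTorch sketch of the standard non-local block [34] is given below. The reduced inner channel width (C // 2) and the residual connection are common choices from the original non-local paper and are assumptions with respect to this paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalAttention(nn.Module):
    """Non-local attention using the symbols of Figure 3 (a sketch)."""

    def __init__(self, channels: int):
        super().__init__()
        inner = channels // 2  # assumed channel reduction
        self.w_q = nn.Conv2d(channels, inner, kernel_size=1)
        self.w_k = nn.Conv2d(channels, inner, kernel_size=1)
        self.w_v = nn.Conv2d(channels, inner, kernel_size=1)
        self.w_o = nn.Conv2d(inner, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        q = self.w_q(x).flatten(2).transpose(1, 2)   # (N, HW, C')
        k = self.w_k(x).flatten(2)                   # (N, C', HW)
        v = self.w_v(x).flatten(2).transpose(1, 2)   # (N, HW, C')
        s = F.softmax(q @ k, dim=-1)                 # S: (N, HW, HW) affinity map
        z_prime = (s @ v).transpose(1, 2).reshape(n, -1, h, w)  # Z'
        return x + self.w_o(z_prime)                 # Z (assumed residual output)
```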
Figure 4. Implementation of different attention mechanisms. (a) Non-local attention. (b) Axial attention. (c) Criss-cross attention.
Figure 5. The network structure of the RRED-Net.
Figure 6. The occurrence frequencies of different rainfall intensities in our dataset, over Anhui Province, China, from April to October of 2016 to 2018 (except October 2018).
Figure 7. A visualization example of Doppler radar features and the corresponding precipitation label, from 4:01 Beijing Time, 14 July 2016. The left side of the figure shows different channels (products) of Doppler radar; the right side shows the rainfall label.
Figure 8. Visualization examples. The precipitation images are from 18:20–18:26 BT (Beijing Time), 2 July 2018; 20:38–20:42 BT, 26 July 2018; 22:43–22:48 BT, 4 July 2018; 09:30–09:36 BT, 26 July 2018; and 10:40–10:46 BT, 5 July 2018, from top to bottom. The columns show, from left to right, the CR Doppler radar, the label, and the predictions of RRED-Net.
Figure 9. Visualization examples. One-hour precipitation images measured and predicted during 21:41–22:40 BT (Beijing Time), 5 July 2018; 19:47–20:46 BT, 5 July 2018; 19:43–20:42 BT, 26 July 2018; and 11:08–12:07 BT, 5 July 2018, from top to bottom. The columns show, from left to right, the label, the Z–R relationship, and the predictions of RRED-Net. The red boxes highlight parts of the images.
Figure 10. The effect of the factors (d, p, and β in Equation (12)) of the regression focal loss on model performance. (a–g) show RMSE, bias, TS (0 mm), TS (≥0.1 mm/6 min), TS (≥1 mm/6 min), TS (≥2 mm/6 min), and TS (≥3 mm/6 min), respectively. Gray dotted horizontal lines indicate the performance of the weighted model without the RF loss. The defaults of d, p, and β are 2, 0.9, and 0.25, respectively.
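Equation (12) itself is not reproduced on this page, so the sketch below only illustrates one plausible form of a regression focal (RF) loss: d acts as the focusing exponent, p maps the absolute error to a pseudo-confidence, and β is a global balance factor, mirroring γ and α in the classification focal loss of Lin et al. [36]. The paper's actual formula may differ.

```python
import torch

def regression_focal_loss(pred: torch.Tensor, target: torch.Tensor,
                          d: float = 2.0, p: float = 0.9,
                          beta: float = 0.25) -> torch.Tensor:
    """Hypothetical regression focal (RF) loss; the exact Equation (12)
    is not shown on this page, so this form is an assumption.

    Large-error (hard) pixels are up-weighted, as in the classification
    focal loss, with defaults matching Figure 10 (d=2, p=0.9, beta=0.25).
    """
    err = torch.abs(pred - target)
    # Pseudo-confidence: small errors give q near 1 (easy pixels),
    # large errors give q near 0 (hard pixels)
    q = torch.exp(-err / p)
    # Focal modulation (1 - q)^d down-weights the easy pixels
    return (beta * (1.0 - q) ** d * err ** 2).mean()
```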
Abstract
1. Introduction
2. Method
2.1. Denoising
2.2. Network Architecture
2.3. Long-Tailed Learning
3. Dataset and Feature Selection
4. Evaluation Metrics
5. Experiments
5.1. Training Setting
5.2. Validation of the Feature Selection
5.3. Comparison with Other DL Networks
5.4. Comparison with the Z–R-Based Method
5.5. Ablation Studies
6. Discussion
6.1. Preprocessing Method
6.2. Long-Tailed Distribution in QPE
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Zhang, C.; Wang, H.; Zeng, J.; Ma, L.; Guan, L. Short-term dynamic radar quantitative precipitation estimation based on wavelet transform and support vector machine. J. Meteorol. Res. 2020, 34, 413–426.
- Crosson, W.L.; Duchon, C.E.; Raghavan, R.; Goodman, S.J. Assessment of rainfall estimates using a standard Z–R relationship and the probability matching method applied to composite radar data in central Florida. J. Appl. Meteorol. Climatol. 1996, 35, 1203–1219.
- Zhang, J.; Howard, K.; Langston, C.; Vasiloff, S.; Kaney, B.; Arthur, A.; Van Cooten, S.; Kelleher, K.; Kitzmiller, D.; Ding, F.; et al. National Mosaic and Multi-Sensor QPE (NMQ) system: Description, results, and future plans. Bull. Am. Meteorol. Soc. 2011, 92, 1321–1338.
- Peng, X.; Li, Q.; Jing, J. CNGAT: A graph neural network model for radar quantitative precipitation estimation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14.
- Sehad, M.; Lazri, M.; Ameur, S. Novel SVM-based technique to improve rainfall estimation over the Mediterranean region (north of Algeria) using the multispectral MSG SEVIRI imagery. Adv. Space Res. 2017, 59, 1381–1394.
- Wang, Y.; Tang, L.; Chang, P.L.; Tang, Y.S. Separation of convective and stratiform precipitation using polarimetric radar data with a support vector machine method. Atmos. Meas. Tech. 2021, 14, 185–197.
- Kuang, Q.; Yang, X.; Zhang, W.; Zhang, G. Spatiotemporal modeling and implementation for radar-based rainfall estimation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1601–1605.
- Li, X.; Yang, Y.; Mi, J.; Bi, X.; Zhao, Y.; Huang, Z.; Liu, C.; Zong, L.; Li, W. Leveraging machine learning for quantitative precipitation estimation from Fengyun-4 geostationary observations and ground meteorological measurements. Atmos. Meas. Tech. 2021, 14, 7007–7023.
- Kühnlein, M.; Appelhans, T.; Thies, B.; Nauß, T. Precipitation estimates from MSG SEVIRI daytime, nighttime, and twilight data with random forests. J. Appl. Meteorol. Climatol. 2014, 53, 2457–2480.
- Min, M.; Bai, C.; Guo, J.; Sun, F.; Liu, C.; Wang, F.; Xu, H.; Tang, S.; Li, B.; Di, D.; et al. Estimating summertime precipitation from Himawari-8 and global forecast system based on machine learning. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2557–2570.
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828.
- Shi, X.; Chen, Z.; Wang, H.; Yeung, D.Y.; Wong, W.K.; Woo, W.C. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Adv. Neural Inf. Process. Syst. 2015, 28, 521.
- Sønderby, C.K.; Espeholt, L.; Heek, J.; Dehghani, M.; Oliver, A.; Salimans, T.; Agrawal, S.; Hickey, J.; Kalchbrenner, N. MetNet: A neural weather model for precipitation forecasting. arXiv 2020, arXiv:2003.12140.
- Ravuri, S.; Lenc, K.; Willson, M.; Kangin, D.; Lam, R.; Mirowski, P.; Fitzsimons, M.; Athanassiadou, M.; Kashem, S.; Madge, S.; et al. Skilful precipitation nowcasting using deep generative models of radar. Nature 2021, 597, 672–677.
- Wang, Y.; Zhang, J.; Zhu, H.; Long, M.; Wang, J.; Yu, P.S. Memory in memory: A predictive neural network for learning higher-order non-stationarity from spatiotemporal dynamics. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9154–9162.
- Wang, Y.; Long, M.; Wang, J.; Gao, Z.; Yu, P.S. PredRNN: Recurrent neural networks for predictive learning using spatiotemporal LSTMs. Adv. Neural Inf. Process. Syst. 2017, 30, 573.
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. arXiv 2014, arXiv:1406.2661.
- Leinonen, J.; Nerini, D.; Berne, A. Stochastic super-resolution for downscaling time-evolving atmospheric fields with a generative adversarial network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7211–7223.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848.
- Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. UNet++: A nested U-Net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–11.
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Semantic image segmentation with deep convolutional nets and fully connected CRFs. arXiv 2014, arXiv:1412.7062.
- Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587.
- Chen, W.; Zhou, G.; Giannakis, G.B. Velocity and acceleration estimation of Doppler weather radar/lidar signals in colored noise. In Proceedings of the 1995 International Conference on Acoustics, Speech, and Signal Processing, Detroit, MI, USA, 9–12 May 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 3, pp. 2052–2055.
- Dixon, M.; Hubbert, J. The separation of noise and signal components in Doppler radar returns. In Proceedings of the Seventh European Conference on Radar in Meteorology and Hydrology, Toulouse, France, 24–29 June 2012.
- Gordon, W.B. An effect of receiver noise on the measurement of Doppler spectral parameters. Radio Sci. 1997, 32, 1409–1423.
- Kent, J.T. Information gain and a general measure of correlation. Biometrika 1983, 70, 163–173.
- McHugh, M. The Chi-square test of independence. Biochem. Med. 2013, 23, 143–149.
- Huang, Z.; Wang, X.; Huang, L.; Huang, C.; Wei, Y.; Liu, W. CCNet: Criss-cross attention for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27–28 October 2019; pp. 603–612.
- Isensee, F.; Jaeger, P.F.; Kohl, S.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211.
- Gu, Z.; Cheng, J.; Fu, H.; Zhou, K.; Hao, H.; Zhao, Y.; Zhang, T.; Gao, S.; Liu, J. CE-Net: Context encoder network for 2D medical image segmentation. IEEE Trans. Med. Imaging 2019, 38, 2281–2292.
- Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 17–23 June 2018; pp. 7794–7803.
- Ho, J.; Kalchbrenner, N.; Weissenborn, D.; Salimans, T. Axial attention in multidimensional transformers. arXiv 2019, arXiv:1912.12180.
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Xiong, A.; Liu, N.; Liu, Y.; Zhi, S.; Wu, L.; Xin, Y.; Shi, Y.; Zhan, Y. QpefBD: A benchmark dataset applied to machine learning for minute-scale quantitative precipitation estimation and forecasting. J. Meteorol. Res. 2022, 36, 93–106.
- Omuya, E.O.; Okeyo, G.O.; Kimwele, M.W. Feature selection for classification using principal component analysis and information gain. Expert Syst. Appl. 2021, 174, 114765.
- Shang, C.; Li, M.; Feng, S.; Jiang, Q.; Fan, J. Feature selection via maximizing global information gain for text classification. Knowl.-Based Syst. 2013, 54, 298–309.
- Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic differentiation in PyTorch. 2017. Available online: https://openreview.net/forum?id=BJJsrmfCZ (accessed on 14 January 2023).
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Battan, L.J. Radar Observation of the Atmosphere; The University of Chicago Press: Chicago, IL, USA, 1973; 324p.
- Zhang, Y.; Kang, B.; Hooi, B.; Yan, S.; Feng, J. Deep long-tailed learning: A survey. arXiv 2021, arXiv:2110.04596.
- Battaglia, P.W.; Hamrick, J.B.; Bapst, V.; Sanchez-Gonzalez, A.; Zambaldi, V.; Malinowski, M.; Tacchetti, A.; Raposo, D.; Santoro, A.; Faulkner, R.; et al. Relational inductive biases, deep learning, and graph networks. arXiv 2018, arXiv:1806.01261.
- An, G. The effects of adding noise during backpropagation training on a generalization performance. Neural Comput. 1996, 8, 643–674.
- Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A.; Bottou, L. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408.
- Gulcehre, C.; Moczulski, M.; Denil, M.; Bengio, Y. Noisy activation functions. In Proceedings of the International Conference on Machine Learning, PMLR, New York, NY, USA, 19–24 June 2016; pp. 3059–3068.
- Neelakantan, A.; Vilnis, L.; Le, Q.V.; Sutskever, I.; Kaiser, L.; Kurach, K.; Martens, J. Adding gradient noise improves learning for very deep networks. arXiv 2015, arXiv:1511.06807.
- Tian, C.; Wang, W.; Zhu, X.; Wang, X.; Dai, J.; Qiao, Y. VL-LTR: Learning class-wise visual-linguistic representation for long-tailed visual recognition. arXiv 2021, arXiv:2111.13579.
| Feature | Full Name | Unit |
|---|---|---|
| VIL | Vertically integrated liquid | kg/m² |
| HBR | Hybrid scan reflectivity | dBZ |
| CR | Composite reflectivity | dBZ |
| ET | Echo tops (18 dBZ) | km |
| CAPPI | Constant altitude plan position indicator at the indicated height (in hundreds of meters) | dBZ |
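The table above lists the radar products used as input channels. A minimal sketch of assembling them into one multichannel tensor for the network follows; the array names and grid size are illustrative, and the stacking order mirrors the feature-addition experiment of Section 5.2 (CR, then HBR, VIL, CAPPI40, ET, CAPPI50).

```python
import numpy as np

# Illustrative 2D product fields on a common grid (H, W); real data would
# be read from the radar mosaic files, which are not described on this page.
H, W = 256, 256
vil   = np.zeros((H, W), dtype=np.float32)  # kg/m^2
hbr   = np.zeros((H, W), dtype=np.float32)  # dBZ
cr    = np.zeros((H, W), dtype=np.float32)  # dBZ
et    = np.zeros((H, W), dtype=np.float32)  # km
cappi = {h: np.zeros((H, W), dtype=np.float32) for h in (20, 30, 40, 50)}  # dBZ

# Stack the selected products into one (C, H, W) network input
x = np.stack([cr, hbr, vil, cappi[40], et, cappi[50]], axis=0)
```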
| Feature | Conditional Entropy | Information Gain | Chi-Square |
|---|---|---|---|
| VIL | 1.3521 | 0.3467 | 1.62 |
| HBR | 1.3496 | 0.3467 | 1.85 |
| CR | 1.3347 | 0.3641 | 1.77 |
| ET | 1.4145 | 0.2843 | 1.16 |
| CAPPI20 | 1.5047 | 0.1941 | 1.12 |
| CAPPI30 | 1.4095 | 0.2893 | 1.55 |
| CAPPI40 | 1.3693 | 0.3295 | 1.56 |
| CAPPI50 | 1.3929 | 0.3059 | 1.39 |
| | j = 1 | j = 2 | ... | j = 30 | Total |
|---|---|---|---|---|---|
| i = 1 | n_{1,1} | n_{1,2} | ... | n_{1,30} | |
| i = 2 | n_{2,1} | n_{2,2} | ... | n_{2,30} | |
| ... | ... | ... | ... | ... | |
| i = 30 | n_{30,1} | n_{30,2} | ... | n_{30,30} | |
| Total | | | ... | | N |
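The conditional entropy, information gain, and chi-square scores above can be computed from a binned contingency table like the one just shown, where n_{i,j} counts samples whose radar feature falls in bin i and whose rainfall falls in bin j, with grand total N. A sketch with the standard formulas follows; the paper's exact binning scheme is not specified on this page.

```python
import numpy as np

def feature_scores(n: np.ndarray):
    """Information gain and chi-square from a contingency table n[i, j].

    Standard formulas: IG = H(rain) - H(rain | feature), and Pearson's
    chi-square against the independence expectation.
    """
    N = n.sum()
    p_ij = n / N
    p_i = p_ij.sum(axis=1, keepdims=True)   # feature marginal
    p_j = p_ij.sum(axis=0, keepdims=True)   # rainfall marginal

    eps = 1e-12  # guards log(0) and division by zero
    h_rain = -np.sum(p_j * np.log2(p_j + eps))
    h_cond = -np.sum(p_ij * np.log2(p_ij / (p_i + eps) + eps))
    info_gain = h_rain - h_cond

    expected = p_i * p_j * N                 # counts expected under independence
    chi2 = np.sum((n - expected) ** 2 / (expected + eps))
    return h_cond, info_gain, chi2

# Example with a random 30 x 30 table of counts
scores = feature_scores(np.random.randint(1, 100, size=(30, 30)))
```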
| Criterion | Influence Degree of Each Radar Feature on Precipitation |
|---|---|
| Information gain | CR > HBR > VIL > CAPPI40 > CAPPI50 > CAPPI30 > ET > CAPPI20 |
| Chi-square | HBR > CR > VIL > CAPPI40 > CAPPI30 > CAPPI50 > CAPPI20 > ET |
| Feature | RMSE (mm/6 min) | BIAS (mm/6 min) | 0 mm POD/FAR/TS (%) | ≥0.1 mm POD/FAR/TS (%) | ≥1 mm POD/FAR/TS (%) | ≥2 mm POD/FAR/TS (%) | ≥3 mm POD/FAR/TS (%) |
|---|---|---|---|---|---|---|---|
| CR | 0.4508 | 0.0340 | 94.0/7.2/87.6 | 71.7/24.4/58.2 | 72.4/53.5/39.5 | 58.0/55.2/33.8 | 40.1/54.3/27.2 |
| CAPPI20 | 0.4614 | 0.0115 | 92.6/8.3/85.5 | 70.0/27.6/55.2 | 70.0/27.6/55.2 | 65.6/50.9/39.0 | 33.0/50.9/24.6 |
| ET | 0.4782 | 0.0163 | 93.4/7.8/86.6 | 71.0/25.4/57.2 | 58.8/58.5/32.1 | 39.2/59.7/24.8 | 22.6/58.8/17.1 |
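The POD/FAR/TS triplets in these tables follow the standard contingency-table definitions: with hits H, misses M, and false alarms F at a given threshold, POD = H/(H+M), FAR = F/(H+F), and TS = H/(H+M+F). A sketch:

```python
import numpy as np

def pod_far_ts(pred: np.ndarray, obs: np.ndarray, thresh: float):
    """POD, FAR, and TS at one rain-rate threshold (standard definitions).

    pred/obs are rain fields in mm/6 min; thresh is e.g. 0.1, 1, 2, or 3.
    """
    p, o = pred >= thresh, obs >= thresh
    hits = np.sum(p & o)
    misses = np.sum(~p & o)
    false_alarms = np.sum(p & ~o)
    pod = hits / max(hits + misses, 1)
    far = false_alarms / max(hits + false_alarms, 1)
    ts = hits / max(hits + misses + false_alarms, 1)
    return pod, far, ts
```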
| Feature | RMSE (mm/6 min) | BIAS (mm/6 min) | 0 mm POD/FAR/TS (%) | ≥0.1 mm POD/FAR/TS (%) | ≥1 mm POD/FAR/TS (%) | ≥2 mm POD/FAR/TS (%) | ≥3 mm POD/FAR/TS (%) |
|---|---|---|---|---|---|---|---|
| CR | 0.4508 | 0.0340 | 94.0/7.2/87.6 | 71.7/24.4/58.2 | 72.4/53.5/39.5 | 58.0/55.2/33.8 | 40.1/54.3/27.2 |
| +HBR | 0.5554 | 0.0372 | 89.8/11.5/80.4 | 75.8/21.9/62.5 | 73.6/49.9/42.5 | 58.4/53.0/35.2 | 39.9/52.4/27.7 |
| +VIL | 0.5502 | 0.0282 | 90.4/11.9/80.6 | 74.7/21.1/62.2 | 72.5/49.1/42.7 | 56.5/51.6/35.2 | 38.5/51.0/27.5 |
| +CAPPI40 | 0.5453 | 0.0291 | 90.4/11.8/80.7 | 74.7/21.2/62.2 | 72.8/48.6/43.3 | 57.0/51.3/35.6 | 40.4/51.1/28.4 |
| +ET | 0.5445 | 0.0217 | 91.1/12.4/80.7 | 73.1/20.3/61.6 | 71.4/48.3/43.0 | 56.1/50.9/35.4 | 40.5/51.4/28.3 |
| +CAPPI50 | 0.5340 | 0.0195 | 91.3/11.8/81.4 | 73.4/20.6/61.7 | 71.2/47.5/43.3 | 55.9/50.6/35.5 | 39.0/49.8/28.1 |
| Model | #params | RMSE (mm/6 min) | BIAS (mm/6 min) | 0 mm POD/FAR/TS (%) | ≥0.1 mm POD/FAR/TS (%) | ≥1 mm POD/FAR/TS (%) | ≥2 mm POD/FAR/TS (%) | ≥3 mm POD/FAR/TS (%) |
|---|---|---|---|---|---|---|---|---|
| UNet* | | 0.5472 | −0.0770 | 94.8/15.1/81.1 | 64.5/14.4/58.2 | 45.7/31.7/37.7 | 30.9/35.7/26.3 | 20.4/38.6/18.1 |
| UNet | | 0.5428 | 0.0252 | 90.5/11.7/80.8 | 74.7/21.1/62.3 | 72.4/48.4/43.1 | 55.9/50.5/35.6 | 37.9/49.6/27.6 |
| CED | | 0.5465 | 0.0342 | 89.7/11.5/80.4 | 75.6/22.2/62.6 | 73.5/50.0/42.3 | 57.3/51.9/35.4 | 38.0/49.9/27.5 |
| DeepLabV3 | | 0.5603 | −0.0083 | 91.6/13.0/80.6 | 71.3/19.8/60.6 | 61.9/44.9/41.2 | 45.6/49.6/31.5 | 31.0/52.4/23.1 |
| RRED-Net | | 0.5524 | 0.0354 | 90.7/11.8/80.8 | 75.8/21.9/62.7 | 72.9/48.8/43.1 | 60.8/54.1/35.7 | 45.7/54.5/29.6 |
| Method | RMSE (mm/h) | BIAS (mm/h) | 0 mm TS (%) | ≥0.1 mm TS (%) | ≥5 mm TS (%) | ≥10 mm TS (%) | ≥20 mm TS (%) |
|---|---|---|---|---|---|---|---|
| Z–R relationship | 2.9354 | −0.5355 | 52.6 | 67.2 | 37.7 | 27.9 | 17.6 |
| RRED-Net | 2.5775 | 0.2436 | 81.2 | 67.0 | 57.1 | 51.0 | 39.6 |
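The baseline row converts reflectivity to rain rate through the classical Z–R power law Z = aR^b (Battan, 1973). A sketch follows; the Marshall–Palmer coefficients a = 200 and b = 1.6 are textbook defaults and an assumption here, since the coefficients used in the paper are not shown on this page.

```python
import numpy as np

def zr_rain_rate(dbz: np.ndarray, a: float = 200.0, b: float = 1.6) -> np.ndarray:
    """Rain rate R (mm/h) from reflectivity via Z = a * R**b.

    a = 200, b = 1.6 are the textbook Marshall-Palmer coefficients, used
    here as an assumption; the paper may fit different values.
    """
    z = 10.0 ** (dbz / 10.0)       # dBZ -> linear reflectivity (mm^6/m^3)
    return (z / a) ** (1.0 / b)    # invert the power law for R

print(zr_rain_rate(np.array([20.0, 35.0, 50.0])))  # light -> heavy rain
```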
| WT | CC | RF | RMSE (mm/6 min) | BIAS (mm/6 min) | 0 mm TS (%) | ≥0.1 mm TS (%) | ≥1 mm TS (%) | ≥2 mm TS (%) | ≥3 mm TS (%) |
|---|---|---|---|---|---|---|---|---|---|
| | | | 0.5428 | 0.0252 | 80.8 | 62.3 | 43.1 | 35.6 | 27.6 |
| ✓ | | | 0.5428 | 0.0276 | 80.7 | 62.1 | 43.3 | 35.7 | 28.1 |
| | ✓ | | 0.5538 | 0.0364 | 81.0 | 62.5 | 43.1 | 35.7 | 29.4 |
| | | ✓ | 0.5667 | 0.0603 | 80.3 | 63.5 | 41.8 | 35.1 | 29.0 |
| ✓ | ✓ | ✓ | 0.5524 | 0.0354 | 80.8 | 62.4 | 43.1 | 35.7 | 29.6 |
| Attention | RMSE (mm/6 min) | BIAS (mm/6 min) | 0 mm TS (%) | ≥0.1 mm TS (%) | ≥1 mm TS (%) | ≥2 mm TS (%) | ≥3 mm TS (%) |
|---|---|---|---|---|---|---|---|
| Axial | 0.5551 | 0.0458 | 80.3 | 62.2 | 42.7 | 35.6 | 29.1 |
| CC | 0.5538 | 0.0364 | 81.0 | 62.5 | 43.1 | 35.7 | 29.4 |
| Strategy | RMSE (mm/6 min) | BIAS (mm/6 min) | 0 mm TS (%) | ≥0.1 mm TS (%) | ≥1 mm TS (%) | ≥2 mm TS (%) | ≥3 mm TS (%) |
|---|---|---|---|---|---|---|---|
| Resample | 0.5518 | 0.0421 | 79.9 | 62.1 | 42.4 | 35.0 | 28.0 |
| RF | 0.5667 | 0.0603 | 80.3 | 63.5 | 41.8 | 35.1 | 29.0 |
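The table above contrasts resampling with the RF loss as remedies for the long-tailed rain distribution (Sections 2.3 and 6.2). Below is a minimal sketch of a resampling baseline under the assumption that rare heavy-rain patches are oversampled with a weighted sampler; the weighting used in the paper is not given on this page.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy patches: radar input (N, C, H, W) and rain labels (N, H, W)
radar = torch.randn(1000, 5, 64, 64)
rain = torch.rand(1000, 64, 64) * 3.0
dataset = TensorDataset(radar, rain)

# Weight each patch by its maximum rain rate so that the rare heavy-rain
# patches are drawn more often (an assumed weighting, not the paper's)
max_rain = rain.flatten(1).max(dim=1).values
weights = 1.0 + max_rain  # heavier rain -> larger sampling weight
sampler = WeightedRandomSampler(weights.tolist(), num_samples=len(dataset),
                                replacement=True)
loader = DataLoader(dataset, batch_size=16, sampler=sampler)
```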