
VisDrone-MOT2020: The Vision Meets Drone Multiple Object Tracking Challenge Results

Published: 23 August 2020

Abstract

The Vision Meets Drone (VisDrone2020) Multiple Object Tracking (MOT) challenge is the third annual UAV MOT evaluation organized by the VisDrone team, in conjunction with the European Conference on Computer Vision (ECCV 2020). The VisDrone-MOT2020 dataset consists of 79 challenging video sequences, including 56 videos (24K frames) for training, 7 videos (3K frames) for validation, and 16 videos (6K frames) for evaluation. All frames in these sequences are manually annotated with high-quality bounding boxes. Results of 12 participating MOT algorithms are presented and analyzed in detail. The challenge results, video sequences, and the evaluation toolkit are made available at http://aiskyeye.com/. By holding the VisDrone-MOT2020 challenge, we hope to facilitate future research and applications of MOT algorithms on drone-captured videos.
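The abstract does not spell out the toolkit's scoring protocol. As a rough illustration of how trackers on benchmarks like this are commonly summarized, the sketch below computes the standard CLEAR-MOT MOTA score from per-frame error counts; the function name `mota` and the toy counts are illustrative assumptions, not part of the VisDrone evaluation toolkit.

```python
# Minimal sketch of the CLEAR-MOT "MOTA" summary score, assuming the standard
# definition: MOTA = 1 - (sum(FN) + sum(FP) + sum(IDSW)) / sum(GT).
# Inputs are per-frame counts of misses (FN), false positives (FP),
# identity switches (IDSW), and ground-truth objects (GT).
from typing import Sequence


def mota(fn: Sequence[int], fp: Sequence[int],
         idsw: Sequence[int], gt: Sequence[int]) -> float:
    total_gt = sum(gt)
    if total_gt == 0:
        raise ValueError("no ground-truth objects in the sequence")
    errors = sum(fn) + sum(fp) + sum(idsw)
    return 1.0 - errors / total_gt


# Toy example: 3 frames, 10 ground-truth objects per frame, a few errors.
print(mota(fn=[1, 0, 2], fp=[0, 1, 0], idsw=[0, 0, 1], gt=[10, 10, 10]))  # ~0.833
```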




Published In

Computer Vision – ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020, Proceedings, Part IV
August 2020, 776 pages
ISBN: 978-3-030-66822-8
DOI: 10.1007/978-3-030-66823-5

Publisher

Springer-Verlag, Berlin, Heidelberg

        Publication History

        Published: 23 August 2020

        Author Tags

        1. Drone-based multiple object tracking
        2. Drone
        3. Performance evaluation

        Qualifiers

        • Article
