
Local and Global Context Reasoning for Spatio-Temporal Action Localization

  • Conference paper
  • First Online:
Advances in Visual Computing (ISVC 2023)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 14361))


Abstract

Localizing persons and recognizing their actions in videos is an essential task in video understanding. Recent advances have been made by reasoning about the relationships between one actor and another, as well as between an actor and the environment. However, reasoning about these relationships globally over the image is not always the most efficient approach, and in some cases searching locally for the relevant clues is more suitable. In this paper, we move one step further and model the relationship between an actor and the actor’s relevant surrounding context. We developed a pipeline that observes the full image to collect context information globally and the region around the actor to collect context information locally. This is achieved by a Near-Actor Relation Network (NARN) that focuses on reasoning about the local context. Two key components of our NARN enable the effective accumulation of local context information: pose encoding, which encodes the human pose as an additional feature, and spatial attention, which discriminates the relevant context information from the rest. Our pipeline accumulates the global and local relation information and gathers them for the final action classification. Experimental results on the JHMDB21 and AVA datasets demonstrate that our proposed pipeline outperforms a baseline approach that only reasons about the global context. Visualization of the learned attention map indicates that our pipeline is able to focus on the spatial areas that contain relevant context information for each action.
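The abstract describes an architecture in which actor features are related to context pooled globally over the full image and, via the NARN, to spatially attended context around the actor augmented with an encoded pose. The following is a minimal, hypothetical PyTorch sketch of that flow; the module names (NearActorRelation, GlobalLocalActionHead), the feature dimensions, the 17-keypoint pose format, and the concatenation-based fusion are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the global + local (near-actor) context pipeline
# outlined in the abstract. All module names, dimensions, and the fusion
# strategy are assumptions for illustration, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NearActorRelation(nn.Module):
    """Local branch: spatial attention over a region cropped around the actor,
    with an encoded pose vector added to the actor feature."""

    def __init__(self, feat_dim=256, pose_dim=34):  # e.g. 17 joints x (x, y)
        super().__init__()
        self.pose_encoder = nn.Linear(pose_dim, feat_dim)
        self.attn = nn.Conv2d(feat_dim, 1, kernel_size=1)  # per-location score
        self.relate = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, actor_feat, local_feat_map, pose):
        # actor_feat: (B, C); local_feat_map: (B, C, H, W); pose: (B, pose_dim)
        b, c, h, w = local_feat_map.shape
        # Spatial attention discriminates the relevant local context locations.
        scores = self.attn(local_feat_map).view(b, -1)            # (B, H*W)
        weights = F.softmax(scores, dim=-1).view(b, 1, h, w)      # (B, 1, H, W)
        local_ctx = (weights * local_feat_map).sum(dim=(2, 3))    # (B, C)
        # Pose encoding adds skeleton information to the actor feature.
        actor = actor_feat + self.pose_encoder(pose)
        return self.relate(torch.cat([actor, local_ctx], dim=-1))


class GlobalLocalActionHead(nn.Module):
    """Gathers global (full-image) and local (near-actor) relation features
    for the final action classification."""

    def __init__(self, feat_dim=256, num_classes=21):  # 21 classes in JHMDB21
        super().__init__()
        self.narn = NearActorRelation(feat_dim)
        self.global_relate = nn.Linear(2 * feat_dim, feat_dim)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, actor_feat, global_feat, local_feat_map, pose):
        # Global branch: relate the actor to context pooled over the full image.
        g = self.global_relate(torch.cat([actor_feat, global_feat], dim=-1))
        # Local branch: relate the actor to attended near-actor context.
        l = self.narn(actor_feat, local_feat_map, pose)
        return self.classifier(torch.cat([g, l], dim=-1))


# Example usage with random tensors (batch of 2 actor proposals):
head = GlobalLocalActionHead()
logits = head(
    torch.randn(2, 256),        # actor RoI features
    torch.randn(2, 256),        # globally pooled frame features
    torch.randn(2, 256, 7, 7),  # feature map cropped around each actor
    torch.randn(2, 34),         # flattened 2D pose keypoints
)
```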



Author information


Corresponding author

Correspondence to Ryuhei Ando.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ando, R., Babazaki, Y., Takahashi, K. (2023). Local and Global Context Reasoning for Spatio-Temporal Action Localization. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2023. Lecture Notes in Computer Science, vol 14361. Springer, Cham. https://doi.org/10.1007/978-3-031-47969-4_12


  • DOI: https://doi.org/10.1007/978-3-031-47969-4_12

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-47968-7

  • Online ISBN: 978-3-031-47969-4

  • eBook Packages: Computer Science, Computer Science (R0)
