Abstract
We propose a deep neural network that captures latent temporal features suitable for temporally localizing actions in streaming videos. The network uses unsupervised generative models, namely autoencoders and conditional restricted Boltzmann machines, to model the temporal structure present in an action. Human motion is non-linear in nature and therefore requires a continuous temporal representation, which is crucial for streaming videos. The generative ability of the model allows features at future time steps to be predicted, giving an indication at any instant of how far an action has progressed. To handle M action classes, we train an autoencoder to separate the action spaces and learn one generative model per action space. The final layer accumulates statistics from each model and estimates the action class and the percentage of completion within a segment of frames. Experimental results show that this network provides the predictive and recognition capability required for action localization in streaming videos.
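To make the described architecture concrete, the following is a minimal NumPy sketch of the scoring pipeline under stated assumptions: frame features (assumed to come from a pre-trained autoencoder) are fed to one conditional restricted Boltzmann machine (CRBM) per action class, each CRBM predicts the next feature vector from the previous N, and the class whose model best predicts the stream is selected. All dimensions, names, and the single mean-field prediction step are illustrative assumptions, not the paper's exact formulation; CRBM training (e.g., by contrastive divergence) is omitted.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CRBM:
    """Conditional RBM sketch: visible frame v_t conditioned on the past N frames."""
    def __init__(self, n_vis, n_hid, order):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))           # visible-hidden weights
        self.A = 0.01 * rng.standard_normal((order * n_vis, n_vis))   # autoregressive (past -> visible)
        self.B = 0.01 * rng.standard_normal((order * n_vis, n_hid))   # conditioning (past -> hidden)
        self.b_v = np.zeros(n_vis)
        self.b_h = np.zeros(n_hid)
        self.order = order

    def predict(self, past):
        # One illustrative mean-field pass: initialize visibles from the
        # dynamic bias, infer hiddens, then reconstruct the next visibles.
        v = sigmoid(past @ self.A + self.b_v)
        h = sigmoid(v @ self.W + past @ self.B + self.b_h)
        return sigmoid(h @ self.W.T + past @ self.A + self.b_v)

def score_stream(latents, models, order=3):
    """Accumulate per-class prediction error over a segment of latent frames."""
    errors = np.zeros(len(models))
    for t in range(order, len(latents)):
        past = latents[t - order:t].ravel()          # concatenated history window
        for k, m in enumerate(models):
            errors[k] += np.sum((m.predict(past) - latents[t]) ** 2)
    return int(np.argmin(errors)), errors

# Toy usage: 2 hypothetical action classes, 8-D latents standing in for
# autoencoder outputs over a 20-frame segment of the stream.
models = [CRBM(n_vis=8, n_hid=16, order=3) for _ in range(2)]
segment = rng.random((20, 8))
label, errors = score_stream(segment, models)
print("predicted class:", label, "accumulated errors:", errors)

Accumulating per-class prediction errors over a sliding segment is what makes such a scheme usable on streaming input: the running error both identifies the action and, when compared against the error profile of a completed action, gives a rough measure of how much of it has elapsed.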
Acknowledgements
The author would like to thank his PhD advisors, Dr. Kimberly D. Kendricks, Dr. Keigo Hirakawa, and Dr. Vijayan Asari, for their immense help and guidance in this research. This work is supported by the Sensor Systems Division of the University of Dayton Research Institute.
Copyright information
© 2016 Springer International Publishing AG
Cite this paper
Nair, B.M. (2016). Unsupervised Deep Networks for Temporal Localization of Human Actions in Streaming Videos. In: Bebis, G., et al. (eds.) Advances in Visual Computing. ISVC 2016. Lecture Notes in Computer Science, vol. 10073. Springer, Cham. https://doi.org/10.1007/978-3-319-50832-0_15