DOI: 10.1007/978-3-540-74889-2_52
Article

Video Affective Content Representation and Recognition Using Video Affective Tree and Hidden Markov Models

Published: 12 September 2007

Abstract

A video affective content representation and recognition framework based on Video Affective Tree (VAT) and Hidden Markov Models (HMMs) is presented. Video affective content units at different granularities are first located by excitement intensity curves, and the selected affective content units are then used to construct the VAT. According to the excitement intensity curve, the affective intensity of each affective content unit at each level of the VAT can also be quantified into several levels from weak to strong. Many mid-level audio and visual affective features, which represent emotional characteristics, are designed and extracted to construct observation vectors. Based on these observation vector sequences, HMM-based video affective content recognizers are trained and tested to recognize the basic emotional events of the audience (joy, anger, sadness and fear). The experimental results show that the proposed framework is not only suitable for a broad range of video affective understanding applications, but is also capable of representing affective semantics at different granularities.
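
The recognition stage summarized above lends itself to a compact sketch. The Python fragment below is a minimal illustration of a one-HMM-per-emotion, maximum-likelihood classification scheme over observation-vector sequences; it assumes the hmmlearn library, and the feature dimensionality, number of hidden states, and synthetic training data are placeholders rather than details taken from the paper.

# Minimal sketch of the HMM-based affective recognition stage: one HMM per basic
# emotion (joy, anger, sadness, fear) is trained on observation-vector sequences
# of mid-level audio-visual features, and a new affective content unit is labelled
# by the model with the highest log-likelihood. The hmmlearn library, the feature
# dimensionality and the number of hidden states are assumptions, not details
# taken from the paper.

import numpy as np
from hmmlearn import hmm

EMOTIONS = ["joy", "anger", "sadness", "fear"]
N_FEATURES = 8      # assumed size of each mid-level audio-visual observation vector
N_STATES = 3        # assumed number of hidden states per emotion HMM


def train_emotion_hmms(sequences_by_emotion):
    """Fit one Gaussian HMM per emotion.

    sequences_by_emotion: dict mapping emotion name -> list of 2-D arrays,
    each array holding one observation-vector sequence (frames x features).
    """
    models = {}
    for emotion, sequences in sequences_by_emotion.items():
        X = np.vstack(sequences)                    # concatenate all sequences
        lengths = [len(seq) for seq in sequences]   # per-sequence lengths for hmmlearn
        model = hmm.GaussianHMM(n_components=N_STATES, covariance_type="diag",
                                n_iter=100, random_state=0)
        model.fit(X, lengths)
        models[emotion] = model
    return models


def recognize(models, sequence):
    """Label an affective content unit by the emotion HMM with the best score."""
    scores = {emotion: model.score(sequence) for emotion, model in models.items()}
    return max(scores, key=scores.get), scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for observation-vector sequences extracted from the
    # affective content units located on the excitement intensity curve.
    train_data = {
        emotion: [rng.normal(loc=i, size=(rng.integers(20, 40), N_FEATURES))
                  for _ in range(5)]
        for i, emotion in enumerate(EMOTIONS)
    }
    models = train_emotion_hmms(train_data)

    test_unit = rng.normal(loc=2, size=(30, N_FEATURES))  # drawn near the "sadness" cluster
    label, scores = recognize(models, test_unit)
    print("predicted emotion:", label)

In practice, each training sequence would be the observation vectors extracted from one affective content unit of the VAT, with a separate recognizer trained per emotion and per intensity level as needed.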

Published In

ACII '07: Proceedings of the 2nd international conference on Affective Computing and Intelligent Interaction
September 2007
777 pages
ISBN: 9783540748885

Publisher

Springer-Verlag, Berlin, Heidelberg

Publication History

Published: 12 September 2007

Qualifiers

  • Article

Cited By

  • (2019) Multi-modal learning for affective content analysis in movies. Multimedia Tools and Applications 78(10), 13331-13350 (1 May 2019). DOI: 10.1007/s11042-018-5662-9
  • (2017) Exploring Domain Knowledge for Affective Video Content Analyses. Proceedings of the 25th ACM international conference on Multimedia, 769-776 (23 October 2017). DOI: 10.1145/3123266.3123352
  • (2016) Incorporating social media comments in affective video retrieval. Journal of Information Science 42(4), 524-538 (1 August 2016). DOI: 10.1177/0165551515593689
  • (2016) A framework for dynamic restructuring of semantic video analysis systems based on learning attention control. Image and Vision Computing 53(C), 20-34 (1 September 2016). DOI: 10.1016/j.imavis.2015.07.004
  • (2015) Video Affective Content Analysis: A Survey of State-of-the-Art Methods. IEEE Transactions on Affective Computing 6(4), 410-430 (23 November 2015). DOI: 10.1109/TAFFC.2015.2432791
  • (2015) Multiple emotional tagging of multimedia data by exploiting dependencies among emotions. Multimedia Tools and Applications 74(6), 1863-1883 (1 March 2015). DOI: 10.1007/s11042-013-1722-3
  • (2011) Affective content analysis of music video clips. Proceedings of the 1st international ACM workshop on Music information retrieval with user-centered and multimodal strategies, 7-12 (30 November 2011). DOI: 10.1145/2072529.2072532
  • (2011) Affect-based adaptive presentation of home videos. Proceedings of the 19th ACM international conference on Multimedia, 553-562 (28 November 2011). DOI: 10.1145/2072298.2072370
  • (2007) Video affective content recognition based on genetic algorithm combined HMM. Proceedings of the 6th international conference on Entertainment Computing, 249-254 (15 September 2007). DOI: 10.5555/2394259.2394296
