Article

Fusion-Extraction Network for Multimodal Sentiment Analysis

Published: 11 May 2020

Abstract

Multimodal data bring new challenges for sentiment analysis, as combining diverse sources of information effectively is a demanding task. Prior work does not effectively exploit the relationship and mutual influence between text and images. This paper proposes a fusion-extraction network model for multimodal sentiment analysis. First, our model uses an interactive information fusion mechanism to interactively learn visual-specific textual representations and textual-specific visual representations. Then, we propose an information extraction mechanism that extracts valid information and filters redundant parts from these modality-specific representations. Experimental results on two public multimodal sentiment datasets show that our model outperforms existing state-of-the-art methods.
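
As a concrete illustration of the two mechanisms the abstract describes, below is a minimal PyTorch sketch of interactive cross-modal fusion followed by gated information extraction. This is not the authors' published implementation: the attention-based fusion, the sigmoid gating, the mean pooling, and all dimensions and the class count are illustrative assumptions about how such a model could be wired.

```python
# Minimal sketch of the two mechanisms described in the abstract.
# NOT the authors' implementation: module structure, gating form,
# pooling, and all dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class InteractiveFusion(nn.Module):
    """Interactive information fusion: each modality attends to the other,
    producing visual-specific textual and textual-specific visual features."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.text_attends_image = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.image_attends_text = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text, image):
        # text:  (batch, n_words,   dim)  e.g. word embeddings
        # image: (batch, n_regions, dim)  e.g. CNN region features
        vis_specific_text, _ = self.text_attends_image(text, image, image)
        txt_specific_vis, _ = self.image_attends_text(image, text, text)
        return vis_specific_text, txt_specific_vis


class GatedExtraction(nn.Module):
    """Information extraction: a learned sigmoid gate keeps informative
    features and suppresses redundant ones (one plausible reading of
    'extract valid information and filter redundant parts')."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, fused, original):
        g = torch.sigmoid(self.gate(torch.cat([fused, original], dim=-1)))
        return g * fused  # element-wise filtering of the fused features


class FusionExtractionSketch(nn.Module):
    def __init__(self, dim: int = 256, num_classes: int = 3):
        super().__init__()
        self.fusion = InteractiveFusion(dim)
        self.text_extract = GatedExtraction(dim)
        self.image_extract = GatedExtraction(dim)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, text, image):
        vt, tv = self.fusion(text, image)
        vt = self.text_extract(vt, text).mean(dim=1)    # pool over words
        tv = self.image_extract(tv, image).mean(dim=1)  # pool over regions
        return self.classifier(torch.cat([vt, tv], dim=-1))


# Usage: sentiment logits for a batch of 8 text/image pairs.
model = FusionExtractionSketch()
logits = model(torch.randn(8, 20, 256), torch.randn(8, 49, 256))  # -> (8, 3)
```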



Information

Published In

Advances in Knowledge Discovery and Data Mining: 24th Pacific-Asia Conference, PAKDD 2020, Singapore, May 11–14, 2020, Proceedings, Part II
May 2020
935 pages
ISBN: 978-3-030-47435-5
DOI: 10.1007/978-3-030-47436-2

Editors:
• Hady W. Lauw
• Raymond Chi-Wing Wong
• Alexandros Ntoulas
• Ee-Peng Lim
• See-Kiong Ng
• Sinno Jialin Pan

Publisher

Springer-Verlag, Berlin, Heidelberg


Author Tags

1. Sentiment analysis
2. Multimodal
3. Fusion-Extraction Model


Cited By
• A Multimodal Sentiment Analysis Method Integrating Multi-Layer Attention Interaction and Multi-Feature Enhancement. International Journal of Information Technologies and Systems Approach 17(1), 1–20 (2024). DOI: 10.4018/IJITSA.335940. Online publication date: 30 Jan 2024
• Multi-attention Fusion for Multimodal Sentiment Classification. In: Proceedings of 2024 ACM ICMR Workshop on Multimodal Video Retrieval, pp. 1–7 (2024). DOI: 10.1145/3664524.3675360. Online publication date: 10 Jun 2024
• Predicting Micro-video Popularity via Multi-modal Retrieval Augmentation. In: Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 2579–2583 (2024). DOI: 10.1145/3626772.3657929. Online publication date: 10 Jul 2024
• Two-stage Attention-based Fusion Neural Network for Image-Text Sentiment Classification. In: Proceedings of the 2022 4th International Conference on Image, Video and Signal Processing, pp. 1–7 (2022). DOI: 10.1145/3531232.3531233. Online publication date: 18 Mar 2022
• An Efficient Fusion Mechanism for Multimodal Low-resource Setting. In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 2583–2588 (2022). DOI: 10.1145/3477495.3531900. Online publication date: 6 Jul 2022
• A soft voting ensemble learning-based approach for multimodal sentiment analysis. Neural Computing and Applications 34(21), 18391–18406 (2022). DOI: 10.1007/s00521-022-07451-7. Online publication date: 1 Nov 2022
• Momentum Distillation Improves Multimodal Sentiment Analysis. In: Pattern Recognition and Computer Vision, pp. 423–435 (2022). DOI: 10.1007/978-3-031-18907-4_33. Online publication date: 14 Oct 2022
• Systematic reviews in sentiment analysis: a tertiary study. Artificial Intelligence Review 54(7), 4997–5053 (2021). DOI: 10.1007/s10462-021-09973-3. Online publication date: 1 Oct 2021
