Abstract
Multimodal event detection plays a pivotal role in social media analysis, yet it remains challenging due to the large gap between images and texts, noisy contexts, and the intricate correspondences between modalities. To address these issues, we introduce the multimodal graph message propagation network (MGMP), a layer-wise approach that aggregates multi-view context and integrates images and texts simultaneously. In particular, MGMP constructs visual and textual graphs and employs a graph neural network (GNN) with element-wise attention to propagate context while avoiding the transfer of negative knowledge; a multimodal similarity propagation (MSP) step then propagates complementary information to fuse images and texts. We evaluate MGMP on two public datasets, CrisisMMD and SED2014. Extensive experiments demonstrate the effectiveness and superiority of our method.
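To make the layer-wise propagation idea concrete, below is a minimal sketch of a message-passing layer with element-wise attention gating and a similarity-weighted fusion of the two modalities, in the spirit of the abstract. The layer sizes, the sigmoid gating form, and the fusion rule are assumptions for illustration, not the authors' exact MGMP or MSP formulation.

```python
import torch
import torch.nn as nn

class GatedGraphPropagation(nn.Module):
    """One message-passing layer with element-wise attention gating.

    A sketch of GNN-based context propagation where a learned gate
    suppresses unhelpful (negative) neighbor information; the exact
    design in the paper may differ.
    """

    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)       # transform neighbor features
        self.gate = nn.Linear(2 * dim, dim)  # element-wise attention gate

    def forward(self, x, adj):
        # x: (N, dim) node features; adj: (N, N) row-normalised adjacency
        m = adj @ self.msg(x)                # aggregate neighbor messages
        g = torch.sigmoid(self.gate(torch.cat([x, m], dim=-1)))
        return x + g * m                     # gated residual update


# Toy usage: propagate context on a visual and a textual graph, then
# combine node features by similarity-weighted averaging (a stand-in
# for the multimodal similarity propagation step).
if __name__ == "__main__":
    N, D = 5, 16
    vis, txt = torch.randn(N, D), torch.randn(N, D)
    adj = torch.softmax(torch.randn(N, N), dim=-1)   # dummy normalised graph

    layer = GatedGraphPropagation(D)
    vis_ctx, txt_ctx = layer(vis, adj), layer(txt, adj)

    sim = torch.sigmoid((vis_ctx * txt_ctx).sum(-1, keepdim=True))
    fused = sim * vis_ctx + (1 - sim) * txt_ctx      # similarity-weighted fusion
    print(fused.shape)                               # torch.Size([5, 16])
```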
References
Abavisani, M., Wu, L., Hu, S., Tetreault, J., Jaimes, A.: Multimodal categorization of crisis events in social media. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14679–14689 (2020)
Alam, F., Ofli, F., Imran, M.: CrisisMMD: multimodal Twitter datasets from natural disasters. In: Proceedings of the International AAAI Conference on Web and Social Media (ICWSM), pp. 465–473 (2018)
Bossard, L., Guillaumin, M., Van Gool, L.: Event recognition in photo collections with a stopwatch HMM. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 1193–1200 (2013)
Cho, K., et al.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724–1734 (2014)
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186 (2019)
Feng, X., Qin, B., Liu, T.: A language-independent neural network for event detection. Sci. China Inf. Sci. 61(9), 1–12 (2018). https://doi.org/10.1007/s11432-017-9359-x
Fukui, A., Park, D.H., Yang, D., Rohrbach, A., Darrell, T., Rohrbach, M.: Multimodal compact bilinear pooling for visual question answering and visual grounding. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 457–468 (2016)
Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4700–4708 (2017)
Kiela, D., Grave, E., Joulin, A., Mikolov, T.: Efficient large-scale multi-modal classification. In: 32nd AAAI Conference on Artificial Intelligence (AAAI), pp. 5198–5204 (2018)
Lan, Z., Bao, L., Yu, S.-I., Liu, W., Hauptmann, A.G.: Multimedia classification and event detection using double fusion. Multimedia Tools Appl. 71(1), 333–347 (2013). https://doi.org/10.1007/s11042-013-1391-2
Li, W., Joo, J., Qi, H., Zhu, S.C.: Joint image-text news topic detection and tracking by multimodal topic and-or graph. IEEE Trans. Multimedia 19(2), 367–381 (2016)
Petkos, G., Papadopoulos, S., Mezaris, V., Kompatsiaris, Y.: Social event detection at MediaEval 2014: challenges, datasets, and evaluation. In: MediaEval Workshop. Citeseer (2014)
Qi, F., Yang, X., Zhang, T., Xu, C.: Discriminative multimodal embedding for event classification. Neurocomputing 395, 160–169 (2020)
Sakaki, T., Okazaki, M., Matsuo, Y.: Earthquake shakes Twitter users: real-time event detection by social sensors. In: Proceedings of the 19th International Conference on World Wide Web, pp. 851–860 (2010)
Schifanella, R., de Juan, P., Tetreault, J., Cao, L.: Detecting sarcasm in multimodal social platforms. In: Proceedings of the 24th ACM International Conference on Multimedia, pp. 1136–1145 (2016)
Wang, X.J., Ma, W.Y., Xue, G.R., Li, X.: Multi-model similarity propagation and its application for web image retrieval. In: Proceedings of the 12th Annual ACM International Conference on Multimedia, pp. 944–951 (2004)
Yang, Z., Li, Q., Liu, W., Lv, J.: Shared multi-view data representation for multi-domain event detection. IEEE Trans. Pattern Anal. Mach. Intell. 42(5), 1243–1256 (2019)
Acknowledgement
This work is supported by the National Natural Science Foundation of China (No. 61806016).
Copyright information
© 2022 Springer Nature Switzerland AG
About this paper
Cite this paper
Li, J., Wang, Y., Li, W. (2022). MGMP: Multimodal Graph Message Propagation Network for Event Detection. In: Þór Jónsson, B., et al. MultiMedia Modeling. MMM 2022. Lecture Notes in Computer Science, vol 13141. Springer, Cham. https://doi.org/10.1007/978-3-030-98358-1_12
DOI: https://doi.org/10.1007/978-3-030-98358-1_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-98357-4
Online ISBN: 978-3-030-98358-1
eBook Packages: Computer Science (R0)