Research article · Open access · DOI: 10.1145/3677525.3678653

Adequate Prompting Improves Performance of Regression Models of Emotional Content

Published: 04 September 2024

Abstract

Regression models of natural language content are widely used in applications such as sentiment analysis, stance detection, and emotion detection. Russell and Mehrabian's multidimensional Valence-Arousal-Dominance (VAD) model is widely recognized as a valuable framework for describing emotional content. Standard methods for VAD prediction learn the relation between natural language text and VAD values by fine-tuning a pre-trained Transformer-based model (e.g., BERT) on a corpus of sentences manually annotated with VAD scores. We investigate whether effective prompting, a technique previously shown to be advantageous in classification and other natural language processing (NLP) tasks, can also enhance VAD prediction. Our findings reveal that, with appropriate prompting, the knowledge acquired during pre-training can be leveraged to improve regression performance, demonstrating the benefits of this approach for VAD prediction.
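The abstract does not describe the exact prompt format. As a minimal sketch of the general idea, the snippet below wraps each input sentence in a per-dimension template before tokenization, so that a Transformer encoder with a scalar regression head can exploit patterns seen during pre-training; the template wording and dimension names are illustrative assumptions, not the authors' actual prompts.

```python
# Sketch: building prompted inputs for VAD regression.
# The template below is an illustrative assumption (NLI-flavored, since the
# paper's author tags mention NLI), not the authors' exact prompt.

VAD_DIMENSIONS = ("valence", "arousal", "dominance")

def build_prompted_input(sentence: str, dimension: str) -> str:
    """Return a prompted input string for one emotional dimension.

    A fine-tuned regression head would then map the encoder output for
    this string to a scalar value for that dimension.
    """
    if dimension not in VAD_DIMENSIONS:
        raise ValueError(f"unknown dimension: {dimension}")
    # Premise + hypothesis-style continuation naming the target dimension.
    return f"{sentence} </s> The {dimension} of this text is high."

def build_all_prompts(sentence: str) -> dict:
    """One prompted input per VAD dimension for a single sentence."""
    return {dim: build_prompted_input(sentence, dim) for dim in VAD_DIMENSIONS}

if __name__ == "__main__":
    for dim, text in build_all_prompts("I finally passed the exam!").items():
        print(dim, "->", text)
```

In a full pipeline, each prompted string would be tokenized and passed through the fine-tuned encoder, producing one scalar per dimension; the key point is that the prompt text itself carries the dimension being predicted.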

References

[1]
Sven Buechel and Udo Hahn. 2016. Emotion analysis as a regression problem–dimensional models and their implications on emotion representation and metrical evaluation. In ECAI 2016. IOS Press, 1114–1122.
[2]
Sven Buechel and Udo Hahn. 2022. EmoBank: Studying the impact of annotation perspective and representation format on dimensional emotion analysis. arXiv preprint arXiv:2205.01996 (2022).
[3]
Yu-Ya Cheng, Yan-Ming Chen, Wen-Chao Yeh, and Yung-Chun Chang. 2021. Valence and arousal-infused bi-directional LSTM for sentiment analysis of government social media management. Applied Sciences 11, 2 (2021), 880.
[4]
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 (2019).
[5]
Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. GoEmotions: A dataset of fine-grained emotions. arXiv preprint arXiv:2005.00547 (2020).
[6]
Paul Ekman. 1992. An argument for basic emotions. Cognition & emotion 6, 3-4 (1992), 169–200.
[7]
Giovanni Gafà, Francesco Cutugno, and Marco Venuti. 2023. EmotivITA at EVALITA2023: Overview of the Dimensional and Multidimensional Emotion Analysis Task. In Proceedings of the Eighth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2023), Mirko Lai, Stefano Menini, Marco Polignano, Valentina Russo, Rachele Sprugnoli, and Giulia Venturi (Eds.). CEUR.org, Parma, Italy.
[8]
Yusra Ghafoor, Shi Jinping, Fernando H Calderon, Yen-Hao Huang, Kuan-Ta Chen, and Yi-Shin Chen. 2023. TERMS: textual emotion recognition in multidimensional space. Applied Intelligence 53, 3 (2023), 2673–2693.
[9]
Atefeh Goshvarpour, Ataollah Abbasi, and Ateke Goshvarpour. 2017. An accurate emotion recognition system using ECG and GSR signals and matching pursuit method. Biomedical journal 40, 6 (2017), 355–368.
[10]
Le Hou, Chen-Ping Yu, and Dimitris Samaras. 2017. Squared earth movers distance loss for training deep neural networks on ordered-classes. In NIPS Workshop.
[11]
Zhaocheng Huang and Julien Epps. 2016. Detecting the instant of emotion change from speech using a martingale framework. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 5195–5199.
[12]
Nancy Ide, Collin Baker, Christiane Fellbaum, Charles Fillmore, and Rebecca Passonneau. 2008. MASC: The manually annotated sub-corpus of American English. In 6th International Conference on Language Resources and Evaluation, LREC 2008. European Language Resources Association (ELRA), 2455–2460.
[13]
Nancy Ide, Collin F Baker, Christiane Fellbaum, and Rebecca J Passonneau. 2010. The manually annotated sub-corpus: A community resource for and by the people. In Proceedings of the ACL 2010 conference short papers. 68–73.
[14]
Maximilian Köper and Sabine Schulte im Walde. 2016. Automatically generated affective norms of abstractness, arousal, imageability and valence for 350 000 German lemmas. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16). 2595–2598.
[15]
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. Comput. Surveys 55, 9 (2023), 1–35.
[16]
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019).
[17]
Gonçalo Azevedo Mendes and Bruno Martins. 2023. Quantifying Valence and Arousal in Text with Multilingual Pre-trained Transformers. In European Conference on Information Retrieval. Springer, 84–100.
[18]
Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 task 1: Affect in tweets. In Proceedings of the 12th international workshop on semantic evaluation. 1–17.
[19]
Durgesh Nandini, Jyoti Yadav, Asha Rani, and Vijander Singh. 2023. Design of subject independent 3D VAD emotion detection system using EEG signals and machine learning algorithms. Biomedical Signal Processing and Control 85 (2023), 104894.
[20]
Sungjoon Park, Jiseon Kim, Seonghyeon Ye, Jaeyeol Jeon, Hee Young Park, and Alice Oh. 2019. Dimensional emotion detection from categorical emotion. arXiv preprint arXiv:1911.02499 (2019).
[21]
Robert Plutchik. 1980. A general psychoevolutionary theory of emotion. In Theories of emotion. Elsevier, 3–33.
[22]
Jiri Pribil, Anna Pribilova, and Jindrich Matousek. 2019. Artefact determination by GMM-based continuous detection of emotional changes in synthetic speech. In 2019 42nd International Conference on Telecommunications and Signal Processing (TSP). IEEE, 45–48.
[23]
James A Russell and Albert Mehrabian. 1977. Evidence for a three-factor theory of emotions. Journal of Research in Personality 11, 3 (1977), 273–294.
[24]
Klaus R Scherer and Harald G Wallbott. 1994. Evidence for universality and cultural variation of differential emotion response patterning. Journal of Personality and Social Psychology 66, 2 (1994), 310.
[25]
Khushboo Singh, Mitul Kumar Ahirwal, and Manish Pandey. 2023. Quaternary classification of emotions based on electroencephalogram signals using hybrid deep learning model. Journal of Ambient Intelligence and Humanized Computing 14, 3 (2023), 2429–2441.
[26]
Carlo Strapparava and Rada Mihalcea. 2007. Semeval-2007 task 14: Affective text. In Proceedings of the fourth international workshop on semantic evaluations (SemEval-2007). 70–74.
[27]
Kai Sun, Junqing Yu, Yue Huang, and Xiaoqiang Hu. 2009. An improved valence-arousal emotion space for video affective content representation and recognition. In 2009 IEEE International Conference on Multimedia and Expo. IEEE, 566–569.
[28]
Imen Trabelsi, Dorra Ben Ayed, and Noureddine Ellouze. 2018. Evaluation of influence of arousal-valence primitives on speech emotion recognition. Int. Arab J. Inf. Technol. 15, 4 (2018), 756–762.
[29]
Jin Wang, Liang-Chih Yu, K Robert Lai, and Xuejie Zhang. 2016. Dimensional sentiment analysis using a regional CNN-LSTM model. In Proceedings of the 54th annual meeting of the association for computational linguistics (volume 2: Short papers). 225–230.
[30]
Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426 (2017).
[31]
Yi-Hsuan Yang and Homer H Chen. 2011. Prediction of the distribution of perceived music emotions using discrete samples. IEEE Transactions on Audio, Speech, and Language Processing 19, 7 (2011), 2184–2196.
[32]
Liang-Chih Yu, Jin Wang, K Robert Lai, and Xue-jie Zhang. 2015. Predicting valence-arousal ratings of words using a weighted graph method. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). 788–793.
[33]
Sicheng Zhao, Guoli Jia, Jufeng Yang, Guiguang Ding, and Kurt Keutzer. 2021. Emotion recognition from multiple modalities: Fundamentals and methodologies. IEEE Signal Processing Magazine 38, 6 (2021), 59–73.
[34]
Sicheng Zhao, Hongxun Yao, and Xiaolei Jiang. 2015. Predicting continuous probability distribution of image emotions in valence-arousal space. In Proceedings of the 23rd ACM international conference on Multimedia. 879–882.


Information

Published In

GoodIT '24: Proceedings of the 2024 International Conference on Information Technology for Social Good
September 2024
481 pages
ISBN: 9798400710940
DOI: 10.1145/3677525

This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. Emotion detection
  2. NLI
  3. NLP
  4. VAD detection

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

GoodIT '24

Article Metrics

  • Total Citations: 0
  • Total Downloads: 77
  • Downloads (last 12 months): 77
  • Downloads (last 6 weeks): 22

Reflects downloads up to 01 Jan 2025
