DOI: 10.1145/3234804.3234807

A Wide Residual Network for Sentiment Classification

Published: 27 June 2018

Abstract

The recurrent neural network (RNN) is a popular deep learning model for sentiment classification, and most RNN models benefit from the representational power of network depth. However, a sufficiently deep RNN trained on a single word-embedding vector suffers from limited feature reuse, and deeper neural networks are also more difficult to train. To overcome these problems, this paper describes a sentiment classification model that combines a wide word-embedding network architecture with a residual RNN. First, several word-embedding vectors are pre-trained to obtain a larger feature space. Second, the RNN is extended to a deeper architecture, and residual learning is employed to ease the training of the deep network. Finally, experimental evaluations on two benchmarks show that the proposed network is well suited to the sentiment classification problem.
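
The abstract describes two ideas working together: a wide input layer that concatenates several pre-trained word embeddings into a larger feature space, and a deep RNN whose layers carry residual (shortcut) connections to ease training. The sketch below illustrates that combination in PyTorch; the use of LSTM cells, the four-layer depth, the 300-dimensional hidden size, and the choice of GloVe, word2vec, and fastText as the embedding tables are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch: wide (multi-embedding) input + residual stacked LSTMs.
    # All dimensions and layer counts are assumptions for illustration.
    import torch
    import torch.nn as nn

    class WideResidualRNN(nn.Module):
        def __init__(self, embedding_tables, hidden_size=300,
                     num_layers=4, num_classes=2):
            super().__init__()
            # "Wide" input: several pre-trained embedding tables (e.g. GloVe,
            # word2vec, fastText) are looked up per token and concatenated,
            # giving a larger feature space than any single embedding.
            self.embeddings = nn.ModuleList([
                nn.Embedding.from_pretrained(w, freeze=True)
                for w in embedding_tables
            ])
            embed_dim = sum(e.embedding_dim for e in self.embeddings)
            # Project the concatenated embedding to the hidden size so that
            # the residual additions below are shape-compatible.
            self.input_proj = nn.Linear(embed_dim, hidden_size)
            # A stack of single-layer LSTMs; each layer's output is added to
            # its input (identity shortcut), easing training of the deeper RNN.
            self.rnn_layers = nn.ModuleList([
                nn.LSTM(hidden_size, hidden_size, batch_first=True)
                for _ in range(num_layers)
            ])
            self.classifier = nn.Linear(hidden_size, num_classes)

        def forward(self, token_ids):
            # token_ids: (batch, seq_len); a shared vocabulary across the
            # embedding tables is assumed for simplicity.
            x = torch.cat([emb(token_ids) for emb in self.embeddings], dim=-1)
            x = self.input_proj(x)
            for rnn in self.rnn_layers:
                out, _ = rnn(x)
                x = x + out  # residual (shortcut) connection
            # Classify from the final time step's representation.
            return self.classifier(x[:, -1, :])

    # Usage with random stand-in embedding tables (real GloVe/word2vec/fastText
    # matrices would be loaded from their pre-trained files instead).
    vocab_size = 10000
    tables = [torch.randn(vocab_size, 300) for _ in range(3)]
    model = WideResidualRNN(tables)
    logits = model(torch.randint(0, vocab_size, (8, 40)))  # -> shape (8, 2)

The input projection is only there to keep the identity shortcut shape-compatible with the LSTM outputs; reading out the final time step is one reasonable choice for sentence-level sentiment, with mean-pooling over time being a common alternative.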

Cited By

  • (2020) Deep Flexible Sequential (DFS) Model for Air Pollution Forecasting. Scientific Reports 10:1. DOI: 10.1038/s41598-020-60102-6. Online publication date: 25-Feb-2020.
  • (2020) Sentiment analysis in textual, visual and multimodal inputs using recurrent neural networks. Multimedia Tools and Applications. DOI: 10.1007/s11042-020-10037-x. Online publication date: 22-Oct-2020.


Information

    Published In

    ICDLT '18: Proceedings of the 2018 2nd International Conference on Deep Learning Technologies
    June 2018
    112 pages
    ISBN:9781450364737
    DOI:10.1145/3234804
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    In-Cooperation

    • Chongqing University of Posts and Telecommunications
    • University of Electronic Science and Technology of China

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 27 June 2018


    Author Tags

    1. RNN
    2. residual network
    3. semantic classification
    4. wide embedding

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ICDLT '18

