SEOpinion: Summarization and Exploration of Opinion from E-Commerce Websites
- Figure 1. The steps of the proposed approach: SEOpinion.
- Figure 2. Overview of the SEOpinion system.
- Figure 3. An example of a JSON (JavaScript Object Notation) object from Amazon.
- Figure 4. Example product details provided by manufacturers.
- Figure 5. An example of semantic similarity. Ac: candidate aspects; Ad: direct aspects.
- Figure 6. An example of lexical similarity.
- Figure 7. An example of subjectivity detection. DT: determiner; NN: noun; V: verb; ADJ: adjective.
- Figure 8. The proposed architecture.
- Figure 9. Screenshot of our SEOpinion system.
- Figure 10. Accuracies from the four representation types.
- Figure 11. Comparisons of the two word embedding-based deep learning models for our LC5 dataset.
- Figure 12. Accuracies from sampling different sizes of our dataset.
- Figure 13. Comparisons of the baseline models for the five research tasks.
- Figure 14. Area under the ROC curve on the LC5 dataset (area = 0.71).
Abstract
1. Introduction
- Create a web scraper to crawl the product details and reviews from e-commerce websites using XPath (XML path language).
- Construct a hierarchy of the relevant product aspects that are obtained from the product details and descriptions published in the web pages by the manufacturers.
- Map each review sentence directly to its corresponding aspect in the hierarchy. Thus, for each product aspect, the sentiment-score and opinionated sentences are shown.
- Create a corpus, obtained from the top five EC websites for laptops, to validate the proposed approach.
- Our results showed that using BERT [5] embeddings in a recurrent neural network (RNN) model gave better results on our corpus than convolutional neural network (CNN) and support vector machine (SVM) models.
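The first contribution (XPath-based scraping) can be illustrated with a short sketch. The snippet below parses a static, hypothetical page fragment with Python's standard library; the selectors follow the style of the Amazon XPath formats listed later (e.g., the `productTitle` span), though a real crawler would fetch live pages and use a full XPath engine such as lxml.

```python
import xml.etree.ElementTree as ET

# A minimal stand-in for a product page template; a real crawler would
# fetch the page with an HTTP client before parsing.
PAGE = """<html><body>
  <span id="productTitle"> Example Laptop 15.6-inch </span>
  <div data-hook="review-collapsed"><span>Great screen and battery.</span></div>
  <div data-hook="review-collapsed"><span>The keyboard feels cheap.</span></div>
</body></html>"""

root = ET.fromstring(PAGE)
# ElementTree supports only a subset of XPath; the paper's full expressions
# (e.g. //span[@id='productTitle']/text()) would need a library such as lxml.
title = root.find(".//span[@id='productTitle']").text.strip()
reviews = [d.find("span").text.strip()
           for d in root.findall(".//div[@data-hook='review-collapsed']")]

print(title)         # Example Laptop 15.6-inch
print(len(reviews))  # 2
```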
2. Related Works
3. SEOpinion: Methodology
3.1. Overview
Algorithm 1. SEOpinion System.
Input: P: a set of product web page templates of the same product type = {p1, p2, …, pn}
Output: SEO: summarization and exploration of opinions
Method:
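The control flow of Algorithm 1 can be sketched as follows. The helper functions below are toy stand-ins for the components described in Sections 3.2–3.4 (web scraping, Phase A hierarchy construction, Phase B summarization), not the authors' implementation.

```python
# Schematic sketch of Algorithm 1's two phases with toy stand-in helpers.

def scrape_details(page):                      # Section 3.2 (web scraping)
    return page["details"]

def scrape_reviews(page):
    return page["reviews"]

def build_hierarchy(details):                  # Phase A (Algorithm 2)
    # Collect the aspect vocabulary shared across the product templates.
    return sorted({aspect for d in details for aspect in d})

def summarize(hierarchy, reviews):             # Phase B (Algorithm 3)
    # Naive mapping: attach each review sentence to aspects it mentions.
    return {a: [r for r in reviews if a in r.lower()] for a in hierarchy}

def seopinion(pages):
    hierarchy = build_hierarchy(scrape_details(p) for p in pages)
    return {p["name"]: summarize(hierarchy, scrape_reviews(p)) for p in pages}

pages = [
    {"name": "laptop-1", "details": ["screen", "battery"],
     "reviews": ["Great screen.", "Battery drains fast."]},
]
print(seopinion(pages)["laptop-1"]["screen"])  # ['Great screen.']
```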
3.2. Web Scraping
3.3. Hierarchical Aspect Extraction
3.3.1. Aspect Extraction
Algorithm 2. Hierarchical Aspect Extraction (Phase A).
Input: D: a set of product details of the same product type ∈ P; θ: threshold score for aspect clustering
Output: H: a hierarchical aspect set
Method: // Task 1. Aspect Extraction
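Threshold-based aspect clustering (the θ parameter of Algorithm 2) can be sketched as below. The paper combines semantic and lexical similarity; here a plain string-similarity measure stands in for both, and the threshold value and aspect terms are illustrative only.

```python
from difflib import SequenceMatcher

# Toy illustration of threshold-based aspect clustering: an aspect term joins
# an existing cluster when its similarity to any member exceeds theta,
# otherwise it starts a new cluster.

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def cluster_aspects(aspects, theta=0.5):
    clusters = []
    for aspect in aspects:
        for cluster in clusters:
            if any(similarity(aspect, member) >= theta for member in cluster):
                cluster.append(aspect)
                break
        else:
            clusters.append([aspect])
    return clusters

aspects = ["screen size", "screen resolution", "battery life", "battery"]
clusters = cluster_aspects(aspects)
print(clusters)
```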
3.3.2. Hierarchical Clustering
3.4. Hierarchical Aspect-Based Opinion Summarization
Algorithm 3. Hierarchical Aspect-Based Opinion Summarization (Phase B).
Input: H: a hierarchical aspect set; Ri: a set of reviews for a given product pi ∈ P
Output: Si: a hierarchical aspect-based summary ∈ pi
Method: // Task 1. Subjectivity Classification
3.4.1. Subjectivity Detection
3.4.2. Aspect/Opinion Mapping
3.4.3. Aspect-Level Polarity Detection
3.5. User Interface
- The product presentation panel shows information about the product, such as its name, price, images, a rating summary of its opinion sentences, and the number of those sentences for each top-level aspect (e.g., general, price, battery, memory, screen, and processor).
- The summarization panel displays the product’s aspect hierarchy and the aspect-based summary. Initially, sub-aspects are hidden until the user clicks on their parent aspect. For example, “display,” “screen-size,” “resolution,” “technology,” and “touch-screen” are components, or sub-aspects, of “screen.” For each top-level aspect, the total number of sentences is shown because it indicates how much confidence users can place in the aspect rating (i.e., the more sentences, the greater the confidence in the rating). The rating shown for each aspect is the average of the scores of its sentences, where our system scores a positive sentence as 5 and a negative one as 1. For example, as shown in Figure 9, the aspect “screen” has five sentences across its sub-aspects, four positive and one negative, so its summary rating is (5 + 5 + 5 + 5 + 1)/5 = 4.2.
- The exploration panel shows the opinionated sentences, categorized as positive or negative. Initially, this panel displays no sentences; those related to an aspect appear only when the user clicks the “view sentences” button of that aspect in the summarization panel.
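The aspect rating shown in the summarization panel reduces to a simple average, sketched below with the “screen” example from Figure 9.

```python
# Positive sentences count as 5, negative as 1; the aspect rating shown in
# the summarization panel is their average.

POSITIVE, NEGATIVE = 5, 1

def aspect_rating(sentence_polarities):
    scores = [POSITIVE if p == "pos" else NEGATIVE for p in sentence_polarities]
    return sum(scores) / len(scores)

# The "screen" aspect: four positive sentences and one negative.
screen = ["pos", "pos", "pos", "pos", "neg"]
print(aspect_rating(screen))  # 4.2
```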
4. Experiments
4.1. Data Collection and Preprocessing
4.2. Baseline
- Traditional machine learning: The SVM classifier is a state-of-the-art traditional machine learning method that exploits input features such as uni/bigram features and POS tags, as in [41], where the authors performed fast dropout training by sampling from, or integrating, a Gaussian approximation; these choices were justified by the central limit theorem and empirical evidence [41].
- Deep learning: CNN and RNN use word embeddings as input features, where the embeddings are trained with random initialization, Global Vectors (GloVe) [42], or BERT [5]. We used GloVe instead of Word2vec because it achieved better results [38]. The pre-trained BERT word embeddings [5] were used on the Amazon corpus.
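As an illustration of the SVM baseline's input representation, the sketch below extracts uni/bigram count features from a sentence; the classifier itself and the POS-tag features are omitted, and the tokenization is deliberately simplistic.

```python
from collections import Counter

# Sketch of uni/bigram feature extraction for the SVM baseline. Bigrams are
# joined with "_" so unigram and bigram features share one feature space.

def unibigram_features(sentence):
    tokens = sentence.lower().split()
    unigrams = tokens
    bigrams = [f"{a}_{b}" for a, b in zip(tokens, tokens[1:])]
    return Counter(unigrams + bigrams)

feats = unibigram_features("The screen is great")
print(feats["screen"])      # 1
print(feats["the_screen"])  # 1
```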
4.3. Evaluation Measures
4.4. Experiment Setups
5. Experimental Results and Analysis
5.1. Results for Hierarchical Aspect Extraction
5.2. Results for Hierarchical Aspect-Based Opinion Summarization
5.3. Analysis of Results
6. Limitations and Future Directions
- The system only works well with web pages whose templates embed many product details (aspects). Hence, the worst result was for the eBay website, as shown in Figure 11, because its templates contain fewer details than those of the other websites.
- Unlike reviews, implicit aspects are difficult to extract from a template. In [47], a rule-based approach was proposed to obtain both explicit and implicit aspects from customer reviews.
- The opinion mapping task in our system matches each opinion sentence with only one aspect, although some opinion sentences express more than one. For example, the opinion sentence “my phone is good for its price and performance” is associated with two aspects: “price” and “performance.”
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
EC | E-commerce |
AOS | Aspect-based Opinion Summarization |
SEOpinion | Summarization and Exploration of Opinions |
HAE | Hierarchical Aspect Extraction |
HAOS | Hierarchical Aspect-based Opinion Summarization |
BERT | Bidirectional Encoder Representations from Transformers |
GloVe | Global Vectors |
DL | Deep Learning |
CNN | Convolutional Neural Network |
RNN | Recurrent Neural Network |
SVM | Support Vector Machine |
LC5 | Laptop Collection from five EC websites |
XML | Extensible Markup Language |
URL | Uniform Resource Locator |
NLTK | Natural Language Toolkit |
ASCII | American Standard Code for Information Interchange |
ROC | Receiver Operating Characteristic |
References
1. Sharma, G.; Lijuan, W. The effects of online service quality of e-commerce websites on user satisfaction. Electron. Libr. 2015.
2. Yu, X.; Guo, S.; Guo, J.; Huang, X. Rank B2C e-commerce websites in e-alliance based on AHP and fuzzy TOPSIS. Expert Syst. Appl. 2011.
3. Oláh, J.; Kitukutha, N.; Haddad, H.; Pakurár, M.; Máté, D.; Popp, J. Achieving sustainable e-commerce in environmental, social and economic dimensions by taking possible trade-offs. Sustainability 2018, 11, 89.
4. Wu, P.; Li, X.; Shen, S.; He, D. Social media opinion summarization using emotion cognition and convolutional neural networks. Int. J. Inf. Manag. 2020, 51, 101978.
5. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv 2018, arXiv:1810.04805.
6. Ali, F.; El-Sappagh, S.; Islam, S.M.R.; Ali, A.; Attique, M.; Imran, M.; Kwak, K.S. An intelligent healthcare monitoring framework using wearable sensors and social networking data. Futur. Gener. Comput. Syst. 2020.
7. Sohangir, S.; Wang, D.; Pomeranets, A.; Khoshgoftaar, T.M. Big Data: Deep Learning for financial sentiment analysis. J. Big Data 2018.
8. Hussain, A.; Cambria, E. Semi-supervised learning for big social data analysis. Neurocomputing 2018.
9. Kim, H.D.; Ganesan, K.; Sondhi, P.; Zhai, C. Comprehensive Review of Opinion Summarization. Available online: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Comprehensive+Review+of+Opinion+Summarization&btnG= (accessed on 16 January 2021).
10. Ma, Y.; Peng, H.; Khan, T.; Cambria, E.; Hussain, A. Sentic LSTM: A Hybrid Network for Targeted Aspect-Based Sentiment Analysis. Cognit. Comput. 2018.
11. Schouten, K.; Frasincar, F. Survey on aspect-level sentiment analysis. IEEE Trans. Knowl. Data Eng. 2016, 28, 813–830.
12. Zhu, L.; Gao, S.; Pan, S.J.; Li, H.; Deng, D.; Shahabi, C. Graph-based informative-sentence selection for opinion summarization. In Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2013, Niagara, ON, Canada, 25–28 August 2013.
13. Yu, J.; Zha, Z.J.; Wang, M.; Wang, K.; Chua, T.S. Domain-assisted product aspect hierarchy generation: Towards hierarchical organization of unstructured consumer reviews. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP ’11), Edinburgh, UK, 27–29 July 2011; pp. 140–150.
14. Bahrainian, S.A.; Dengel, A. Sentiment analysis and summarization of twitter data. In Proceedings of the 16th IEEE International Conference on Computational Science and Engineering, CSE 2013, Sydney, Australia, 3–5 December 2013.
15. Jmal, J.; Faiz, R. Customer review summarization approach using twitter and sentiwordnet. In Proceedings of the 3rd International Conference on Web Intelligence, Mining and Semantics, Madrid, Spain, 12–14 June 2013.
16. Pavlopoulos, J.; Androutsopoulos, I. Multi-granular aspect aggregation in aspect-based sentiment analysis. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, Gothenburg, Sweden, 26–30 April 2014; pp. 78–87.
17. Di Fabbrizio, G.; Stent, A.; Gaizauskas, R. A Hybrid Approach to Multi-document Summarization of Opinions in Reviews. In Proceedings of the 8th International Natural Language Generation Conference, Philadelphia, PA, USA, 19–21 June 2014; pp. 54–63.
18. Konjengbam, A.; Dewangan, N.; Kumar, N.; Singh, M. Aspect ontology based review exploration. Electron. Commer. Res. Appl. 2018, 30, 62–71.
19. De Melo, T.; da Silva, A.S.; de Moura, E.S.; Calado, P. OpinionLink: Leveraging user opinions for product catalog enrichment. Inf. Process. Manag. 2019, 56, 823–843.
20. Yang, M.; Wang, X.; Lu, Y.; Lv, J.; Shen, Y.; Li, C. Plausibility-promoting generative adversarial network for abstractive text summarization with multi-task constraint. Inf. Sci. 2020.
21. Yang, M.; Li, C.; Shen, Y.; Wu, Q.; Zhao, Z.; Chen, X. Hierarchical Human-Like Deep Neural Networks for Abstractive Text Summarization. IEEE Trans. Neural Netw. Learn. Syst. 2020.
22. Kim, S.; Zhang, J.; Chen, Z.; Oh, A.; Liu, S. A hierarchical aspect-sentiment model for online reviews. In Proceedings of the 27th AAAI Conference on Artificial Intelligence, AAAI 2013, Bellevue, WA, USA, 14–18 July 2013; pp. 526–533.
23. Almars, A.; Li, X.; Zhao, X. Modelling user attitudes using hierarchical sentiment-topic model. Data Knowl. Eng. 2019, 119, 139–149.
24. Perera, R.; Malepathirana, T.; Abeysinghe, Y.; Albar, Y.; Thayasivam, U. Amalgamation of General and Domain Specific Word Embeddings for Improved Hierarchical Aspect Aggregation. In Proceedings of the IEEE 13th International Conference on Semantic Computing (ICSC), Newport Beach, CA, USA, 30 January–1 February 2019; pp. 55–62.
25. Park, D.H.; Zhai, C.X.; Guo, L. SpecLDA: Modeling product reviews and specifications to generate augmented specifications. In Proceedings of the SIAM International Conference on Data Mining 2015, SDM 2015, Vancouver, BC, Canada, 2 May 2015; pp. 837–845.
26. Amplayo, R.K.; Lee, S.; Song, M. Incorporating product description to sentiment topic models for improved aspect-based sentiment analysis. Inf. Sci. 2018, 454, 200–215.
27. Mitchell, R. Web Scraping with Python: Collecting Data from the Modern Web; O’Reilly Media: Sebastopol, CA, USA, 2015; ISBN 9788578110796.
28. Boeing, G.; Waddell, P. New Insights into Rental Housing Markets across the United States: Web Scraping and Analyzing Craigslist Rental Listings. J. Plan. Educ. Res. 2017.
29. Lerman, K.; Knoblock, C.; Minton, S. Automatic Data Extraction from Lists and Tables in Web Sources. In Proceedings of the Workshop on Advances in Text Extraction and Mining (IJCAI-2001), Seattle, WA, USA, 5 August 2001.
30. Toutanova, K.; Klein, D.; Manning, C.D.; Singer, Y. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, Edmonton, AB, Canada, 27 May–1 June 2003; pp. 173–180.
31. Pessutto, L.R.C.; Vargas, D.S.; Moreira, V.P. Multilingual aspect clustering for sentiment analysis. Knowl. Based Syst. 2020, 192, 105339.
32. Zhai, Z.; Liu, B.; Xu, H.; Jia, P. Clustering product features for opinion mining. In Proceedings of the 4th ACM International Conference on Web Search and Data Mining, WSDM 2011, Hong Kong, China, 9–12 February 2011.
33. Esuli, A.; Sebastiani, F. SENTIWORDNET: A publicly available lexical resource for opinion mining. In Proceedings of the 5th International Conference on Language Resources and Evaluation, LREC 2006, Genoa, Italy, 22–28 May 2006.
34. Gu, X.; Gu, Y.; Wu, H. Cascaded Convolutional Neural Networks for Aspect-Based Opinion Summary. Neural Process. Lett. 2017, 46, 581–594.
35. Wu, X.; Lü, H.T.; Zhuo, S.J. Sentiment analysis for Chinese text based on emotion degree lexicon and cognitive theories. J. Shanghai Jiaotong Univ. 2015.
36. Akhtar, M.S.; Ekbal, A.; Cambria, E. How Intense Are You? Predicting Intensities of Emotions and Sentiments using Stacked Ensemble. IEEE Comput. Intell. Mag. 2020.
37. Vilares, D.; Alonso, M.A.; Gómez-Rodríguez, C. On the usefulness of lexical and syntactic processing in polarity classification of Twitter messages. J. Assoc. Inf. Sci. Technol. 2015.
38. Mabrouk, A.; Redondo, R.P.D.; Kayed, M. Deep Learning-Based Sentiment Classification: A Comparative Survey. IEEE Access 2020, 8, 85616–85638.
39. Gao, Z.; Feng, A.; Song, X.; Wu, X. Target-dependent sentiment classification with BERT. IEEE Access 2019, 7, 154290–154299.
40. Loper, E.; Bird, S. NLTK: The natural language toolkit. arXiv 2002, arXiv:cs/0205028.
41. Wang, S.; Manning, C.D. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, Jeju, Korea, 8–14 July 2012; pp. 90–94.
42. Pennington, J.; Socher, R.; Manning, C.D. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), Doha, Qatar, 25–29 October 2014; pp. 1532–1543.
43. Paszke, A.; Chanan, G.; Lin, Z.; Gross, S.; Yang, E.; Antiga, L.; Devito, Z. Automatic differentiation in PyTorch. In Proceedings of the 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017.
44. Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015.
45. Tang, D.; Wei, F.; Yang, N.; Zhou, M.; Liu, T.; Qin, B. Learning sentiment-specific word embedding for twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Baltimore, MD, USA, 23–25 June 2014; pp. 1555–1565.
46. Jianqiang, Z.; Xiaolin, G.; Xuejun, Z. Deep Convolution Neural Networks for Twitter Sentiment Analysis. IEEE Access 2018, 6, 23253–23260.
47. Poria, S.; Cambria, E.; Ku, L.W.; Gui, C.; Gelbukh, A. A Rule-Based Approach to Aspect Extraction from Product Reviews. In Proceedings of the Second Workshop on Natural Language Processing for Social Media (SocialNLP), Dublin, Ireland, 24 August 2014; pp. 28–37.
48. Gan, C.; Wang, L.; Zhang, Z.; Wang, Z. Sparse attention based separable dilated convolutional neural network for targeted sentiment analysis. Knowl. Based Syst. 2020, 188, 104827.
| EC Website | Useful Data Parts | XPath Format |
|---|---|---|
| Amazon | Title | //span[@id='productTitle']/text() |
| Amazon | About this item | //div[@id='feature-bullets']/ul/li/span/text() |
| Amazon | Compare with similar items | //table[@id='HLCXComparisonTable']//tr/th/span/text() |
| Amazon | Product description | //div[@id='productDescription']/text() |
| Amazon | Product information | //table[@id='productDetails_techSpec_section_1']//tr/th/text() |
| Amazon | Customer Reviews | //div[@data-hook='review-collapsed']/span/text() |
| Flipkart | Title | //span[@class='_35KyD6']/text() |
| Flipkart | Highlights | //div[@class='_3WHvuP']/ul/li/text() |
| Flipkart | Description | //div[@class='_3la3Fn _1zZOAc']/p/text() |
| Flipkart | Specifications | //table[@class='_3ENrHu']/tbody/tr/td[1]/text() |
| Flipkart | Customer Reviews | //div[@class='qwjRop']/div/div/text() |
| eBay | Title | //h1[@id='itemTitle']/text() |
| eBay | Item specifics | //td[@class='attrLabels']/text() |
| eBay | About this product | //div[@class='prodDetailSec']/table/tbody/tr/td[1]/text() |
| eBay | Review Text | //div[@class='ebay-review-section-r']/p/text() |
| eBay | Customer Reviews | //div[@class='ebay-review-section-r']/p/text() |
| Walmart | Title | //h1[@itemprop='name']/text() |
| Walmart | About This Item | //div[@class='about-desc about-product-description xs-margin-top']/ul/li/text() |
| Walmart | Specifications | //table[@class='product-specification-table table-striped']/tbody/tr/td[1]/text() |
| Walmart | Customer Reviews | //div[@class='review-text']/p/text() |
| BestBuy | Title | //h1[@itemprop='name']/text() |
| BestBuy | Other Specifications | //table[@class='product-spec']/tr/td[1]/text() |
| BestBuy | Description | //div[@itemprop='description']/text() |
| BestBuy | Customer Reviews | //div[@class='user-review']/p/text() |
| Dataset (Domain) | No. Reviewed Laptop Items (Web Pages) | Avg. No. Aspect Terms | Avg. No. Aspect Categories | Avg. No. Sentences | Avg. No. Sentences/Aspect Term | Positive % | Negative % |
|---|---|---|---|---|---|---|---|
| Amazon | 707 | 42 | 11 | 3289 | 77.3 | 62 | 38 |
| Flipkart | 284 | 55 | 8 | 546 | 6.9 | 69 | 31 |
| eBay | 856 | 74 | 3 | 11 | 0.12 | 73 | 27 |
| Walmart | 790 | 18 | 6 | 2180 | 97.1 | 62 | 38 |
| BestBuy | 525 | 72 | 17 | 3574 | 43.4 | 67 | 33 |
| | Predicted Positive | Predicted Negative |
|---|---|---|
| Actual Positive | TP | FN |
| Actual Negative | FP | TN |
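The evaluation measures follow directly from this confusion matrix: precision = TP/(TP + FP), recall = TP/(TP + FN), and F1 is their harmonic mean. The sketch below computes them for illustrative counts.

```python
# Precision, recall, and F1 from confusion-matrix counts.

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts, not values from the paper.
p, r, f = precision_recall_f1(tp=80, fp=20, fn=20)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.8 0.8 0.8
```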
| Hyperparameter | Value |
|---|---|
| Word Embedding | BERT [5] |
| Dropout Rate | 0.1 |
| Batch Size | Search from {16, 32} |
| Learning Rate | Search from {2 × 10⁻⁵, 3 × 10⁻⁵} |
| Max Epoch | 6 |
| Max Sequence Length | 128 |
| Optimizer | Adam [44] |
| Embedding Layer Dimension | 768 |
| Deep Learning Framework | PyTorch [43] |
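The hyperparameter table amounts to a small search grid: every setting is fixed except batch size and learning rate, each searched over two values (four runs in total). A sketch of the grid as a plain configuration, without the training loop:

```python
from itertools import product

# Fixed settings from the hyperparameter table, plus the two searched values
# for batch size and learning rate.
FIXED = {"dropout": 0.1, "max_epoch": 6, "max_seq_len": 128,
         "embedding_dim": 768, "optimizer": "Adam"}
GRID = {"batch_size": [16, 32], "learning_rate": [2e-5, 3e-5]}

# Expand the grid into one configuration per run (2 x 2 = 4 runs).
runs = [dict(FIXED, batch_size=b, learning_rate=lr)
        for b, lr in product(GRID["batch_size"], GRID["learning_rate"])]
print(len(runs))  # 4
```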
| Model | Text Representation | Amazon P | Amazon R | Amazon F | Flipkart P | Flipkart R | Flipkart F | eBay P | eBay R | eBay F | Walmart P | Walmart R | Walmart F | BestBuy P | BestBuy R | BestBuy F | Average F |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SVM | Hand-Crafted Features [41] | 62.4 | 62.5 | 62.4 | 64.0 | 65.0 | 64.5 | 62.4 | 62.5 | 62.4 | 62.3 | 62.1 | 62.2 | 59.4 | 59.5 | 59.4 | 62.2 |
| CNN | Random Embedding | 72.4 | 66.0 | 69.1 | 70.3 | 71.0 | 70.6 | 63.2 | 62.6 | 62.9 | 60.5 | 58.9 | 59.7 | 57.1 | 53.0 | 55.0 | 63.5 |
| CNN | GloVe Embedding [42] | 73.1 | 66.8 | 69.8 | 71.9 | 79.8 | 75.6 | 64.4 | 63.9 | 64.1 | 70.9 | 64.0 | 67.3 | 68.5 | 62.5 | 65.4 | 68.5 |
| CNN | BERT Embedding (ours) | 79.6 | 73.3 | 76.3 | 72.1 | 79.8 | 75.8 | 72.5 | 68.8 | 70.6 | 72.9 | 79.9 | 76.2 | 72.8 | 75.8 | 74.3 | 74.7 |
| RNN | Random Embedding | 72.7 | 72.8 | 72.7 | 74.8 | 75.0 | 74.9 | 73.3 | 73.2 | 73.2 | 71.6 | 73.9 | 72.7 | 76.1 | 72.1 | 74.0 | 73.5 |
| RNN | GloVe Embedding [42] | 82.4 | 76.0 | 79.1 | 80.3 | 81.0 | 80.6 | 73.2 | 72.6 | 72.9 | 70.5 | 68.9 | 69.7 | 67.1 | 63.0 | 65.0 | 73.5 |
| RNN | BERT Embedding (ours) | 83.3 | 78.7 | 80.9 | 77.7 | 84.1 | 80.8 | 73.4 | 73.3 | 73.3 | 77.5 | 75.7 | 76.6 | 74.4 | 75.9 | 75.1 | 77.4 |
| Model | Text Representation | Amazon P | Amazon R | Amazon F | Flipkart P | Flipkart R | Flipkart F | eBay P | eBay R | eBay F | Walmart P | Walmart R | Walmart F | BestBuy P | BestBuy R | BestBuy F | Average F |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SVM | Hand-Crafted Features [41] | 73.9 | 74.1 | 74.0 | 72.3 | 75.3 | 73.8 | 65.6 | 61.0 | 63.2 | 71.5 | 73.5 | 72.5 | 71.4 | 73.4 | 72.4 | 71.2 |
| CNN | Random Embedding | 79.2 | 89.7 | 84.1 | 79.3 | 75.7 | 77.5 | 64.9 | 55.5 | 59.8 | 76.5 | 76.0 | 76.2 | 74.6 | 79.2 | 76.8 | 75.0 |
| CNN | GloVe Embedding [42] | 82.6 | 78.3 | 80.4 | 80.3 | 78.7 | 79.5 | 73.2 | 72.6 | 72.9 | 77.1 | 63.0 | 69.3 | 79.5 | 84.1 | 81.7 | 76.9 |
| CNN | BERT Embedding (ours) | 83.1 | 88.8 | 85.9 | 80.9 | 74.0 | 77.3 | 78.3 | 77.0 | 77.6 | 78.5 | 72.5 | 75.4 | 81.2 | 83.4 | 82.3 | 79.7 |
| RNN | Random Embedding | 83.1 | 76.8 | 79.8 | 81.0 | 75.3 | 78.0 | 74.4 | 73.9 | 74.1 | 80.9 | 74.0 | 77.3 | 78.5 | 72.5 | 75.4 | 77.0 |
| RNN | GloVe Embedding [42] | 84.5 | 81.7 | 83.1 | 81.6 | 78.3 | 79.9 | 82.5 | 78.8 | 80.6 | 78.7 | 75.5 | 77.1 | 80.5 | 75.7 | 78.0 | 79.8 |
| RNN | BERT Embedding (ours) | 86.1 | 85.9 | 86.0 | 76.3 | 79.8 | 78.0 | 84.5 | 85.3 | 84.9 | 80.0 | 77.5 | 78.7 | 82.0 | 88.5 | 85.1 | 82.6 |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Mabrouk, A.; Redondo, R.P.D.; Kayed, M. SEOpinion: Summarization and Exploration of Opinion from E-Commerce Websites. Sensors 2021, 21, 636. https://doi.org/10.3390/s21020636