DOI: 10.5555/3155562.3155619 — ASE Conference Proceedings
Article
Free access

Understanding feature requests by leveraging fuzzy method and linguistic analysis

Published: 30 October 2017

Abstract

In open software development environments, a large number of feature requests of mixed quality are posted by stakeholders and are usually managed in issue tracking systems. Thoroughly understanding and analyzing the real intents behind these feature requests is a labor-intensive and challenging task. In this paper, we introduce an approach to understanding feature requests automatically. Using natural language processing techniques, we generate a set of fuzzy rules that classify each sentence in a feature request into one of six categories: Intent, Explanation, Benefit, Drawback, Example, and Trivia. The feature requests can then be automatically structured based on the classification results. We conduct experiments on 2,112 sentences taken from 602 feature requests of nine popular open source projects. The results show that our method achieves high performance in classifying sentences from feature requests. Moreover, when the fuzzy rules are combined with machine learning methods, performance improves significantly.
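To make the idea of rule-based sentence classification concrete, the Python sketch below scores each sentence of a feature request against hand-written keyword patterns and assigns the best-matching category, falling back to Trivia when nothing matches. The patterns, the scoring, and the classify_sentence helper are illustrative assumptions only; the paper's actual fuzzy rules are derived from part-of-speech and dependency-parse patterns, not plain keywords.

    import re
    from typing import Dict, List

    # Hypothetical keyword patterns standing in for the paper's linguistic rules.
    # Everything below is an illustrative assumption, not the authors' rule set.
    RULES: Dict[str, List[str]] = {
        "Intent":      [r"\b(should|would like|want|need|add|support|allow)\b"],
        "Explanation": [r"\b(because|currently|the reason|this happens)\b"],
        "Benefit":     [r"\b(improve|help|easier|faster|useful|benefit)\b"],
        "Drawback":    [r"\b(however|downside|problem|but|risk)\b"],
        "Example":     [r"\b(for example|e\.g\.|such as|for instance)\b"],
    }

    def classify_sentence(sentence: str) -> str:
        """Return the category whose rules give the highest membership score."""
        text = sentence.lower()
        scores = {}
        for category, patterns in RULES.items():
            hits = sum(1 for p in patterns if re.search(p, text))
            scores[category] = hits / len(patterns)  # membership degree in [0, 1]
        best = max(scores, key=scores.get)
        # Sentences matching no rule fall back to the catch-all Trivia category.
        return best if scores[best] > 0 else "Trivia"

    if __name__ == "__main__":
        request = [
            "The editor should support syntax highlighting for YAML files.",
            "For example, .yaml and .yml extensions could both be recognized.",
            "This would help users spot indentation errors faster.",
        ]
        for s in request:
            print(f"{classify_sentence(s):12s} | {s}")

In a fuller pipeline, one would weight and combine rule matches and resolve ambiguous sentences with a classifier trained on labeled data, which is roughly what the combination of fuzzy rules with machine learning methods mentioned in the abstract refers to.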


Cited By

  • (2024) "A Platform-Agnostic Framework for Automatically Identifying Performance Issue Reports With Heuristic Linguistic Patterns," IEEE Transactions on Software Engineering, 50(7), pp. 1704–1725. DOI: 10.1109/TSE.2024.3390623. Online publication date: 1 July 2024.
  • (2024) "Automated Smell Detection and Recommendation in Natural Language Requirements," IEEE Transactions on Software Engineering, 50(4), pp. 695–720. DOI: 10.1109/TSE.2024.3361033. Online publication date: 1 February 2024.
  • (2020) "Automatically identifying performance issue reports with heuristic linguistic patterns," Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2020), pp. 964–975. DOI: 10.1145/3368089.3409674. Online publication date: 8 November 2020.



Published In

ASE '17: Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering
October 2017
1033 pages
ISBN: 9781538626849

Publisher

IEEE Press


Author Tag

  1. feature request

Qualifiers

  • Article

Acceptance Rates

Overall Acceptance Rate 82 of 337 submissions, 24%


Article Metrics

  • Downloads (Last 12 months): 63
  • Downloads (Last 6 weeks): 4
Reflects downloads up to 05 Jan 2025

