DOI: 10.1145/1453101.1453146

What makes a good bug report?

Published: 09 November 2008

Abstract

In software development, bug reports provide crucial information to developers. However, these reports differ widely in quality. We conducted a survey among developers and users of APACHE, ECLIPSE, and MOZILLA to find out what makes a good bug report.
The analysis of the 466 responses revealed an information mismatch between what developers need and what users supply. Most developers consider steps to reproduce, stack traces, and test cases helpful; at the same time, these are the items most difficult for users to provide. Such insights are helpful for designing new bug tracking tools that guide users in collecting and providing more helpful information.
Our CUEZILLA prototype is such a tool: it measures the quality of new bug reports and recommends which elements should be added to improve that quality. We trained CUEZILLA on a sample of 289 bug reports, rated by developers as part of the survey. In our experiments, CUEZILLA was able to predict the quality of 31--48% of bug reports accurately.
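The abstract sketches CUEZILLA's workflow: detect which developer-preferred elements a report contains, score its quality, and recommend what is missing. The short Python sketch below illustrates that idea only; it is not the actual CUEZILLA model, which was trained on 289 developer-rated reports. The regular-expression cues, the uniform weighting, and the names ELEMENT_CUES and assess_report are hypothetical.

import re

# Cues for the elements developers rated most helpful in the survey.
# These patterns are illustrative assumptions, not CUEZILLA's trained model.
ELEMENT_CUES = {
    "steps to reproduce": re.compile(r"steps to reproduce", re.IGNORECASE),
    "stack trace":        re.compile(r"^\s+at\s+[\w$.]+\(.*\)\s*$", re.MULTILINE),
    "test case":          re.compile(r"\btest case\b|def test_|@Test", re.IGNORECASE),
    "expected behavior":  re.compile(r"expected (result|behavior|output)", re.IGNORECASE),
    "observed behavior":  re.compile(r"observed|actual (result|behavior)", re.IGNORECASE),
}

def assess_report(text):
    """Return a crude quality score in [0, 1] and the list of missing elements."""
    present = [name for name, cue in ELEMENT_CUES.items() if cue.search(text)]
    missing = [name for name in ELEMENT_CUES if name not in present]
    return len(present) / len(ELEMENT_CUES), missing

report = (
    "Clicking Save crashes the editor.\n"
    "Expected result: the file is saved.\n"
    "    at org.example.Editor.save(Editor.java:42)\n"
)
score, missing = assess_report(report)
print("score = %.2f" % score)        # 0.40: stack trace and expected behavior found
print("consider adding:", missing)   # steps to reproduce, test case, observed behavior

A deployed tool would learn element weights from the developer ratings rather than weighting all elements equally, and would feed the recommendations back to the reporter while the bug is being filed.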



Reviews

Boniface C Nwugwo

Bettenburg et al. surveyed software "developers and users of three large open-source initiatives": Apache, Eclipse, and Mozilla. The real motivation for the study is to present the authors' prototype bug reporting tool, Cuezilla, although the title does not reflect this fact. While the abstract provides some information about the key features of the study, it does not provide all of it. The paper does not include a statement of the research questions and hypotheses that served to guide the study; rather, the title appears to be the research question. The dependent and independent variables, and the expected relationships between them, are not clearly stated.

The paper discusses the participants and how they were selected, but it is not clear whether the "reporters" did their bug reporting from the development, test, or end-user environment. There is a big difference between bugs reported in the test or customer environment and those reported in the development environment: it is often difficult to recreate, in the development environment, a problem found in the test or customer environment, as the development environment's settings and code base often differ from those of the test and customer environments. Therefore, the answer to the survey question asking participants "which three items were most difficult to provide" will depend heavily on the environment; some items are easier to provide when reporters are testing in the development or test environment than in the customer environment.

In addition, the procedure, which is arguably the most important component of the design method, and perhaps the easiest to describe, is missing. The authors fail to list, in chronological order, the steps they took to develop, administer, and evaluate the study, steps that could also serve as a guide for replicating it. The authors do not state any hypotheses or any clear research questions of interest. Because they do not state clearly what questions they wanted answered, any finding, significant or not, is less meaningful.

This is not to say that the study is without merit. For one thing, the research touches on an issue very dear to software developers: what information do developers prefer to see in a bug report? While there are no groundbreaking findings in this study, Bettenburg et al. confirm what happens when there is no standardized way of reporting bugs for a project: lots of mismatched information from the reporters' and developers' points of view.

Online Computing Reviews Service



Published In

SIGSOFT '08/FSE-16: Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of Software Engineering
November 2008
369 pages
ISBN:9781595939951
DOI:10.1145/1453101
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States


Qualifiers

  • Research-article

Conference

SIGSOFT '08/FSE-16

Acceptance Rates

Overall Acceptance Rate: 17 of 128 submissions (13%)


Article Metrics

  • Downloads (last 12 months): 191
  • Downloads (last 6 weeks): 30
Reflects downloads up to 12 Dec 2024

Cited By
  • (2024) Demystifying the Fight Against Complexity: A Comprehensive Study of Live Debugging Activities in Production Cloud Systems. Proceedings of the 2024 ACM Symposium on Cloud Computing, pages 341-360. DOI: 10.1145/3698038.3698568. Online publication date: 20-Nov-2024.
  • (2024) ChatBR: Automated assessment and improvement of bug report quality using ChatGPT. Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, pages 1472-1483. DOI: 10.1145/3691620.3695518. Online publication date: 27-Oct-2024.
  • (2024) Inside Bug Report Templates: An Empirical Study on Bug Report Templates in Open-Source Software. Proceedings of the 15th Asia-Pacific Symposium on Internetware, pages 125-134. DOI: 10.1145/3671016.3671401. Online publication date: 24-Jul-2024.
  • (2024) Automating Issue Reporting in Software Testing: Lessons Learned from Using the Template Generator Tool. Companion Proceedings of the 32nd ACM International Conference on the Foundations of Software Engineering, pages 278-282. DOI: 10.1145/3663529.3663847. Online publication date: 10-Jul-2024.
  • (2024) Unraveling the Influences on Bug Fixing Time: A Comparative Analysis of Causal Inference Model. Proceedings of the 28th International Conference on Evaluation and Assessment in Software Engineering, pages 393-398. DOI: 10.1145/3661167.3661186. Online publication date: 18-Jun-2024.
  • (2024) How to Gain Commit Rights in Modern Top Open Source Communities? Proceedings of the ACM on Software Engineering, 1(FSE), pages 1727-1749. DOI: 10.1145/3660784. Online publication date: 12-Jul-2024.
  • (2024) Feedback-Driven Automated Whole Bug Report Reproduction for Android Apps. Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, pages 1048-1060. DOI: 10.1145/3650212.3680341. Online publication date: 11-Sep-2024.
  • (2024) GIRT-Model: Automated Generation of Issue Report Templates. Proceedings of the 21st International Conference on Mining Software Repositories, pages 407-418. DOI: 10.1145/3643991.3644906. Online publication date: 15-Apr-2024.
  • (2024) Toward Rapid Bug Resolution for Android Apps. Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings, pages 237-241. DOI: 10.1145/3639478.3639812. Online publication date: 14-Apr-2024.
  • (2024) The Impact Of Bug Localization Based on Crash Report Mining: A Developers' Perspective. Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Practice, pages 13-24. DOI: 10.1145/3639477.3639730. Online publication date: 14-Apr-2024.
