
An Exploratory Study of the State of Practice of Performance Testing in Java-Based Open Source Projects

Published: 17 April 2017

Abstract

The usage of open source (OS) software is widespread across many industries. While the functional quality of OS projects is considered to be similar to that of closed-source software, much less is known about their quality in terms of performance. One challenge for OS developers is that, unlike for functional testing, there is a lack of accepted best practices for performance testing. To reveal the state of practice of performance testing in OS projects, we conduct an exploratory study on 111 Java-based OS projects from GitHub. We study the performance tests of these projects from five perspectives: (1) developers, (2) size, (3) test organization, (4) types of performance tests, and (5) used tooling. We show that writing performance tests is not a popular task in OS projects: performance tests form only a small portion of the test suite, are rarely updated, and are usually maintained by a small group of core project developers. Further, even though many projects are aware that they need performance tests, developers appear to struggle to implement them. We argue that future performance testing frameworks should provide better support for low-friction testing, for instance via non-parameterized methods or performance test generation, as well as focus on tight integration with standard continuous integration tooling.
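
To make the kind of test discussed in the abstract concrete, the following is a minimal sketch of a Java microbenchmark written with JMH (the Java Microbenchmark Harness), one of the frameworks commonly used for performance tests in Java projects. The class name and workload are hypothetical and purely illustrative; they are not taken from the paper or from any of the studied projects.

    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;

    // Hypothetical example: class name and workload are made up for illustration.
    @State(Scope.Benchmark)
    public class StringJoinBenchmark {

        private String[] parts;

        // Prepared once per trial so that setup cost is not measured.
        @Setup
        public void prepare() {
            parts = new String[1_000];
            for (int i = 0; i < parts.length; i++) {
                parts[i] = "token" + i;
            }
        }

        // A non-parameterized benchmark method: the harness handles warm-up,
        // repeated measurement iterations, and timing, so the developer only
        // writes the workload itself.
        @Benchmark
        @BenchmarkMode(Mode.AverageTime)
        @OutputTimeUnit(TimeUnit.MICROSECONDS)
        public String joinWithStringBuilder() {
            StringBuilder sb = new StringBuilder();
            for (String part : parts) {
                sb.append(part);
            }
            // Returning the result keeps the JIT from eliminating the work as dead code.
            return sb.toString();
        }
    }

Run through the JMH runner (for example via JMH's Maven integration), such a class reports average execution time per invocation. The point here is only to illustrate the low-friction, non-parameterized style of performance test that the abstract argues future frameworks and continuous integration setups should make easier to adopt.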




    Published In

    ICPE '17: Proceedings of the 8th ACM/SPEC on International Conference on Performance Engineering
    April 2017
    450 pages
    ISBN: 9781450344043
    DOI: 10.1145/3030207
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. empirical software engineering
    2. mining software repositories
    3. open source
    4. performance engineering
    5. performance testing

    Qualifiers

    • Research-article

    Conference

    ICPE '17

    Acceptance Rates

    ICPE '17 paper acceptance rate: 27 of 83 submissions (33%)
    Overall acceptance rate: 252 of 851 submissions (30%)

    Cited By

    • (2024) AI-driven Java Performance Testing: Balancing Result Quality with Testing Time. Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, pages 443-454. DOI: 10.1145/3691620.3695017. Online publication date: 27-Oct-2024.
    • (2024) An Empirical Study on Code Coverage of Performance Testing. Proceedings of the 28th International Conference on Evaluation and Assessment in Software Engineering, pages 48-57. DOI: 10.1145/3661167.3661196. Online publication date: 18-Jun-2024.
    • (2024) Evaluating Search-Based Software Microbenchmark Prioritization. IEEE Transactions on Software Engineering, 50(7):1687-1703. DOI: 10.1109/TSE.2024.3380836. Online publication date: 1-Jul-2024.
    • (2023) GraalVM Compiler Benchmark Results Dataset (Data Artifact). Companion of the 2023 ACM/SPEC International Conference on Performance Engineering, pages 65-69. DOI: 10.1145/3578245.3585025. Online publication date: 15-Apr-2023.
    • (2023) Software Mining -- Investigating Correlation between Source Code Features and Michrobenchmark's Steady State. Companion of the 2023 ACM/SPEC International Conference on Performance Engineering, pages 107-111. DOI: 10.1145/3578245.3584695. Online publication date: 15-Apr-2023.
    • (2023) A Study of Java Microbenchmark Tail Latencies. Companion of the 2023 ACM/SPEC International Conference on Performance Engineering, pages 77-81. DOI: 10.1145/3578245.3584690. Online publication date: 15-Apr-2023.
    • (2023) ICG: A Machine Learning Benchmark Dataset and Baselines for Inline Code Comments Generation Task. International Journal of Software Engineering and Knowledge Engineering, 34(2):331-356. DOI: 10.1142/S0218194023500547. Online publication date: 20-Oct-2023.
    • (2023) A Taxonomy of Testable HTML5 Canvas Issues. IEEE Transactions on Software Engineering, pages 1-13. DOI: 10.1109/TSE.2023.3270740. Online publication date: 2023.
    • (2023) Automated Detection of Software Performance Antipatterns in Java-Based Applications. IEEE Transactions on Software Engineering, 49(4):2873-2891. DOI: 10.1109/TSE.2023.3234321. Online publication date: 1-Apr-2023.
    • (2023) Towards the Analysis and Completion of Syntactic Structure Ellipsis for Inline Comments. IEEE Transactions on Software Engineering, 49(4):2285-2302. DOI: 10.1109/TSE.2022.3216279. Online publication date: 1-Apr-2023.
